## Memory Attacks on Device-Independent Quantum Cryptography
Jonathan Barrett,¹,²,* Roger Colbeck,³,⁴,† and Adrian Kent⁵,⁴,‡
1Department of Computer Science, University of Oxford,
Wolfson Building, Parks Road, Oxford OX1 3QD, U.K.
2Department of Mathematics, Royal Holloway, University of London, Egham Hill, Egham, TW20 0EX, U.K.
3Institute for Theoretical Physics, ETH Zurich, 8093 Zurich, Switzerland.
4Perimeter Institute for Theoretical Physics, 31 Caroline Street North, Waterloo, ON N2L 2Y5, Canada.
5Centre for Quantum Information and Foundations, DAMTP, Centre for Mathematical Sciences,
University of Cambridge, Wilberforce Road, Cambridge, CB3 0WA, U.K.
(Dated: 5th August 2013)
Device-independent quantum cryptographic schemes aim to guarantee security to users based
only on the output statistics of any components used, and without the need to verify their internal
functionality. Since this would protect users against untrustworthy or incompetent manufacturers,
sabotage or device degradation, this idea has excited much interest, and many device-independent
schemes have been proposed. Here we identify a critical weakness of device-independent protocols
that rely on public communication between secure laboratories. Untrusted devices may record their
inputs and outputs and reveal information about them via publicly discussed outputs during later
runs. Reusing devices thus compromises the security of a protocol and risks leaking secret data.
Possible defences include securely destroying or isolating used devices. However, these are costly
and often impractical. We propose other more practical partial defences as well as a new protocol
structure for device-independent quantum key distribution that aims to achieve composable security
in the case of two parties using a small number of devices to repeatedly share keys with each other
(and no other party).
Quantum cryptography aims to exploit the properties
of quantum systems to ensure the security of various
tasks. The best known example is quantum key distribution (QKD), which can enable two parties to share a
secret random string and thus exchange messages secure
against eavesdropping, and we mostly focus on this task
for concreteness. While all classical key distribution protocols rely for their security on assumed limitations on
an eavesdropper’s computational power, the advantage
of quantum key distribution protocols (e.g. [1, 2]) is that
they are provably secure against an arbitrarily powerful
eavesdropper, even in the presence of realistic levels of
losses and errors [3]. However, the security proofs require
that quantum devices function according to particular
specifications. Any deviation – which might arise from a
malicious or incompetent manufacturer, or through sabotage or degradation – can introduce exploitable security
flaws (see e.g. [4] for practical illustrations).
The possibility of quantum devices with deliberately
concealed flaws, introduced by an untrustworthy manufacturer or saboteur, is particularly concerning, since
(i) it is easy to design quantum devices that appear to
be following a secure protocol but are actually completely
insecure¹, and (ii) there is no general technique for identifying all possible security loopholes in standard quantum cryptography devices.
∗ Electronic address: [email protected]
† Electronic address: [email protected]
‡ Electronic address: [email protected]
1 In BB84 [1], for example, a malicious state creation device could
be programmed to secretly send the basis used for the encoding
in an additional degree of freedom.
This has led to much interest
in device-independent quantum protocols, which aim to
guarantee security on the fly by testing the device outputs [5–15]: no specification of their internal functionality is required.
Known provably secure schemes for device-independent quantum key distribution are inefficient,
as they require either independent isolated devices
for each entangled pair to ensure device-independent
security [6, 10–12, 16], or a large number of entangled
pairs to generate a short key [6, 16, 17]. Finding
an efficient secure device-independent quantum key
distribution scheme using two (or few) devices has
remained an open theoretical challenge. Nonetheless,
in the absence of tight theoretical bounds on the scope
for device-independent quantum cryptography, progress
to date has encouraged optimism (e.g. [18]) about the
prospects for device-independent QKD as a practical
technology, as well as for device-independent quantum
randomness expansion [13–15] and other applications of
device-independent quantum cryptography (e.g. [19]).
However, one key question has been generally neglected in work to date on device-independent quantum
cryptography, namely what happens if and when devices
are reused. Specifically, are device-reusing protocols composable – i.e. do individually secure protocols of this type
remain secure when combined? It is clear that reuse of
untrusted devices cannot be universally composable, i.e.
such devices cannot be securely reused for completely
general purposes (in particular, if they have memory,
they must be kept secure after the protocol). However,
for device-independent quantum cryptography to have
significant practical value, one would hope that devices
can at least be reused for the same purpose. For example, one would like to be able to implement a QKD
protocol many times, perhaps with different parties each
time, with a guarantee that all the generated keys can
be securely used in an arbitrary environment so long as
the devices are kept secure. We focus on this type of
composability here.
We describe a new type of attack that highlights pitfalls in producing protocols that are composable (in
the above sense) with device-independent security for
reusable devices, and show that for all known protocols
such composability fails in the strong sense that purportedly secret data become completely insecure. The leaks
do not exploit new side channels (which proficient users
are assumed to block), but instead occur through the
device choosing its outputs as part of a later protocol.
To illustrate this, consider a device-independent
scheme that allows two users (Alice and Bob) to generate and share a purportedly secure cryptographic key.
A malicious manufacturer (Eve) can design devices so
that they record and store all their inputs and outputs.
A well designed device-independent protocol can prevent
the devices from leaking information about the generated
key during that protocol. However, when they are reused,
the devices can make their outputs in later runs depend
on the inputs and outputs of earlier runs, and, if the protocol requires Alice and Bob to publicly exchange at least
some information about these later outputs (as all existing protocols do), this can leak information about the
original key to Eve. Moreover, in many existing protocols, such leaks can be surreptitiously hidden in the noise,
hence allowing the devices to operate indefinitely like hidden spies, apparently complying with security tests, and
producing only data in the form the protocols require,
but nonetheless actually eventually leaking all the purportedly secure data.
We stress that our results certainly do not imply that
quantum key distribution per se is insecure or impractical. In particular, our attacks do not apply to standard
QKD protocols in which the devices’ properties are fully
trusted, nor if the devices are trusted to be memoryless
(but otherwise untrusted), nor necessarily to protocols
relying on some other type of partially trusted devices.
Our target is the possibility of (full) device-independent
quantum cryptographic security, applicable to users who
purchase devices from a potentially sophisticated adversarial supplier and rely on no assumption about the devices’ internal workings.
The attacks we present raise new issues of composability and point towards the need for new protocol designs.
We discuss some countermeasures to our attacks that appear effective in the restricted but relevant scenario where
two users only ever use their devices for QKD exchanges
with one another, and propose a new type of protocol
that aims to achieve security in this scenario while allowing device reuse. Even with these countermeasures,
however, we show that security of a key generated with
Bob can be compromised if Alice uses the same device for
key generation with an additional party. This appears to
be a generic problem against which we see no complete
defence.
Although we focus on device-independent QKD for
most of this work, our attacks also apply to other device-independent quantum cryptographic tasks. The case of
randomness expansion is detailed in Appendix E.
Cryptographic scenario.—We use the standard cryptographic scenario for key distribution between Alice and
Bob, each of whom has a secure laboratory. These laboratories may be partitioned into secure sub-laboratories,
and we assume Alice and Bob can prevent communication between their sub-laboratories as well as between
their labs and the outside world, except as authorized by
the protocol. The setup of these laboratories is as follows.
Each party has a trusted private random string, a trusted
classical computer and access to two channels connecting
them. The first channel is an insecure quantum channel.
Any data sent down this can be intercepted and modified by Eve, who is assumed to know the protocol. The
second is an authenticated classical channel which Eve
can listen to but cannot impersonate; in efficient QKD
protocols this is typically implemented by using some
key bits to authenticate communications over a public
channel. Each party also uses a sub-laboratory to isolate
each of the untrusted devices being used for today’s protocol. They can connect them to the insecure quantum
channel, as desired, and this connection can be closed
thereafter. They can also interact with each device classically, supplying inputs (chosen using the trusted private
string) and receiving outputs, without any other information flowing into or out of the secure sub-laboratory.
As mentioned before, existing device-independent
QKD protocols that have been proven unconditionally
secure [6, 11, 12] require separate devices for each measurement performed by Alice and Bob with no possibility
of signalling between these devices², or are inefficient [17]
(in terms of the amount of key per entangled pair). For
practical device-independent QKD, we would like to remove both of these disadvantages and have an efficient
scheme needing a small number of devices.
Since the protocols in [11, 12] can tolerate reasonable
levels of noise and are reasonably efficient, we look first
at implementations of protocols taking the form of those
in [11, 12], except that Alice and Bob use one measurement device each, i.e., Alice (Bob) uses the same device to perform each of her (his) measurements. We call
these two-device protocols (Bob also has a separate isolated source device: see below). The memory of a device
can then act as a signal from earlier to later measurements, hence the security proofs of [11, 12] do not apply
(see also [20] where a different two-device setup is discussed).
2 Within the scenario described above, this could be achieved by
placing each device in its own sub-laboratory.
1. Entangled quantum states used in the protocol are generated by a device Bob holds (which is separate and
kept isolated from his measurement device) and then
shared over an insecure quantum channel with Alice’s
device. Bob feeds his half of each state to his measurement device. Once the states are received, the quantum
channel is closed.
2. Alice and Bob each pick a random input Ai and Bi to
their device, ensuring they receive an output bit (Xi
and Yi respectively) before making the next input (so
that the i-th output cannot depend on future inputs).
They repeat this M times.
3. Either Alice or Bob (or both) publicly announces their
measurement choices, and the relevant party checks
that they had a sufficient number of suitable input combinations for the protocol. If not, they abort.
4. (Sifting.) Some output pairs may be discarded according to some public protocol.
5. (Parameter estimation.) Alice randomly and independently decides whether to announce each remaining bit
to Bob, doing so with probability µ (where Mµ ≫ 1).
Bob uses the communicated bits and his corresponding
outputs to compute some test function, and aborts if it
lies outside a desired range. (For example, Bob might
compute the CHSH value [21] of the announced data,
and abort if it is below 2.5.)
6. (Error correction.) Alice and Bob perform error correction using public discussion, in order to (with high
probability) generate identical strings. Eve learns the
error correction function Alice applies to her string.
7. (Privacy amplification.) Alice and Bob publicly perform privacy amplification [22], producing a shorter
shared string about which Eve has virtually no information. Eve similarly learns the privacy amplification
function they apply to their error-corrected strings.
TABLE I: Generic structure of the protocols we consider. Although this structure is potentially restrictive, most
protocols to date are of this form (we discuss modifications
later). Note that we do not need to specify the precise subprotocols used for error correction or privacy amplification.
For an additional remark, see Appendix A.
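As a concrete illustration of the Step 5 test function, here is a minimal sketch (ours, not from the paper) of a CHSH-based parameter estimation check; the 2.5 threshold follows the example in the table, and binary inputs and outputs with every input pair represented in the announced data are assumed.

```python
def chsh_value(records):
    """Estimate the CHSH value S = E(0,0) + E(0,1) + E(1,0) - E(1,1)
    from announced (a, b, x, y) tuples of binary inputs a, b and
    binary outputs x, y; bits are mapped to +/-1 before correlating."""
    sums = {(a, b): [0, 0] for a in (0, 1) for b in (0, 1)}  # [sum, count]
    for a, b, x, y in records:
        sums[(a, b)][0] += (1 - 2 * x) * (1 - 2 * y)
        sums[(a, b)][1] += 1
    E = {ab: s / n for ab, (s, n) in sums.items()}  # assumes every count n > 0
    return E[(0, 0)] + E[(0, 1)] + E[(1, 0)] - E[(1, 1)]

def parameter_estimation_ok(announced, threshold=2.5):
    """Bob's check in Step 5: pass only if the CHSH value of the
    announced data reaches the agreed threshold; otherwise abort."""
    return chsh_value(announced) >= threshold
```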
It is an open question whether a secure key can
be efficiently generated by a protocol of this type in this
scenario. Here we demonstrate that, even if a key can be
securely generated, repeat implementations of the protocol using the same devices can render an earlier generated
key insecure.
Attacks on two-device protocols.—Consider a QKD protocol with the standard structure shown in Table I. We
imagine a scenario in which a protocol of this type is
run on day 1, generating a secure key for Alice and Bob,
while informing Eve of the functions used by Alice for error correction and privacy amplification (for simplicity we
assume the protocol has no sifting procedure (Step 4)).
The protocol is then rerun on day 2, to generate a second
key, using the same devices. Eve can instruct the devices
to proceed as follows. On day 1, they follow the protocol
honestly. However, they keep hidden records of all the
raw bits they generate during the protocol. At the end
of day 1, Eve knows the error correction and privacy amplification functions used by Alice and Bob to generate
the secure key.
On day 2, since Eve has access to the insecure quantum channel over which the new quantum states are distributed, she can surreptitiously modulate these quantum states to carry new classical instructions to the device in Alice’s lab, for example using additional degrees of
freedom in the states. These instructions tell the device
the error correction and privacy amplification functions
used on day 1, allowing it to compute the secret key generated on day 1. They also tell the device to deviate
from the honest protocol for randomly selected inputs,
by producing as outputs specified bits from this secret
key. (For example, “for input 17, give day 1’s key bit 5
as output”.) If any of these selected outputs are among
those announced in Step 5, Eve learns the corresponding
bits of day 1’s secret key. We call this type of attack, in
which Eve attempts to gain information from the classical
messages sent in Step 5, a parameter estimation attack.
If she follows this cheating strategy for Nµ⁻¹ < M input bits, Eve is likely to learn roughly N bits of day 1's secret key. Moreover, only the roughly N output pairs from this set that are publicly compared give Alice and Bob statistical information about Eve's cheating. Alice and Bob cannot a priori identify these cheating output pairs among the ≈ µM they compare. Thus, if the tolerable noise level is comparable to Nµ⁻¹M⁻¹, Eve can (with
high probability) mask her cheating as noise. (Note that
in unconditional security proofs it is generally assumed
that eavesdropping is the cause of all noise. Even if in
practice Eve cannot reduce the noise to zero, she can
supply less noisy components than she claims and use
the extra tolerable noise to cheat).
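To make the counting explicit: Eve cheats on $K = N\mu^{-1}$ of the $M$ inputs, and each output is independently announced with probability $\mu$, so
$$\mathbb{E}[\text{leaked bits}] = K\mu = N, \qquad \text{error fraction seen by Alice and Bob} \approx \frac{N}{\mu M} = N\mu^{-1}M^{-1},$$
which is why a tolerable noise level of order $N\mu^{-1}M^{-1}$ hides the attack with high probability.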
In addition, Alice and Bob’s devices each separately
have the power to cause the protocol to abort on any
day of their choice. Thus – if she is willing to wait long
enough – Eve can program them to communicate some
or all information about their day 1 key, for instance
by encoding the relevant bits as a binary integer N =
b₁ . . . bₘ and choosing to abort on day (N + 2).³ We call
this type of attack an abort attack. Note that it cannot
be detected until it is too late.
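To make the encoding arithmetic concrete, here is a minimal sketch (our illustration; the function names are hypothetical) of both sides of an abort attack:

```python
def abort_day(key_bits):
    """Device side: interpret the leaked bits b1...bm as a binary
    integer N and abort on day N + 2 (day 1 being the honest run)."""
    N = int("".join(str(b) for b in key_bits), 2)
    return N + 2

def decode_leak(day, m):
    """Eve's side: recover the m leaked bits from the observed abort day."""
    return [int(c) for c in format(day - 2, f"0{m}b")]

assert decode_leak(abort_day([1, 0, 1]), 3) == [1, 0, 1]
```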
As mentioned above, some well known protocols use
many independent and isolated measurement devices.
These protocols are also vulnerable to memory attacks,
as explained in Appendix D.
3 In practice, Eve might infer a day (N +2) abort from the fact that
Alice and Bob have no secret key available on day (N +2), which
in many scenarios might detectably affect their behaviour then
or subsequently. Note too that she might alternatively program
the devices to abort on every day from (N + 2) onwards if this
made N more easily inferable in practice.
Modified protocols.—We now discuss ways in which these
attacks can be partly defended against.
Countermeasure 1.—All quantum data and all public
communication of output data in the protocol come from
one party, say Bob. Thus, the entangled states used in
the protocol are generated by a separate isolated device
held by Bob (as in the protocol in Table 1) and Bob
(rather than Alice) sends selected output data over a
public channel in Step 5. If Bob’s device is forever kept
isolated from incoming communication, Eve has no way
of sending it instructions to calculate and leak secret key
bits from day 1 (or any later day).
Existing protocols modified in this way are still insecure if reused, however. For example, in a modified parameter estimation attack, Eve can pre-program Bob’s
device to leak raw key data from day 1 via output data
on subsequent days, at a low enough rate (compared to
the background noise level) that this cheating is unlikely
to be detected. If the actual noise level is lower than the
level tolerated in the protocol, and Eve knows both (a
possibility Alice and Bob must allow for), she can thereby
eventually obtain all Bob’s raw key data from day 1, and
hence the secret key.
In addition, Eve can still communicate with Alice’s
device, and Alice needs to be able to make some public
communication to Bob, if only to abort the protocol. Eve
can thus obtain secret key bits from day 1 on a later day
using an abort attack.
Countermeasure 2 [23].—Encrypt the parameter estimation information sent in Step 5 with some initial pre-shared seed randomness. Provided the seed required
is small compared to the size of final string generated
(which is the case in efficient QKD protocols [11, 12]),
the protocol then performs key expansion.⁴ Furthermore,
even if they have insufficient initial shared key to encrypt the parameter estimation information, Alice and
Bob could communicate the parameter estimation information unencrypted on day 1, but encrypt it on subsequent days using generated key.
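One natural instantiation of this countermeasure (our sketch; the text only specifies "encrypt") is a one-time pad drawn from the pre-shared seed:

```python
def encrypt_pe(bits, pad):
    """One-time-pad the parameter-estimation bits announced in Step 5.
    `pad` is a slice of the pre-shared seed, used once and then discarded;
    the receiver decrypts by applying the same XOR."""
    assert len(pad) >= len(bits)
    return [b ^ k for b, k in zip(bits, pad)]
```

Because the announced bits are uniformly masked, a device cannot leak day 1 data through those announcements, whatever outputs it chooses.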
Note that this countermeasure is not effective against
abort attacks, which can now be used to convey all or
part of their day 1 raw key. This type of attack seems
unavoidable in any standard cryptographic model requiring composability and allowing arbitrarily many device
reuses if either Alice or Bob has only a single measurement device.
This countermeasure is also not effective in general cryptographic environments involving communication with multiple users who may not all be trustworthy. Suppose that Alice wants to share key with Bob on
day 1, but with Charlie on day 2. If Charlie becomes
corrupted by Eve, then, for example by hiding data in
4 QKD is often referred to as quantum key expansion in any case,
taking into account that a common method of authenticating the
classical channel uses pre-shared randomness.
the parameter estimation, Eve can learn about day 1’s
key (we call this an impostor attack). This attack applies in many scenarios in which users might wish to use
device-independent QKD. For example, suppose Alice is
a merchant and Bob is a customer who needs to communicate his credit card number to Alice via QKD to
complete the sale. The next day, Eve can pose as a customer, carry out her own QKD exchange with Alice, and
extract information about Bob’s card number without
being detected.
Countermeasure 3.—Alternative protocols using additional measurement devices. Suppose Alice and Bob
each have m measurement devices, for some small integer
m ≥ 2. They perform Steps 1–6 of a protocol that takes
the form given in Table I but with Countermeasures 1
and 2 applied. They repeat these steps for each of their
devices in turn, ensuring no communication between any
of them (i.e., they place each in its own sub-laboratory).
This yields m error-corrected strings. Alice and Bob concatenate their strings before performing privacy amplification as in Step 7. However, they further shorten the
final string such that it would (with near certainty) remain secure if one of the m error-corrected strings were
to become known to Eve through an abort attack. (See
Table 2, and Appendix C for more details).
This countermeasure doesn’t avoid impostor attacks.
Instead, the idea is to prevent useful abort attacks (as
well as parameter estimation attacks due to Countermeasure 2), and hence give us a secure and composable
protocol, provided the keys produced on successive days
are always between the same two users. The information
each device has about day 1’s key is limited to the raw
key it produced. Thus, if each device is programmed to
abort on a particular day that encodes their day 1 raw
key, after an abort, Eve knows one of the devices’ raw
keys and has some information on the others (since she
can exclude certain possibilities based on the lack of abort
by those devices so far). After an abort, Alice and Bob
should cease to use any of their devices unless and until
such time that they no longer require that their keys remain secret. Intuitively, provided the set of m keys was
sufficiently shortened in the privacy amplification step,
Eve has essentially no information about the day 1 secret key, which thus (we conjecture) remains secure.
Countermeasure 4.—Alice and Bob share a small initial
secret key and use part of it to choose the privacy amplification function in Step 7 of the protocol, which may
then never become known to Eve.
Even in this case, Eve can pre-program Bob’s measurement device to leak raw data from day 1 on subsequent
days, either via a parameter estimation attack or via an
abort attack. While Eve cannot obtain bits of the secret key so directly in this case, provided the protocol
is composed sufficiently many times, she can eventually
obtain all the raw key. This means that Alice and Bob’s
residual security ultimately derives only from the initial
shared secret key: their QKD protocol produces no extra
permanently secure data.
In summary, we have shown how a malicious manufacturer who wishes to mislead users or obtain data
from them can equip devices with a memory and use
it in programming them. The full scope of this threat
seems to have been overlooked in the literature on device-independent quantum cryptography to date. A task is
potentially vulnerable to our attacks if it involves secret
data generated by devices and if Eve can learn some function of the device outputs in a subsequent protocol. Since
even causing a protocol to abort communicates some information to Eve, the class of tasks potentially affected is
large indeed. In particular, for one of the most important
applications, QKD, none of the protocols so far proposed
remain composably secure in the case that the devices
are supplied by a malicious adversary.
One can think of the problems our attacks raise as
a new issue of cryptographic composability. One way
of thinking of standard composability is that a secure
output from a protocol must still have all the properties of an ideal secure output when combined with other
outputs from the same or other protocols. The device-independent key distribution protocols we have examined
fail this test because the reuse of devices can cause later
outputs to depend on earlier ones. In a sense, the underlying problem is that the usage of devices is not composably secure. This applies too, of course, for devices
used in different protocols: devices used for secure randomness expansion cannot then securely be used for key
distribution without potentially compromising the generated randomness, for example.
It is worth reiterating that our attacks do not apply
against protocols where the devices are trusted to be
memoryless. Indeed, there are schemes that are composably secure for memoryless devices [11, 12]. We also
stress that our attacks do not apply to all protocols for
device-independent quantum tasks related to cryptography. For example, even devices with memories cannot
mimic nonlocal correlations in the absence of shared entanglement [24, 25]. In addition, in applications that
require only short-lived secrets, devices may be reused
once such secrets are no longer required. Partially secure device-independent protocols for bit commitment
and coin tossing [19], in which the committer supplies
devices to the recipient, are also immune from our attacks, so long as the only data entering the devices come
from the committer.
Note too that, in practice, the number of uses required
to apply the attacks may be very large, for example, in
the case of some of the abort attacks we described. One
can imagine a scenario in which Alice and Bob want to
carry out device-independent QKD no more than n times
for some fixed number n, each is confident in the other’s
trustworthiness throughout, the devices are used for no
other purpose and are destroyed after n rounds, and key
generation is suspended and the devices destroyed if a
single abort occurs. If the only relevant information conveyed to Eve is that an abort occurs on one of the n days,
she can only learn at most log n bits of information about
the raw key via an abort attack. Hence one idea is that,
using suitable additional privacy amplification, Alice and
Bob could produce a device-independent protocol using
two measurement devices that is provably secure when
restricted to no more than n bilateral uses. It would be
interesting to analyse this possibility, which, along with
the protocol presented in Table 2, leads us to hold out
the hope of useful security for fully device-independent
QKD, albeit in restricted scenarios.
We have also discussed some possible defences and
countermeasures against our attacks. A theoretically
simple one is to dispose of – i.e. securely destroy or isolate
– untrusted devices after a single use (see Appendix B).
While this would restore universal composability, it is
clearly costly and would severely limit the practicality
of device-independent quantum cryptography. Another
interesting possibility is to design protocols for composable device-independent QKD guaranteed secure in more
restricted scenarios. However, the impostor attacks described above appear to exclude the possibility of composably secure device-independent QKD when the devices are used to exchange key with several parties.
Many interesting questions remain open. Nonetheless,
the attacks we have described merit a serious reappraisal
of current protocol designs and, in our view, of the practical scope of universally composable quantum cryptography using completely untrusted devices.
Added Remark: Since the first version of this paper,
there has been new work in this area that, in part, explores Countermeasure 2 in more detail [26]. In addition,
two new works on device-independent QKD with only
two devices have appeared [27, 28]. Note that these do
not evade the attacks we present, but apply to the scenario where used devices are discarded.
Acknowledgements.—We thank Anthony Leverrier and
Gonzalo de la Torre for [23], and Lluís Masanes, Serge Massar and Stefano Pironio for helpful comments. JB was
supported by the EPSRC, and the CHIST-ERA DIQIP
project. RC acknowledges support from the Swiss National Science Foundation (grants PP00P2-128455 and
20CH21-138799) and the National Centre of Competence
in Research ‘Quantum Science and Technology’. AK was
partially supported by a Leverhulme Research Fellowship, a grant from the John Templeton Foundation, and
the EU Quantum Computer Science project (contract
255961). This research is supported in part by Perimeter
Institute for Theoretical Physics. Research at Perimeter Institute is supported by the Government of Canada
through Industry Canada and by the Province of Ontario
through the Ministry of Research and Innovation.
[1] Bennett, C. H. & Brassard, G. Quantum cryptography:
Public key distribution and coin tossing. In Proceedings
of IEEE International Conference on Computers, Systems, and Signal Processing, 175–179. IEEE (New York,
1984).
[2] Ekert, A. K. Quantum cryptography based on Bell’s theorem. Physical Review Letters 67, 661–663 (1991).
[3] Renner, R. Security of Quantum Key Distribution. Ph.D.
thesis, Swiss Federal Institute of Technology, Zurich
(2005). Also available as quant-ph/0512258.
[4] Gerhardt, I. et al. Full-field implementation of a perfect
eavesdropper on a quantum cryptography system. Nature
Communications 2, 349 (2011).
[5] Mayers, D. & Yao, A. Quantum cryptography with imperfect apparatus. In Proceedings of the 39th Annual
Symposium on Foundations of Computer Science (FOCS98), 503–509 (IEEE Computer Society, Los Alamitos,
CA, USA, 1998).
[6] Barrett, J., Hardy, L. & Kent, A. No signalling and
quantum key distribution. Physical Review Letters 95,
010503 (2005).
[7] Acin, A., Gisin, N. & Masanes, L. From Bell’s theorem to
secure quantum key distribution. Physical Review Letters
97, 120405 (2006).
[8] Scarani, V. et al. Secrecy extraction from no-signaling
correlations. Physical Review A 74, 042339 (2006).
[9] Acin, A. et al. Device-independent security of quantum
cryptography against collective attacks. Physical Review
Letters 98, 230501 (2007).
[10] Masanes, L., Renner, R., Christandl, M., Winter, A. &
Barrett, J. Unconditional security of key distribution
from causality constraints. e-print quant-ph/0606049v4
(2009).
[11] Hänggi, E. & Renner, R. Device-independent quantum key distribution with commuting measurements. e-print arXiv:1009.1833 (2010).
[12] Masanes, L., Pironio, S. & Acín, A. Secure device-independent quantum key distribution with causally independent measurement devices. Nature Communications 2, 238 (2011).
[13] Colbeck, R. Quantum and Relativistic Protocols For Secure Multi-Party Computation. Ph.D. thesis, University of Cambridge (2007). Also available as arXiv:0911.3814.
[14] Pironio, S. et al. Random numbers certified by Bell’s
theorem. Nature 464, 1021–1024 (2010).
[15] Colbeck, R. & Kent, A. Private randomness expansion
with untrusted devices. Journal of Physics A 44, 095305
(2011).
[16] Barrett, J., Kent, A. & Pironio, S. Maximally non-local
and monogamous quantum correlations. Physical Review
Letters 97, 170409 (2006).
[17] Barrett, J., Colbeck, R. & Kent, A. Unconditionally secure device-independent quantum key distribution with only two devices. e-print arXiv:1209.0435 (2012).
[18] Ekert, A. Less reality, more security. Physics World
(September 2009).
[19] Silman, J. et al. Fully distrustful quantum bit commitment and coin flipping. Physical Review Letters 106,
220501 (2011).
[20] Hänggi, E., Renner, R. & Wolf, S. The impossibility of non-signalling privacy amplification. e-print arXiv:0906.4760 (2009).
[21] Clauser, J. F., Horne, M. A., Shimony, A. & Holt, R. A.
Proposed experiment to test local hidden-variable theories. Physical Review Letters 23, 880–884 (1969).
[22] Bennett, C. H., Brassard, G. & Robert, J.-M. Privacy
amplification by public discussion. SIAM Journal on
Computing 17, 210–229 (1988).
[23] de la Torre, G. & Leverrier, A. (2012). Personal communication.
[24] Barrett, J., Collins, D., Hardy, L., Kent, A. & Popescu, S.
Quantum nonlocality, Bell inequalities, and the memory
loophole. Physical Review A 66, 042111 (2002).
[25] Gill, R. D. Accardi contra Bell (cum mundi): The impossible coupling. In Moore, M., Froda, S. & Léger, C. (eds.)
Mathematical Statistics and Applications: Festschrift for
Constance van Eeden, vol. 42 of IMS Lecture Notes –
Monograph Series, 133–154 (2003).
[26] McKague, M. & Sheridan, L. Reusing devices with memory in device independent quantum key distribution. e-print arXiv:1209.4696 (2012).
[27] Reichardt, B. W., Unger, F. & Vazirani, U. Classical command of quantum systems via rigidity of CHSH games. e-print arXiv:1209.0449 (2012).
[28] Vazirani, U. & Vidick, T. Fully device independent quantum key distribution. e-print arXiv:1210.1810 (2012).
[29] Carter, J. L. & Wegman, M. N. Universal classes of hash
functions. Journal of Computer and System Sciences 18,
143–154 (1979).
[30] Wegman, M. N. & Carter, J. L. New hash functions and
their use in authentication and set equality. Journal of
Computer and System Sciences 22, 265–279 (1981).
[31] Tomamichel, M., Renner, R., Schaffner, C. & Smith, A.
Leftover hashing against quantum side information. In
Proceedings of the 2010 IEEE Symposium on Information
Theory (ISIT10), 2703–2707 (2010).
[32] Trevisan, L. Extractors and pseudorandom generators.
Journal of the ACM 48, 860–879 (2001).
[33] De, A., Portmann, C., Vidick, T. & Renner, R. Trevisan's extractor in the presence of quantum side information. e-print arXiv:0912.5514 (2009).
[34] Tomamichel, M., Colbeck, R. & Renner, R. Duality between smooth min- and max-entropies. IEEE Transactions on Information Theory 56, 4674–4681 (2010).
[35] Fehr, S., Gelles, R. & Schaffner, C. Security and composability of randomness expansion from Bell inequalities.
e-print arXiv:1111.6052 (2011).
[36] Vazirani, U. & Vidick, T. Certifiable quantum dice
or, testable exponential randomness expansion. e-print
arXiv:1111.6054 (2011).
[37] Pironio, S. & Massar, S. Device-independent randomness
expansion secure against quantum adversaries. e-print
arXiv:1111.6056 (2011).
Appendix A: Separation of sources and
measurement devices
We add here one important comment about the general structure of the generic protocol given in Table 1 of
the main text. There it was crucial that in Step 1, in the
case where Bob (rather than Eve) supplies the states, he
does so using a device that is isolated from his measurement device. If, on the other hand, Bob had only a single
device that both supplies states and performs measurements, then his device can hide information about day 1’s
raw key in the states he sends on day 2. (This can be
done using states of the form specified in the protocol,
masking the errors as noise as above. Alternatively, the
data could be encoded in the timings of the signals or in
quantum degrees of freedom not used in the protocol.)
Appendix B: Toxic device disposal
As noted in the main text, standard cryptographic
models postulate that the parties can create secure
laboratories, within which all operations are shielded
from eavesdropping. Device-independent quantum cryptographic models also necessarily assume that devices
within these laboratories cannot signal to the outside
– otherwise security is clearly impossible. Multi-device
protocols assume that the laboratories can be divided
into effectively isolated sub-laboratories, and that devices in separate sub-laboratories cannot communicate.
In other words, Alice and Bob must be able to build arbitrary configurations of screening walls, which prevent
communication among Eve and any of her devices, and
allow only communications specified by Alice and Bob.
Given this, there is no problem in principle in defining
protocols which prescribe that devices must be permanently isolated: the devices simply need to be left indefinitely in a screened sub-laboratory. While this could be
detached from the main working laboratory, it must be
protected indefinitely: screening wall material and secure
space thus become consumed resources. And indeed in
some situations, it may be more efficient to isolate devices, rather than securely destroy them, since devices
can be reused once the secrets they know have become
public by other means. For example, one may wish to
securely communicate the result of an election before announcing it, but once it is public, the devices used for
this secure communication could be safely reused.
The alternative, securely destroying devices and then
eliminating them from the laboratory, preserves laboratory space but raises new security issues: consider, for example, the problems in disposing of a device programmed
to change its chemical composition depending on its output bit.
That said, no doubt there are pretty secure ways of
destroying devices, and no doubt devices could be securely isolated for long periods. However, the costs and
problems involved, together with the costs of renewing
devices, make us query whether these are really viable
paths for practical device-independent quantum cryptography.
Appendix C: Privacy Amplification
Here we briefly outline the important features of privacy amplification, which is a key step in the protocol. As
explained in the main text, the idea is to compress the
string such that (with high probability) an eavesdropper’s knowledge is reduced to nearly zero. This usually
works as follows. Suppose Alice and Bob share some random string, X, which may be correlated with a quantum
system, E, held by the eavesdropper. Alice also holds
some private randomness, R. The state held by Alice
and Eve then takes the form
$$\rho_{XRE} = \sum_{x,r} P_X(x)\,P_R(r)\,|x\rangle\langle x|_X \otimes |r\rangle\langle r|_R \otimes \rho_E^x,$$
where $\{\rho_E^x\}_x$ are normalized density operators, and $P_R(r) = 1/|R|$. The randomness $R$ is used to choose a function $f_R \in \mathcal{F}$, where $\mathcal{F}$ is some suitably chosen set, to apply to $X$ such that, even if she learns $R$, the eavesdropper's knowledge about the final string is close to zero. If we call the final string $S = f_R(X)$, then Eve has no knowledge about it if the final state takes the form $\tau_S \otimes \rho_{RE}$, where $\tau_S$ is maximally mixed on $S$. However, we cannot usually attain such a state, and instead measure the success of a protocol by its variation from this ideal, measured using the trace distance, $D$. Denoting the final state (after applying the function) by $\rho_{SRE}$, we are interested in $D(\rho_{SRE}, \tau_S \otimes \rho_{RE})$.
Fortunately, several sets of functions are known for which the above distance can be made arbitrarily small. Two common constructions are those based on two-universal hash functions [3, 29–31] and Trevisan's extractor [32, 33]. The precise details of these are not very important for the present work (we refer the interested reader
to the references), nor is it important which we choose.
However, it is worth noting that for two-universal hash
functions, the size of the seed needs to be roughly equal
to that of the final string, while for Trevisan’s extractor, this can be reduced to roughly the logarithm of the
length of the initial string (in the latter case, this may
allow it to be sent privately, if desired).
For both, the amount that the string should be compressed is quantified by the smooth conditional min-entropy, which we now define. For a state $\rho_{AB}$, the non-smooth conditional min-entropy is defined as
$$H_{\min}(A|B)_\rho := \max_{\sigma_B} \sup\{\lambda \in \mathbb{R} : 2^{-\lambda}\, \mathbb{1}_A \otimes \sigma_B \geq \rho_{AB}\},$$
in terms of which the smooth min-entropy is given by
$$H_{\min}^{\varepsilon}(A|B)_\rho := \max_{\bar\rho_{AB}} H_{\min}(A|B)_{\bar\rho}.$$
The maximization over $\bar\rho_{AB}$ is over a set of states that are close to $\rho_{AB}$ according to some distance measure (see,
for example, [34] for a discussion).
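As a simple sanity check (a standard fact, not specific to this paper): if $X$ is a uniformly random $n$-bit string uncorrelated with Eve, so that $\rho_{XE} = 2^{-n} \sum_x |x\rangle\langle x|_X \otimes \rho_E$, then $H_{\min}(X|E)_\rho = n$, and privacy amplification can extract essentially all $n$ bits.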
The significance for privacy amplification can be seen
as follows. In [3], it is shown that if f is chosen randomly
from a set of two-universal hash functions, and applied
1. Entangled quantum states used in the protocol are generated by a device Bob holds (which is separate and
kept isolated from his measurement devices) and then
shared over an insecure quantum channel with Alice’s
first device. Bob feeds his half of each state to his first
measurement device. Once the states are received, the
quantum channel is closed.
2. Alice and Bob each pick a random input Ai and Bi
to their first device, ensuring they receive an output
bit (Xi and Yi respectively) before making the next
input (so that the i-th output cannot depend on future
inputs). They repeat this M times.
3. Bob publicly announces his measurement choices, and Alice checks that there were a sufficient number of suitable input combinations for the protocol. If not, Alice aborts.
4. (Sifting.) Some output pairs may be discarded according to some protocol.
5. (Parameter estimation.) Alice and Bob use their preshared key to randomly select some output pairs (they
select only a small fraction, hence the amount of key
required for this is small). For each of the selected
pairs, Bob encrypts his output and sends it to Alice.
Alice uses the communicated bits and her corresponding outputs to compute some test function, and aborts
if it lies outside a desired range.
6. (Error correction.) Alice and Bob perform error correction using public discussion, in order to (with high
probability) generate identical strings. Eve learns the
error correction function Alice applies to her string.
7. Alice and Bob repeat Steps 1–6 for each of their
m devices (ensuring the devices cannot communicate
throughout).
8. (Privacy amplification.) Alice and Bob concatenate
their m strings and publicly perform privacy amplification [22], producing a shorter shared string about
which Eve has virtually no information. In this step,
the size of their final string is chosen such that (with
high probability) it will remain secure even if one of
the raw strings or its error corrected version becomes
known.
TABLE 2: Structure of the protocol from the main
text with modifications as in Countermeasure 3. For
this protocol Alice and Bob each have m ≥ 2 measurement
devices, and Bob has one device for creating states. They are
all kept isolated from one another.
to the raw string X, as above, then for $|S| = 2^t$ and any $\varepsilon \geq 0$,
$$D(\rho_{SRE}, \tau_S \otimes \rho_{RE}) \leq \varepsilon + \tfrac{1}{2}\, 2^{-\frac{1}{2}\left(H_{\min}^{\varepsilon}(X|E) - t\right)}.$$
(An analogous statement can be made for Trevisan's extractor [33].) Thus, if Alice compresses her string to length $t = H_{\min}^{\varepsilon}(X|E) - \ell$, then the final state after applying the hash function has distance $\varepsilon + \tfrac{1}{2}\, 2^{-\ell/2}$ to a state about which Eve has no knowledge.
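For concreteness, here is a minimal sketch (ours, not from the paper) of privacy amplification with a random binary Toeplitz matrix, a standard two-universal family; the seed has length n + t − 1, of the same order as the strings involved.

```python
import secrets

def toeplitz_hash(x_bits, seed_bits, t):
    """Compress an n-bit string to t bits with the binary Toeplitz
    matrix T[i][j] = seed_bits[i - j + n - 1] (a two-universal family)."""
    n = len(x_bits)
    assert len(seed_bits) == n + t - 1
    return [
        sum(seed_bits[i - j + n - 1] & x_bits[j] for j in range(n)) % 2
        for i in range(t)  # row i of T, inner product with x mod 2
    ]

# Alice picks the seed with local randomness and announces it (Step 7);
# Bob applies the same function to his error-corrected string.
n, t = 16, 4
x = [secrets.randbelow(2) for _ in range(n)]
seed = [secrets.randbelow(2) for _ in range(n + t - 1)]
key = toeplitz_hash(x, seed, t)
```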
Turning to the QKD protocol in Table 1 of the main
text, in the case of hashing the privacy amplification procedure consists of Alice selecting t depending on the test
function computed in the parameter estimation step. She
then uses local randomness to choose a hash function to
apply to her string, and announces this to Bob, who applies the same function to his string (since we have already performed error correction, this string should be
identical to Alice’s). The idea is that, if t is chosen appropriately, it is virtually impossible that the parameter
estimation tests pass and the final state at the end of
the protocol is not close to one for which Eve has no
knowledge about the final string.
In the modified protocol in Table 2, we expect each pair of devices to contribute roughly the same amount of smooth min-entropy to the concatenated string. Thus, since there are m devices, in order to tolerate the potential revelation of one of the error-corrected strings through an abort attack, Alice should choose t to be roughly a factor (m − 1)/m of the length she would otherwise have chosen, as the estimate below indicates.
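Concretely, under the equal-contribution heuristic above (an assumption made for illustration), discarding one device's share gives
$$t \approx \frac{m-1}{m}\, H_{\min}^{\varepsilon}(X|E) - \ell$$
in place of the single-string choice $t = H_{\min}^{\varepsilon}(X|E) - \ell$ used earlier in this appendix.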
Appendix D: Memory attacks on multi-device QKD
protocols
To illustrate further the generality of our attacks, we
now turn to multi-device protocols, and show how to
break iterated versions of two well known protocols.
Attacks on compositions of the BHK protocol
The Barrett-Hardy-Kent (BHK) protocol [6] requires
Alice and Bob to share MN² pairs of systems (where M and N are both large with M ≪ N), in such a
way that no measurements on any subset can effectively
signal to the others. In a device-independent scenario,
we can think of these as black box devices supplied by
Eve, containing states also supplied by Eve. Each device is isolated within its own sub-laboratory of Alice’s
and Bob’s, so that Alice and Bob have MN [2] secure sublaboratories each. The devices accept integer inputs in
the range {0, . . ., N − 1} and produce integer outputs in
the range {0, 1}. Alice and Bob choose random independent inputs, which they make public after obtaining all
the outputs. They also publicly compare all their outputs
except for those corresponding to one pair randomly chosen from among those in which the inputs differ by ±1
or 0 modulo N . If the publicly declared outputs agree
with quantum statistics for specified measurement basis
choices (corresponding to the inputs) on a singlet state,
then they accept the protocol as secure, and take the
final undeclared outputs (which are almost certainly anticorrelated) to define their shared secret bit.
The BHK protocol produces (with high probability)
precisely one secret bit: evidently, it is extremely inefficient in terms of the number of devices required. It
also requires essentially noise-free channels and error-free measurements. Despite these impracticalities, it illustrates
our theoretical point well. Suppose that Alice
and Bob successfully complete a run of the BHK protocol
and then (unauthorised by BHK) decide to use the same
2MN² devices to generate a second secret bit, and ask
Eve to supply a second batch of states to allow them to
do this.
Eve — aware in advance that the devices may be
reused — can design them to function as follows. In
the first run of the protocol, she supplies a singlet pair
to each pair of devices and the devices function honestly,
carrying out the appropriate quantum measurements on
their singlets and reporting the outcomes as their outputs. However, they also store in memory their inputs
and outputs. In the second run, Eve supplies a fresh
batch of singlet pairs. However, she also supplies a hidden classical signal identifying the particular pair of devices that generated the first secret bit. (This signal need
go to just one of this pair of devices, and no others.) On
the second run, the identified device produces as output
the same output that it produced on the first run (i.e. the
secret bit generated, up to a sign convention known to
Eve). All other devices function honestly on the second
run.
With probability (MN² − 1)/MN², the output from the cheating device on the second run will be made public, thus revealing the first secret bit to Eve. Moreover, with probability 1 − 2/(3N) + O(N⁻²), this cheating will not be detected by Alice and Bob's tests, so that Eve learns the first secret bit without her cheating even being noticed.
There are defences against this specific attack. First,
the BHK protocol [6] can be modified so that only outputs corresponding to inputs differing by ±1 or 0 are
publicly shared.⁵ While this causes Eve to wait many
rounds for the secret bit to be leaked, and increases the
risk her cheating will be detected, it leaves the iterated
protocol insecure. Second, Alice and Bob could securely
destroy or isolate the devices producing the secret key
bit outputs, and reuse all their other devices in a second
implementation. Since only the devices generating the
secret key bit have information about it, this prevents it
from being later leaked. While effective, this last defence
really reflects the inefficiency of the BHK protocol: to illustrate this, we turn next to a more efficient multi-device
protocol.
Attacks on compositions of the HR protocol
Hänggi and Renner (HR) [11] consider a multi-device
QKD protocol related to the Ekert [2] protocol, in which
Alice and Bob randomly and independently choose one of
5 As originally presented, the BHK protocol requires public exchange of all outputs except those defining the secret key bit.
This is unnecessary, and makes iterated implementations much
more vulnerable to the attacks discussed here.
two or three inputs respectively for each of their devices.
If the devices are functioning honestly, these correspond
to measurements of a shared singlet in the bases U0, U1
(Alice) and V0, V1, V2 (Bob), defined by the following vectors and their orthogonal complements
U1 ↔ |0⟩,
V0 ↔ cos(π/8)|0⟩ + sin(π/8)|1⟩,
U0, V2 ↔ cos(π/4)|0⟩ + sin(π/4)|1⟩,
V1 ↔ cos(3π/8)|0⟩ + sin(3π/8)|1⟩.
The raw key on any given run is defined by the ≈ 1/6
of the cases in which U0 and V2 are chosen. Information
reconciliation and privacy amplification proceed according to protocols of the type described in the main text
(in which the functions used are released publicly).
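(For reference: with Alice choosing uniformly between her two inputs and Bob among his three, independent uniform input choices being assumed, the raw-key fraction is $P(U_0, V_2) = \tfrac{1}{2} \cdot \tfrac{1}{3} = \tfrac{1}{6}$, matching the ≈ 1/6 quoted above.)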
Evidently, our attacks apply here too if (unauthorised
by HR) the devices are reused to generate further secret
keys. Eve can identify the devices that generate the raw
key on day 1, and request them to release their key as
cheating outputs on later days, gradually enough that the
cheating will be lost in the noise. Since the information
reconciliation and privacy amplification functions were
made public by Alice, she can then obtain the secret key.
Even if she is unable to communicate directly with the
devices for a long time (because they were pre-installed
with a very large reservoir of singlets), she can program
all devices to gradually release their day 1 outputs over
subsequent days, and so can still deduce the raw and
secret keys.
Alice and Bob could counter these attacks by securely
destroying or isolating all the devices that generated raw
key on day 1 — but this costs them 1/6 of their devices,
and they have to apply this strategy each time they generate a key, leaving (5/6)^N of the devices after N runs,
and leaving them able to generate shorter and shorter
keys. As the length of secure key generated scales by
(5/6)^N (or worse, allowing for fluctuations due to noise)
on each run, the total secret key generated is bounded
by ≈ 6M, where M is the secret key length generated on
day 1.
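The ≈ 6M bound is the geometric series implicit above: if run $k$ (counting from 0) yields at most $(5/6)^k M$ bits of key, the total is
$$\sum_{k=0}^{\infty} \left(\frac{5}{6}\right)^{k} M = \frac{M}{1 - 5/6} = 6M.$$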
Note that, as in the case of the iterated BHK protocol, all devices that generate secret key become toxic and
cannot be reused. While the relative efficiency of the HR
protocol ensures a (much) faster secret key rate, it also
requires an equally fast device depletion rate. This example shows that our attacks pose a generic problem for
device-independent QKD protocols of the types considered to date.
Appendix E: Device-independent randomness
expansion protocols: attacks and defences
Device-independent quantum randomness expansion
(DVI QRE) protocols were introduced by two of us [13,
15], developed further by [14, 35–37], and there now exist schemes with unconditional security proofs [36]. The
cryptographic scenario here is slightly different from that
of key distribution in that there is only one honest party,
Alice.
Alice’s aim is to expand an initial secret random string
to a longer one that is guaranteed secret from an eavesdropper, Eve, even if the quantum devices and states
used are supplied by Eve. The essential idea is that seed
randomness can be used to carry out nonlocality tests on
the devices and states, within one or more secure laboratories, in a way that guarantees (with numerical bounds)
that the outcomes generate a partially secret and random string. Privacy amplification can then be used to
generate an essentially fully secret random string, which
(provided the tests are passed) is significantly longer than
the initial seed.
There are already known pitfalls in designing such protocols. For example, although one might think that carrying out a protocol in a single secure laboratory guarantees that the initially secure seed string remains secure,
and so guarantees randomness expansion if any new secret random data is generated, this is not the case [15].
Eve’s devices may be programmed to produce outputs depending on the random seed in such a way that the length
of the final secret random string depends on the initial
seed. Protocols with this vulnerability are not composably secure. (To see this can be a practical problem, note
that Eve may infer the length of the generated secret random string from its use.)
A corollary of our results is that, if one wants to reuse
the devices to generate further randomness, it is crucial
to carry out DVI QRE protocols with devices permanently held within a single secure laboratory, avoiding
any public communication of device output data at any
stage. It is crucial too that the devices themselves are securely isolated from classical communications and computations within the laboratory, to prevent them from
learning details of the reconciliation and privacy amplification.
Even under these stringent conditions, our attacks still
apply in principle. For example, consider a noise-tolerant
protocol that produces a secret random output string of
variable length, depending on the values of test functions
of the device outputs (the analogue of QKD parameter
estimation for QRE) that measure how far the device
outputs deviate from ideal honest outputs. This might
seem natural for any single run, since – if the devices are
never reused – the length of the provably secret random
string that can be generated does indeed depend on the
value of a suitable test function. However, iterating such
a protocol allows the devices to leak information about
(at least) their raw outputs on the first run by generating
artificial noise in later rounds, with the level of extra
noise chosen to depend suitably on the output values.
Such noise statistically affects the length of the output
random strings on later rounds.
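As a toy illustration (a sketch of our own, not a construction used by any specific protocol; the noise levels, round count and sample sizes below are arbitrary assumptions), a device can encode one raw bit per later round in an artificially inflated error rate, which Eve then recovers from the publicly inferable parameter estimates:

```python
import random

HONEST_ERROR = 0.02   # intrinsic device error rate (assumption)
LEAK_BUMP = 0.06      # extra artificial noise used to signal a '1' (assumption)
ROUNDS = 8            # later rounds used as a covert channel
SAMPLES = 20000       # outputs per round used for parameter estimation

def error_rate(leak_bit):
    """Simulate one round's observed error rate; it encodes leak_bit."""
    p = HONEST_ERROR + (LEAK_BUMP if leak_bit else 0.0)
    return sum(random.random() < p for _ in range(SAMPLES)) / SAMPLES

round1_raw_bits = [random.getrandbits(1) for _ in range(ROUNDS)]

# In a variable-length protocol, the measured noise (and hence the output
# key length) is effectively public, so Eve can threshold it per round.
observed = [error_rate(b) for b in round1_raw_bits]
decoded = [int(rate > HONEST_ERROR + LEAK_BUMP / 2) for rate in observed]

print(decoded == round1_raw_bits)  # True except with tiny probability
```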
In this way, suitably programmed devices could ultimately allow Eve to infer all the raw outputs from the first round, given observation of the key string lengths created in later rounds. This makes the round one QRE insecure: given the raw outputs for round one, and knowing the protocol, Eve learns everything about the round one output random string except what is determined by the secret random seed.
One defence against this would be to fix a length L for
the random string generated corresponding to a maximum acceptable noise level, and then to employ the Procrustean tactic of always reducing the string generated
to length L, regardless of the measured noise level.
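A minimal sketch of this fixed-length defence (ours; the fixed length L and the abort rule are illustrative assumptions):

```python
L = 256  # bits: output length fixed in advance for the maximum tolerable noise

def finalize(extracted_bits: str) -> str:
    """Always publish exactly L bits, so the output length leaks nothing."""
    if len(extracted_bits) < L:
        raise RuntimeError("abort: noise too high even for fixed length L")
    return extracted_bits[:L]
```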
Even then, though, unless some restriction is placed on
the number of uses, the abort attack on QKD protocols
described in the main text also applies here. The devices
have the power to cause the protocol to abort on any
round of their choice, and so – if she is willing to wait
long enough – Eve can program them to communicate
any or all information about their round 1 raw outputs
by choosing the round on which they cause an abort.
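For concreteness, in a toy model of our own (with an arbitrary choice of R), devices permitted to abort on any one of R rounds can signal roughly log2(R) bits through the abort round's index alone:

```python
from math import log2

R = 1024                 # rounds Eve is prepared to wait through (assumption)
secret = 0b1100101010    # 10 bits of round-1 raw data the devices hold

abort_round = secret     # devices abort exactly on round number `secret`
recovered = abort_round  # the abort round is public, so Eve reads it off

print(recovered == secret, f"~{log2(R):.0f} bits leaked per abort")
```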
We also described in the main text a moderately costly
but apparently effective defence against abort attacks
on QKD protocols, in which Alice and Bob each have
several isolated devices that independently generate raw
sub-keys, which are concatenated and privacy amplified so that exposing a single sub-key does not significantly compromise the final secret key. This defence appears equally effective against abort attacks on device-independent quantum randomness expansion protocols.
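A minimal sketch of this defence (our simplification: general two-universal hashing is replaced by a plain XOR of the sub-keys, which already suffices when at most one sub-key is exposed):

```python
import secrets

d, key_len = 4, 32  # number of isolated devices and sub-key length in bytes

# Each isolated device contributes an independent raw sub-key.
subkeys = [secrets.token_bytes(key_len) for _ in range(d)]

# "Privacy amplification" here is just the bitwise XOR of all sub-keys.
final_key = bytes(key_len)  # all-zero accumulator
for sk in subkeys:
    final_key = bytes(a ^ b for a, b in zip(final_key, sk))

# Even if one sub-key is fully exposed (e.g. via an abort attack), the XOR
# of the remaining d-1 secret sub-keys keeps final_key uniform to Eve.
```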
Since quantum randomness expansion generally involves
only a single party, these protocols are not vulnerable to
the impostor attacks described in the main text. It thus
appears that it may be possible in principle to completely
defend them against memory attacks, albeit at some cost.
It is also worth noting that there are many scenarios in which one only needs short-lived randomness. For example, in many gambling applications, bets are placed on random data that are later made public. In such scenarios, once the random data have been revealed, the devices could be reused without our attacks presenting any problem.
References

- Quantum cryptography: Public key distribution and coin tossing
- Reusing devices with memory in device independent quantum key distribution
- Unconditionally secure device-independent quantum key distribution with only two devices
- Classical command of quantum systems via rigidity of CHSH games
- Security and Composability of Randomness Expansion from Bell Inequalities
- Certifiable Quantum Dice - Or, testable exponential randomness expansion
- Device-independent randomness expansion secure against quantum adversaries
- Fully distrustful quantum bit commitment and coin flipping
- Private randomness expansion with untrusted devices
- Full-field implementation of a perfect eavesdropper on a quantum cryptography system
- Device-Independent Quantum Key Distribution with Commuting Measurements
- Secure device-independent quantum key distribution with causally independent measurement devices
- Leftover Hashing Against Quantum Side Information
- Trevisan's Extractor in the Presence of Quantum Side Information
- Quantum and Relativistic Protocols for Secure Multi-Party Computation
- Random numbers certified by Bell's theorem
- Less reality, more security
- Duality Between Smooth Min- and Max-Entropies
- Device-independent security of quantum cryptography against collective attacks
- Secrecy extraction from no-signaling correlations
- Maximally Non-Local and Monogamous Quantum Correlations
- Security of quantum key distribution
- From Bell's theorem to secure quantum key distribution
- No signaling and quantum key distribution
- Quantum nonlocality, Bell inequalities, and the memory loophole
- Appendix to "Accardi contra Bell (cum mundi): The Impossible Coupling"
- Quantum cryptography with imperfect apparatus
- Quantum cryptography based on Bell's theorem
- A personal communication
- Privacy Amplification by Public Discussion
- New Hash Functions and Their Use in Authentication and Set Equality
- Universal Classes of Hash Functions
- Proposed Experiment to Test Local Hidden Variable Theories
- The impossibility of non-signalling privacy amplification
- Extractors and pseudorandom generators
- Parameter estimation
- No Signalling and Quantum Key Distribution: a Quantum Protocol for Secret Bit Distribution
- Unconditional security of key distribution from causality constraints
OBSERVARE
Universidade Autónoma de Lisboa
e-ISSN: 1647-7251
Vol. 14, Nº. 1 (May-October 2023)
# NOTES AND REFLECTIONS
PROBLEMS OF EVALUATION OF DIGITAL EVIDENCE BASED ON
BLOCKCHAIN TECHNOLOGIES[1]
**OTABEK PIRMATOV**
[[email protected]](mailto:[email protected])
Assistant Professor of the Department of Civil Procedural and Economic Procedural Law,
Tashkent State University of Law (Uzbekistan), Doctor of Philosophy in Law (PhD)
## Introduction
Digital evidence is fundamentally different from physical evidence and written evidence.
Securing physical evidence serves primarily to prevent it from being lost or becoming difficult to obtain in the future.
Compared to traditional evidence, electronic evidence is fragile, easy to change or delete, and its authenticity is difficult to guarantee. For example, data on a personal computer may be lost due to misuse, virus attack, etc. During the preparation of a case, a video recording can be deleted in order to hide the facts. In fact, most electronic evidence is stored in a central database; if the database is unreliable, the validity of the data is not guaranteed. Obviously, ensuring the authenticity and integrity of digital evidence is very important when storing it.
Because digital evidence is created by specialized high technology, it is easier to alter in practice and more likely to be tampered with, so more attention should be paid to its authenticity.
The main methods of digital evidence storage (pre-trial provision) in civil court
proceedings are as follows:
1) sealing or closing the means of keeping the original of evidence;
2) printing, photographing and sound or visual recording;
1 This text is devoted to the issues of evaluation of digital evidence based on blockchain technologies in civil court proceedings. The article states that, since it is not possible to change or delete evidence based on blockchain technology, contracts based on blockchain technology and documents issued by government bodies are considered acceptable evidence by the courts. It is highlighted that the use of evidence based on blockchain technology in civil court cases will remove the need for the parties to notarize digital evidence in the future.
3) drawing up reports;
4) authentication;
5) provision through a notary office;
6) storage through block-chain;
7) applying a timestamp.
Blockchain is a database in which data are securely stored. This is achieved by linking each new record to the previous one, resulting in a chain of data blocks ("block chain" in English), hence the name. Physically, the blockchain database is distributed, allowing authorized users to add data independently. It is impossible to alter previously stored data, as doing so would break the chain, and it is this "immutability" that makes the blockchain a safe and reliable means of storing digital records in public databases[2].
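As a minimal sketch of this chaining (our illustration; the article prescribes no particular implementation), each block stores the hash of its predecessor, so editing any stored record breaks every subsequent link:

```python
import hashlib, json, time

def block_hash(block):
    core = {k: block[k] for k in ("time", "data", "prev")}
    return hashlib.sha256(json.dumps(core, sort_keys=True).encode()).hexdigest()

def make_block(data, prev_hash):
    block = {"time": time.time(), "data": data, "prev": prev_hash}
    block["hash"] = block_hash(block)
    return block

chain = [make_block("genesis", "0" * 64)]
for record in ("contract A", "notarial deed B"):
    chain.append(make_block(record, chain[-1]["hash"]))

chain[1]["data"] = "forged contract A"  # attempt to alter a stored record

valid = (all(b["hash"] == block_hash(b) for b in chain) and
         all(chain[i]["prev"] == chain[i - 1]["hash"]
             for i in range(1, len(chain))))
print(valid)  # False: the recomputed hash of the altered block no longer matches
```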
Officially, the history of "blocks and chains" begins on October 31, 2008, when someone writing under the pseudonym Satoshi Nakamoto described the blockchain in a white paper (base document) on the network of the first cryptocurrency, bitcoin. The fundamental principles of applying decentralization and immutability to document accounting were laid down as early as the 1960s and 1970s, but the closest precursor is the work of Stuart Haber and W. Scott Stornetta, who in 1991 described a scheme for sequentially creating blocks, each containing a hash. The technology was even patented, but for its time it was like Da Vinci's helicopter: there was no technical possibility of implementing the idea, and interest in it faded. The patent expired in 2004, just four years before Satoshi and his white paper appeared[3].
## 1. Literature review
S.S. Gulyamov defines blockchain as follows: a blockchain (chain of blocks) is a distributed set of data in which the data storage devices are not connected to a common server. These data sets are called blocks and are stored in an ever-growing list of ordered records. Each block carries a timestamp and a reference to the previous block. Encryption ensures that users cannot write to records without authorization, and holders of the private keys can modify only their own part of the blockchain. In addition, encryption ensures synchronization of all users' copies of the distributed chain of blocks (Gulyamov, 2019: 114).
Primavera De Filippi and Aaron Wright (2018) point out that block-chain technology is
different from other electronic evidence because it cannot be forgotten. The technology
itself has evidential value for the judicial system.
Markus Kaulartz, Jonas Gross, Constantin Lichti and Philipp Sandner note that blockchain technology is becoming increasingly renowned, as more and more companies develop blockchain-based prototypes, e.g., in the context of payments, digital identities, and the supply chain. One use case of blockchain is the tamper-proof storage of information and the documentation of facts, since records on a blockchain are "practically resistant" to manipulation as a consequence of the underlying cryptography and the consensus mechanism.
If a blockchain is used for storing information, the question arises whether the data stored on it can be used as evidence in court. In the following article, we will analyze this question[4].
According to Alexey Sereda, the correct use of blockchain technologies will, to a significant extent, eliminate the need for lawyers to perform certain mechanical tasks: checking counterparties, contacting other experts (bodies), notarization, etc. All this allows lawyers to focus their efforts on other, more important tasks[5].
Vivien Chan and Anna Mae Koo define blockchain as a decentralized and open distributed ledger technology. Electronic data (e.g., for a transaction on an e-shopping platform: the transaction time, purchase amount, currency, participants, etc.) are uploaded to a network of computers in "blocks". Since the data saved in a blockchain are stored across a network of computers in a specific form and are publicly available for anyone to view, the data are irreversible and difficult to manipulate.
Anyone who has handled an online infringement case knows the race against time in preserving evidence. However, screenshots saved in PDF format are easy to tamper with and are of scant probative value before the Chinese courts unless notarized. Making an appointment with, and appearing before, a notary is another time-consuming and expensive process.
With blockchain, these procedures can be simplified and improved in the following ways:
1. E-evidence can be saved as blockchain online instantaneously without a notary
public;
2. Cost for generating blockchain evidence is lower than traditional notarization;
3. Admissibility of block-chain evidence has been confirmed by statute and many courts
in China because of the tamper-free nature of block-chain technology;
4. Possible combination of online monitoring and evidence collection process: with
blockchain technology and collaboration with different prominent online platforms
(e.g. Weixin), it is possible to automate online monitoring of your intellectual
property—blockchain evidence is saved automatically when potential infringing
contents are found[6].
According to Matej Michalko, in traditional trials of dispute cases, evidence preservation usually requires the involvement of a third-party authority such as a notary office, and the relevant persons are required to fix the evidence in the notary's presence. With the more frequent use of electronic evidence, most third-party electronic data preservation platforms have explored the pattern of "blockchain + evidence collection and preservation", which applies blockchain technology to traditional electronic evidence preservation practice (i.e., uploading the preserved evidence to a blockchain platform). If necessary, an expert opinion can be requested online from the judicial expertise center (Michalko, 2019: 7).

4 [www.jonasgross.medium.com/legal-aspects-of-blockchain-technology-part-1-blockchain-as-evidence-in-court-704ab7255cf5](http://www.jonasgross.medium.com/legal-aspects-of-blockchain-technology-part-1-blockchain-as-evidence-in-court-704ab7255cf5)
5 [https://blockchain24.pro/blokcheyn-i-yurisprudentsiya](https://blockchain24.pro/blokcheyn-i-yurisprudentsiya)
6 [https://www.lexology.com/library/detail.aspx?g=1631e87b-155a-40b4-a6aa-5260a2e4b9bb](https://www.lexology.com/library/detail.aspx?g=1631e87b-155a-40b4-a6aa-5260a2e4b9bb)
Today, the task of providing electronic evidence before the court is carried out by
notaries.
Data recorded on a blockchain is in essence a chronological chain of digitally signed
transactions. Thus, admissibility of block-chain evidence is highly correlated to
acceptance of electronic signatures in a legal setting. Not all electronic signatures provide
the same level of assurance. (Murray, 2016: 517-519).
The use of this technology when concluding transactions or receiving official documents from the state greatly simplifies the process of proof, as it makes it possible to track the entire history of changes made to the information stored in the blockchain. It also reliably protects that information from illegal attempts at tampering or forgery. Such evidence is nearly impossible to challenge, although the risk of hacking or fraudulent activity remains.
Second, if a court session is conducted by video conferencing, the blockchain can easily be used by the participants. Given the development of remote technologies driven by the coronavirus pandemic, this situation must be taken into account. Thus, thanks to the use of blockchain, it is possible to significantly reduce the time needed to consider cases in courts, increase the transparency of court proceedings, and ensure the necessary confidentiality of information.
If the contracts concluded by the parties are based on blockchain technology, or if state authorities draw up their documents based on blockchain technology, then it becomes possible for courts to evaluate such blockchain-based records as evidence. At present, government bodies in our country sign their documents with QR codes.
According to Boris Glushenkov, the successful implementation of blockchain will also change the courts: first, there will be no need for decisions on certain routine matters. Second, the treatment of evidence will change: electronic evidence is currently viewed with skepticism in courts, and blockchain may change that[7].
In civil litigation, material is accepted as evidence only if it meets each of the criteria of relevance, admissibility, and reliability. Digital evidence must likewise satisfy these evaluation criteria; failure to meet any one of them may result in its inadmissibility as evidence in court.
According to Yuhei Okakita, in civil litigation any form of evidence can generally be submitted to the court. That is, the court accepts not only physical documents but also digital data as evidence. Civil procedure laws vary from country to country, but electronic evidence is recognized in many jurisdictions, such as the EU, the United States, and Japan. Since blockchain certificates are a kind of digital data, they should be accepted by most courts as admissible evidence.
7 [https://blockchain24.pro/blokcheyn-i-yurisprudentsiya](https://blockchain24.pro/blokcheyn-i-yurisprudentsiya)
So, one can submit the certificate to the court. However, the question is how judges evaluate the evidence. Let us go through an example relevant to, e.g., the German or Japanese system: in these systems, it is at the discretion of the judge to decide whether the certificate will be taken into consideration. If the judge believes in the authenticity of the certificate, it will become the basis of the judgment.
Let's suppose that the claim of a defendant in a dispute could be validated with the data
certified with a blockchain transaction. The judge decides on the authenticity of the
submitted evidence based on the opinions of both parties. The defendant will explain the
concept of blockchain immutability achieved with the consensus mechanism, and the
other party will argue the possibility that the information on the blockchain has been
tampered with. After the judge considers both stories and takes a position regarding the
authenticity of the information, s/he will make a decision accordingly[8].
According to Zihui (Katt) Gu, for blockchain evidence to be admissible, the authenticity of the source of the electronic data must first be confirmed, whether through examination of the original or through comprehensive consideration of all the evidence at hand[9].
The admissibility of digital evidence is one of the problems of judicial evaluation of evidence in civil litigation. In foreign countries, recording electronic evidence on blockchain platforms, and treating evidence held on such platforms as admissible, plays an important role in ensuring the admissibility of electronic evidence.
According to Van Yojun, if blockchain technology can be applied to digital evidence, whether in criminal or civil trials, the generally expected benefits can be achieved, including ensuring the integrity and accuracy of data, preventing tampering with data or evidence, increasing the transparency of legal proceedings, and making court proceedings easier to follow, faster, and simpler[10].
## 2. Issues of application of blockchain technology in the legislation of
foreign countries
The Federal Government of the United States has not exercised its constitutional power to implement legislation regulating the admissibility of blockchain evidence in court. Thus, states enjoy residual power to implement their own legislation. The Federal Rules of Evidence establish a minimum requirement in what is referred to as the "best evidence rule", which establishes that the best evidence must be used at trial. Rule 1002 of the Federal Rules of Evidence states: “An original writing, recording, or photograph is required in order to prove its content unless these rules or a federal statute provides otherwise”.
Several states have regulated blockchain by introducing their own legislation and rules, particularly with regard to the regulation of cryptocurrency, or, as termed by various legislators, virtual currencies. New York kickstarted legislative developments in the USA through the regulation of virtual currency companies, and eventually several states followed suit, with 32 states implementing their own rules and regulations. The states of Illinois, Vermont, Virginia, Washington, Arizona, New York and Ohio have passed or introduced legislation which specifically regulates the admissibility of blockchain evidence in court[11].

8 [https://www.bernstein.io/blog/2020/1/17/can-digital-data-stored-on-blockchain-be-a-valid-evidence-in-ip-litigation](https://www.bernstein.io/blog/2020/1/17/can-digital-data-stored-on-blockchain-be-a-valid-evidence-in-ip-litigation)
9 [http://illinoisjltp.com/timelytech/blockchain-based-evidence-preservation-opportunities-and-concerns/](http://illinoisjltp.com/timelytech/blockchain-based-evidence-preservation-opportunities-and-concerns/)
10 [https://www.ithome.com.tw/news/130752](https://www.ithome.com.tw/news/130752)
In April 2018, 22 member states signed the Declaration for a European Blockchain Partnership (EBP) in order to "cooperate on the development of a European Blockchain Services Infrastructure". With its ambitious goal of identifying initial use cases and developing functional specifications by the end of the year, the EBP should be an important catalyst for the use of blockchain technology by European government agencies[12].
In October 2018, discussions were underway among the Azerbaijani Internet Forum (AIF)
for the Ministry of Justice to implement blockchain technology in several departments
within its remit. Currently, the Ministry provides more than 30 electronic services and 15
information systems and registries, including “electronic notary, electronic courts,
penitentiary service, information systems of non-governmental organizations”, and the
register of the population, among others. Part of the AIF’s plans is to introduce a “mobile
notary office” which would involve the notarization of electronic documents. Through this
process, the registry’s entries will be stored on blockchain which parties will be able to
access but not change, thus preventing falsification. Future plans also include employing
smart contracts in public utility services such as water, gas and electricity[13].
Blockchain technology is a new way to build a network. Today, almost all service systems on the Internet work on the basis of a centralized network: the data warehouse is located on a central server, and users receive data by connecting to this server. The main difference of blockchain technology is that there is no need for a central server and all network participants have equal rights; the network database is kept by each user.
One of the main reasons why evidence based on blockchain technology is considered
admissible by courts is that blockchain technology is transparent, that is, it is not affected
by the human factor.
The Decision of the President of the Republic of Uzbekistan dated July 3, 2018, "On measures to develop the digital economy in the Republic of Uzbekistan", provides for defining:
− basic concepts in the field of "blockchain" technologies and the principles of their operation;
− the powers of state bodies, as well as of process participants, in the field of "blockchain" technologies;
− measures of responsibility for using "blockchain" technologies for illegal purposes.
The State Services Agency of the Republic of Uzbekistan decided that, starting from December 2020, the country's registry offices would operate on the basis of blockchain technology. However, as of today, this system has not yet been launched. It would be appropriate if the documents issued not only by registry authorities but also by tax authorities and cadastral departments, transactions concluded by notary offices, and, most importantly, decisions of district and city mayors and reports issued by electronic auction (e-active) were also accepted on the basis of blockchain technology.

11 [https://blog.bcas.io/blockchain_court_evidence](https://blog.bcas.io/blockchain_court_evidence)
12 [https://www.eublockchainforum.eu/reports](https://www.eublockchainforum.eu/reports)
13 [https://blog.bcas.io/blockchain_court_evidence](https://blog.bcas.io/blockchain_court_evidence)
Agreements concluded by notary offices, decisions of district and city mayors, and reports issued by electronic auction serve as the main written evidence confirming ownership rights in civil courts.
Due to the widespread involvement of information technologies in all spheres of social life in our country, the above bodies are also moving to receive documents in electronic form.
Also, the distribution of electricity based on blockchain is being carried out in Uzbekistan using South Korean technology. Perhaps, in the future, electricity contracts in our country may be concluded on the basis of blockchain technology.
## 3. Discussion
With the development of the Internet and information technology, digital data has
gradually become an important part of the evidence system in civil court cases, which
cannot be ignored. Among all types of digital data, blockchain evidence is a relatively
new type.
A blockchain is not proof in itself, but a technical method for storing, transmitting and verifying digital data.
Blockchain is just a storage technology, the purpose of which is to ensure the authenticity
and reliability of digital data. The most important thing is to determine the authenticity
of the digital data.
Improvements in blockchain technology can make electronic documents flow more
quickly and improve the efficiency of their assessment in courts. However, compared to
the traditional notarization method of securing electronic evidence, blockchain-based
evidence storage lags behind. That is, there are not enough normative legal documents
on the implementation of blockchain technologies in the field of justice. Notarization,
which has become a means of preventing falsification of electronic documents, is rarely
used in legal practice, because notarization of electronic evidence requires excessive time
and money for the parties.
Proving the authenticity of data submitted using blockchain technology relies on digital signatures, trusted timestamps, and hash value verification. Parties must be able to demonstrate how blockchain technology has been used to collect and store the evidence. Because information in a blockchain network is decentralized, it is very difficult for hackers to exploit. Additionally, since each block contains the hash of the previous block, altering any transaction within the blockchain would require changing every subsequent block.
Hash value verification: applying a hash algorithm to an electronic file yields exactly one hash value. If the content of the electronic file changes, the resulting hash value also changes. The uniqueness and non-repeatability of the hash value thus underpin the immutability of electronic files.
The verifier can compare the hash value written to the blockchain with a hash recomputed from the original data, confirming that the data are valid and have not been tampered with.
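A minimal sketch of this verification (ours, not the article's; the file contents are invented for illustration):

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

document = b"Sale contract, signed 2023-05-01"
anchored_hash = sha256_hex(document)  # value previously written to the blockchain

# Later, the submitted file is re-hashed and compared with the anchored value:
print(sha256_hex(b"Sale contract, signed 2023-05-01") == anchored_hash)  # True
print(sha256_hex(b"Sale contract, signed 2023-06-01") == anchored_hash)  # False
```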
Encrypting evidence can also ensure its safe storage. At a basic level, encrypting the file's contents with a secret key ensures that only those with access to the key can read the file.
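A minimal sketch of such key-based encryption (our example, using the third-party Python `cryptography` package, which the article does not name):

```python
from cryptography.fernet import Fernet  # pip install cryptography

key = Fernet.generate_key()   # secret key held only by authorized custodians
cipher = Fernet(key)

evidence = b"Chat log exported on 2023-05-01"
token = cipher.encrypt(evidence)       # ciphertext is safe to store anywhere

# Only a holder of `key` can recover the original contents:
assert Fernet(key).decrypt(token) == evidence
```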
It is possible to prepare documents based on blockchain technology in applications such
as SharpShark, SynPat, WordProof, Waves, EUCD, DMCA.
The main reason why evidence based on blockchain technology is considered acceptable
evidence in foreign countries is its technological structure. We can see the following
unique features of it:
- it is not possible for one party, at its own discretion, to change or add to (i.e., falsify or destroy) documents based on blockchain technology;
- blockchain is a technology resistant to hacker attacks, which means that electronic evidence based on blockchain technology cannot be tampered with by third parties;
- in blockchain technology, there is no need for a central server, and all network participants have equal rights; the network database is kept by every user.
The impossibility of falsifying or altering evidence based on blockchain technology is what leads courts to consider it acceptable evidence.
According to civil procedural law, the admissibility of evidence must be confirmed by the means of proof specified in that law.
In order to ensure the admissibility of electronic evidence, it is appropriate to create electronic documents and electronic transactions using blockchain technology, and to improve the legislation in this regard.
The following features of blockchain evidence should be considered:
1. To review the authenticity of the blockchain evidence. Specifically, this means that the court should examine whether the blockchain evidence is likely to have been tampered with in the process of formation, transmission, extraction and display, and to what extent such tampering is possible.
2. To review the legitimacy of the blockchain evidence. Specifically, it means that the
court should examine whether the collection, storage and extraction methods of
blockchain evidence comply with the law, and whether they infringe on the legitimate
rights and interests of others.
3. To review the relevance of blockchain evidence. Specifically, it means that the court
should examine whether there is a substantial connection between the blockchain
evidence and the facts to be proved[14].
14 [https://www.chinajusticeobserver.com/a/when-blockchain-meets-electronic-evidence-in-china-s-internet-courts](https://www.chinajusticeobserver.com/a/when-blockchain-meets-electronic-evidence-in-china-s-internet-courts)
-----
JANUS.NET, e journal of International Relations
e-ISSN: 1647-7251
Vol. 14, Nº. 1 (May-October 2023), pp. 279-288
_Notes and Reflections_
_Problems of evaluation of digital evidence based on blockchain technologies_
Otabek Pirmatov
## Conclusion
Blockchain storage solves the problem of securely storing digital data. In a sense, blockchain storage is an authentication or auxiliary storage method; at present, it serves mainly as an indirect means of authentication.
One of the peculiarities of blockchain technology in legal science is that using it when concluding transactions or obtaining official documents from government authorities greatly simplifies the process of proof. The blockchain makes it possible to track the entire history of changes made to the stored data and reliably protects against illegal attempts to tamper with or falsify them. Such evidence would be nearly impossible to challenge, although a partial risk of hacking or fraudulent activity remains. Second, if court hearings are held online, the possibility of blockchain use by the participants will increase even more. Thus, due to the use of blockchain, it is possible to significantly reduce the time taken to consider cases in civil courts, to increase the transparency of judicial processes, and to ensure the necessary confidentiality of information.
The public offering of goods and services on social networks has become popular in our country, and purchases on social networks are carried out through mutual correspondence. Such correspondence can be deleted or changed, which creates problems in evaluating it as evidence in civil courts. The adoption of blockchain technologies by social networks may therefore lead to the use of social media correspondence as evidence in courts in the future.
## References

Blockchain 24, consulted online, available at [https://blockchain24.pro/blokcheyn-i-yurisprudentsiya](https://blockchain24.pro/blokcheyn-i-yurisprudentsiya)

Chan, Vivien (2020). Blockchain Evidence in Internet Courts in China: The Fast Track for Evidence Collection for Online Disputes. Consulted online, available at [https://www.lexology.com/library/detail.aspx?g=1631e87b-155a-40b4-a6aa-5260a2e4b9bb](https://www.lexology.com/library/detail.aspx?g=1631e87b-155a-40b4-a6aa-5260a2e4b9bb)

De Filippi, Primavera and Wright, Aaron (2018). _Blockchain and the Law: The Rule of Code_. Harvard University Press.

Du, Guodong and Yu, Meng (2021). “When Blockchain Meets Electronic Evidence in China's Internet Courts”, China Justice Observer. Consulted online, available at [https://www.chinajusticeobserver.com/a/when-blockchain-meets-electronic-evidence-in-china-s-internet-courts](https://www.chinajusticeobserver.com/a/when-blockchain-meets-electronic-evidence-in-china-s-internet-courts)

European Union Blockchain Observatory & Forum (2018). Blockchain for Government and Public Services (Dec. 7, 2018). Consulted online, available at [https://www.eublockchainforum.eu/reports](https://www.eublockchainforum.eu/reports)

Fedorov, Pavel (2022). “What is blockchain: everything you need to know about the technology”, Forbes. Consulted online, available at [https://www.forbes.ru/mneniya/456381-cto-takoe-blokcejn-vse-cto-nuzno-znat-o-tehnologii](https://www.forbes.ru/mneniya/456381-cto-takoe-blokcejn-vse-cto-nuzno-znat-o-tehnologii)

Gazeta.uz (2022). _Blockchain technology is not a problem, but it is a problem that has to be solved_. Consulted online, available at [https://www.gazeta.uz/uz/2022/08/26/blockchain-technology/](https://www.gazeta.uz/uz/2022/08/26/blockchain-technology/)

Gross, Jonas (2020). _Legal aspects of blockchain technology_. Consulted online, available at [www.jonasgross.medium.com/legal-aspects-of-blockchain-technology-part-1-blockchain-as-evidence-in-court-704ab7255cf5](http://www.jonasgross.medium.com/legal-aspects-of-blockchain-technology-part-1-blockchain-as-evidence-in-court-704ab7255cf5)

Gulyamov, S. (2019). _Blockchain technologies in the digital economy_. Textbook, p. 114.

iThome (2022). _Taiwan Takes the Lead in Judicial Blockchain Applications: An Inventory of Global Judicial Blockchain Applications_, consulted online, available at [https://www.ithome.com.tw/news/130752](https://www.ithome.com.tw/news/130752)

Michalko, Matej (2019). “Blockchain ‘witness’: a new evidence model in consumer disputes”. _International Journal on Consumer Law and Practice_, V. 7, p. 7.

Murray, Andrew (2016). _Information Technology Law_, pp. 517-519.

Okakita, Yuhei (2020). _Can digital data stored on Blockchain be valid evidence in IP litigation?_ Consulted online, available at [https://www.bernstein.io/blog/2020/1/17/can-digital-data-stored-on-blockchain-be-a-valid-evidence-in-ip-litigation](https://www.bernstein.io/blog/2020/1/17/can-digital-data-stored-on-blockchain-be-a-valid-evidence-in-ip-litigation)

Pollacco, Alexia (2020). _The Interaction between Blockchain Evidence and Courts: A cross-jurisdictional analysis_. Consulted online, available at [https://blog.bcas.io/blockchain_court_evidence](https://blog.bcas.io/blockchain_court_evidence)

_The Illinois Journal of Law, Technology & Policy_, consulted online, available at [http://illinoisjltp.com/timelytech/blockchain-based-evidence-preservation-opportunities-and-concerns/](http://illinoisjltp.com/timelytech/blockchain-based-evidence-preservation-opportunities-and-concerns/)

**How to cite this note**

Pirmatov, Otabek (2023). Problems of evaluation of digital evidence based on blockchain technologies. Notes and Reflections in Janus.net, e-journal of international relations. Vol. 14, Nº 1, May-October 2023. Consulted [online] on date of last visit, [https://doi.org/10.26619/1647-7251.14.1.01](https://doi.org/10.26619/1647-7251.14.1.01)
https://doi.org/10.1007/s13762-022-04079-x

**REVIEW**

# Plastic waste recycling: existing Indian scenario and future opportunities

**R. Shanker[2] · D. Khan[2] · R. Hossain[1] · Md. T. Islam[1] · K. Locock[3] · A. Ghose[1] · V. Sahajwalla[1] · H. Schandl[3] · R. Dhodapkar[2]**

Received: 13 December 2021 / Revised: 23 February 2022 / Accepted: 4 March 2022 / Published online: 2 April 2022
© The Author(s) under exclusive licence to Iranian Society of Environmentalists (IRSEN) and Science and Research Branch, Islamic Azad University 2022

Editorial responsibility: Maryam Shabani.

- D. Khan, [email protected]

1 Centre for Sustainable Materials Research and Technology, SMaRT@UNSW, School of Materials Science and Engineering, UNSW Sydney, Sydney, NSW 2052, Australia
2 Council of Scientific and Industrial Research-National Environmental Engineering Research Institute (CSIR-NEERI), Nehru Marg, Nagpur 440 020, India
3 Commonwealth Scientific and Industrial Research Organisation (CSIRO) and Australian National University, Canberra, ACT 2601, Australia

**Abstract**
This review article aims to suggest recycling technological options in India and illustrates plastic recycling clusters and reprocessing infrastructure for plastic waste (PW) recycling in India. The study shows that a majority of states in India are engaged in recycling, road construction, and co-processing in cement kilns while reprocessing capabilities among the reprocessors are highest for polypropylene (PP) and polyethylene (PE) polymer materials. This review suggests that there are key opportunities for mechanical recycling, chemical recycling, waste-to-energy approaches, and bio-based polymers as an alternative to deliver impact to India's PW problem. On the other hand, overall, polyurethane, nylon, and polyethylene terephthalate appear most competitive for chemical recycling. Compared to conventional fossil fuel energy sources, polyethylene (PE), polypropylene (PP), and polystyrene are the three main polymers with higher calorific values suitable for energy production. Also, multi-sensor-based artificial intelligence and blockchain technology and digitization for PW recycling can prove to be the future for India in the waste flow chain and its management. Overall, for a circular plastic economy in India, there is a necessity for a technology-enabled accountable quality-assured collaborative supply chain of virgin and recycled material.

**Keywords** Informal and formal sector · Biological recycling · Chemical recycling · Mechanical recycling · Digitization · Blockchain technology

### Introduction

Plastic has evolved into a symbol of human inventiveness as well as folly: an invention of an extraordinary material with a variety of characteristics and capacities. Although India is a highly populated country, it is ranked 12th among the countries with mismanaged plastics, but it is expected that by the year 2025 it will be in 5th position (Neo et al. 2021). Therefore, recycling, upscaling, or reprocessing of PW has become urgent in order to curb this mismanagement of plastics and mitigate the negative impacts of plastic consumption and utilization on the environment. However, this resource has not been given the required attention it deserves after post-consumer use.
Recycling or reprocessing\nof PW usually involves 5 types of processes based on the\nquality of the product manufactured upon recycling of the\nwaste, namely upgrading, recycling (open or closed loop),\ndowngrading, waste-to-energy plants, and dumpsites or\nlandfilling, as shown in Fig. 1 (Chidepatil et al. 2020). Usually, the PW is converted into lower-quality products such\nas pellets or granules, or flakes which are further utilized in\nthe production of various finished products such as boards,\npots, mats, and furniture (Centre for Science and Environment (CSE) 2021).\nPlastics have a high calorific value, with polymer energy\nvarying from 62 to 108 MJ/kg (including feedstock energy)\nwhich is much greater than paper, wood, glass, or metals\n(with exception of aluminum) (Rafey and Siddiqui 2021).\n\nV l (0123456789)1 3\n\n\n-----\n\n**Fig. 1 Different processing**\npathways for plastic waste\n(modified from Chidepatil et al.\n2020)\n\nPW mishandling is a significant concern in developing\nnations like India due to its ineffective waste management\ncollection, segregation, treatment, and disposal which\naccounts for 71% of mishandled plastics in Asia (Neo et al.\n2021). Though there are numerous sources for PW the major\nfraction is derived from the post-consumer market which\ncomprises both plastic and non-PWs and therefore, these\nwastes require to be washed and segregated accordingly\nfor conversion into the homogenous mixture for recycling\n(Rafey and Siddiqui 2021). According to a study carried out\nby the Federation of Indian Chambers of Commerce and\nIndustry (FICCI) and Accenture (2020), India is assumed to\nlose over $133 billion of plastic material value over the coming next 10 years until 2030 owing to unsustainable packaging out of which almost 75% of the value, or $100 billion,\ncan be retrieved. This review article focuses on levers and\nstrategies that could be put in place to transition India toward\na circular economy for plastics. This involves two key areas,\nthe first being reprocessing infrastructure in various states\nof India and the performance of the reprocessors in organized and unorganized sectors. The second key area for this\nstudy is an overview of the rapidly evolving area of plastic\nrecycling technologies, including mechanical recycling,\nchemical recycling, depolymerization, biological recycling,\nand waste-to-energy approaches. A brief description of the\ntechnologies is provided and their applicability to the Indian\ncontext discussed along with the role of digitization in PW\nrecycling.\n\n## 1 3\n\n\n### Research motivation and scope of the article\n\nThe research on Indian PW and its recycling pathways\naccording to the polymer types and its associated fates were\nstudied along with the published retrospective and prospective studies. Due to COVID-19, there is an exponential\nincrease in the PW and the urge to recycle this waste has\nbecome a necessity. Systematic literature studies from database collection of Web of Science (WoS) were performed\nwith keywords such as “PW recycling technologies in India”\nOR “PW management in India” OR “plastic flow in India”\nfrom 2000 to October 2021 (including all the related documents such as review papers, research papers, and reports)\nwhich in total accounted for 2627 articles only. When the\nsame keyword “plastic recycling” was searched without\ncontext to India, 5428 articles were published from 2000\nto 2021 among which only 345 articles were published by\nIndian authors. 
Figure 2 shows the distribution of papers on\nPW and related articles over the years. However, the number\nof review articles remains very limited concerning published\nresearch papers and reports for the same. Review articles\nplay a vital role in the substantial growth in the potential\nresearch areas for the enhancement of the proper management strategies in the respective domains. Recently, PW\nand its sustainable management necessity toward achieving\na circular economy have attracted researchers, due to its detrimental effects on humans and the environment.\n\n\n-----\n\n**Fig. 2 Yearly distribution of**\npapers related to plastic waste\nrecycling from 2000 to October\n2021\n\n\n640\n\n600\n\n560\n\n520\n\n480\n\n440\n\n400\n\n360\n\n320\n\n280\n\n240\n\n200\n\n160\n\n120\n\n\n### Reprocessing infrastructure and recycling rates for different types of plastics\n\nRecycling rates of plastics vary between countries depending upon the types of plastic. Some polymers are recycled\nmore than other types of polymers due to their respective\ncharacteristics and limitations. While PET (category 1) and\nHDPE (high-density polyethylene) (category 2) are universally regarded as recyclable, PVC (polyvinyl chloride) (category 3) and PP (category 5) are classified as “frequently\nnot recyclable” owing to their chemical characteristics, however, they may be reprocessed locally depending on practical\nconditions. LDPE (low-density polyethylene) (category 4)\nis however difficult to recycle owing to stress failure, PS\n(category 6) may or may not be recyclable locally, and other\ntypes of polymers (category 7) are not recyclable due to the\nvariety of materials used in its manufacturing (CSE 2021).\nAbout 5.5 million metric tonnes of PW gets reprocessed/\nrecycled yearly in India, which is 60% of the total PW produced in the country where 70% of this waste is reprocessed\nin registered (formal) facilities, 20% by the informal sector\nand the rest 10% is recycled at household level (CSE 2020).\nThe remaining 40% of PW ends up being uncollected/littered, which further results in pollution (water and land) and\nchoking of drains (CSE 2019a). PW is dumped into landfills\nat a rate of 2.5 million tonnes per year, incinerated at a rate\nof over 1 million tonnes per year, and co-processed as an\nalternative energy source in blast furnaces at a rate of 0.25\nmillion tonnes per year by cement firms (Rafey and Siddiqui\n2021). Thermoset plastics (HDPE, PET, PVC, etc.), which\n\n\n110 119\n85 [102 ]86 87\n76\n66\n\n35 41\n1 5 8 5 4 7 4 11 14 11 15 22\n\nResearch Paper Review Articles\n\nare recyclable, constitute 94% of total PW generated, and\nthe remaining 6% comprises other types of plastics which\nare multilayered, thermocol, etc. and are non-recyclable\n(CSE 2019b). Plastics such as PP, PS, and LDPE are partially recyclable but generally not recycled in India due to\nthe economic unviability of their recycling processes (CSE\n2020). Figure 3a shows the recycling rates of different kinds\nof plastics in India and Fig. 3b shows the percentage contribution of different recycling options in the Indian context.\n\n#### State‑wise facilities and flows of PW\n\nThe total plastic generation in India by 35 states and union\nterritories accounts for 34,69,780 tonnes/annum (~ 3.47 million tonnes/annum) in the year 2019–2020 (CPCB (Central\nPollution Control Board) 2021). Plastic processing in India\nwas 8.3 Mt in the 2010 financial year and increased to 22 Mt\nin 2020 (Padgelwar et al. 2021). 
Table 1 shows the state-wise\nPW generation, registered and unregistered plastic manufacturing/recycling units, and multiplayer manufacturing units\nacross the country. Furthermore, the main recycling clusters\nin India are presented in Fig. 4, wherein Gujarat (Dhoraji,\nDaman and Vapi), Madhya Pradesh (Indore), Delhi and\nMaharashtra (Malegaon, Mumbai (Dharavi and Bhandup),\nSolapur) are the main recycling hubs (Plastindia Foundation\n2018). Recycling processes and disposal methods for PW\nvary substantially across the states in India given in Table 1.\nDetails of some of the major infrastructure available in the\nstates are described in the following subsection.\n\n## 1 3\n\n\n-----\n\n**Fig. 3 a Recycling rates of**\ndifferent types of plastics in **(a)** 2.4%\nIndia (data extracted from CSE 7.6%\n2019b) and b percentage contribution of different recycling\noptions in the Indian context\n(CSE 2021)\n\n25%\n\n20%\n\nPVC HDPE\n\nThe door-to-door collection of solid waste is the most\ncommon practice for the collection of waste in almost all the\nstates. Urban Local Bodies (ULBs) of some states like Goa,\nHimachal Pradesh, Maharashtra, Uttarakhand, and West\nBengal are actively involved in the collection and segregation of waste (CPCB 2019; Goa SPCB 2020; MPCB 2020).\nFurther after collection and segregation of waste, the PW is\nsent to various disposal (landfills) and recycling pathways\n(recycling through material recovery, road construction,\nwaste-to-energy plants, RDF (refused derived fuel), etc.).\nGoa is the state where new bailing stations have been set up\nin addition to the existing facilities for the disposal of PW\n(Goa SPCB 2020). State like Kerala has taken the initiative\nfor the installation of reverse vending machines (RVMs) for\nplastic bottles in supermarkets and malls whereas Maharashtra ensures 100% collection of waste with proper segregation and transport of PW where 62% of the waste is being\nreprocessed through different methods (Kerala SPCB 2020;\nMPCB 2020). Special Purpose Vehicles (SPVs) in Punjab\nhave been effective for the collection of multilayered plastics\n(MLP) waste from different cities of the state and further\nbeing sent to waste-to-energy plants (Punjab Pollution Control Board (PPCB) 2018). Though almost all the states have\nimposed a complete ban on plastic bottles and bags, Sikkim\nwas the first state who enforce the ban into the state which\nresulted in the reduction in its carbon footprint (MoHUA\n2019). Many states such as Puducherry, Odisha, Tamil Nadu,\nTelangana, Uttar Pradesh, and West Bengal send their PW\nfor reprocessing in cement kilns (CPCB 2019). Some states\nlike Telangana have taken the initiative for source segregation of the waste from the households by separating the\nbins into dry and wet waste bins whereas the mixed waste\nis sent for further processing for road construction or in\ncement industries (Telangana State Pollution Control Board\n\n## 1 3\n\n\n(TSPCB) 2018). Along with all these facilities in different\nstates, several informal and unregistered recyclers are also\ncontributing to their best to combat PW mismanagement.\n\n#### Formal and informal sectors in India and their performance\n\nThe informal sector currently contributes 70% of PET recycling in India (Aryan et al. 2019). Approximately 6.5 tonnes\nto 8.5 tonnes per day of PW is collected by itinerant waste\nbuyers (IWBs) and household waste collectors in India, out\nof which 50–80% of PW is recycled (Nandy et al. 2015).\nKumar et al. 
Kumar et al. (2018) mentioned that the average PW collected by a waste picker and an IWB was approximately 19 kg/d and 53 kg/d, respectively. According to ENF (2021), there are approximately 230 formal PW reprocessors in India, which can recycle various types of polymer, as shown in Fig. 5. Both the organized and unorganized sectors play a vital role in the reprocessing of plastics in India; Table 2 shows the distribution of the organized and unorganized sectors along with their percentage growth. Most operations currently relate to mechanical recycling producing granules/pellets and flakes. In 30 states/UTs there are 4953 registered units, comprising 3715 plastic manufacturers/producers, 896 recyclers, 47 compostable-plastic manufacturing units, and 295 multilayered packaging units; in addition, 823 unregistered units have been reported from different states (CPCB 2021). However, data on the reprocessing capability (material processed in tonnes/year) of individual recyclers are not readily available. From the limited data, it varies from 2500 to 3000 tonnes/year, whereas the capacity for processing various PW varies from 600 to 26,250 tonnes/year (ENF 2021).

**Table 1 Plastic generation, plastic manufacturing and recycling units in different states of India, and status of plastic recycling and disposal in different states**

| States/UT | Plastic generation (tonnes/annum) | Registered plastic manufacturing/recycling units | Unregistered plastic manufacturing/recycling units | Multilayer manufacturing units | Possible recycling and disposal methods involved |
|---|---|---|---|---|---|
| Andaman and Nicobar | 386.85 | – | – | – | Recycling, Road construction |
| Andhra Pradesh | 46,222 | Manufacturing units—131; Compostable units—1 | – | – | Recycling, Road construction, Co-processing in cement kilns |
| Arunachal Pradesh | 2721.17 | – | – | – | No information |
| Assam | 24,970.88 | Manufacturing units—18 | – | 5 | Road construction, Co-processing in cement kilns |
| Bihar | 4134.631 | Manufacturing/Recycling units—8 | Producers—225; Brand owners—203; Recyclers—36 | – | No information |
| Chandigarh | 6746.36 | Recycling units—7 | – | – | RDF processing plant |
| Chhattisgarh | 32,850 | Manufacturing units—8; Recycling units—8 | – | – | Recycling, Co-processing in cement kilns, Waste-to-energy plant |
| Daman Diu & Dadra Nagar Haveli | 1947.7 | 343 | – | – | No information |
| Delhi | 230,525 | Producers—840 | – | – | Waste-to-energy plant |
| Goa | 26,068.3 | Manufacturing units—35; Compostable unit—1 | – | 1 | Recycling, Co-processing in cement kilns, Sanitary landfills |
| Gujarat | 408,201.08 | Manufacturing/Recycling units—1027; Compostable units—12 | – | 10 | Co-processing in cement kilns |
| Haryana | 147,733.51 | Manufacturing units—69; Compostable unit—1 | – | 28 | Road construction |
| Himachal Pradesh | 13,683 | No information | 24 | 79 | Road construction, Co-processing in cement kilns, Waste-to-energy plants |
| Jammu & Kashmir | 74,826.33 | 259 | 45 | – | No information |
| Jharkhand | 51,454.53 | Manufacturing units—59 | – | – | Road construction, Co-processing in cement kilns, Reverse Vending Machines |
| Karnataka | 296,380 | Manufacturing/Recycling units—163 | 91 | – | Recycling, Co-processing plants |
| Kerala | 131,400 | Manufacturing units—1266; Producers—82; Recycling units—99; Compostable unit—1 | – | – | Recycling |
| Lakshadweep | 46 | – | – | – | Recycling |
| Madhya Pradesh | 121,079 | Manufacturing and Recycling units—164; Compostable unit—1 | – | 22 | Recycling, Road construction, Co-processing in cement kilns |
| Maharashtra | 443,724 | Recycling units—62; Compostable manufacturing units—6 | 42 | – | No information |
| Manipur | 8292.8 | Manufacturing units—4 | – | – | No information |
| Meghalaya | 1263 | 4 | – | – | Road construction |
| Mizoram | 7908.6 | – | – | – | Recycling |
| Nagaland | 565 | Manufacturing units—4 | – | – | Recycling, Road construction |
| Odisha | 45,339 | Manufacturing units—13 | – | 3 | Co-processing in cement kilns |
| Punjab | 92,890.17 | Manufacturing/Recycling units—187; Compostable units—2; Material Recovery Facility—169 | 48 | 4 | Recycling |
| Puducherry | 11,753 | Manufacturing/Recycling units—49; Compostable unit—1 | – | 4 | Road construction, Co-processing in cement kilns |
| Rajasthan | 51,965.5 | Manufacturing units—69 | – | 16 | No information |
| Sikkim | 69.02 | – | – | – | No information |
| Tamil Nadu | 431,472 | Manufacturing units—78; Recycling units—227 | – | 3 | Recycling, Road construction, Co-processing in cement kilns |
| Telangana | 233,654.7 | Manufacturing/Recycling units—316 | – | 2 | Recycling, Road construction, Co-processing in cement kilns |
| Tripura | 32.1 | Manufacturing units—26; Recycling units—4 | – | 2 | No information |
| Uttarakhand | 25,203.03 | Manufacturing/Recycling units—33; Compostable units—2 | 15 | 28 | Recycling |
| Uttar Pradesh | 161,147.5 | Manufacturing units—99; Recycling units—16; Compostable units—4 | 23 | 63 | Road construction, Co-processing in cement kilns, Waste-to-energy plant, Production of fibers and raw materials |
| West Bengal | 300,236.12 | Manufacturing/Recycling units—157; Compostable unit—1 | – | 9 | Road construction |

Data sources: Central Pollution Control Board 2019; Central Pollution Control Board 2021; CSE 2020; Goa State Pollution Control Board 2020; Tamil Nadu Pollution Control Board 2020; Haryana State Pollution Control Board 2020; Jammu and Kashmir State Pollution Control Board 2018; Kerala State Pollution Control Board 2020; Maharashtra Pollution Control Board 2020; Uttarakhand Pollution Control Board 2019; Uttar Pradesh Pollution Control Board 2021
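As a simple illustration of how the state-level entries in Table 1 can be used, the sketch below stores a subset of the generation figures in a dictionary and aggregates them. The subset and variable names are illustrative only; a full aggregation over all states/UTs can be cross-checked against the CPCB national total of ≈3.47 Mt/annum quoted earlier.

```python
# Illustrative aggregation over a subset of Table 1 (tonnes/annum).
# Values are copied from the table above; the subset is for brevity.
pw_generation = {
    "Maharashtra": 443_724,
    "Tamil Nadu": 431_472,
    "Gujarat": 408_201.08,
    "West Bengal": 300_236.12,
    "Karnataka": 296_380,
    "Telangana": 233_654.7,
    "Delhi": 230_525,
}

subset_total = sum(pw_generation.values())
national_total = 3_469_780  # CPCB (2021) figure quoted in the text

print(f"subset total: {subset_total:,.0f} t/annum")
print(f"share of national total: {subset_total / national_total:.1%}")
```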
In the Indian context, the scale of operation and the quantity of material handled by the formal sector are insignificant compared with the informal sector (Nallathambi et al. 2018). However, data on the contribution of the informal sector to PW recycling in India are very limited (Kumar et al. 2018). Formal recycling is constrained to clean, separated, pre-consumer waste in a few places in India, even where the states have efficient recycling technology and resources, as in Gujarat and Maharashtra (TERI 2021). At present, the total numbers of organized and unorganized recycling units in India are 3500 and 4000, respectively (Satapathy 2017). Formal recyclers face challenges in providing supply security for reprocessed plastic materials, as the current supply is dominated by informal recyclers (TERI 2021). In recovering consumer waste (including PW), the informal sector and households play a vital role in waste collection; approximately 6.5–8.5 Mt of PW is collected by these entities, which is about 50–80% of the plastic produced (Nandy et al. 2015). PW collection, dismantling, sorting, shredding and cleaning, compounding, extrusion (pellet making), and new product manufacturing are the key activities carried out by the informal-sector PW supply chain in India (WBCSD 2017).

Among the formal recyclers, Banyan Nation has implemented a proprietary washing technology to remove ink and markings from PW in the mechanical recycling process (Banyan Nation 2020). The recycler has integrated plastic recycling technology with data intelligence (real-time location of informal-sector PW collectors and their capacity for waste processing), which has enhanced its performance in high-quality waste collection and recycling (Banyan Nation 2020). The informal sector is largely involved in recycling PET bottles (mainly collection and segregation). Horizontal turbo washers and aglow machines are widely used in PE granule production by the informal sector (Aryan et al. 2019). The Alliance of Indian Waste Pickers comprises 30 organizations in 24 cities of the country, working in collaboration with waste pickers, acknowledging their contribution, and urging for them to be integrated into the waste management system. For the informal sector, a proper collection network, linking GPS (Global Positioning System) to points of segregation, and tracking of vehicles should be considered in a consolidated framework (Jyothsna and Chakradhar 2020).

The organized/formal and unorganized/informal sectors are not discrete and do not vie for waste; instead, they are interdependent, as the formal recyclers can operate because the informal sector performs the onerous task of conveying usable PW to the formal sector in the form of aggregates, pellets, flakes and, in a few instances, even finished products. Since formal entities are the ones that purchase its final goods, the informal sector relies on the formal sector. Furthermore, the informal sector's financial capability and ability to invest in the infrastructure and equipment needed to manufacture goods on its own are restricted, and therefore the two communities have a mutually dependent relationship (CSE 2021).

**Fig. 4 Plastic recycling clusters in India (Plastindia Foundation 2018)**

**Fig. 5 Number of reprocessors according to polymer types in India (ENF 2021). (Abbreviations: ABS: Acrylonitrile butadiene styrene; HIPS: High impact polystyrene; LLDPE: Linear low-density polyethylene; PA: Polyamide; PBT: Polybutylene terephthalate; SAN: Styrene acrylonitrile; POM: Polyoxymethylene; PMMA: Poly(methyl methacrylate); TPE: Thermoplastic elastomer)**

**Table 2 Distribution of organized and unorganized plastic recycling units in India (Plastindia Foundation 2019)**

| Parameters | 2018 report | 2019 report | Percentage growth |
|---|---|---|---|
| No. of organized recycling units | 3500 | 100 | −93% |
| No. of unorganized recycling units | 4000 | 10,000 | 60% |
| Direct manpower | 600,000 | 100,000 | −83% |
| Indirect manpower (including ragpickers) | 1 million | 1–1.5 million | 50% (with respect to upper limit) |
| Amount of plastic waste recycled | 5.5 million metric tonnes | 6 million metric tonnes | 8.3% |

### Overview of plastic recycling technologies and their applicability to India

From waste to material recovery, PW recycling can broadly be categorized into mechanical recycling, chemical recycling, biological recycling, and energy recovery (Al-Salem et al. 2017).
The most preferable type of recycling is primary recycling, because its contamination-free feedstock requires fewer operating units and hence an optimal consumption of energy and resources; it is followed in preference by secondary (mechanical) recycling (CSE 2021). However, processing difficulties and the quality of recyclates are the main drivers for seeking alternative approaches (Ragaert et al. 2017). Tertiary recycling (chemical/feedstock recycling) is comparatively less favored because of high production and operational costs, as well as the lack of scalable commercial technology in India. Quaternary recycling, which involves energy recovery, energy from waste, or valorization of PW, is least preferred owing to uncertainty around the propriety and prominence of the technology and its potential to convert land-based pollution into water and air pollution, although it remains preferable to dumping in landfill (Satapathy 2017; CSE 2021). Figure 6 shows the categorization of the PW recycling process.

**Fig. 6 Plastic waste flow and recycling categorization (Modified from FICCI 2016; Sikdar et al. 2020; Tong et al. 2020)**

#### Recycling technologies

**Mechanical recycling (MR)**

Mechanical recycling (also known as secondary, material recycling, material recovery, or back-to-plastics recycling) involves physical processes (or treatments) that convert PW into secondary plastic materials. It is a multistep process typically involving collection, sorting, heat treatment with reforming, re-compounding with additives, and extruding operations to produce recycled material that can substitute for virgin polymer (Ragaert et al. 2017; Faraca and Astrup 2019). It is conventionally capable of handling only single-polymer plastics, such as PVC, PET, PP, and PS, and remains one of the dominant recycling techniques for post-consumer plastic packaging waste (PlasticsEurope 2021). There are various key approaches to sorting and separating PW for MR, including the zig-zag separator (also known as an air classifier), air tabling, ballistic separation, dry and wet gravity separation (or the sink-float tank; a simple example is sketched below), froth flotation, and electrostatic (or triboelectric) separation. Some newer sensor-based separation technologies are also available for PW, including plastic color sorting and near-infrared (NIR) sorting (Ministry of Housing & Urban Affairs (MoHUA) 2019). Fig. S1 of the supplementary material shows the overall mechanical reprocessing infrastructure for plastics.
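As an illustration of the sink-float (density-based) separation step mentioned above, the sketch below assigns candidate polymer labels from a measured particle density. It is a minimal sketch assuming approximate textbook density bands; the bands, thresholds, and function name are illustrative and are not taken from the sources cited in this review.

```python
# Minimal sketch of sink-float style density classification.
# Density bands (g/cm^3) are approximate textbook values; real
# separation lines use a sequence of float media of known density.
DENSITY_BANDS = [
    ((0.90, 0.92), "PP"),
    ((0.91, 0.94), "LDPE"),
    ((0.94, 0.97), "HDPE"),
    ((1.04, 1.08), "PS"),
    ((1.16, 1.45), "PVC"),
    ((1.33, 1.45), "PET"),
]

def classify_by_density(rho: float) -> list[str]:
    """Return candidate polymers whose density band contains rho."""
    return [name for (lo, hi), name in DENSITY_BANDS if lo <= rho <= hi]

# A particle that floats in water (rho < 1.0) must be a polyolefin:
print(classify_by_density(0.95))   # ['HDPE']
print(classify_by_density(1.38))   # ['PVC', 'PET'] -- bands overlap,
                                   # so a second separation stage is needed
```

The overlapping bands in the second call show why density separation alone cannot fully resolve a mixed stream, which is exactly where the sensor-based (color and NIR) methods mentioned above come in.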
After the collected plastics are sorted, they are either melted down directly and molded into new shapes or re-granulated (with the granules then directly reused in the manufacturing of plastic products). In the re-granulation process, plastic is shredded into flakes, melted down, and then processed into granules (Dey et al. 2020).

Degradation and heterogeneity of PW create significant challenges for recyclers involved in mechanical recycling: in many cases, recycled plastics do not have the same mechanical properties as virgin materials, and therefore several challenges emerge when recycling mono and mixed PW. Furthermore, the difficulty of developing novel technologies to remove volatile organic compounds, so as to improve the quality of recycled plastics, is one of the key technological challenges in mechanical recycling (Cabanes et al. 2020). Different polymers degrade according to their specific characteristics, through oxidation, light and heat, ionic radiation, and hydrolysis; thermo-mechanical degradation and degradation during the product's lifetime are the two ways in which this occurs during recycling or reprocessing of PW (Ragaert et al. 2017). Faraca and Astrup (2019) also state that models to predict plastic performance based on the physical, chemical, and technical characteristics of PW will be critical in optimizing these processes. Beyond the technical challenges, the mechanical recycling process poses social and economic challenges, such as the sorting of mixed plastics, a lack of investment and legislation, and the quality of recycled products (Payne et al. 2019).

**Chemical recycling**

Chemical recycling, tertiary recycling, or feedstock recycling refers to the transformation of polymers into simple chemical structures (smaller constituent molecules) which can be utilized in a diverse range of industrial applications and/or the production of petrochemicals and plastics (Bhagat et al. 2016; Jyothsna and Chakradhar 2020). This type of recycling directly involves fuel and chemical manufacturers (Bhagat et al. 2016). Pyrolysis, hydrogenation, and gasification are some of the chemical recycling processes (Singh and Devi 2019). The food packaging sector could be the main industry to utilize outputs from the chemical recycling process (BASF 2021).

In thermal degradation, molecules, combustible gases, and/or energy are generated as multi-stream outputs, and layered and complex plastics, low-quality mixed plastics, and polluted plastics are all viable targets for chemical/feedstock recycling (CSE 2021). From an operational standpoint, the utilization of residual chars and the absence of flue-gas clean-up requirements are the main advantages, while from an environmental point of view, reduced landfilling coupled with reduced GHG (greenhouse gas) and CO2 (carbon dioxide) emissions are added benefits. Ease of use in electricity and heat production and easily marketed products are some of the financial advantages of pyrolysis (Al-Salem et al. 2010). Plasma pyrolysis is a state-of-the-art technology in which thermo-chemical properties are integrated with pyrolysis (MoHUA 2019). Fig. S2 of the supplementary material shows the chemical valorization of waste plastics. However, cost and catalyst-reuse capability in pyrolysis processes need further investigation (TERI 2020). Owing to high energy requirements and the low price of petrochemical feedstock compared with monomers derived from waste plastics, chemical recycling is not yet common at an industrial scale (Schandl et al. 2020).

Processing of mixed waste remains a difficult task because of the intricacy of the reactions, where different types of polymers follow completely distinct degradation pathways (Ragaert et al. 2017). The presence of PVC in the waste stream poses another problem, owing to its density and the need to remove hydrochloric acid (HCl) from the products, resulting in incomplete segregation (Ragaert et al. 2017). In addition, the lack of a stable waste supply, suitable reactor technology, and the presence of inorganics in the waste stream pose challenges for the chemical recycling of plastics (Payne et al. 2019).
A lack of investment, the production of by-products, and metal-based catalyst systems contribute further significant difficulties to the chemical valorization of waste plastics (Cabanes et al. 2020; Kubowicz and Booth 2017).

**Depolymerization** Depolymerization of plastics is the result of chemical processing in which the various monomer units are recovered; these can be reused for the production of new plastics, or converted into their raw monomeric forms, through processes such as hydrolysis, glycolysis, and alcoholysis (Bhandari et al. 2021; Mohanty et al. 2021). This process is often used to recover monomers and upgrade a recyclable resin to virgin-resin grade, for polymers such as PET, polyamides (e.g., nylons), and polyurethanes, with excellent results, as well as offering the possibility of restoring a significant resource from commodities that are difficult to recycle commercially (MoHUA 2019). It is also the route by which plastic polymers are converted through chemical recycling into sulfur-free liquid power sources, which facilitate energy recovery from PW (Bhandari et al. 2021). Studies on the depolymerization of mixed waste plastics report that even a small quantity, for instance 1 mg of these plastics, can yield 4.5 to 5.9 cal of energy with a small energy consumption of 0.8–1 kWh/h; the process can therefore add convenience to high-quality recycling and has recently been used for PET (Bhandari et al. 2021; Ellen MacArthur Foundation 2017; Wołosiewicz-Głąb et al. 2017). Under anoxic conditions and in the presence of specific catalytic additives, depolymerization is accomplished in a specially modified reactor at a maximum reaction temperature of 350 °C, with the output converted to liquid RDF or to different gases (reutilized as fuel) and solids (reutilized as fuel in cement kilns) (MoHUA 2019).

**Energy recovery** Gasification of PW is performed via reaction with a gasifying agent (e.g., steam, oxygen, or air) at high temperatures (approximately 500–1300 °C) to produce synthetic gas, or syngas. This can subsequently be utilized for the production of many products, or as fuel to generate electricity, with outputs of a gaseous mixture of carbon monoxide (CO), hydrogen (H2), carbon dioxide (CO2), and methane (CH4) via partial oxidation (Heidenreich and Foscolo 2015; Saebea et al. 2020). The amount of energy derived from this process is affected by the calorific input of the PW, with polyolefins tending to display higher calorific values. Table 3 shows the calorific values of various plastic polymers, with conventional fuels for comparison. Owing to their flexibility, robustness, and advantageous economics, gasification and pyrolysis are leading technologies for chemical recycling. Characterization of PW is essential for developing optimal process designs, particularly for HDPE, LDPE, PP, PS, PVC, and PET (Dogu et al. 2021).

**Table 3 The calorific value of popular plastics and conventional fuels (Zhang et al. 2021)**

| Fuel | Calorific value (MJ/kg) |
|---|---|
| Polyethylene | 43.3–47.7 |
| Polypropylene | 42.6–46.5 |
| Polystyrene | 41.6–43.7 |
| Polyvinyl chloride | 18.0–19.0 |
| Polyethylene terephthalate | 21.6–24.2 |
| Polyamide | 31.4 |
| Polyurethane foam | 31.6 |
| Methane | 53 |
| Gasoline | 46 |
| Kerosene | 46.5 |
| Petroleum | 42.3 |
| Heavy oil | 42.5 |
| Household plastic solid waste mixture | 31.8 |
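Two of the figures above lend themselves to quick worked conversions. The sketch below (illustrative only, not taken from the cited sources) expresses the quoted depolymerization yield of 4.5–5.9 cal/mg in MJ/kg using the standard factor 1 cal = 4.184 J, and estimates the electricity recoverable from one tonne of polyethylene from the Table 3 calorific values; the 30% thermal-to-electric conversion efficiency is an assumed value.

```python
# Worked conversions for the energy figures quoted above.

CAL_TO_J = 4.184          # 1 thermochemical calorie = 4.184 J

# (a) Depolymerization yield quoted as 4.5-5.9 cal per mg of mixed PW.
#     1 cal/mg = 1e6 cal/kg and 1 MJ = 1e6 J, so MJ/kg = (cal/mg) * 4.184.
for cal_per_mg in (4.5, 5.9):
    print(f"{cal_per_mg} cal/mg -> {cal_per_mg * CAL_TO_J:.1f} MJ/kg")
# ~18.8-24.7 MJ/kg, i.e. in the range of the PET/PVC rows of Table 3.

# (b) Electricity from 1 tonne of polyethylene (Table 3: 43.3-47.7 MJ/kg)
#     at an ASSUMED net thermal-to-electric efficiency of 30%.
EFFICIENCY = 0.30         # illustrative assumption, not a cited value
MJ_PER_KWH = 3.6          # 1 kWh = 3.6 MJ
for mj_per_kg in (43.3, 47.7):
    kwh = 1000 * mj_per_kg * EFFICIENCY / MJ_PER_KWH
    print(f"PE at {mj_per_kg} MJ/kg -> ~{kwh:,.0f} kWh per tonne")
```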
CSIR-IIP (Council of Scientific and Industrial Research-Indian Institute of Petroleum) and GAIL, India (Gas Authority of India Ltd.), in collaboration, have been successful in producing fuel and chemicals from PW: PE and PP plastics have been converted to diesel, petrochemicals, and gasoline. One kilogram of these plastics can yield 850 ml of diesel, 500 ml of petrochemicals, or 700 ml of gasoline, along with LPG (CSIR-IIP 2018); the process ensures 100% conversion with no toxic emissions and is suitable for both small- and large-scale industries (CSIR-IIP 2018).

**Biological recycling**

Biological recycling, or organic recycling, involves the breakdown of PW through the intervention of microorganisms such as bacteria, fungi, or algae to produce biogas (CO2 under aerobic processes and CH4 under anaerobic processes). PW may be recycled biologically through two methods, namely aerobic composting and anaerobic digestion (Singh and Ruj 2015). An enzymatic approach to the biodegradation of PET is considered an economically viable recycling method (Koshti et al. 2018). Table S1 in the supplementary data lists microorganisms responsible for PW degradation which could be utilized in the biological recycling process. Blank et al. (2020) reported that non-degradable plastics such as PET, polyethylene (PE), and polystyrene (PS) can be converted to biodegradable components such as polyhydroxyalkanoates (PHA) using a combination of pyrolysis and microbiology, which is an unconventional route to a circular economy. Polyaromatic hydrocarbons, polyhydroxyvalerate (PHV) and polyhydroxyhexanoate (PHH), polylactide (PLA), and other aliphatic polyesters are biodegradable, whereas many aromatic polyesters are highly impervious to microbial assault (Singh and Ruj 2015). Fig. S3 of the supplementary data shows an overview of the biodegradation of plastics.

Oxo-degradable plastics, one of the major classes of bioplastics, pose challenges because they break rapidly into microplastics when conditions (sunlight and oxygen) are favorable (Kubowicz and Booth 2017). The behavior of specific polymers interrupts their degradation into monomers, making microbial activity ineffective for non-hydrolyzable manufactured polymers, since the activity of the microorganisms responsible for degradation differs with environmental conditions (Ali et al. 2021). Other challenges include the energy consumed in recycling and the time required for degradation of the generated microplastics, along with socioeconomic challenges such as greater time and capital investment and a lack of resources (Kubowicz and Booth 2017). The collection and separation of bio-PW and a lack of effective policy contribute further barriers related to bio-based polymers and their recycling.

### Techno-economic feasibility of different recycling techniques

A techno-economic feasibility study provides a medium to analyze the utilization (raw materials, resources, energy, etc.) and end-of-life trail for different recovery pathways for the conversion of PW, using qualitative and quantitative approaches to the technical and financial aspects (Briassoulis et al. 2021a). The association between the technical and economic prospects of reprocessing technologies and the market for the related products tends to have a compelling impact on the formation of policies to reduce PW. Hence, techno-economic feasibility studies are essential for the effective management of PW.
The disparity in melting points and treatment technologies constitutes the major challenge for the recycling of mixed/multilayered plastic packaging waste, affecting the quality of the recycled product (Larrain et al. 2021). Table 4 shows different techno-economic feasibility parameters for the recycling technologies. Although a techno-economic feasibility study facilitates understanding, it remains inadequate in terms of sustainability. This gap is addressed by Techno-Economic Sustainability Analysis (TESA), which studies alternative methods for feedstock alteration, common environmental criteria (such as mass-recovery efficiency, the impact of additives, and emissions from the recycling facility), and pathways for the recycling and end-of-life of plastic products (Briassoulis et al. 2021b).

**Table 4 Techno-economic feasibility parameters for recycling technologies (Briassoulis et al. 2021a; CSE 2021; ElQuliti 2016; Fivga and Dimitriou 2018; Ghodrat et al. 2019; Larrain et al. 2021; NITI Aayog-UNDP 2021; Singh and Ruj 2015; Volk et al. 2021)**

| Category | Feasibility parameters | Mechanical | Chemical | Biological (for bioplastics) |
|---|---|---|---|---|
| Technological | Type of polymer | PET, HDPE, LDPE | PET, PP, PVC, PE, PS, laminated plastics, low-quality mixed plastics | Bio-PET, bio-PE, bio-PP, etc. |
| | Energy requirements | 300–500 kW/month for 30–50 tonnes/month | 1200–1500 kW for 80–100 kg PW/hour (depends on type of technology and polymer type) | 40–1500 TJ (terajoule) |
| | Temperature requirement | 100–250 °C | Pyrolysis: 300–900 °C; plasma pyrolysis: 1730–9730 °C; gasification: 500–1300 °C | 130–150 °C |
| | Biodegradability | Non-biodegradable | Non-biodegradable | Mostly biodegradable (PHA, PHV, PHH, PLA) |
| | Raw materials cost | Rs. 6–40/kg | Rs. 6–40/kg | Rs. 10–30/kg |
| Economical | Quality of processed materials | Depends on polymer type | Depends on type of technology and polymer type | High-quality compostable bio-polymer |
| | Cost of recyclates | Rs. 20–150/kg (depends on type of polymer and quality of recycled products) | Rs. 20–40/l (diesel/fuel) | Oxo-degradable plastics: Rs. 90–120/kg; biodegradable films/bags: Rs. 400–500/kg |
| | Recycling facilities in India (units) | 7000–10,000 | 15–25 | 5–10 |
| | Cost requirements (operating and capital costs) | 50–60 lakhs/annum | 50–65 lakhs for a 1 TPD (tonnes per day) plant | 1–2 crores/annum |
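For a rough feel of what the cost entries in Table 4 imply per tonne, the sketch below combines them under clearly labeled assumptions: that the quoted mechanical-recycling cost applies at the 30–50 tonnes/month scale listed in the same table, and that a 1 TPD chemical plant operates about 300 days per year. This is an illustrative estimate only, not a result from the cited studies.

```python
# Rough per-tonne costs implied by Table 4, under labeled assumptions.
LAKH = 1e5  # 1 lakh = 100,000 rupees

# Mechanical: Rs. 50-60 lakh/annum, ASSUMED to apply at the
# 30-50 tonnes/month scale quoted in the same table.
mech_cost = (50 * LAKH, 60 * LAKH)     # Rs per annum
mech_tpy = (30 * 12, 50 * 12)          # tonnes per year

# Chemical: Rs. 50-65 lakh for a 1 TPD plant, ASSUMING ~300
# operating days per year in its first year.
chem_cost = (50 * LAKH, 65 * LAKH)
chem_tpy = 1 * 300

print(f"mechanical: Rs {mech_cost[0] / mech_tpy[1]:,.0f}"
      f"-{mech_cost[1] / mech_tpy[0]:,.0f} per tonne")
print(f"chemical:   Rs {chem_cost[0] / chem_tpy:,.0f}"
      f"-{chem_cost[1] / chem_tpy:,.0f} per tonne (first year)")
```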
### Utilization of PW and recycled products in India and contribution of major players toward plastic sustainability

Post-consumer PW can be utilized after recycling to produce several products, such as roads, inputs for cement kilns, pavement blocks, tiles, bricks, boards, and clothes. Owing to its good binding properties when in a high-temperature molten state, PW can be utilized in road laying (Rokade 2012). Mixing PP and LDPE into bituminous concrete significantly increases the durability and fatigue resistance of roads (Bhattacharya et al. 2018). Various industries in different parts of the country utilize PP, HDPE, and LDPE waste plastics to produce reprocessed granules, which are then used in the production of chairs, benches, dustbins, flowerpots, plastic pellets, mobile stands, etc. A few informal recyclers produce eco-friendly t-shirts and napkins from PET waste bottles, whereas some recyclers convert PW into office accessories, furniture, and decorative garden items. Recycle India Hyderabad, in 2015, built houses, bus-stop shelters, and water tanks with PW bottles. Further, under this initiative, thousands of chips packets were woven into ropes, tied to metal frames, and used to create dining tables. Shayna Ecounified Ltd., Delhi, with the CSIR-National Physical Laboratory, Delhi, converted 340 tonnes of HDPE, LDPE, and PP waste plastics into 11 lakh tiles and has commercialized them to other cities, such as Hyderabad, and to companies such as L'Oréal International and Tata Motors. Further, a few recyclers convert PW such as milk pouches, oil containers, shower curtains, and household plastics into poly-fuel (a mixture of diesel, petrol, etc.), while others collect PET waste and recycle it into clothes, automotive parts, battery cases, cans, carpets, etc. Several other non-government organizations (NGOs), companies, and start-ups are involved in the recycling of PW and its conversion into different types of products after post-consumer use.

Using shredded PW, in 2015–16 the National Rural Road Development Agency laid around 7,500 km of roads in India. In 2002, Jambulingam Street in Chennai was constructed as the first plastic road in India (TERI 2018). Plastic fibers can replace common steel fibers for reinforcement, and fire-retardant composites with a wide scope of applications could be developed by blending recycled plastics with fly ash (TERI 2020). HDPE, PVC, LDPE, PP, and PS have yielded conflicting performance measures, which calls for further investigation into pavement performance, methods of improving compatibilization between plastic and asphalt, and the economic and environmental implications of the process.

To reduce packaging costs and address the rising issues related to PW and packaging, FMCG (fast-moving consumer goods) industries have teamed up with the Packaging Association for Clean Environment (PACE) and have primarily emphasized immediate benefits, including reductions in size and resource consumption; these changes have promoted the use of flexible packaging and pouches over rigid packaging forms. Major FMCG companies like Hindustan Unilever (HUL), Nestlé, and P&G have pledged to halve their use of virgin plastics in packaging by the year 2025 (PRI 2021). To promote the utilization of recycled plastics, HUL has incorporated recycled PET and recycled HDPE in the manufacture of personal care products (Condillac and Laul 2020). Other companies, like L'Oréal and Henkel, respectively eliminated PVC in 2018 (along with reducing the use of cellophane to 5.5% in 2019) and reduced the use of carbon-black packaging to make carbon-free toilet cleaners (PRI 2021). Beverage companies like PepsiCo, Coca-Cola India, and Bisleri, which use large quantities of PET bottles, have collaborated with several recyclers to upcycle PW products into new recycled utilities such as clothes and bags (Condillac and Laul 2020). Similarly, other companies like Marico and Dabur are also actively involved in reducing the use of virgin plastics in their packaging and in implementing recycling initiatives: Marico, in collaboration with Big Bazaar, provides incentives to customers for dropping their used plastic bottles at stores, and Dabur is competing to become among the first Indian FMCG companies to be plastic-free (Condillac and Laul 2020).
Apart from the initiatives taken by various FMCG companies, considerable effort is also going into innovation toward plastic-free packaging materials. Manjushree Technopack (Bengaluru, India) launched its first plant for the production of post-consumer recycled polymer, supplying up to 6000 metric tonnes/year to these industries. In addition, Packmile, a packaging company, is producing a no-plastic alternative, kraft paper (which is biodegradable and recyclable), for Amazon India (Condillac and Laul 2020).

### Role of digitization in PW recycling

As the amount of waste increases with each successive year, technology-driven methods can be established for communities to reduce, reuse, and recycle PW in an eco-friendly manner. In light of this, Recykal (in the south Indian city of Hyderabad), a digital technology firm, has developed an end-to-end, cloud-based, fully automated digital solution for efficient waste management by tracking waste collection and promoting recycling of non-biodegradables. Its services assist in the formation of a cross-value-chain coalition and the connection of various stakeholders, such as waste generators (commercial and domestic users), waste collectors, and recyclers, assuring transactions between the organizations with 100% transparency and accessibility (Bhadra and Mishra 2021). The quantities of waste received per day have risen from 20–30 kg in the early months to over 10,000–15,000 kg recently, with incentives offered based on the quality of recycled products (Bhadra and Mishra 2021). An Android-based application proposed and developed by Singhal et al. (2021) enables efficient collection through pick-up and drop facilities incorporated in the software; segregation methods and options for recycling different types of plastics are also suggested, and in return users are rewarded with e-coupons (Singhal et al. 2021).

A variety of techniques have been applied to improve plastic recycling, and blockchain is one of them; it holds promise for enhancing plastic recycling and the circular economy (CE). A distributed ledger, or blockchain, consists of immutable ordered blocks, which makes it an excellent approach for recording all customer transactions under the same technology (Khadke et al. 2021). One such approach is the introduction of Swachhcoin for the management of household and industrial waste and its conversion into usable high-value recoverable goods, such as paper, steel, wood, metals, and electricity, with efficient and environmentally friendly technologies (Gopalakrishnan and Ramaguru 2019). Swachhcoin is a Decentralized Autonomous Organization (DAO), controlled via blockchain networks, that utilizes a combination of techniques such as multi-sensor-driven AI to establish an incremental and iterative chain; the chain relies on information transferred between multiple ecosystem players, analyzes these inputs, and offers recommendations based on descriptive algorithms, eventually making the system entirely self-contained, economical, and profitable (Gopalakrishnan and Ramaguru 2019). The purpose of AI in this multi-sensor infrastructure is to limit unpredictability and facilitate efficient and reliable separation by training the system to identify and distinguish plastics appropriately (Chidepatil et al. 2020).
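To make the "immutable ordered blocks" idea concrete, the sketch below hash-chains records of PW batches, in the spirit of the traceability systems described above. It is a minimal illustration only: the record fields and function names are invented for the example and do not describe Recykal's or Swachhcoin's actual implementations.

```python
import hashlib
import json

def block_hash(record: dict, prev_hash: str) -> str:
    """Hash a record together with the previous block's hash."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def append_block(chain: list, record: dict) -> None:
    prev = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"record": record, "prev": prev,
                  "hash": block_hash(record, prev)})

def verify(chain: list) -> bool:
    """Recompute every hash; any tampering breaks the chain."""
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev or \
           block["hash"] != block_hash(block["record"], prev):
            return False
        prev = block["hash"]
    return True

# Each block records one hand-off of a PW batch along the value chain.
chain: list = []
append_block(chain, {"batch": "PW-001", "kg": 120, "holder": "collector"})
append_block(chain, {"batch": "PW-001", "kg": 118, "holder": "aggregator"})
append_block(chain, {"batch": "PW-001", "kg": 115, "holder": "recycler"})
print(verify(chain))                 # True

chain[0]["record"]["kg"] = 999       # tamper with an early record
print(verify(chain))                 # False
```

Because each block's hash covers the previous block's hash, a quietly edited quantity anywhere upstream is detectable by anyone who replays the chain, which is the tamper-evidence property these waste-traceability proposals rely on.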
Most businesses favor blockchain technology because of its decentralized architecture and low trading costs, along with the associated benefits of accessibility, availability, and tamper-proof structures (Khadke et al. 2021; Wong et al. 2021).

### Discussion

India is a major player in global plastic production and manufacturing. Technology, current infrastructure, and upcoming strategies of the Indian government are combined here to provide detailed suggestions for policymakers and researchers working toward a circular economy. The most important barrier in Indian PW management is the lack of source segregation of waste. As in many other countries, mechanical recycling is the leading recycling route for India's rigid plastics. The influence of thermomechanical deterioration should be avoided to obtain high-quality recycled material with acceptable characteristics. The development of advanced quality measurement techniques, such as nondestructive, cost-effective methods to assess chemical structure and mechanical performance, could be key to overcoming the obstructions. For instance, the performance of MR can be partially improved through simple packaging design improvements, such as the use of a single polymer instead of a multilayer structure; furthermore, PS and PVC could be replaced with PP in the packaging film market. There are also issues with depolymerization selectivity and activity, and trade-offs between capability and performance, that may need to be addressed before these methods achieve wide applicability. Based on our assessments, Indian policymakers should consider PET, polyamide 6 (PA 6), thermosetting resins, multilayer plastic packaging, PE, PS, PP, and fiber-reinforced composites for chemical recycling.

As chemical recycling is innovation-intensive, assessing economic feasibility is the main challenge for developing countries like India. Overall, PUR, nylon, and PET appear most competitive for chemical recycling. The more problematic mixed waste streams from multilayer packaging could be better suited to pyrolysis, along with PE, PP, PS, PTFE (polytetrafluoroethylene), PA, and PMMA (poly(methyl methacrylate)). Substantial investment is required for hydrocracking, which can deal with mixed plastics. Better guidance on the correct chemical recycling technology for each Indian PW stream may require technology readiness level (TRL) assessments, as proposed by Solis and Silveira (2020), which in turn require an increased number of projects and more data on (chemical) process optimization. Compared with conventional fossil-fuel energy sources, PE, PP, and PS are the three main polymers with higher calorific value, making them suitable for energy production. There are some challenges with this technology, however, such as the identification of optimal biodiesel product properties, which can be addressed using techniques such as LCA (life cycle assessment) and energy-based analysis. As the practical module of the Indian PW management rules explicitly shows the route from waste to oil production, this may indicate a national focus on this technology in the future, given that chemical recycling currently accounts for only 0.83% (as shown in Fig. 3b) of all recycling technologies. Although a relatively high cost is associated with bio-polymers at present, production costs are expected to fall in the coming years owing to economies of scale.
There are already numerous bioplastic food packaging materials on the market. Since food packaging constitutes a large portion of PW in India, a significant impact could be made if the country switched to more sustainable bio-based polymers. In India, the J&K Agro Industries Development Corporation Ltd, in collaboration with Earthsoul, has introduced the first bioplastic product manufacturing facility, with a 960 tonnes-per-year production capacity, whereas Truegreen (Ahmedabad) can manufacture 5000 tonnes per year. Some of the other major manufacturing plants in India are Biotech bags (Tamil Nadu), Ravi Industries (Maharashtra), and Ecolife (Chennai). Recently, a plant-based bio-polymer was introduced by an Indian company, Hi-Tech International (Ludhiana), to replace single-use and multi-use plastic products such as cups, bottles, and straws. This is reported to be India's only compostable plastic: plastics produced from this bio-polymer begin to degrade within 3–4 months and can disintegrate completely after 6 months, with the biodegradable plastic converted to carbon dioxide and the remaining constituents transformed into water and biomass (Chowdhary 2021). However, there are several challenges associated with this technology. Improvements are required in sorting bioplastic from other PW types to avoid waste-stream contamination, and anaerobic digestion parameters need to be optimized to ensure the complete degradation of these materials. From the Indian perspective, feedstock type, the corresponding infrastructure availability, and the interactions between sustainability domains are critical policymaking issues, as most of the recycling sector is operated by informal-sector workers. Commercialization of laboratory-based pyrolysis and gasification of bioplastic streams should be pursued. Owing to contaminated collection, recyclability in other PW streams is limited, which should be considered as part of bio-based PW management. Though India recycles 60% of the total waste generated and its recycling methods are quite effective in addressing the problem of increasing PW, there are still major challenges and barriers; for more efficient management of all the PW produced, stakeholders need to understand and tackle the challenges involved in curbing plastic pollution in the country. Different recycling technologies have their respective challenges and barriers (technological and social), which need to be addressed, as set out in Table S2 of the supplementary data.

Recycled plastics, and the products made from them, are often more expensive than virgin plastics and therefore struggle to compete for their place in the market. The reason for this is the easy availability of raw materials (waste from the petroleum industry) for the production of virgin plastics. Moreover, even though 60% of PW is reported as recycled, a massive amount of this waste is found littered and unrecycled in the environment, which contradicts the reported recycling percentage; relevant and accurate data are lacking.
Furthermore, the Goods and Services Tax (GST) plays a vital role in market linkages between recycled and virgin products: because the availability of recycled products is sporadic, the revenue or business model for these products tends to collapse, affecting recyclers when PW is exported, even though GST rates were decreased from 18% to 5% in 2017 (CSE 2021). The increased input costs due to GST and customs taxes are being transferred to secondary waste collectors by lowering the price paid for recycled plastics. For instance, PET bottles fetched Rs. 20/kg before GST, which decreased to Rs. 12/kg after its imposition; the price of milk packets fell from Rs. 12/kg to Rs. 8/kg; and, similarly, the cost of HDPE dropped by 30% post-GST (CSE 2021). With the introduction of GST into the plastic value and supply chain, the informal sector is facing huge losses owing to the availability of scrap at cheaper cost. The current GST structure has therefore affected the most fragile and vulnerable section of the plastic supply value chain.
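The price movements quoted above can be checked with simple percentage arithmetic; the sketch below (illustrative only, using the CSE 2021 figures quoted in the text) computes the post-GST declines.

```python
# Percentage declines in scrap prices after GST, from the figures
# quoted above (CSE 2021), in Rs/kg (before, after).
prices = {"PET bottles": (20, 12), "milk packets": (12, 8)}

for item, (before, after) in prices.items():
    drop = (before - after) / before
    print(f"{item}: Rs {before}/kg -> Rs {after}/kg ({drop:.0%} drop)")
# PET falls 40% and milk packets ~33%, comparable in scale to the
# ~30% post-GST drop reported for HDPE.
```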
Although numerous studies have been carried out on different recycling techniques for various types of polymers, very limited research is available on the techno-economic feasibility of these technologies; this therefore offers wide scope for relevant research in India. Beyond such feasibility studies, there is a broad range of opportunities to explore and analyze these technologies in India with respect to sustainability (involving environmental and social parameters) through TESA.

Several published reports claim that India recycles 60% of the total PW generated annually, the highest rate among comparable countries such as Germany and Austria, which recycle more than 50%. India's recycling is mostly carried out by the informal sector but has not been documented accurately by the governing bodies of the country. Moreover, the reported recycling rate of 60% varies between sources, which creates disparity and ambiguity in the data. As depicted in Fig. 3b, India recycles 94.17% of waste plastics through mechanical recycling, while 0.93% goes to chemical or feedstock recycling and 5% to energy recovery and alternative uses such as making roads, boards, and tiles. Mechanical recycling is the most popular technique because of its ease of operation and low cost, whereas feedstock or chemical recycling involves high financing and operational costs and lacks scalable technology. Landfill dumping is sometimes favored because of improper segregation of waste and its ease of operation for agencies employed by ULBs. Besides mechanical and chemical recycling, bioplastics are an emerging alternative for PW in India, but they lag owing to improper legislation, high cost, and lack of awareness about segregating these types of plastics; this could be facilitated by introducing eco-labeling and a proper coding system. Though these recycling technologies are widely used for reprocessing PW, the elimination of plastics from the environment remains a far-fetched dream, as recycling merely adds a few more years to the end-of-life of the plastics. Therefore, there is a need for affirmative legislation and strict guidelines for the use of recycled products and for the exploration of alternatives in different sectors. Active involvement of the informal sector and inclusive growth can be ensured, as its livelihood depends on PW.

### Conclusion

The circular economy is a regenerative model which requires the participation of accountable stakeholders, and there should be continuous interaction among stakeholders to share current practices for dealing with PW as part of the plastic economy. Reporting on PW generation from individual states was found to be incomplete and indistinct. Information exchange via technology applications should eventually become an integral part of the PW management value chain. Generation estimation is thus an essential task for setting targets for resource recovery and recycling, which connects the "global commitment" element of the circular plastic economy with waste minimization. As part of the global commitment to "reducing, circulating and innovating" under the "plastic pact," a national target could be set and a mechanism developed. In setting a national target, the "dialogue mechanism" would further invigorate inter- and multidisciplinary research and policy directions. Consumer behavior is an essential consideration, as end-users share responsibility equally with producers in a circular economy. Waste management is a complex multi-actor operational system built on knowledge, technologies, and experience from a range of sectors, including the informal sector. Indigenous innovation and research at a regional scale, such as in Gujarat, Andhra Pradesh, and Kerala, has set an example of a circular plastic economy and would help in developing further regional circular plastic economies. Efficient recycling of mixed PW is an emerging challenge in the Indian recycling sector. As plastic downcycling and recycling are energy-intensive processes, an energy supply from renewable sources such as solar and wind can potentially reduce the associated CO2 emissions. The recovery and recycling of substantial volumes of PW need emerging technologies and specialized equipment, which in turn necessitate considerable capital investment. The informal sector, though prominent in waste management, may be deprived of recognition, technology, and scientific understanding, but its skills, knowledge, and experience can be utilized in the value chain of plastic flow. There is also a need to formalize the informal sector with proper incentivization and other benefits, as it plays a major role in plastic flow in India. Additionally, there are no policies or rules for the treatment of the residues resulting from recycling technologies and their production units; this needs to be addressed, as the quantity of waste residues depends on the quantum of waste and the technique employed. Universities, research organizations, polymer manufacturers and, most importantly, policymakers should collaborate on renewable energy integration and process optimization.

Further detailed assessment using LCA should be performed to identify optimized solutions. Extended producer responsibility (EPR) and other policy mechanisms will be integrated sooner or later; however, one of the fundamental aspects is being part of the circular economy. Although fragmented, the informal sector is believed to be very innovative and could also be technologically enabled. New app development and PW collection campaigns through digitalization could increase non-contaminated sources of PW.
Specific manufacturing sectors such as flexible packaging, automobiles, electrical, and electronics should look at the plastic problem through the lens of resource efficiency and climate change (CO2 and GHGs). These sectors should develop innovative solutions so that recycled plastics can be re-circulated within the sectors where they will be the leading consumers.

Though a great deal of data is available on the different types of plastic recycling and the state-wise flow of plastics, there is no proper information on the different plastic polymers and their respective flows in the value chain in different states/UTs. There is a need to strengthen the different recycling technologies for different polymers; for this purpose, multi-sensor-based AI and blockchain technology can prove effective in the segregation and recycling of PW in a more environmentally friendly manner and should be implemented in all parts of the country for efficient PW management. Furthermore, the amount of PW can only be controlled by replacing new virgin plastics and existing plastics with suitable recycled plastics, along with citizen sensitization. Overall, for a circular plastic economy in India, there is a need for a technology-enabled, accountable, quality-assured collaborative supply chain of virgin and recycled material.

**Supplementary Information** The online version contains supplementary material available at https://doi.org/10.1007/s13762-022-04079-x.

**Acknowledgments** The authors wish to thank all who assisted in conducting this work.

**Author contributions** All the authors contributed to the study conception and design. Conceptualization and writing of the draft were done by Riya Shanker, Dr. Debishree Khan, Dr. Rumana Hossain, Anirban Ghose, and Md Tasbirul Islam. The draft was revised and edited by Katherine Locock with the supervision of Dr. Heinz Schandl, Dr. Rita Dhodapkar, and Dr. Veena Sahajwalla. All the authors have read and approved the final manuscript.

**Funding** The authors acknowledge project funding for "India – Australia Industry and Research Collaboration for Reducing Plastic Waste" from CSIRO, Australia, through a contract agreement.

#### Declarations

**Conflict of interest** The authors declare that they have no conflict of interest.

**Ethical approval** No ethical approval was required.

### References

Al-Salem SM, Antelava A, Constantinou A, Manos G, Dutta A (2017) A review on thermal and catalytic pyrolysis of plastic solid waste (PSW). J Environ Manag 197:177–198. https://doi.org/10.1016/j.jenvman.2017.03.084

Al-Salem SM, Lettieri P, Baeyens J (2010) The valorization of plastic solid waste (PSW) by primary to quaternary routes: from re-use to energy and chemicals. Prog Energy Combust Sci 36:103–129. https://doi.org/10.1016/j.pecs.2009.09.001

Ali SS, Elsamahy T, Al-Tohamy R, Zhu D, Mahmoud Y, Koutra E, Sun J (2021) PWs biodegradation: mechanisms, challenges and future prospects. Sci Total Environ. https://doi.org/10.1016/j.scitotenv.2021.146590

Singh AP, Devi AS (2019) PW management: a review.
Retrieved from http://ijasrm.com/wp-content/uploads/2019/05/IJASRM_V4S5_1488_233_237.pdf

Anon (2020) Strategies for sustainable plastic packaging in India. FICCI & Accenture. Retrieved from https://ficci.in/spdocument/23348/FICCI-Accenture-Circular-Economy-Report1.pdf

Aryan Y, Yadav P, Samadder SR (2019) Life cycle assessment of the existing and proposed PW management options in India: a case study. J Clean Prod 211:1268–1283. https://doi.org/10.1016/j.jclepro.2018.11.236

Banyan Nation (2020) Recycling and monitoring PW flow in the city. ASCI J Manag 49:73–76

BASF (2021) Chemical recycling of PW. Retrieved from https://www.basf.com/in/en/who-we-are/sustainability/we-drive-sustainable-solutions/circular-economy/chemcycling.html

Bhadra U, Mishra PP (2021) Extended producer responsibility in India: evidence from Recykal, Hyderabad. J Urban Manag 56:280. https://doi.org/10.1016/j.jum.2021.07.003

Bhagat S, Bhardawaj A, Mittal P, Chandak P, Akhtar M, Sharma P (2016) Evaluating PW disposal options in Delhi using multicriteria decision analysis. Inst Integr Omics Appl Biotechnol 7:25–35

Bhandari NL, Bhattarai S, Bhandari G, Subedi S, Dhakal KN (2021) A review on current practices of plastics waste management and future prospects. J Inst Sci Technol 26(1):107–118. https://doi.org/10.3126/jist.v26i1.37837

Bhattacharya RS, Chandrasekhar KMV, Deepthi PR, Khan A (2018) Challenges and opportunities: plastic management in India. https://www.teriin.org/sites/default/files/2018-06/plastic-waste-management_0.pdf

Blank LM, Narancic T, Mampel J, Tiso T, O'Connor K (2020) Biotechnological upcycling of PW and other non-conventional feedstocks in a circular economy. Curr Opin Biotechnol 62:212–219. https://doi.org/10.1016/j.copbio.2019.11.011

Briassoulis D, Pikasi A, Hiskakis M (2021a) Recirculation potential of post-consumer/industrial bio-based plastics through mechanical recycling: techno-economic sustainability criteria and indicators. Polym Degrad Stab 183:109217

Briassoulis D, Pikasi A, Hiskakis M (2021b) Organic recycling of post-consumer/industrial bio-based plastics through industrial aerobic composting and anaerobic digestion: techno-economic sustainability criteria and indicators. Polym Degrad Stab 190:109642. https://doi.org/10.1016/j.polymdegradstab.2021.109642
Polym Degrad Stab 190:109642.\n[https://doi.org/10.1016/j.polymdegradstab.2021.109642](https://doi.org/10.1016/j.polymdegradstab.2021.109642)\n\nCabanes A, Valdés FJ, Fullana A (2020) A review on VOCs from\n[recycled plastics. Sustain Mater Technol 25:e00179. https://doi.](https://doi.org/10.1016/j.susmat.2020.e00179)\n[org/10.1016/j.susmat.2020.e00179](https://doi.org/10.1016/j.susmat.2020.e00179)\n\nCentral Pollution Control Board (2019) Annual report for the year\n2018–2019 on implementation of PW management Rules.\n[Retrieved from https://cpcb.nic.in/uploads/plasticwaste/Annual_](https://cpcb.nic.in/uploads/plasticwaste/Annual_Report_2018-19_PWM.pdf)\n[Report_2018-19_PWM.pdf](https://cpcb.nic.in/uploads/plasticwaste/Annual_Report_2018-19_PWM.pdf)\n\nChidepatil A, Bindra P, Kulkarni D, Qazi M, Kshirsagar M, Sankaran\nK (2020) From trash to cash: how blockchain and multi-sensordriven artificial intelligence can transform circular economy of\n[PW? Adm Sci 10(2):23. https://doi.org/10.3390/admsci10020023](https://doi.org/10.3390/admsci10020023)\n\nCondillac R, Laul C (2020) Megatrend: reducing plastic usage across\n[FMCG value chain. Retrieved from http://bwhealthcareworld.](http://bwhealthcareworld.businessworld.in/article/Megatrend-Reducing-Plastic-Usage-across-FMCG-Value-Chain/24-07-2020-300857/)\n[businessworld.in/article/Megatrend-Reducing-Plastic-Usage-](http://bwhealthcareworld.businessworld.in/article/Megatrend-Reducing-Plastic-Usage-across-FMCG-Value-Chain/24-07-2020-300857/)\n[across-FMCG-Value-Chain/24-07-2020-300857/](http://bwhealthcareworld.businessworld.in/article/Megatrend-Reducing-Plastic-Usage-across-FMCG-Value-Chain/24-07-2020-300857/)\n\n[CPCB Annual Report 2019–2020 (2021). Retrieved from https://cpcb.](https://cpcb.nic.in/uploads/plasticwaste/Annual_Report_2019-20_PWM.pdf)\n\n[nic.in/uploads/plasticwaste/Annual_Report_2019-20_PWM.pdf](https://cpcb.nic.in/uploads/plasticwaste/Annual_Report_2019-20_PWM.pdf)\n\nCentre for Science and Environment (CSE) (2019a). The plastics\n[factsheet 1. https://cdn.cseindia.org/attachments/0.57139300_](https://cdn.cseindia.org/attachments/0.57139300_1570431848_Factsheet1.pdf)\n[1570431848_Factsheet1.pdf](https://cdn.cseindia.org/attachments/0.57139300_1570431848_Factsheet1.pdf)\n\n\n-----\n\nCentre for Science and Environment (CSE) (2019b) The plastics\n[factsheet 3. https://cdn.cseindia.org/attachments/0.97245800_](https://cdn.cseindia.org/attachments/0.97245800_1570432310_factsheet3.pdf)\n[1570432310_factsheet3.pdf](https://cdn.cseindia.org/attachments/0.97245800_1570432310_factsheet3.pdf)\n\nCSE (2020) Managing PW in India: challenges and Agenda. Retrieved\n[from https://www.cseindia.org/content/downloadreports/10352](https://www.cseindia.org/content/downloadreports/10352)\n\n[CSE (2021) Plastic recycling decoded. Retrieved from https://www.](https://www.cseindia.org/plastic-recycling-decoded-10885)\n\n[cseindia.org/plastic-recycling-decoded-10885.](https://www.cseindia.org/plastic-recycling-decoded-10885)\n[CSIR-IIP (2018) https://www.iip.res.in/waste-plastics-to-fuel-and-petro](https://www.iip.res.in/waste-plastics-to-fuel-and-petrochemicals/)\n\n[chemicals/](https://www.iip.res.in/waste-plastics-to-fuel-and-petrochemicals/)\n\nDey A, Dhumal CV, Sengupta P, Kumar A, Pramanik NK, Alam T\n(2020) Challenges and possible solutions to mitigate the problems\nof single-use plastics used for packaging food items: a review. J\n[Food Sci Technol. 
https://doi.org/10.1007/s13197-020-04885-6](https://doi.org/10.1007/s13197-020-04885-6)\n\nDogu O, Pelucchi M, Van de Vijver R, Van Steenberge PHM, D’Hooge\nDR, Cuoci A, Mehl M, Frassoldati A, Faravelli T, Van Geem\nKM (2021) The chemistry of chemical recycling of solid PW via\npyrolysis and gasification: State-of-the-art, challenges, and future\n[directions. Prog Energy Combust Sci 84:100901. https://doi.org/](https://doi.org/10.1016/j.pecs.2020.100901)\n[10.1016/j.pecs.2020.100901](https://doi.org/10.1016/j.pecs.2020.100901)\n\nEllen MacArthur Foundation (2017) The new plastics economy:\nrethinking the future of plastics and catalysing actions. Retrieved\n[from https://ellenmacarthurfoundation.org/the-new-plastics-econo](https://ellenmacarthurfoundation.org/the-new-plastics-economy-rethinking-the-future-of-plastics-and-catalysing)\n[my-rethinking-the-future-of-plastics-and-catalysing](https://ellenmacarthurfoundation.org/the-new-plastics-economy-rethinking-the-future-of-plastics-and-catalysing)\n\nElQuliti SAH (2016) Techno-economic feasibility study of waste-toenergy using pyrolysis technology for Jeddah municipal solid\nwaste. Int J Power Eng Energy 7(1):622–635\n[ENF (2021) Plastic recycling plants in India. https://www.enfrecycli](https://www.enfrecycling.com/directory/plastic-plant/India)\n\n[ng.com/directory/plastic-plant/India](https://www.enfrecycling.com/directory/plastic-plant/India)\n\nPrinciples for Responsible Investments (PRI) (2021) Engaging on\nplastic packaging: Fast Moving Consumer Goods. Retrieved\n[from https://www.unpri.org/plastics/engaging-on-plastic-packa](https://www.unpri.org/plastics/engaging-on-plastic-packaging-fast-moving-consumer-goods/7919.article)\n[ging-fast-moving-consumer-goods/7919.article](https://www.unpri.org/plastics/engaging-on-plastic-packaging-fast-moving-consumer-goods/7919.article)\n\nFaraca G, Astrup T (2019) PW from recycling centers: characterization\nand evaluation of plastic recyclability. Waste Manag 95:388–398.\n[https://doi.org/10.1016/j.wasman.2019.06.038](https://doi.org/10.1016/j.wasman.2019.06.038)\n\nFICCI (2016) Indian plastic industry: challenges & opportunities.\n[Retrieved from https://www.slideshare.net/TSMG-Chemicals/](https://www.slideshare.net/TSMG-Chemicals/indian-plastic-industry-challenges-opportunities)\n[indian-plastic-industry-challenges-opportunities](https://www.slideshare.net/TSMG-Chemicals/indian-plastic-industry-challenges-opportunities)\n\nFivga A, Dimitriou I (2018) Pyrolysis of PW for production of heavy\nfuel substitute: a techno-economic assessment. Energy 149:865–\n[874. https://doi.org/10.1016/j.energy.2018.02.094](https://doi.org/10.1016/j.energy.2018.02.094)\n\nGhodrat M, Alonso JA, Hagare D, Yang R, Samali B (2019) Economic\nfeasibility of energy recovery from waste plastic using pyrolysis\ntechnology: an Australian perspective. Int J Environ Sci Technol\n[16(7):3721–3734. https://doi.org/10.1007/s13762-019-02293-8](https://doi.org/10.1007/s13762-019-02293-8)\n\nGoa State Pollution Control Board (2020) Annual report for FY\n2019–2020 on implementation of PW (Management and Han[dling) Rules, 2016 as amended 2018. 
Retrieved from http://goasp](http://goaspcb.gov.in/Media/Default/uploads/Annual%20Report%20Plastic%202019-20.pdf)\n[cb.gov.in/Media/Default/uploads/Annual%20Report%20Plastic%](http://goaspcb.gov.in/Media/Default/uploads/Annual%20Report%20Plastic%202019-20.pdf)\n[202019-20.pdf](http://goaspcb.gov.in/Media/Default/uploads/Annual%20Report%20Plastic%202019-20.pdf)\n\nGopalakrishnan P, Ramaguru R (2019) Blockchain-based waste management. Int J Eng Adv Technol 8(5):2632–2635\nHaryana State Pollution Control Board (2020) Annual report under PW\n[management rule for the year 2019. Retrieved from https://hspcb.](https://hspcb.gov.in/content/plasticwaste/ARPWM19-20/ARPWM19-20.pdf)\n[gov.in/content/plasticwaste/ARPWM19-20/ARPWM19-20.pdf](https://hspcb.gov.in/content/plasticwaste/ARPWM19-20/ARPWM19-20.pdf)\n\nHeidenreich S, Foscolo PU (2015) New concepts in biomass gasifi[cation. Prog Energy Combust Sci 46:72–95. https://doi.org/10.](https://doi.org/10.1016/j.pecs.2014.06.002)\n[1016/j.pecs.2014.06.002](https://doi.org/10.1016/j.pecs.2014.06.002)\n\nIndian Plastics Industry Report, PlastIndia Foundation (2019).\n[Retrieved from https://www.plastindia.org/plastic-industry-sta-](https://www.plastindia.org/plastic-industry-status-report.php)\n[tus-report.php](https://www.plastindia.org/plastic-industry-status-report.php)\n\nJammu and Kashmir State Pollution Control Board (2018) Annual\n[report 2017–2018. Retrieved from http://jkspcb.nic.in/Write](http://jkspcb.nic.in/WriteReadData/userfiles/file/BMW/0.jpg)\n[ReadData/userfiles/file/BMW/0.jpg](http://jkspcb.nic.in/WriteReadData/userfiles/file/BMW/0.jpg)\n\nJyothsna TSS, Chakradhar B (2020) Current scenario of PW management in India: way forward in turning vision to reality. In:\n\n\nUrban mining and sustainable waste management. Springer,\nSingapore, pp 203–218\nKerala State Pollution Control Board (2020) Annual report 2019–\n[2020. Retrieved from https://www.keralapcb.nic.in/cmsadmin/](https://www.keralapcb.nic.in/cmsadmin/fileUploads/Annual%20Report%20PWM%20Rules%202019%20-%202020.pdf)\n[fileUploads/Annual%20Report%20PWM%20Rules%202019%](https://www.keralapcb.nic.in/cmsadmin/fileUploads/Annual%20Report%20PWM%20Rules%202019%20-%202020.pdf)\n[20-%202020.pdf](https://www.keralapcb.nic.in/cmsadmin/fileUploads/Annual%20Report%20PWM%20Rules%202019%20-%202020.pdf)\n\nKhadke S, Gupta P, Rachakunta S, Mahata C, Dawn S, Sharma M,\nDalapati GK (2021) Efficient plastic recycling and re-molding\ncircular economy using the technology of trust-blockchain. Sus[tainability 13(16):9142. https://doi.org/10.3390/su13169142](https://doi.org/10.3390/su13169142)\n\nKoshti R, Mehta L, Samarth N (2018) Biological recycling of\npolyethylene terephthalate: a mini-review. J Polym Environ\n[26:3520–3529. https://doi.org/10.1007/s10924-018-1214-7](https://doi.org/10.1007/s10924-018-1214-7)\n\nKubowicz S, Booth AM (2017) Biodegradability of plastics: challenges and misconceptions. [https://doi.org/10.1021/acs.est.](https://doi.org/10.1021/acs.est.7b04051)\n[7b04051](https://doi.org/10.1021/acs.est.7b04051)\n\nKumar A, Samadder SR, Kumar N, Singh C (2018) Estimation of the\ngeneration rate of different types of PWs and possible revenue\nrecovery from informal recycling. Waste Manag 79:781–790.\n[https://doi.org/10.1016/j.wasman.2018.08.045](https://doi.org/10.1016/j.wasman.2018.08.045)\n\nLarrain M, Van Passel S, Thomassen G, Van Gorp B, Nhu TT, Huysveld S, Billen P (2021) Techno-economic assessment of mechanical recycling of challenging post-consumer plastic packaging\n[waste. 
Resour Conserv Recycl 170:105607. https://doi.org/10.](https://doi.org/10.1016/j.resconrec.2021.105607)\n[1016/j.resconrec.2021.105607](https://doi.org/10.1016/j.resconrec.2021.105607)\n\nMaharashtra Pollution Control Board (2020) Annual report 2019–2020.\n[Retrieved from https://www.mpcb.gov.in/sites/default/files/plast](https://www.mpcb.gov.in/sites/default/files/plastic-waste/PWMAnnualreportMPCB201920.pdf)\n[ic-waste/PWMAnnualreportMPCB201920.pdf](https://www.mpcb.gov.in/sites/default/files/plastic-waste/PWMAnnualreportMPCB201920.pdf)\n\nMinistry of Housing & Urban Affairs (MOHUA) (2019) PW manage[ment issues, solutions & case studies. Retrieved from http://164.](http://164.100.228.143:8080/sbm/content/writereaddata/SBM%20Plastic%20Waste%20Book.pdf)\n[100.228.143:8080/sbm/content/writereaddata/SBM%20Plastic%](http://164.100.228.143:8080/sbm/content/writereaddata/SBM%20Plastic%20Waste%20Book.pdf)\n[20Waste%20Book.pdf](http://164.100.228.143:8080/sbm/content/writereaddata/SBM%20Plastic%20Waste%20Book.pdf)\n\nMohanty A, Borah RK, Fatrekar A, Krishnan S, Vernekar A (2021)\nStepping towards benign alternatives: sustainable conversion of\n[PW into valuable products. Chem Commun. https://doi.org/10.](https://doi.org/10.1039/D1CC03705F)\n[1039/D1CC03705F](https://doi.org/10.1039/D1CC03705F)\n\nNallathambi M, Prasad G, Samuel, S (2018) Life cycle inventories of\n[plastic recycling—India. Retrieved from https://www.ecoinvent.](https://www.ecoinvent.org/files/sectorial_report_sri_plastics_report.pdf)\n[org/files/sectorial_report_sri_plastics_report.pdf](https://www.ecoinvent.org/files/sectorial_report_sri_plastics_report.pdf)\n\nNandy B, Sharma G, Garg S, Kumari S, George T, Sunanda Y, Sinha B\n(2015) Recovery of consumer waste in India—a mass flow analysis for paper, plastic and glass and the contribution of households\nand the informal sector. Resour Conserv Recycl 101:167–181.\n[https://doi.org/10.1016/j.resconrec.2015.05.012](https://doi.org/10.1016/j.resconrec.2015.05.012)\n\nNeo ERK, Soo GCY, Tan DZL, Cady K, Tong KT, Low JSC (2021)\nLife cycle assessment of PW end-of-life for India and Indonesia.\n[Resour Conserv Recycl 174:105774. https://doi.org/10.1016/j.](https://doi.org/10.1016/j.resconrec.2021.105774)\n[resconrec.2021.105774](https://doi.org/10.1016/j.resconrec.2021.105774)\n\nNITI Aayog-UNDP Handbook on Sustainable Urban PW Management\n[(2021) Retrieved from https://www.niti.gov.in/sites/default/files/](https://www.niti.gov.in/sites/default/files/2021-10/Final_Handbook_PWM_10112021.pdf)\n[2021-10/Final_Handbook_PWM_10112021.pdf](https://www.niti.gov.in/sites/default/files/2021-10/Final_Handbook_PWM_10112021.pdf)\n\nPadgelwar S, Nandan A, Mishra AK (2021) PW management and\ncurrent scenario in India: a review. Int J Environ Anal Chem\n101(13):1894–1906. [https://doi.org/10.1080/03067319.2019.](https://doi.org/10.1080/03067319.2019.1686496)\n[1686496](https://doi.org/10.1080/03067319.2019.1686496)\n\nPayne J, McKeown P, Jones MD (2019) A circular economy approach\nto PW. Polym Degrad Stab 165:170–181. [https://doi.org/10.](https://doi.org/10.1016/j.polymdegradstab.2019.05.014)\n[1016/j.polymdegradstab.2019.05.014](https://doi.org/10.1016/j.polymdegradstab.2019.05.014)\n\nPlasticsEurope (2021) Recycling and energy recovery. 
Retrieved from\n\n[https://plasticseurope.org/#:~:text=Mechanical%20recycling%](https://plasticseurope.org/#:~:text=Mechanical%20recycling%20of%20plastics%20refers,chemical%20structure%20of%20the%20material.&text=It%20is%20currently%20the%20almost,99%25%20of%20the%20recycled%20quantities)\n[20of%20plastics%20refers,chemical%20structure%20of%20the%](https://plasticseurope.org/#:~:text=Mechanical%20recycling%20of%20plastics%20refers,chemical%20structure%20of%20the%20material.&text=It%20is%20currently%20the%20almost,99%25%20of%20the%20recycled%20quantities)\n[20material.&text=It%20is%20currently%20the%20almost,99%](https://plasticseurope.org/#:~:text=Mechanical%20recycling%20of%20plastics%20refers,chemical%20structure%20of%20the%20material.&text=It%20is%20currently%20the%20almost,99%25%20of%20the%20recycled%20quantities)\n[25%20of%20the%20recycled%20quantities.](https://plasticseurope.org/#:~:text=Mechanical%20recycling%20of%20plastics%20refers,chemical%20structure%20of%20the%20material.&text=It%20is%20currently%20the%20almost,99%25%20of%20the%20recycled%20quantities)\nPlastindia Foundation (2018) Report on The Indian Plastics Industry.\n[Retrieved from https://plastindia.org/pdf/Indian-Plastics-Indus](https://plastindia.org/pdf/Indian-Plastics-Industry-Report-2018-2.pdf)\n[try-Report-2018-2.pdf](https://plastindia.org/pdf/Indian-Plastics-Industry-Report-2018-2.pdf)\n\n## 1 3\n\n\n-----\n\nPunjab Pollution Control Board (2018) Annual report 2018. Retrieved\n[from https://ppcb.punjab.gov.in/Attachments/Plastic%20Waste/](https://ppcb.punjab.gov.in/Attachments/Plastic%20Waste/PlasticCPCB.pdf)\n[PlasticCPCB.pdf](https://ppcb.punjab.gov.in/Attachments/Plastic%20Waste/PlasticCPCB.pdf)\n\nRafey A, Siddiqui FZ (2021) A review of PW management in India—\n[challenges and opportunities. Int J Environ Anal Chem. https://](https://doi.org/10.1080/03067319.2021.1917560)\n[doi.org/10.1080/03067319.2021.1917560](https://doi.org/10.1080/03067319.2021.1917560)\n\nRagaert K, Delva L, Van Geem K (2017) Mechanical and chemical\n[recycling of solid PW. Waste Manag 69:24–58. https://doi.org/](https://doi.org/10.1016/j.wasman.2017.07.044)\n[10.1016/j.wasman.2017.07.044](https://doi.org/10.1016/j.wasman.2017.07.044)\n\nRokade S (2012) Use of waste plastic and waste rubber tyres in flexible highway pavements. In: International conference on future\nenvironment and energy, IPCBEE, vol 28\nSaebea D, Ruengrit P, Arpornwichanop A, Patcharavorachot Y (2020)\nGasification of PW for synthesis gas production. Energy Rep\n[6:202–207. https://doi.org/10.1016/j.egyr.2019.08.043](https://doi.org/10.1016/j.egyr.2019.08.043)\n\nSatapathy S (2017) An analysis of barriers for plastic recycling in the\nIndian plastic industry. Benchmark Int J 24(2):415–430\nSchandl H, King S, Walton A, Kaksonen AH, Tapsuwan S, Baynes\nTM (2020) National circular economy roadmap for plastics, glass,\npaper and tyres. Australia’s National Science Agency, CSIRO,\nAustralia\nSikdar S, Siddaiah A, Menezes PL (2020) Conversion of waste plastic\n[to oils for tribological applications. Lubricants 8(8):78. https://](https://doi.org/10.3390/lubricants8080078)\n[doi.org/10.3390/lubricants8080078](https://doi.org/10.3390/lubricants8080078)\n\nSingh RK, Ruj B (2015) PW management and disposal techniques[Indian scenario. Int J Plast Technol 19(2):211–226. 
https://doi.](https://doi.org/10.1007/s12588-015-9120-5)\n[org/10.1007/s12588-015-9120-5](https://doi.org/10.1007/s12588-015-9120-5)\n\nSinghal S, Singhal S, Neha, Jamal M (2021) Recognizing &automating the barriers of plastic waste management – collection and\nsegregation 8(4):775–779\nSolis M, Silveira S (2020) Technologies for chemical recycling of\nhousehold plastics—a technical review and TRL assessment.\n[Waste Manag 105:128–138. https://doi.org/10.1016/j.wasman.](https://doi.org/10.1016/j.wasman.2020.01.038)\n[2020.01.038](https://doi.org/10.1016/j.wasman.2020.01.038)\n\nChowdhary S (2021) Biopolymers: smart solution for solving the PW\n[problem. Retrieved from https://www.financialexpress.com/indus](https://www.financialexpress.com/industry/bio-polymers-smart-solution-for-solving-the-plastic-waste-problem/2267620/)\n[try/bio-polymers-smart-solution-for-solving-the-plastic-waste-](https://www.financialexpress.com/industry/bio-polymers-smart-solution-for-solving-the-plastic-waste-problem/2267620/)\n[problem/2267620/.](https://www.financialexpress.com/industry/bio-polymers-smart-solution-for-solving-the-plastic-waste-problem/2267620/)\nTamil Nadu Pollution Control Board (2020) Annual report on PW\n[management rules, 2016. Retrieved from https://tnpcb.gov.in/](https://tnpcb.gov.in/pdf_2019/AnnualRptPlasticwaste1920.pdf)\n[pdf_2019/AnnualRptPlasticwaste1920.pdf](https://tnpcb.gov.in/pdf_2019/AnnualRptPlasticwaste1920.pdf)\n\n## 1 3\n\n\nTelangana Pollution Control Board (2018) Annual report 2017–18.\n[Retrieved from https://tspcb.cgg.gov.in/CBIPMP/Plastic%20ann](https://tspcb.cgg.gov.in/CBIPMP/Plastic%20annual%20returns%202017-18.pdf)\n[ual%20returns%202017-18.pdf](https://tspcb.cgg.gov.in/CBIPMP/Plastic%20annual%20returns%202017-18.pdf)\n\nTERI (2020) PW management: turning challenges into opportunities.\n[Retrieved from https://www.teriin.org/sites/default/files/2020-12/](https://www.teriin.org/sites/default/files/2020-12/plastic-management_0.pdf)\n[plastic-management_0.pdf](https://www.teriin.org/sites/default/files/2020-12/plastic-management_0.pdf)\n\nTERI (2021) Circular Economy for plastics in India: A Roadmap.\n\n[https://www.teriin.org/sites/default/files/2021-12/Circular-Econo](https://www.teriin.org/sites/default/files/2021-12/Circular-Economy-Plastics-India-Roadmap.pdf)\n[my-Plastics-India-Roadmap.pdf](https://www.teriin.org/sites/default/files/2021-12/Circular-Economy-Plastics-India-Roadmap.pdf)\n\nTong Z, Ma G, Zhou D (2020) Simulating continuous counter-current\nleaching process for indirect mineral carbonation under microwave irradiation. J Solid Waste Technol Manag 46(1):123–131.\n[https://doi.org/10.5276/JSWTM/2020.123](https://doi.org/10.5276/JSWTM/2020.123)\n\nUttar Pradesh Pollution Control Board (2021) Annual report 2019–\n2020. Retrieved from [http://uppcb.com/pdf/Plastic-Annual_](http://uppcb.com/pdf/Plastic-Annual_090321.pdf)\n[090321.pdf](http://uppcb.com/pdf/Plastic-Annual_090321.pdf)\n\nUttarakhand Pollution Control Board (2019) Annual report 2018–2019.\nRetrieved from [https://ueppcb.uk.gov.in/files/annual_report_](https://ueppcb.uk.gov.in/files/annual_report_PWM.pdf)\n[PWM.pdf](https://ueppcb.uk.gov.in/files/annual_report_PWM.pdf)\n\nVolk R, Stallkamp C, Steins JJ, Yogish SP, Müller RC, Stapf D, Schultmann F (2021) Techno-economic assessment and comparison of\ndifferent plastic recycling pathways: a German case study. J Ind\n[Ecol. 
https://doi.org/10.1111/jiec.13145](https://doi.org/10.1111/jiec.13145)\n\nWBCSD (2017) Informal approaches towards a circular economy—\n[learning from the plastics recycling sector in India. https://www.](https://www.sustainable-recycling.org/wp-content/uploads/2017/01/WBCSD_2016_-InformalApproaches.pdf)\n[sustainable-recycling.org/wp-content/uploads/2017/01/WBCSD_](https://www.sustainable-recycling.org/wp-content/uploads/2017/01/WBCSD_2016_-InformalApproaches.pdf)\n[2016_-InformalApproaches.pdf](https://www.sustainable-recycling.org/wp-content/uploads/2017/01/WBCSD_2016_-InformalApproaches.pdf)\n\nWołosiewicz-Głąb M, Pięta P, Sas S, Grabowski Ł (2017) PW depolymerization as a source of energetic heating oils. In: E3S web of\n[conferences, vol 14. EDP Sciences, p 02044. https://doi.org/10.](https://doi.org/10.1051/e3sconf/20171402044)\n[1051/e3sconf/20171402044](https://doi.org/10.1051/e3sconf/20171402044)\n\nWong S, Yeung JKW, Lau YY, So J (2021) Technical sustainability\nof cloud-based blockchain integrated with machine learning for\n[supply chain management. Sustainability 13(15):8270. https://doi.](https://doi.org/10.3390/su13158270)\n[org/10.3390/su13158270](https://doi.org/10.3390/su13158270)\n\nZhang F, Zhao Y, Wang D, Yan M, Zhang J, Zhang P, Chen C (2021)\nCurrent technologies for PW treatment: a review. J Clean Prod\n[282:124523. https://doi.org/10.1016/j.jclepro.2020.124523](https://doi.org/10.1016/j.jclepro.2020.124523)\n\n\n-----\n\n"
| 24,818
|
{
"disclaimer": "Notice: Paper or abstract available at https://pmc.ncbi.nlm.nih.gov/articles/PMC8976220, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": null,
"status": "BRONZE",
"url": "https://link.springer.com/content/pdf/10.1007/s13762-022-04079-x.pdf"
}
| 2,022
|
[
"Review",
"JournalArticle"
] | true
| 2022-04-02T00:00:00
|
[
{
"paperId": "b939a479ff3ec6cf01a209b4c12fd78a4c971227",
"title": "Life cycle assessment of plastic waste end-of-life for India and Indonesia"
},
{
"paperId": "5b742c2a1a40ce55e7cea2bf8e44460d0647dcb9",
"title": "Extended Producer Responsibility in India: Evidence from Recykal, Hyderabad"
},
{
"paperId": "a4eafb7cd606b60c270332989e592fe89edba5ad",
"title": "Efficient Plastic Recycling and Remolding Circular Economy Using the Technology of Trust–Blockchain"
},
{
"paperId": "ea33eb8abb06d50e24bfa578f7cf4295b8b3a7a6",
"title": "Organic recycling of post-consumer /industrial bio-based plastics through industrial aerobic composting and anaerobic digestion - Techno-economic sustainability criteria and indicators"
},
{
"paperId": "2c6c7f6a4d789b2674eef64cf5dae7695318b200",
"title": "Plastic wastes biodegradation: Mechanisms, challenges and future prospects."
},
{
"paperId": "834f0c64491c23b9e4259ac85f4e092da8113a5a",
"title": "Technical Sustainability of Cloud-Based Blockchain Integrated with Machine Learning for Supply Chain Management"
},
{
"paperId": "9be64d7985ffcde9a03cff1ac5b4abf56a683f31",
"title": "Techno-economic assessment of mechanical recycling of challenging post-consumer plastic packaging waste"
},
{
"paperId": "2cfa3c5b472468b29c1711b3a3d36ada14117f4d",
"title": "A Review on Current Practices of Plastics Waste Management and Future Prospects"
},
{
"paperId": "1fd0fc2cb033ef0266c6f1fa830df8e5f4734420",
"title": "A review of plastic waste management in India – challenges and opportunities"
},
{
"paperId": "c8716667046cae2d87e57bbe3ef2a1c0b5b3f6a7",
"title": "The chemistry of chemical recycling of solid plastic waste via pyrolysis and gasification: State-of-the-art, challenges, and future directions"
},
{
"paperId": "899fd81f09a6e64778b2aa68b7bbb8e8afc4e653",
"title": "Techno‐economic assessment and comparison of different plastic recycling pathways: A German case study"
},
{
"paperId": "b4eabf58fd703f27ff8edfce43aaad6d05428797",
"title": "Challenges and possible solutions to mitigate the problems of single-use plastics used for packaging food items: a review"
},
{
"paperId": "e6453f435c110ce7b26c45144adb7a272233db08",
"title": "Current technologies for plastic waste treatment: A review"
},
{
"paperId": "c5454040923a0dad0ce8bcf724badc7b4052764b",
"title": "A review on VOCs from recycled plastics"
},
{
"paperId": "842cb059c36a08df9c84b7853fcce113892a5859",
"title": "Conversion of Waste Plastic to Oils for Tribological Applications"
},
{
"paperId": "34b0b5e8b1683c0295316a97316dfa05cd837886",
"title": "Recirculation potential of post-consumer /industrial bio-based plastics through mechanical recycling - Techno-economic sustainability criteria and indicators"
},
{
"paperId": "c6ba7d951eead3ea92f8ac2c7f951f7866d497d5",
"title": "From Trash to Cash: How Blockchain and Multi-Sensor-Driven Artificial Intelligence Can Transform Circular Economy of Plastic Waste?"
},
{
"paperId": "7e610944c89941af5a3d60208603c03d2ed64738",
"title": "Technologies for chemical recycling of household plastics - A technical review and TRL assessment."
},
{
"paperId": "6973e5535d642fa2f21025f6acd65e83cf88d4de",
"title": "Simulating Continuous Counter-Current Leaching Process for Indirect Mineral Carbonation Under Microwave Irradiation"
},
{
"paperId": "36a9327a874687f4c2572c8f5b5312015fd10306",
"title": "Gasification of plastic waste for synthesis gas production"
},
{
"paperId": "f846e0df46af25d80a1a33efefb70f8079cd5a2e",
"title": "Biotechnological upcycling of plastic waste and other non-conventional feedstocks in a circular economy."
},
{
"paperId": "36c72b61199471f71bf3c569b0b009dd3275f8e9",
"title": "Plastic waste management and current scenario in India: a review"
},
{
"paperId": "6954da28d2be123f065f547312e60e6c28d5ee6f",
"title": "Plastic waste from recycling centres: Characterisation and evaluation of plastic recyclability."
},
{
"paperId": "227ea997088be4bd9c56cc52f857a160433093e1",
"title": "A circular economy approach to plastic waste"
},
{
"paperId": "9f1c5581b50037777462baa9c2e49518eb5ce6d1",
"title": "Economic feasibility of energy recovery from waste plastic using pyrolysis technology: an Australian perspective"
},
{
"paperId": "d91ecc2762b3eac0d184191a7dce2820e785a635",
"title": "Life Cycle Assessment of the existing and proposed plastic waste management options in India: A case study"
},
{
"paperId": "e975ac55eb2359ae4a8dd3cc2a6e89c1f9b5cb16",
"title": "Estimation of the generation rate of different types of plastic wastes and possible revenue recovery from informal recycling."
},
{
"paperId": "7df826cd47dfe7a6ab1f5bda198871e1cf9b97d6",
"title": "Biological Recycling of Polyethylene Terephthalate: A Mini-Review"
},
{
"paperId": "e1b3b70a0de30b40b17796628ff9a4eb15578476",
"title": "Pyrolysis of plastic waste for production of heavy fuel substitute: A techno-economic assessment"
},
{
"paperId": "f8a9cccee3f8a964641f4253e6d9965dfee0a17d",
"title": "Mechanical and chemical recycling of solid plastic waste."
},
{
"paperId": "964124be9c8825167753e7107092852178764474",
"title": "Biodegradability of Plastics: Challenges and Misconceptions."
},
{
"paperId": "dea98d06a6053873f1ef9ad3329af7babb264f62",
"title": "A review on thermal and catalytic pyrolysis of plastic solid waste (PSW)."
},
{
"paperId": "854d29439bbc547a1892271e608cb8dd3e84229a",
"title": "An analysis of barriers for plastic recycling in the Indian plastic industry"
},
{
"paperId": "469394e99a9eda59d204ec8f3e66d1787f134a52",
"title": "Plasticwaste management and disposal techniques - Indian scenario"
},
{
"paperId": "9fd8010b5c3fbb0853f8bc1c5e15cd1b21b40324",
"title": "Recovery of consumer waste in India – A mass flow analysis for paper, plastic and glass and the contribution of households and the informal sector"
},
{
"paperId": "c3bd31615908cb121ce53df0d5e8138fcf6c165f",
"title": "New concepts in biomass gasification"
},
{
"paperId": "cfaa7b887e95f918db83edd8fcf32f57f445fac5",
"title": "The valorization of plastic solid waste (PSW) by primary to quaternary routes: From re-use to energy and chemicals"
},
{
"paperId": null,
"title": "Recognizing &automat-ing the barriers of plastic waste management -collection and segregation"
},
{
"paperId": null,
"title": "Chemical recycling of PW"
},
{
"paperId": null,
"title": "Stepping towards benign alternatives: sustainable conversion of PW into valuable products"
},
{
"paperId": null,
"title": "Recycling and energy recovery"
},
{
"paperId": null,
"title": "Plastic recycling decoded"
},
{
"paperId": null,
"title": "Recycling and monitoring PW Flow in the city"
},
{
"paperId": null,
"title": "Biopolymers: smart solution for solving the PW"
},
{
"paperId": null,
"title": "Megatrend: reducing plastic usage across FMCG value chain"
},
{
"paperId": null,
"title": "National circular economy roadmap for plastics, glass, paper and tyres. Australia's National Science Agency"
},
{
"paperId": null,
"title": "Strategies for sustainable plastic packaging in India. FICCI & Accenture"
},
{
"paperId": null,
"title": "Current scenario of PW management in India: way forward in turning vision to reality"
},
{
"paperId": null,
"title": "Blockchain-based waste management"
},
{
"paperId": null,
"title": "2019) PW management: a review. Retrieved from http:// ijasrm. com/ wp- conte nt/ uploa ds"
},
{
"paperId": null,
"title": "Retrieved from https:// www"
},
{
"paperId": null,
"title": "PW management: a review"
},
{
"paperId": null,
"title": "Annual report for the year 2018-2019 on implementation of PW management Rules"
},
{
"paperId": null,
"title": "Techno - economic feasibility study of waste - to - energy using pyrolysis technology for Jeddah municipal solid waste"
},
{
"paperId": null,
"title": "Report on The Indian Plastics Industry"
},
{
"paperId": null,
"title": "Life cycle inventories of plastic recycling—India"
},
{
"paperId": null,
"title": "Challenges and opportunities: plastic management in India"
},
{
"paperId": null,
"title": "The new plastics economy: rethinking the future of plastics and catalysing actions"
},
{
"paperId": null,
"title": "Informal approaches towards a circular economylearning from the plastics recycling sector in India"
},
{
"paperId": "06b73af2f643068472dae6df2752efc23a84ef88",
"title": "Plastic waste depolymerization as a source of energetic heating oils"
},
{
"paperId": null,
"title": "Evaluating PW disposal options in Delhi using multicriteria decision analysis"
},
{
"paperId": null,
"title": "Annual report under PW management rule for the year 2019"
},
{
"paperId": "ce59381b5c43271c502c4d03d16932153ebd5b9d",
"title": "Use of Waste Plastic and Waste Rubber Tyres in Flexible Highway Pavements"
},
{
"paperId": null,
"title": "Managing PW in India : challenges and Agenda"
},
{
"paperId": null,
"title": "pdf Ministry of Housing & Urban Affairs (MOHUA) (2019) PW management issues, solutions & case studies"
},
{
"paperId": null,
"title": "Uttarakhand Pollution Control Board (2019) Annual report 2018-2019"
}
] | 24,818
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Medicine",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Engineering",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/000548b90449dad8f1aaa3207fa6b77503c1d2a3
|
[
"Computer Science",
"Medicine"
] | 0.893853
|
A Distributed and Secure Self-Sovereign-Based Framework for Systems of Systems
|
000548b90449dad8f1aaa3207fa6b77503c1d2a3
|
Italian National Conference on Sensors
|
[
{
"authorId": "1401654761",
"name": "D. E. D. I. Abou-Tair"
},
{
"authorId": "2227684617",
"name": "Raad Haddad"
},
{
"authorId": "152732266",
"name": "Alá F. Khalifeh"
},
{
"authorId": "2310442",
"name": "S. Alouneh"
},
{
"authorId": "2237892027",
"name": "Roman Obermaisser"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"SENSORS",
"IEEE Sens",
"Ital National Conf Sens",
"IEEE Sensors",
"Sensors"
],
"alternate_urls": [
"http://nbn-resolving.de/urn/resolver.pl?urn=urn:nbn:ch:bel-142001",
"http://www.mdpi.com/journal/sensors",
"https://www.mdpi.com/journal/sensors"
],
"id": "3dbf084c-ef47-4b74-9919-047b40704538",
"issn": "1424-8220",
"name": "Italian National Conference on Sensors",
"type": "conference",
"url": "http://www.e-helvetica.nb.admin.ch/directAccess?callnumber=bel-142001"
}
|
Security and privacy are among the main challenges in the systems of systems. The distributed ledger technology and self-sovereign identity pave the way to empower systems and users’ security and privacy. By utilizing both technologies, this paper proposes a distributed and self-sovereign-based framework for systems of systems to increase the security of such a system and maintain users’ privacy. We conducted an extensive security analysis of the proposed framework using a threat model based on the STRIDE framework, highlighting the mitigation provided by the proposed framework compared to the traditional SoS security. The analysis shows the feasibility of the proposed framework, affirming its capability to establish a secure and privacy-preserving identity management system for systems of systems.
|
# sensors
_Article_
## A Distributed and Secure Self-Sovereign-Based Framework for Systems of Systems
**Dhiah el Diehn I. Abou-Tair 1,*, Raad Haddad 2, Ala’ Khalifeh 1, Sahel Alouneh 1,3 and Roman Obermaisser 4**
1 School of Electrical Engineering and Information Technology, German Jordanian University,
Amman 11180, Jordan; [email protected] (A.K.); [email protected] (S.A.)
2 Cloudyrion GmbH, 40221 Düsseldorf, Germany; [email protected]
3 College of Engineering, Al Ain University, Abu Dhabi 112612, United Arab Emirates
4 Faculty of Science and Technology, University of Siegen, 57076 Siegen, Germany;
[email protected]
* Correspondence: [email protected]; Tel.: +962-6-429-4132
**Abstract: Security and privacy are among the main challenges in the systems of systems. The**
distributed ledger technology and self-sovereign identity pave the way to empower systems and
users’ security and privacy. By utilizing both technologies, this paper proposes a distributed and
self-sovereign-based framework for systems of systems to increase the security of such a system and
maintain users’ privacy. We conducted an extensive security analysis of the proposed framework
using a threat model based on the STRIDE framework, highlighting the mitigation provided by the
proposed framework compared to the traditional SoS security. The analysis shows the feasibility of
the proposed framework, affirming its capability to establish a secure and privacy-preserving identity
management system for systems of systems.
**Keywords: security; privacy; blockchains; distributed ledger; permission; system of systems**
**Citation:** Abou-Tair, D.e.D.I.; Haddad, R.; Khalifeh, A.; Alouneh, S.; Obermaisser, R. A Distributed and Secure Self-Sovereign-Based Framework for Systems of Systems. _Sensors_ 2023, 23, 7617. https://doi.org/10.3390/s23177617

Academic Editors: Wenjuan Li, Weizhi Meng, Sokratis Katsikas and Peng Jiang

Received: 17 July 2023; Revised: 30 August 2023; Accepted: 30 August 2023; Published: 2 September 2023

**Copyright:** © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
**1. Introduction**
Systems of systems (SoS) aim to achieve functions and services by integrating multiple
constituent systems (CSs). Each CS possesses various resources and services that SoS’s
users can independently access. These CSs are interconnected, and the SoS coordinates
their resources and services to comprehensively understand all available resources and
services [1–3]. The primary advantage of adopting an SoS is its ability to utilize the
resources of individual systems while considering numerous factors, such as the cost,
availability, reliability, safety, privacy, and security of resources. This has increased the
popularity of SoS applications in various domains, such as healthcare, aerospace and
automotive manufacturing, Industry 4.0, defense, and security systems [4–6]. For instance,
one healthcare system might provide health-related measurements, another might perform data analysis and processing, while a third makes decisions for specific healthcare cases.
This enables the relevant healthcare personnel to detect and relay emergency conditions.
However, ensuring the security of SoS is complex due to the dynamic and diverse
nature of the SoS architecture, as each CS has its security measures and configurations.
These measures, designed to ensure the security of individual systems, may not apply
to the dynamic environment of the SoS. Consequently, there is an essential need for a
universal security framework for SoS that ensures the security of the individual CSs and
the SoS as a whole.
One of the significant concerns of SoS’s security is managing the customers’/users’ identities. Therefore, a robust identity management system (IDMS) is crucial for the overall
security of the SoS. In such a system, users within the CSs have digital identities that
enable them to interact and access the resources of the CSs. Within the SoS ecosystem,
an IDMS manages users’ identities across different CSs networks via rules associated with
users’ digital identities and credentials. The distributed topology of the SoS requires a
decentralized IDMS, unlike the majority of existing IDMSs, which are centralized. Centralized IDMSs often lack scalability, suffer from single points of failure, and are vulnerable to identity-theft attacks, making them unsuitable for a distributed and scalable SoS environment.
In the proposed framework, a decentralized IDMS is realized based on self-sovereign
identity (SSI) using digital ledger technology (DLT) [7]. The benefit of using SSI is that
it preserves users’ privacy by granting them full control over sharing their data. It also
implies that CSs can verify users’ data without storing them, maintaining a stateless system.
Users can then access different resources in the SoS CSs without compromising their private
information. For instance, a user could verify their age to access a resource within a CS
without revealing their actual date of birth.
This paper proposes a secure, distributed, self-sovereign-based framework for SoS.
The system architecture serves the needs of the SoS by providing a scalable, secure, privacy-preserving, and decentralized identity management system that maintains users’ privacy
and security.
The proposed framework addresses an essential SoS security feature, namely the right
to access a service from a security perspective, that is, equipping the SoS with a dynamic
access control mechanism. Furthermore, the proposed framework preserves users’ privacy
by utilizing self-sovereign identity technology, wherein the user is not required to disclose
their private information to access a service. Instead, the user needs to demonstrate that
they are entitled to access the service via a verifiable proof.
[The proposed framework is implemented using Hyperledger Indy (https://www.](https://www.hyperledger.org/use/hyperledger-indy)
[hyperledger.org/use/hyperledger-indy (accessed on 1 May 2023)). To demonstrate the](https://www.hyperledger.org/use/hyperledger-indy)
feasibility of the proposed framework, a security analysis was conducted to identify and
analyze potential threats and risks. Additionally, a threat model based on the STRIDE
framework was carried out, highlighting the mitigation provided by the proposed SoS
security framework compared to an SoS utilizing traditional centralized security measures.
The rest of this paper is organized as follows. Section 2 outlines the security challenges of systems of systems. Section 3 summarizes the most relevant papers in the literature. The proposed security framework is presented in Section 4. The system implementation and evaluation are discussed in Section 5. Finally, this paper is concluded in Section 6.
**2. Security Challenges of Systems of Systems**
Systems of systems are large-scale collaborative systems where autonomous constituent systems work together to provide emergent services that exceed the local services
of the constituent systems. The CSs can be geographically distributed and belong to different organizations. Significant challenges are the lack of central control and information
about the internals of CSs, which prevent the centralized establishment of services. SoS is
increasingly important in different domains, such as transportation systems, smart grids,
smart production, healthcare, and defense systems. Many of these systems also exhibit nonfunctional requirements such as stringent temporal constraints and reliability requirements.
Since SoS plays an essential role in critical infrastructure and offers safety-relevant
services, the security in SoS must be considered. Firstly, attackers may affect the availability
of SoS using denial-of-service attacks. In addition, attackers can interfere in negotiating
service contracts and providing services between constituent systems. Therefore, the authenticity of service providers and service users must be ensured during the cooperation of
constituent systems. Finally, sensitive information, such as medical records in healthcare
applications, can be communicated. Therefore, secure services are required to ensure
security and privacy.
In particular, present-day SoS face the following security challenges:
- Confidentiality: Traditional systems encountered significant challenges in maintaining confidentiality. System owners and developers had to encrypt all user-related
information and store it securely in inaccessible locations to prevent unauthorized
access. Their most significant challenge was if encryption keys were compromised
or weak encryption algorithms were used, which put all the information at risk of
potential leakage or unauthorized modification. Our proposed approach enhances the
confidentiality of user data and access keys by storing them in digital wallets located
on the user’s side in a secure, encrypted manner.
- Integrity: Centralized systems were plagued with privilege escalation and data manipulation issues, facilitating numerous malicious activities. Our proposed approach
gives users exclusive control over their identities, preventing them from being shared
or stored elsewhere. Moreover, data alteration will not affect the process, as it will
continually verify the submitted verifiable proof on the DLT. Any detected modification will cause the authentication and authorization processes to fail, thus inhibiting
further progression.
- Availability: When centralized systems experience downtime, users cannot authenticate themselves until the issue is resolved. However, the proposed approach, fortified
with blockchain technology, makes it considerably more difficult, or even impossible,
for attackers to disrupt the service or make it unusable for users, as it would require
significant computing power and resources.
The introduced services support security processes for addressing security risks in
SoS, such as OASoSIS [8]. The proposed security framework, with its encryption, identity
management, and authentication services, represents a mitigation approach for reducing
risks to SoS stakeholders.
**3. Related Work**
Systems of systems (SoS) solutions have attracted considerable research interest [9–21].
A study by Olivero et al. [9] addressed the problem of assessing security properties in SoS.
It proposed a Testing Security in the System of Systems (TeSSoS) approach, which included
modeling and testing security properties in SoS. TeSSoS adopted the perspective of attackers
to identify security flaws and propose the development of new features. The authors aimed
to provide an approach for assessing SoS security and continuing its development, paying
particular attention to security testing, modeling security features, evaluating human
factors relevance, and implementing control policies.
Guariniello and DeLaurentis [10] analyzed the implications of cyber-attacks on SoS.
They utilized a modified functional dependency analysis tool to model the tertiary effects
of such attacks. Their study primarily focused on risk assessment and did not specifically
address the security requirements of the SoS. The authors evaluated the robustness of the
SoS in terms of its ability to sustain an acceptable level of operation after a communication
disruption has occurred.
In their work [11,12], Trivellato et al. presented a service-oriented security framework
that aims to safeguard the information shared between entities within an SoS while also
ensuring the preservation of their autonomy and interoperability. To showcase the practical
viability of the framework, the authors implemented it within the context of the maritime
safety and security domain. By doing so, they demonstrated the applicability of the SoS in
this particular domain.
El Hachem et al. [13] proposed a Model-Driven Engineering method called Systems-of-Systems Security (SoSSec). This method was designed to model and analyze secure SoS
solutions, particularly in predicting high-impact cascading attacks during the architecture
phase. In their study, the authors demonstrated the effectiveness of the proposed method
by applying it to a real-life smart building SoS. The case study showed that the SoSSec
method successfully identified and analyzed the cascading attacks consisting of multiple
individual attacks.
In [14], Nguyen et al. performed a systematic mapping study (SMS) that aims to
evaluate the current state of Model-Based Security Engineering (MBSE) for Cyber-Physical
Systems (CPSs). The work showed a significant increase in primary studies related to MBSE
for CPSs, mainly in the security analysis. However, their work revealed a need for more
engineering security solutions for CPSs. Furthermore, the SMS highlighted several critical
issues, such as the limited availability of tool support and the challenge of integrating
domain-specific languages (DSLs) to secure CPSs effectively.
In [16], Bicaku et al. proposed an automated and continuous standard compliance
verification framework based on a set of technically measurable indicators from security
standards. This framework enabled the verification of system compliance with various
security standards. Several advantages of the framework have been emphasized, such as
continuous monitoring, automation capabilities, and extensibility. Furthermore, the authors
analyzed several implementation-related challenges, such as the necessity for accurate and
up-to-date information regarding the standards. Consequently, this framework underlined
the significance of ensuring the compliance of SoS with security standards, presenting it as
a more effective and efficient alternative to traditional manual approaches.
Agrawal et al. [17] put forward a security schema for SoS that addresses the dynamic
and uncertain nature of the environment. Unlike the traditional approach of static security,
their schema incorporated mechanisms that continuously monitored the overall environment and used the collected observations to adjust the security posture dynamically. This
recognition of the ever-changing threat landscape distinguished their schema from the
static security approaches. The authors hypothesized that adopting such security schemata
would enable a systematic analysis of the security of complex systems and provide a
quantified assessment of the resilience of the security within an SoS.
Maesa et al. [20] presented a Blockchain-based access control protocol based on the public publication of resource access policies and rights on the Blockchain. This approach enabled users to access, in real time, the policies paired with each resource, as well as the personnel authorized to access those resources. By leveraging Blockchain transparency and immutability, the protocol delivered reliable and accessible access control
management mechanisms.
Xu et al. [21] introduced the concept of Distributed Ledger-Based Access Control
(DL-BAC), specifically designed for web applications. The proposed DL-BAC offered a decentralized access control mechanism while ensuring users’ privacy. Furthermore, by utilizing
distributed ledger technology, DL-BAC provided a secure and privacy-preserving approach
to access control in web applications, thus offering an alternative solution that eliminated
the need for a central trusted entity.
In our previous work [15], we proposed a systems-of-systems security framework
that utilizes multi-protocol label switching (MPLS). The main objective of the proposed
framework was to offer several advantages, including connectivity, reliability, and quality
of service. In addition, it included features such as traffic separation and isolation while
minimizing management and processing overhead. Furthermore, an advanced security
configuration for complex scenarios has been proposed by integrating IPsec and the MPLS,
enhancing overall security. However, it is important to mention that our work did not
consider the SoS identity management or the associated access control challenges. Additionally, we did not consider other threats, such as denial-of-service attacks, which can
impact network services like the domain name system (DNS).
Furthermore, in our other previous research discussed in [18], we proposed a distributed access control system that utilizes Blockchain technology to ensure secure and
privacy-preserving management of access to distributed resources. The system was specifically designed to be decentralized and distributed, enhancing its security and resilience
against potential attacks.
This work builds on our previous works [18,22] by proposing a new framework for a
secure, distributed, self-sovereign-based SoS. The proposed system architecture serves the
specific needs of SoS by providing a scalable, secure, privacy-preserving, and decentralized
identity management system. The main objective is to protect users’ privacy and security
while ensuring the necessary functionality for the SoS.
**4. The Systems-of-Systems Security Framework**
_4.1. The Proposed Framework_
The proposed framework leverages distributed ledger technology to address security
and privacy challenges in the context of SoS. The dynamic and distributed nature of SoS
necessitates a decentralized security mechanism capable of fulfilling the security and
privacy requirements of the SoS environment. For instance, users may access multiple
resources distributed across different constituent systems; thus, the serving CSs must verify
their identities. Furthermore, the resources may require specific access credentials from the
users, who should be able to present access permission without compromising their private
information. Scalability is another vital factor in SoS due to its scalable nature, where users
can access many available resources and services distributed among several CSs. These
requirements are considered in the proposed decentralized self-sovereign-identity-based
security framework. Figure 1 depicts the proposed SoS security framework architecture.
The framework consists of several connected CSs, which are also connected to a distributed
ledger network. Additionally, the framework consists of credential issuers (CIs) and service
requesters (users). The role of the credential issuer is to issue digital credentials for users
registered inside the distributed ledger network and stored in the individual user’s wallet
as its sole owner. The user can use the credentials to create verifiable proof to gain access
to SoS resources. For instance, a verifiable proof can be derived from the user’s birth
certificate, which shows that the user is above a certain age limit without revealing the
actual date of birth. Moreover, the credentials could incorporate SoS resources’ access
control information to create a verifiable proof to access the resources. In what follows,
the framework’s main components are described.
[Figure 1: architecture sketch showing the credential issuers and the distributed ledger network, the constituent systems (each with a CSM and a CS digital wallet), the constituent system initiator (Broker), and the user’s digital wallet, interconnected via the Internet; arrows mark the Register/Update Credential, Issue Credential, and Request/Present Proof interactions.]

**Figure 1. The proposed SoS security framework.**
4.1.1. Distributed Ledger Technology and Blockchain
Distributed ledger technology is an emerging technology for storing data in replicated databases (ledgers or data stores) across multiple sites managed by a distributed
server network (nodes). The main advantage of DLT is its decentralized nature for storing,
sharing, and synchronizing data across multiple nodes, utilizing a peer-to-peer communication paradigm. Blockchain is one type of DLT that transmits and stores data packages
named Blocks. These Blocks are joined together to form an append-only digital chain.
For data recording and synchronization across the Blockchain network, cryptographic and
algorithmic methods are used [7].
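
To make the append-only property concrete, here is a minimal, self-contained Python sketch (ours, not the framework’s implementation) of a hash-linked chain: altering any earlier block breaks every later link, which is what makes recorded entries tamper-evident. All names are illustrative.

```python
# Minimal sketch: an append-only chain of blocks linked by SHA-256 hashes.
import hashlib
import json


def block_hash(block: dict) -> str:
    # Hash a canonical JSON encoding of the block contents.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()


def append_block(chain: list, payload: dict) -> None:
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "prev_hash": prev, "payload": payload})


def verify_chain(chain: list) -> bool:
    # Each block must reference the hash of its predecessor.
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )


ledger: list = []
append_block(ledger, {"type": "schema", "name": "birth_certificate"})
append_block(ledger, {"type": "cred_def", "issuer": "did:example:issuer1"})
print(verify_chain(ledger))          # True
ledger[0]["payload"]["name"] = "x"   # tamper with an earlier block...
print(verify_chain(ledger))          # False: the chain no longer validates
```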
4.1.2. Self-Sovereign Identity
SSI is a concept that enables users to have complete control over their identities and
personal data and enables services to control who can access them without the intervention
of a mediator (third party) [23]. This is achieved by storing the users’ identities in digital
wallets owned by the users and the services’ access requirements in digital wallets owned
by CSs. When users/services try to access a resource or service, they generate a verifiable
proof utilizing the credentials stored in their digital wallets in response to a proof request
from the verifier. The verifier in the context of the proposed SoS framework is the Broker or
the CSM, which will process the response data and check its authenticity, thus allowing or
denying access to the requested resources or services.
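
As a concrete illustration of the proof-request/response exchange, the sketch below builds a verifier-side proof request in Python. Its structure is modeled on Hyperledger Indy’s anoncreds proof-request format (the stack used in Section 5.1); the nonce, referent, and attribute names are placeholders, not the framework’s actual requests.

```python
# Hedged illustration of a verifier's proof request, modeled on Hyperledger
# Indy's anoncreds proof-request format; all field values are placeholders.
import json

proof_request = {
    "nonce": "123432421212",   # freshness challenge chosen by the verifier
    "name": "cs_resource_access",
    "version": "0.1",
    # Attributes the user must disclose (here: none of the raw identity data).
    "requested_attributes": {},
    # Predicates proven in zero knowledge: age >= 18 without revealing
    # the actual date of birth stored in the credential.
    "requested_predicates": {
        "age_over_18": {"name": "age", "p_type": ">=", "p_value": 18}
    },
}
print(json.dumps(proof_request, indent=2))
```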
4.1.3. Credentials’ Issuers
Credential issuers are trusted entities that issue verifiable credentials in response to a
user’s credential request. Verifiable credentials include birth certificates, bank accounts, personal identities (e.g., government IDs, passports, and social security credentials), insurance
policy certificates, access control information, etc. These verifiable credentials are stored
in users’ digital wallets, from which verifiable proofs required by resources are derived.
For the proof verification process, CIs will register the credentials on the DLT.
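
For intuition, the following toy sketch signs a credential’s claims with an Ed25519 key so that any verifier holding the issuer’s public key (e.g., resolved via the DLT) can check authenticity without contacting the issuer again. It assumes the third-party `cryptography` package; production SSI stacks use ZKP-friendly signature schemes (such as CL signatures) rather than plain Ed25519, so this is an analogy, not the framework’s exact mechanism.

```python
# Toy credential issuance: issuer signs the claims; verifier checks offline.
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

issuer_key = Ed25519PrivateKey.generate()

claims = {"holder": "did:example:alice", "credential": "birth_certificate",
          "birth_year": 1990}
message = json.dumps(claims, sort_keys=True).encode()
signature = issuer_key.sign(message)   # credential = claims + signature

# A verifier resolves the issuer's public key (e.g., from the ledger)
# and checks authenticity without a callback to the issuer.
public_key = issuer_key.public_key()
try:
    public_key.verify(signature, message)
    print("credential authentic")
except InvalidSignature:
    print("credential rejected")
```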
4.1.4. The Digital Wallet
Both users and CSs have digital wallets to store verifiable credentials. In the context of
SoS, some resources may require certain credentials. If the user accesses such a resource,
they must provide proof of the required credentials, which can be derived from the verifiable credentials stored in their wallet. As for CSs, the digital wallet is needed to store their
verifiable credentials, which enable them to identify themselves to other CSs to use their
services. The users, bearing responsibility for their digital wallets containing their verifiable
credentials, are advised to link one of their biometric attributes, such as a fingerprint,
to access their digital wallet. This precautionary measure will mitigate the potential misuse
of user credentials in the event of unauthorized access to the digital wallet device.
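
A minimal wallet sketch follows, assuming the third-party `cryptography` package: credentials are held only as ciphertext, and the symmetric wallet key stands in for the secret unlocked by the biometric check described above. Class and method names are illustrative.

```python
# Toy encrypted wallet: credentials stored only as ciphertext.
import json
from cryptography.fernet import Fernet


class DigitalWallet:
    def __init__(self, wallet_key: bytes):
        self._fernet = Fernet(wallet_key)
        self._store: dict = {}   # credential id -> ciphertext

    def put(self, cred_id: str, credential: dict) -> None:
        self._store[cred_id] = self._fernet.encrypt(
            json.dumps(credential).encode())

    def get(self, cred_id: str) -> dict:
        # Decryption fails unless the caller supplied the correct wallet key.
        return json.loads(self._fernet.decrypt(self._store[cred_id]))


key = Fernet.generate_key()   # in practice: derived after biometric unlock
wallet = DigitalWallet(key)
wallet.put("birth_cert", {"birth_year": 1990, "issuer": "did:example:gov"})
print(wallet.get("birth_cert"))
```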
4.1.5. The Broker
The Broker, referred to as the CS Initiator, is responsible for accepting users’ service
requests and contacting CSs to provide service offers that match the requests. The CS
Initiator then selects the optimal service offers based on the user’s predefined criteria, such
as cost and execution time. Additionally, the CS Initiator plays a vital role in ensuring users’
overall security and privacy by validating the general credentials requested by the CSs.
Once the Broker receives the service offers from the CSs, it will ask the user to provide the
necessary proof that allows them to access the resources. The Broker will then forward the
user proofs to the CSs, verifying them via the DLT. Once the proofs are verified, the CSs
will allow the user to access the requested services. In the proposed framework, each CS
has a digital wallet, which includes its identity as verifiable credentials issued by the SoS
service provider as a CI and used within the SoS network, thus creating a trustworthy
communication paradigm between the CSs.
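
A minimal sketch of the Broker’s offer selection is given below: offers satisfying the user’s constraints are queued and the head of the queue is chosen. The criteria (cost, then execution time) and all names are illustrative assumptions; the paper’s actual optimization is detailed in ref. [22].

```python
# Sketch of Broker-side offer selection under user constraints.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Offer:
    cs_id: str
    cost: float       # offered cost of the service
    exec_time: float  # promised execution time (seconds)


def select_best(offers: List[Offer], max_cost: float,
                max_time: float) -> Optional[Offer]:
    # Keep only offers satisfying the user's constraints, then take the
    # head of a queue ordered ascending by (cost, execution time).
    feasible = [o for o in offers
                if o.cost <= max_cost and o.exec_time <= max_time]
    return min(feasible, key=lambda o: (o.cost, o.exec_time), default=None)


offers = [Offer("cs-1", 5.0, 2.0), Offer("cs-2", 3.0, 9.0), Offer("cs-3", 4.0, 1.5)]
print(select_best(offers, max_cost=4.5, max_time=5.0))  # -> offer from cs-3
```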
4.1.6. Constituent System Manager
The CSM handles all communication between the CS and the Broker. The CSM ensures
that the requested resources or services are available for usage. Furthermore, each CS has
specific security requirements to access resources or services. The CSM also plays a role in
the security framework, as it is responsible for verifying the specific proofs provided by
the user on the DLT. As each CSM can verify its security requirements using the DLT in a decentralized manner, this improves the overall security of the framework.
_4.2. The Framework Work Flow_
Figure 2 shows the workflow of the proposed framework, which can be summarized as follows:
[Figure 2 flowchart: the user requests specific services; the Broker finds matching offers, stores them in an ordered queue, and takes offers from the queue head; if the user owns the Broker-required credential, the Broker proof is verified and each CSM verifies its CS-specific security requirements; on successful verification the user accesses the CS services, otherwise the flow exits.]
**Figure 2. The workflow of the proposed framework.**
- Credential issuers issue verifiable credentials that are stored in users’ wallets and
registered on the DLT.
- The user will connect with the CS Initiator (Broker) to request the required service
from the CSs.
- The Broker will contact the CSs and request offers of services pertaining to the user request, including the execution time and cost of each service. The CSs' responses will be queued according to the optimization criterion set by the application under consideration; for example, the lowest-cost offers will be queued first, in ascending order of execution time. The Broker will then select the best offer that matches the request's requirements and constraints by solving a constraint optimization problem, whose main objective function may vary depending on the application requirements (a selection sketch follows this list). Further details about the selection and optimization of offers can be found in our previous work [22].
- The Broker will provide the user with the queued offers and their associated privacy and security requirements. The user can then evaluate each offer's security and privacy requirements against their own needs.
- The Broker will request the user to provide a verifiable proof that indicates they
possess the necessary access credentials for the offered services. The Broker will verify
the user proof on behalf of the requested CS service provider. Additionally, the Broker
may ask the user to provide a verifiable proof, which will be sent directly to the CS
service provider for verification. These two types of proofs are distinguished in the implementation (Section 5.1) as general and specific proofs, respectively.
- The verifiable proofs will be verified by the Broker or CSs using the DLT.
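As a concrete illustration of the offer queueing and selection step referenced in the list above, the sketch below filters offers against user constraints and takes the head of the cost-ordered queue. The `Offer` fields and the feasibility rule are simplifying assumptions for illustration; the full constraint optimization is described in [22].

```python
# Illustrative sketch of the Broker's offer queueing and selection step.
from dataclasses import dataclass

@dataclass
class Offer:
    cs_id: str
    cost: float
    execution_time: float

def select_best_offer(offers, max_cost, deadline):
    # Keep only offers satisfying the user's constraints ...
    feasible = [o for o in offers
                if o.cost <= max_cost and o.execution_time <= deadline]
    # ... queue them in ascending order of cost, breaking ties on time ...
    queue = sorted(feasible, key=lambda o: (o.cost, o.execution_time))
    # ... and take the head of the queue as the best offer, if any.
    return queue[0] if queue else None

offers = [Offer("cs-1", 5.0, 2.0), Offer("cs-2", 3.0, 4.0)]
print(select_best_offer(offers, max_cost=4.0, deadline=5.0))  # -> cs-2
```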
**5. System Implementation and Evaluation**
_5.1. Implementation_
The testbed was implemented on the Ubuntu operating system, and the test machine was equipped with a dual-core processor and 4 gigabytes of RAM. Through Docker, we established a dedicated network solely for this experiment, ensuring effective network isolation and resource segregation. The proposed framework was implemented using Hyperledger Indy, an open-source project focusing on distributed ledger technology (DLT). Hyperledger
Indy’s DLT served as the foundation for adding the required nodes and entities to the
framework. This allowed for the creation and assignment of credentials to users, which
could then be stored in their wallets. Additionally, the distributed ledger was utilized
for authenticating identities via the Broker and CS managers, as described in Section 4.1.
The implementation leveraged the Indy SDK for Python, which provided the necessary
functions for interacting with the distributed ledger.
The implementation comprises several key components. Firstly, there is the Broker,
which assumes the responsibility of initiating communication between the CSs and the
users. This function ensures mutual trust and conducts the necessary verification process.
Additionally, as specified by the user, the Broker retrieves all available services or resources
from different CSs. Furthermore, the Broker selects the best offers based on multiple factors,
ensuring an optimal offer for each requested asset.
In this implementation, the various services offered by different CSs were equipped
with access control requirements. This means that only users who can provide the necessary
proof of having the access credentials can access the requested services. The Broker in this
implementation has additional functionality to gather the security requirements (access
control requirements) for each desired service, along with the optimal offer. The Broker
also maintains a risk assessment of sharing each user’s data to facilitate its operations
and help users choose a service from a CS that requires less user data and provides the
best security options. This prioritizes the user’s privacy and security, as outlined in the
proposed framework workflow depicted in Figure 2.
However, it is important to note that CSs have the ability to offer services to users
and include all the necessary service requirements. While a particular CS may not provide
all the services, it may still offer the best option for a specific service if available. All the
requirements are stored within each CS’s Metadata and provided to the Broker whenever
a user requests a specific service. Each CS has a dedicated CS manager who handles all communication between the CS and the Broker. The CS manager ensures that the requested service is available for usage; if it is not, the offer is not presented.
Additionally, the users’ identities and communication data are kept secure via encrypted and secured communication channels. To achieve this, users have a credential
containing their personal information. This credential can be used to generate verifiable
proof when requested. Following the principles of SSI, it is the user’s responsibility to
provide this verifiable proof, also known as a claim, to the verifier, which, in this case, is
the Broker. The Broker then authenticates the necessary information with the CS managers.
_5.2. Use Case and Evaluation of Health Care Services for SoS_
Figure 3 illustrates a practical use case in healthcare. It depicts a scenario where an
elderly individual with a heart condition needs to be monitored for potential heart attacks.
A pattern recognition service is used to identify heart attack symptoms; if an emergency
occurs, the relevant hospital should be notified. This use case involves finding a suitable
pattern recognition service for monitoring heartbeats, utilizing an expert system to analyze
the patient’s medical history, and discovering an emergency service provided by a nearby
hospital. Establishing a reliable SoS application will provide the most appropriate services
for the desired application, specifically for the medical monitoring of the elderly person.
In this use case, each CS should have a CSM, which is the primary processing component
responsible for service discovery, inter-networking with other CSs using routers, admission
control, and scheduling.
[Figure 3 diagram: a network domain with cloud resource providers connecting constituent systems for an elderly home, medical pattern recognition services, a medical datacenter, a hospital, and medical expert services.]
**Figure 3. SoS for healthcare services use case.**
This use case presents several significant challenges for security, including:
- Confidentiality of information is necessary for protecting the privacy of the elderly.
This includes safeguarding behavioral patterns like the locations and activities of
elderly individuals. Furthermore, the SoS must ensure that medical information is
not disclosed.
- Availability is crucial to ensure the proper delivery of safety-related services even in
the face of denial-of-service attacks. Any disruption in recognizing health issues and
emergency response would pose a medical risk to elderly individuals. For instance,
if a denial-of-service attack causes delays in the pattern recognition service’s response
time, the entire healthcare system may become unresponsive and fail to identify and
address medical emergencies promptly.
- Authenticity is necessary to prevent financial losses resulting from illegitimate interactions that impose costs on elderly individuals or insurance companies. For instance,
attackers may initiate unnecessary cloud services, leading to unnecessary expenses.
Similarly, authenticity is crucial in blocking illegitimate service providers who offer
unreliable services in the health monitoring context. An example would be a low-quality pattern recognition service that compromises the overall accuracy of the health monitoring system.
Addressing these security challenges is critical to ensure the successful implementation
and functioning of the healthcare SoS.
This use case has been evaluated using the proposed framework implementation to
prove its feasibility and scalability. To this end, the medical use case scenario has been
applied to the aforementioned developed testbed, where a patient requires thirty different
services from healthcare service providers. In the conducted simulation, the patient’s
request was distributed among different CSs according to the services’ availability and
compliance with the user requirements in a secure and privacy-preserving manner. To evaluate this, the patient services’ request was assessed by considering one CS providing all
requested services, then two, three, and up to thirty CSs providing the requested services
in a distributed manner.
When the patient request was initiated, the Broker presented the optimal offers together with their security and privacy requirements. On one hand, the Broker verified the patient's general
proof needed to access the SoS services. On the other hand, each CSM verified the specific
proof provided by the patient to access the specific CS services.
Figure 4 depicts the response time versus the number of CSs used to provide the thirty
requested services by the patient. Each experiment was repeated five times to show the
results’ variability, which were plotted using an error bar representation. It is observed
that the response time increases linearly with the number of used CSs, which verifies the
system scalability with the increasing number of CSs. The response time includes the
delay incurred in verifying the general and specific proofs via the Broker and the CSMs,
respectively, which is time-consuming since it involves accessing the distributed ledger
technology network.
**Figure 4. Performance evaluation.**
Figure 5 illustrates the response time when the system was overloaded with concurrent users' service requests. The number of users was increased one at a time, from 2 to 24 concurrent users, who requested the same services in parallel. This demonstrated
the proposed framework implementation’s ability to handle multiple users’ service requests
in parallel while verifying the services’ security requirements and the users’ authorization
to access them via the DLT. It was observed that the response time increased linearly as the number of users increased, which confirmed the system's scalability with an increasing number of concurrent users. However, as shown in the figure, at 24 concurrent active users the proposed system reached its saturation point, with an exponential increase in the response time.
**Figure 5. Incremental load testing.**
_5.3. Security Analysis_
This section presents a security analysis of the security mechanisms applied within the proposed SoS security framework. The analysis demonstrates how the proposed framework equips the SoS environment with robust security features and controls designed to ensure that the authentication and authorization processes of the constituent systems support both user and system security and privacy. The proposed framework carefully checks and validates credentials, ensuring that these processes occur securely and privately.
The authorization and authentication processes were historically centralized within
the same infrastructure or underlying systems. Over time, organizations and system administrators transitioned to using a dedicated service to exclusively manage the authentication
and authorization processes. While this solution represented an improvement, it retained a
centralized architecture, storing all user-related data and permissions in one place, making
these systems highly attractive targets for attackers. By launching targeted attacks, attackers
could carry out various malicious activities, potentially leading to the leakage of sensitive
users’ and systems’ data or even discovering vulnerabilities to bypass these mechanisms,
impersonate users, escalate privileges, or act maliciously on behalf of other users.
This paper proposes a new methodology for user and system authentication and
authorization. It involves the main components that work together to improve the overall
security of the SoS to address its dynamic nature. Furthermore, to provide security and
protection for the users’ data, the credentials and the communication channels have been
identified as essential sources of threats that must be carefully considered.
- Credentials are issued by trusted entities and assigned to the user, and they are securely
and exclusively stored on the user’s side in a digital wallet. Digital wallets should use
robust encryption algorithms to prevent the use of credentials by unauthorized users in
the event of wallet theft or attack. Having the credentials stored on the users’ side will
significantly hinder an attacker who targets the SoS infrastructure, since the systems do not hold any user data. Additionally, during the authentication process, users do not reveal sensitive information such as usernames, passwords, or secret keys; instead, they supply encrypted, verifiable proof. This verifiable proof is generated once and invalidated upon the completion of the verification procedure.
- Communication Channels among the components of the proposed SoS security framework play a significant role in maintaining security and privacy. These channels must be secured and encrypted at all times, which can be accomplished using various methods. The proposed SoS security framework makes use of SSL/TLS to ensure data encryption during transmission (a minimal configuration sketch follows this list). Given that most communications are managed via APIs, the SoS security framework applies API security controls across all endpoints and infrastructure per the OWASP API Top 10 security guidelines (https://owasp.org/API-Security/editions/2023/en/0x11-t10/, accessed on 1 May 2023).
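As a minimal illustration of the channel-encryption requirement above, the sketch below builds a server-side TLS context with Python's standard `ssl` module. The certificate paths and the pinned minimum TLS version are illustrative assumptions, not prescriptions of the framework.

```python
import ssl

# Minimal sketch: a server-side TLS context enforcing TLS 1.2 or newer.
# File names are placeholders for the component's own certificate and key.
def make_server_context(certfile="broker.crt", keyfile="broker.key"):
    context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    context.minimum_version = ssl.TLSVersion.TLSv1_2  # reject legacy TLS
    context.load_cert_chain(certfile=certfile, keyfile=keyfile)
    return context
```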
Security considerations in the proposed SoS security framework also involve adhering to blockchain security best practices and consistently following guidelines for protecting such infrastructure from various factors, such as human error, natural disasters, or other potential impacts. Additionally, we employ APIs in our module and implementation, as is often the case in real-world scenarios. Therefore, APIs must be secured and hardened, from receiving requests to returning responses, and communication channels should always be encrypted using state-of-the-art encryption methodologies and technologies.
_5.4. Threat Model_
A threat model was developed for an SoS utilizing centralized authentication methods, demonstrating how the proposed SoS security framework presented in this paper assists in mitigating the identified threats, as illustrated in Figure 6 and described in Table 1. The STRIDE framework was used to identify threats and assess their impacts across the six categories of Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service (DoS), and Elevation of Privilege.
[Figure 6 diagram: authentication and authorization via centralized systems exposes the SoS to identity spoofing, exploited vulnerabilities, insecure communication channels, and log tampering; authentication and authorization via SSI gives users control over their data, with encrypted proofs sent by users, verification done on the DLT, and encrypted communication channels.]
**Figure 6. The SoS threat model.**
**Table 1. SoS threat model based on STRIDE framework.**
Each STRIDE threat category is listed with its identified threats and the mitigations provided by the proposed SSI-based framework:

- **Spoofing Identity**
  - Identified threats: weak encryption algorithms or a lack of encryption can increase the risk of attack; Man-in-the-Middle (MiTM) attacks can be used to impersonate another user's identity.
  - Mitigations: user-related information is transmitted in an encrypted state; verifiable proof is invalidated once verified by the DLT.
- **Tampering**
  - Identified threats: session data tampering can be exploited by malicious actors to impersonate other users, compromising system security.
  - Mitigations: session tampering is infeasible in SSI; validators can detect false proofs.
- **Repudiation**
  - Identified threats: alteration or deletion of user activities within the SoS is possible if security misconfigurations or other types of vulnerabilities occur; security flaws or failure to adhere to security best practices for the logging and monitoring solutions may lead to the modification of user or system logs.
  - Mitigations: the DLT allows for the creation of an immutable log of identity-related activities; recording and verifying transactions and interactions can involve multiple parties, generating a traceable record; this establishes a verifiable sequence of events when there are alterations, deletions, or claims of denial.
- **Information Disclosure**
  - Identified threats: misconfiguration or improper implementation of centralized authentication and authorization systems can lead to data leakage; Personally Identifiable Information (PII) or cleartext access keys can be exposed; misuse of this information can compromise user data and the SoS; granted administrative access can pose a risk to the SoS.
  - Mitigations: credentials are securely stored in a digital wallet on the user's side, employing robust encryption algorithms; users generate a single proof from the verified credential to authenticate themselves to the SoS; no user data is stored on the SoS; SSI avoids centralized storage and ensures encryption.
- **Denial of Service (DoS)**
  - Identified threats: centralized authentication and authorization systems are vulnerable to DoS attacks; DoS attacks can make these services unavailable to users, particularly when attackers specifically target DNS systems to disrupt their availability; this prevents users from accessing the services under the SoS.
  - Mitigations: DLTs are resilient to DoS attacks due to their decentralized nature; transactions are stored across multiple nodes, making them difficult to target and avoiding a single point of failure.
- **Elevation of Privilege**
  - Identified threats: exploitation of security vulnerabilities to gain unauthorized access; consumption of resources without legitimate access.
  - Mitigations: attempts to change or escalate privileges with false information are prevented; users have control over what information they share with the SoS; requests for excessive permissions or access information are monitored and can be rejected.
_5.5. Framework Practicality and Industry Adoption_
The proposed framework is practical and can be deployed using current technologies, as it builds on technologies recently used in many applications, such as self-sovereign identity and distributed ledger technology. Furthermore, the proposed framework was implemented using Hyperledger Indy, demonstrating its implementation feasibility. However,
integrating the framework into real-world systems of systems, such as automobiles, autonomous ships, manufacturing facilities, energy grids, and medical device networks, poses
significant challenges due to the lack of up-to-date communication infrastructure and the
absence of an e-government structure and associated legislation. Despite these challenges,
many countries are improving and enhancing their infrastructure, which can be seen in the
wide adoption of advanced wired and wireless infrastructure, such as fiber optics, fourth
and fifth-generation wireless infrastructure, and the deployment of cloud-based networks.
This will pave the way for adopting the proposed framework. Moreover, countries are moving toward leveraging their governmental services with an e-government infrastructure and services paradigm.
**6. Conclusions**
In conclusion, the proposed framework provides a secure and scalable solution for
managing the identity of users within a SoS environment. By utilizing SSI and DLT,
the framework ensures the privacy and control of users’ data while enabling secure interactions between different CSs. Implementing the framework using Hyperledger Indy
showcases its feasibility and practicality in real-world scenarios. The security analysis highlights the framework’s ability to address essential security challenges based on the STRIDE
framework. By addressing these challenges, the proposed framework enhances the overall
security and functionality of SoS. Furthermore, the decentralized and distributed framework provides resilience against centralized attacks and scalability for future expansions.
Overall, the framework offers a promising solution to the security concerns in SoS environments and opens up opportunities for broader adoption in other domains. In future work, we will explore adopting and implementing the proposed framework in a real healthcare system and utilizing a cloud-based environment with increased computational capabilities, which, in turn, can serve a higher number of concurrent users.
**Author Contributions: Conceptualization, D.e.D.I.A.-T. and A.K.; Methodology, D.e.D.I.A.-T., R.H.,**
A.K., S.A. and R.O.; Software, D.e.D.I.A.-T. and R.H.; Validation, D.e.D.I.A.-T.; Resources, R.O.;
Writing—original draft, D.e.D.I.A.-T., R.H., A.K. and R.O.; Supervision, D.e.D.I.A.-T.; Project administration, R.O.; Funding acquisition, R.O. All authors have read and agreed to the published version of
the manuscript.
**Funding: This research was funded by the European research project FRACTAL under the Grant**
Agreement ID 877056 and the European research project EcoMobility under the Grant Agreement ID
101112306.
**Conflicts of Interest: The authors declare no conflict of interest.**
**References**
1. [Maier, M. Architecting Principles for Systems-of-Systems. Syst. Eng. 1998, 1, 267–284. [CrossRef]](http://doi.org/10.1002/(SICI)1520-6858(1998)1:4<267::AID-SYS3>3.0.CO;2-D)
2. Staker, R. Towards a knowledge based soft systems engineering method for systems of systems. In Proceedings of the INCOSE
[International Symposium, Melbourne, Australia, 1–5 July 2001; Volume 11, pp. 391–398. [CrossRef]](http://dx.doi.org/10.1002/j.2334-5837.2001.tb02319.x)
3. Fisher, D. An Emergent Perspective on Interoperation in Systems of Systems; Technical Report; Software Engineering Institute,
Carnegie Mellon University: Pittsburgh, PA, USA, 2006.
4. Subramanian, S.V.; DeLaurentis, D.A. Application of Multidisciplinary Systems-of-Systems Optimization to an Aircraft Design
[Problem. Syst. Eng. 2016, 19, 235–251. [CrossRef]](http://dx.doi.org/10.1002/sys.21358)
5. Chalasani, S.; Wickramasinghe, N. Applying a System of Systems Approach to Healthcare. In Lean Thinking for Healthcare;
[Wickramasinghe, N., Al-Hakim, L., Gonzalez, C., Tan, J., Eds.; Springer: New York, NY, USA, 2014; pp. 287–297. [CrossRef]](http://dx.doi.org/10.1007/978-1-4614-8036-5_16)
6. Marcu, I.; Suciu, G.; B˘al˘aceanu, C.; Vulpe, A.; Dr˘agulinescu, A.M. Arrowhead Technology for Digitalization and Automation
[Solution: Smart Cities and Smart Agriculture. Sensors 2020, 20, 1464. [CrossRef]](http://dx.doi.org/10.3390/s20051464)
7. Natarajan, H.; Krause, S.K.; Gradstein, H.L. Distributed Ledger Technology (DLT) and Blockchain; FinTech Note; no. 1; World Bank
Group: Washington, DC, USA, 2019.
8. Ki-Aries, D.; Faily, S.; Dogan, H.; Williams, C. Assessing system of systems information security risk with OASoSIS. Comput.
_[Secur. 2022, 117, 102690. [CrossRef]](http://dx.doi.org/10.1016/j.cose.2022.102690)_
9. Olivero, M.A.; Bertolino, A.; Dominguez-Mayo, F.J.; Escalona, M.J.; Matteucci, I. Security Assessment of Systems of Systems. In
Proceedings of the 2019 IEEE/ACM 7th International Workshop on Software Engineering for Systems-of-Systems (SESoS) and
13th Workshop on Distributed Software Development, Software Ecosystems and Systems-of-Systems (WDES), Montreal, QC,
[Canada, 28 May 2019; pp. 62–65. [CrossRef]](http://dx.doi.org/10.1109/SESoS/WDES.2019.00017)
10. Guariniello, C.; DeLaurentis, D. Communications, Information, and Cyber Security in Systems-of-Systems: Assessing the Impact
[of Attacks through Interdependency Analysis. Procedia Comput. Sci. 2014, 28, 720–727. [CrossRef]](http://dx.doi.org/10.1016/j.procs.2014.03.086)
11. Trivellato, D.; Zannone, N.; Etalle, S. A Security Framework for Systems of Systems. In Proceedings of the 2011 IEEE International
[Symposium on Policies for Distributed Systems and Networks, Pisa, Italy, 6–8 June 2011; pp. 182–183. [CrossRef]](http://dx.doi.org/10.1109/POLICY.2011.16)
12. Trivellato, D.; Zannone, N.; Glaundrup, M.; Skowronek, J.; Etalle, S. A semantic security framework for systems of systems. Int. J.
_[Coop. Inf. Syst. 2013, 22, 1350004. [CrossRef]](http://dx.doi.org/10.1142/S0218843013500044)_
13. Hachem, J.E.; Chiprianov, V.; Babar, M.A.; Khalil, T.A.; Aniorte, P. Modeling, analyzing and predicting security cascading attacks
[in smart buildings systems-of-systems. J. Syst. Softw. 2020, 162, 110484. [CrossRef]](http://dx.doi.org/10.1016/j.jss.2019.110484)
14. Nguyen, P.H.; Ali, S.; Yue, T. Model-based security engineering for cyber-physical systems: A systematic mapping study. Inf.
_[Softw. Technol. 2017, 83, 116–135. [CrossRef]](http://dx.doi.org/10.1016/j.infsof.2016.11.004)_
15. Abou-Tair, D.e.D.I.; Alouneh, S.; Khalifeh, A.; Obermaisser, R. A Security Framework for Systems-of-Systems. In Advances in
_Computer Science and Ubiquitous Computing; Park, J.J., Loia, V., Yi, G., Sung, Y., Eds.; Springer: Singapore, 2018; pp. 427–432._
16. Bicaku, A.; Zsilak, M.; Theiler, P.; Tauber, M.; Delsing, J. Security Standard Compliance Verification in System of Systems. IEEE
_[Syst. J. 2022, 16, 2195–2205. [CrossRef]](http://dx.doi.org/10.1109/JSYST.2021.3064196)_
17. Agrawal, D. A new schema for security in dynamic uncertain environments. In Proceedings of the 2009 IEEE Sarnoff Symposium,
[Princeton, NJ, USA, 30 March–1 April 2009; pp. 1–5. [CrossRef]](http://dx.doi.org/10.1109/SARNOF.2009.4850378)
18. Abou-Tair, D.e.D.I.; Khalifeh, A. Distributed Self-Sovereign-Based Access Control System. IEEE Secur. Priv. 2022, 20, 35–42.
[[CrossRef]](http://dx.doi.org/10.1109/MSEC.2022.3148906)
19. Ahmed, M.R.; Islam, A.K.M.M.; Shatabda, S.; Islam, S. Blockchain-Based Identity Management System and Self-Sovereign
[Identity Ecosystem: A Comprehensive Survey. IEEE Access 2022, 10, 113436–113481. [CrossRef]](http://dx.doi.org/10.1109/ACCESS.2022.3216643)
20. Maesa, D.D.F.; Mori, P.; Ricci, L. Blockchain based access control. In Proceedings of the IFIP International Conference on
Distributed Applications and Interoperable Systems, Neuchatel, Switzerland, 19–22 June 2017; Springer: Berlin/Heidelberg,
Germany, 2017; pp. 206–220.
21. Xu, L.; Chen, L.; Shah, N.; Gao, Z.; Lu, Y.; Shi, W. DL-BAC: Distributed Ledger Based Access Control for Web Applications. In
Proceedings of the 26th International Conference on World Wide Web Companion, Perth, Australia, 3–7 April 2017; pp. 1445–1450.
22. Abou-Tair, D.e.D.I.; Khalifeh, A.; Alouneh, S.; Obermaisser, R. Incremental, Distributed, and Concurrent Service Coordination for
[Reliable and Deterministic Systems-of-Systems. IEEE Syst. J. 2020, 15, 2470–2481. [CrossRef]](http://dx.doi.org/10.1109/JSYST.2020.3020430)
23. Toth, K.C.; Anderson-Priddy, A. Self-sovereign digital identity: A paradigm shift for identity. IEEE Secur. Priv. 2019, 17, 17–27.
[[CrossRef]](http://dx.doi.org/10.1109/MSEC.2018.2888782)
**Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual**
author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to
people or property resulting from any ideas, methods, instructions or products referred to in the content.
| 12,885
|
{
"disclaimer": "Notice: Paper or abstract available at https://pmc.ncbi.nlm.nih.gov/articles/PMC10490802, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "CCBY",
"status": "GOLD",
"url": "https://www.mdpi.com/1424-8220/23/17/7617/pdf?version=1693797625"
}
| 2,023
|
[
"JournalArticle"
] | true
| 2023-09-01T00:00:00
|
[
{
"paperId": "857ec5f733616f0907788247ee2816065892c157",
"title": "Distributed Self-Sovereign-Based Access Control System"
},
{
"paperId": "3261e2fe730f83223b8ce841f15c68cba29d53b7",
"title": "Security Standard Compliance Verification in System of Systems"
},
{
"paperId": "6a6157fb6c7229d38856cc0852968549a4ef8663",
"title": "Incremental, Distributed, and Concurrent Service Coordination for Reliable and Deterministic Systems-of-Systems"
},
{
"paperId": "0a1172c20e6d68d78cd6f50d083d41d3d0271b7b",
"title": "Modeling, analyzing and predicting security cascading attacks in smart buildings systems-of-systems"
},
{
"paperId": "0256c722e23c110330b6c061742b6538d14d3496",
"title": "Arrowhead Technology for Digitalization and Automation Solution: Smart Cities and Smart Agriculture"
},
{
"paperId": "c58a4a4c7c411a04554ce5a585c4b7b31766ce2c",
"title": "Self-Sovereign Digital Identity: A Paradigm Shift for Identity"
},
{
"paperId": "2b0a6beaa6643aadfa3bee78989bed186d8fa687",
"title": "2019 IEEE/ACM 7th International Workshop on Software Engineering for Systems-of-Systems (SESoS) and 13th Workshop on Distributed Software Development, Software Ecosystems and Systems-of-Systems (WDES)"
},
{
"paperId": "75fc4d6c436b1ce7fe209c340b2c60ce907dcf15",
"title": "Security Assessment of Systems of Systems"
},
{
"paperId": "ae0c27924f356e8598e857121a6aa7dcd31eda81",
"title": "Distributed Ledger Technology (DLT) and blockchain"
},
{
"paperId": "8d50b93c207470f26437a1be37369b6fc686fea2",
"title": "Blockchain Based Access Control"
},
{
"paperId": "ac6a7075814f31ca2825eaef461c0d051c2a1456",
"title": "DL-BAC: Distributed Ledger Based Access Control for Web Applications"
},
{
"paperId": "2dd3a69ab03d6c80f792caa6c53d368f387c44f4",
"title": "Model-based security engineering for cyber-physical systems: A systematic mapping study"
},
{
"paperId": "3bbff5ecf086902009ab419eb53172432ad7f74f",
"title": "Application of Multidisciplinary Systems‐of‐Systems Optimization to an Aircraft Design Problem"
},
{
"paperId": "bfb9b5b0dc6668396cb8fde14335bb8e5f46044d",
"title": "Communications, Information, and Cyber Security in Systems-of-Systems: Assessing the Impact of Attacks through Interdependency Analysis"
},
{
"paperId": "682ccfcb052bf26488a56466c73406f221c2daf4",
"title": "A Semantic Security Framework for Systems of Systems"
},
{
"paperId": "bc9cad2bd827d667e6711064c25c707c00f68e8d",
"title": "A Security Framework for Systems of Systems"
},
{
"paperId": "7dadd919f146c20617856beb51505a2337af5517",
"title": "A new schema for security in dynamic uncertain environments"
},
{
"paperId": "2ac74d87239a83e4e3b218f6c24c0ec92abecafc",
"title": "An Emergent Perspective on Interoperation in Systems of Systems"
},
{
"paperId": "1b89d6f88db5d451ad6b0bbdbda1fdaee89db50a",
"title": "2.3.2 Towards a Knowledge Based Soft Systems Engineering Method for Systems of Systems"
},
{
"paperId": "f9939b386eb951b57d244d01c8660ecab55ac5de",
"title": "Architecting Principles for Systems‐of‐Systems"
},
{
"paperId": "6de00d6bd62be8abcc15aa597eab857569ae53e6",
"title": "Assessing system of systems information security risk with OASoSIS"
},
{
"paperId": "1e9d7633e6f346bc66bd451eaff656834e7d9c42",
"title": "Blockchain-Based Identity Management System and Self-Sovereign Identity Ecosystem: A Comprehensive Survey"
},
{
"paperId": "cfd200720c949f6ac4312ccaa88873d8925b84dc",
"title": "Advances in Computer Science and Ubiquitous Computing - CSA/CUTE 2017, Taichung, Taiwan, 18-20 December"
},
{
"paperId": "9330d2ed42a6b12f8ebf01c5c13ce6bbab799a22",
"title": "Lean thinking for healthcare."
},
{
"paperId": "cc19c97c2ed6729a70d8a556f495082f193ae5c2",
"title": "Applying a System of Systems Approach to Healthcare"
}
] | 12,885
|
en
|
[
{
"category": "Medicine",
"source": "external"
},
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/000634d00e45d43a7abbc57c02bea6d663cb9232
|
[
"Medicine",
"Computer Science"
] | 0.832738
|
DecGPU: distributed error correction on massively parallel graphics processing units using CUDA and MPI
|
000634d00e45d43a7abbc57c02bea6d663cb9232
|
BMC Bioinformatics
|
[
{
"authorId": "2916386",
"name": "Yongchao Liu"
},
{
"authorId": "38613433",
"name": "B. Schmidt"
},
{
"authorId": "1793395",
"name": "D. Maskell"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"BMC Bioinform"
],
"alternate_urls": [
"http://www.pubmedcentral.nih.gov/tocrender.fcgi?journal=13",
"http://www.biomedcentral.com/bmcbioinformatics/"
],
"id": "be3f884c-b44a-496a-a593-1cad3f89d254",
"issn": "1471-2105",
"name": "BMC Bioinformatics",
"type": "journal",
"url": "http://www.biomedcentral.com/bmcbioinformatics"
}
|
BackgroundNext-generation sequencing technologies have led to the high-throughput production of sequence data (reads) at low cost. However, these reads are significantly shorter and more error-prone than conventional Sanger shotgun reads. This poses a challenge for the de novo assembly in terms of assembly quality and scalability for large-scale short read datasets.ResultsWe present DecGPU, the first parallel and distributed error correction algorithm for high-throughput short reads (HTSRs) using a hybrid combination of CUDA and MPI parallel programming models. DecGPU provides CPU-based and GPU-based versions, where the CPU-based version employs coarse-grained and fine-grained parallelism using the MPI and OpenMP parallel programming models, and the GPU-based version takes advantage of the CUDA and MPI parallel programming models and employs a hybrid CPU+GPU computing model to maximize the performance by overlapping the CPU and GPU computation. The distributed feature of our algorithm makes it feasible and flexible for the error correction of large-scale HTSR datasets. Using simulated and real datasets, our algorithm demonstrates superior performance, in terms of error correction quality and execution speed, to the existing error correction algorithms. Furthermore, when combined with Velvet and ABySS, the resulting DecGPU-Velvet and DecGPU-ABySS assemblers demonstrate the potential of our algorithm to improve de novo assembly quality for de-Bruijn-graph-based assemblers.ConclusionsDecGPU is publicly available open-source software, written in CUDA C++ and MPI. The experimental results suggest that DecGPU is an effective and feasible error correction algorithm to tackle the flood of short reads produced by next-generation sequencing technologies.
|
## SOFTWARE Open Access
# DecGPU: distributed error correction on massively parallel graphics processing units using CUDA and MPI
### Yongchao Liu[*], Bertil Schmidt and Douglas L Maskell
Background
Introduction
The ongoing revolution of next-generation sequencing
(NGS) technologies has led to the production of
high-throughput short read (HTSR) data (i.e. DNA
sequences) at dramatically lower cost compared to conventional Sanger shotgun sequencing. However, the produced reads are significantly shorter and more error-prone. Additionally, de novo whole-genome shotgun fragment assemblers that have been optimized for Sanger reads, such as Atlas [1], ARACHNE [2], Celera [3] and PCAP [4], do not scale well for HTSR data. Therefore, a new generation of de novo assemblers is required.
Several greedy short read assemblers, such as SSAKE
[5], SHARCGS [6], VCAKE [7] and Taipan [8], have
been developed based on contig extensions. However,
these assemblers have difficulties in assembling repeat
regions. The introduction of de Bruijn graphs for fragment assembly [9] has sparked new interests in using
the de Bruijn graph approach for short read assembly.
In the context of short read assembly, nodes of a de
Bruijn graph represent all possible k-mers (a k-mer is a
substring of length k), and edges represent suffix-prefix
perfect overlaps of length k-1. Short read assemblers
based on the de Bruijn graph approach include EULER-SR [10], Velvet [11], ALLPATHS [12], ABySS [13], and SOAPdenovo [14]. In a de Bruijn graph, each single-base error in a read induces up to k false nodes, and
since each false node has a chance of linking to some
other node, it is likely to induce false path convergence.
Therefore, assembly quality of de-Bruijn-graph-based
assemblers is expected to improve by detecting and
fixing base errors in reads prior to assembly.
In addition to the error correction algorithms based
on the spectral alignment problem (SAP) in [9] and
[10], a new error correction algorithm called SHREC
[15] has been proposed using a generalized suffix trie.
Hybrid SHREC (hSHREC) [16] extends the work of
SHREC by enabling the correction of substitutions,
insertions, and deletions in a mixed set of short reads
produced from different sequencing platforms. Unfortunately, due to the large size of NGS datasets, the error
correction procedure before assembly is both time and
memory consuming. Many-core GPU computing architectures have evolved rapidly and have already demonstrated their powerful compute capability to reduce the
execution time of a range of demanding bioinformatics
applications, such as protein sequence database search
[17,18], multiple sequence alignment [19], and motif
finding [20]. As a first step, Shi et al. [21] implemented
CUDA-EC, a parallel error correction algorithm using
NVIDIA’s compute unified device architecture (CUDA),
based on the SAP approach [9], where a Bloom filter
data structure [22] is used to gain memory space efficiency. This algorithm has been further optimized by
incorporating quality scores and a filtration approach in
[23]. However, the drawback of this approach is the
assumption that the device memory of a single GPU is
sufficient to store the genome information of the SAP,
i.e. the spectrum T(G) (see Spectral alignment problem
subsection). Thus, a distributed error correction
approach is a good choice to further reduce execution
time and to overcome memory constraints.
In this paper, we present DecGPU, the first parallel
and distributed error correction algorithm for large-scale HTSR datasets using a hybrid combination of CUDA and message passing interface (MPI) [24] parallel programming models. DecGPU provides two versions: a CPU-based version and a GPU-based version. The CPU-based version employs coarse-grained and fine-grained
parallelism using the MPI and Open Multi-Processing
(OpenMP) [25] parallel programming models. The
GPU-based version takes advantage of the CUDA and
MPI parallel programming models and employs a hybrid
CPU+GPU computing model to maximize the performance by overlapping the CPU and GPU computation.
The distributed feature of our algorithm makes it a feasible and flexible solution to the error correction of
large-scale HTSR datasets. Our algorithm is designed
based on the SAP approach and uses a counting Bloom
filter data structure [26] for memory space efficiency.
Even though our algorithm also uses the filtration
approach to reduce execution time like CUDA-EC, it
has intrinsic differences from CUDA-EC, such as distributed k-mer spectrums, hybrid combination of different parallel programming models, and CUDA kernel
implementations. Compared to the hSHREC algorithm,
DecGPU shows superior error correction quality for
both simulated and real datasets. As for the execution
speed, on a workstation with two quad-core CPUs, our
CPU-based version runs up to 22× faster than hSHREC.
Furthermore, on a single GPU, the GPU-based version
runs up to 2.8× faster than CUDA-EC (version 1.0.1).
When combined with Velvet (version 1.0.17) and ABySS
(version 1.2.1), the resulting DecGPU-Velvet and
DecGPU-ABySS assemblers demonstrate the potential of
our algorithm to improve de novo assembly quality for
de-Bruijn-graph-based assemblers by correcting sequencing errors prior to assembly.
Spectral alignment problem
The SAP approach detects and fixes base errors in a read
based on the k-mer set Gk of a genome G. Since the genome G is not known beforehand in a de novo sequencing
project, SAP approximates Gk using a k-mer spectrum T
(G). T(G) is the set of all solid k-mers throughout all
reads. A k-mer is called solid if its multiplicity throughout all reads is not less than a user-specified threshold
M, and weak otherwise. If every k-mer in a read has an
exact match in T(G), the read is called a T-string. Given
an erroneous read R, SAP is defined as finding a T-string R* with minimal Hamming distance to R.
Two heuristics of SAP have been suggested: the iterative approach [9] and the dynamic programming
approach [10]. The iterative approach attempts to transform weak k-mers in a read to solid ones by substituting
some possibly erroneous bases through a voting algorithm. The dynamic programming approach attempts to
find the shortest path that corresponds to a T-string
with minimal edit distance. The underlying algorithm
model of DecGPU is inspired by the iterative approach.
Bloom filter data structure
The spectrum T(G) is the fundamental data structure
for SAP-based error correction. For large-scale short
read error correction, the major challenges posed by
T(G) are the computational overhead for k-mer membership lookup and the memory constraint for k-mer
storage. Hash tables are advantageous in execution time
for membership lookup, but consume too much memory. Thus, we choose a Bloom filter, a very compact
hash-based data structure, to achieve efficiency in terms
of both lookup time and memory space. However, the
space efficiency of a Bloom filter is gained by allowing
false positive querying. The more elements inserted to
the Bloom filter, the higher the probability of false positive querying. As such, a Bloom filter is more suitable
for the cases where space resources are at a premium
and a small number of false positives can be tolerated.
Both conditions are met by our error correction algorithm, since false positives might only result in some
unidentified sequencing errors.
A classical Bloom filter uses a bit array with h asso
ciated independent hash functions, supporting insertion
and membership querying of elements. Initially, all
buckets (1 bit per bucket) in a classical Bloom filter are
set to zero. When inserting or querying an element, the
h hash values of the element are first calculated using
the h hash functions. When inserting an element, the
corresponding buckets indexed by the hash values are
set to 1. When querying an element, it returns the corresponding buckets. The element is likely to exist if all
buckets are 1; and definitely does not exist, otherwise.
The time for insertion and querying of an element has constant time complexity, O(h), independent of the number of inserted elements. The false positive probability (FPP) of a classical Bloom filter is calculated as

$$\mathrm{FPP} = \left(1 - \left(1 - \frac{1}{N_B}\right)^{hN_E}\right)^{h} \approx \left(1 - e^{-hN_E/N_B}\right)^{h} = \left(1 - e^{-\alpha}\right)^{h} \qquad (1)$$

where $N_B$ is the total number of buckets, $N_E$ is the number of elements, and $\alpha = hN_E/N_B$.

To construct T(G), we need to record the multiplicity of each k-mer. However, because the classical Bloom filter does not store the number of k-mer occurrences, DecGPU instead chooses a counting Bloom filter to represent T(G). A counting Bloom filter extends a bucket of the classical Bloom filter from 1 bit to several bits. DecGPU uses 4 bits per bucket, supporting a maximum multiplicity of 15. When inserting an element, it increases (using saturating addition) the counter values of the corresponding buckets indexed by the hash values. When querying an element, it returns the minimum counter value of all the corresponding buckets, which is most likely to be the real multiplicity of the element. A counting Bloom filter has the same FPP as the corresponding classical Bloom filter.
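The following self-contained Python sketch mirrors the counting Bloom filter just described (4-bit saturating counters, h hash functions, minimum-counter queries) together with the FPP estimate of Eq. (1). The double-hashing scheme used to derive the h indices is an assumption for illustration, not DecGPU's actual hash family.

```python
import hashlib
import math

class CountingBloomFilter:
    """Counting Bloom filter sketch: 4-bit saturating counters."""

    def __init__(self, num_buckets, num_hashes):
        self.nb, self.h = num_buckets, num_hashes
        # One counter per bucket; stored in a byte here but clamped to the
        # 4-bit maximum of 15, as in the description above.
        self.counters = bytearray(num_buckets)

    def _indices(self, kmer):
        # Derive h bucket indices by double hashing a SHA-256 digest
        # (illustrative only).
        d = hashlib.sha256(kmer.encode()).digest()
        h1 = int.from_bytes(d[:8], "big")
        h2 = int.from_bytes(d[8:16], "big") | 1
        return [(h1 + i * h2) % self.nb for i in range(self.h)]

    def insert(self, kmer):
        for i in self._indices(kmer):
            self.counters[i] = min(self.counters[i] + 1, 15)  # saturating add

    def multiplicity(self, kmer):
        # The minimum counter over the h buckets estimates the true count.
        return min(self.counters[i] for i in self._indices(kmer))

def fpp(num_buckets, num_elements, num_hashes):
    # Eq. (1): FPP is approximately (1 - e^(-h*N_E/N_B))^h.
    alpha = num_hashes * num_elements / num_buckets
    return (1.0 - math.exp(-alpha)) ** num_hashes

bf = CountingBloomFilter(num_buckets=1 << 20, num_hashes=4)
bf.insert("ACGTACGT")
print(bf.multiplicity("ACGTACGT"))  # -> 1 (barring hash collisions)
print(fpp(1 << 20, 100_000, 4))     # -> roughly 0.01
```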
CUDA and MPI programming models
More than a software and hardware co-processing architecture, CUDA is also a parallel programming language extending general programming languages, such as C, C++ and Fortran, with a minimalist set of abstractions for expressing parallelism. CUDA enables users to write parallel scalable programs for CUDA-enabled processors with familiar languages [27]. A CUDA program is comprised of two parts: a host program running one or more sequential threads on a host CPU, and one or more parallel kernels able to execute on Tesla [28] and Fermi [29] unified graphics and computing architectures.
A kernel is a sequential program launched on a set of lightweight concurrent threads. The parallel threads are organized into a grid of thread blocks, where all threads in a thread block can synchronize through barriers and communicate via a high-speed, per-block shared memory (PBSM). This hierarchical organization of threads enables thread blocks to implement coarse-grained task and data parallelism, and the lightweight threads comprising a thread block to provide fine-grained thread-level parallelism. Threads from different thread blocks in the same grid are able to cooperate through atomic operations on global memory shared by all threads. To write efficient CUDA programs, it is important to understand the features of the different memory spaces, including non-cached global and local memory, cached texture and constant memory, as well as on-chip PBSM and registers.
The CUDA-enabled processors are built around a fully programmable scalable processor array, organized into a number of streaming multiprocessors (SMs). For the Tesla architecture, each SM contains 8 scalar processors (SPs) and shares a fixed 16 KB of PBSM; within the Tesla series, the number of SMs per device varies from generation to generation. The Fermi architecture contains 16 SMs, with each SM having 32 SPs. Each SM in the Fermi architecture has a PBSM size configurable from its 64 KB of on-chip memory, which can be set up as 48 KB of PBSM with 16 KB of L1 cache, or as 16 KB of PBSM with 48 KB of L1 cache. When executing a thread block, both architectures split all the threads in the thread block into small groups of 32 parallel threads, called warps, which are scheduled in a single instruction, multiple thread (SIMT) fashion. Divergence of execution paths is allowed for threads in a warp, but SMs realize full efficiency and performance when all threads of a warp take the same execution path.
MPI is a de facto standard for developing portable parallel applications using the message passing mechanism. MPI works on both shared and distributed memory machines, offering a highly portable solution to parallel programming on a variety of machines and hardware topologies. MPI defines each worker as a process and enables the processes to execute different programs. This multiple program, multiple data model offers more flexibility for data-shared or data-distributed parallel program design. Within a computation, processes communicate data by calling runtime library routines, specified for the C/C++ and Fortran programming languages,
including point-to-point and collective communication
routines. Point-to-point communication is used to send
and receive messages between two named processes,
suitable for local and unstructured communications.
Collective (global) communication is used to perform
commonly used global operations (e.g. reduction and
broadcast operations).
Implementation
DecGPU error correction algorithm
DecGPU consists of five major stages: (1) constructing the distributed k-mer spectrum, (2) filtering out error-free reads, (3) fixing erroneous reads using a voting algorithm, (4) trimming (or discarding entirely) the fixed reads that remain erroneous, and (5) an optional iterative policy between the filtering and fixing stages with the intention of correcting more than one base error in a single
read. The second stage filters out error-free reads and
passes down the remaining erroneous reads to the third
stage. After the erroneous reads have been fixed, the
fixed reads are either passed up to another filtering
stage or down to the trimming stage, depending on
whether the optional iterative policy is used. For a fixed
read that remains erroneous, the trimming stage
attempts to find the user-satisfied longest substring of
the read, in which all k-mers are solid (the workflow
and data dependence between stages are shown in
Figure 1).
For DecGPU, a processing element (PE) Pi refers to the i-th MPI process. Each MPI process has a one-to-one correspondence with a GPU device. Each Pi therefore
consists of two threads: a CPU thread and a GPU
thread. This hybrid CPU+GPU computing model provides the potential to achieve performance maximization
through the overlapping of CPU and GPU computation.
The input reads of each stage are organized into batches
to facilitate the overlapping. In the MPI runtime environment, DecGPU ensures the one-to-one correspondence between an MPI process and one GPU device by
automatically assigning GPU devices to processes using
a registration management approach. First, each process
registers its hostname and the number of qualified GPU
devices in its host to a specified master process. Secondly, the master process verifies the registrations by
checking that, for a specific host, the number of GPU
devices reported by all processes running on it must be
the same and must not be less than the number of the
processes. Finally, the master process enumerates each
host and assigns a unique GPU device identifier to each
process running on the host.
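The registration-and-assignment protocol can be sketched in a few lines of mpi4py-style Python (an assumption: DecGPU itself is implemented in C++ with MPI, and `num_local_gpus` stands in for a hypothetical device-counting helper).

```python
from mpi4py import MPI
import socket

def assign_gpu_id(comm, num_local_gpus):
    rank = comm.Get_rank()
    # Step 1: every process registers (hostname, GPU count) with rank 0.
    regs = comm.gather((rank, socket.gethostname(), num_local_gpus), root=0)
    ids = None
    if rank == 0:
        next_id, ids = {}, [0] * comm.Get_size()
        for r, host, count in sorted(regs):
            # Step 2 (verification, elided): all processes on `host` must
            # report the same count, and the count must not be less than
            # the number of processes on that host.
            ids[r] = next_id.get(host, 0)   # Step 3: unique device id
            next_id[host] = ids[r] + 1
    # Each process receives its assigned GPU device identifier.
    return comm.scatter(ids, root=0)
```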
Distributed spectrum construction
DecGPU distributes the k-mer spectrum that uses a
counting Bloom filter. For the distributed spectrum,
each Pi holds a local spectrum T(G, Pi) that is a subset
of T(G). The set of all local spectrums {T(G, Pi)} forms
a partition of T(G); i.e. it holds:
$$T(G) = \bigcup_{i=1}^{N_{PE}} T(G, P_i), \quad \text{and} \quad T(G, P_i) \cap T(G, P_j) = \emptyset \ \text{for } i \neq j \qquad (2)$$
where NPE is the number of PEs. DecGPU constructs
the distributed spectrum by (nearly) evenly distributing
the set of all possible k-mers (including their reverse
complements) over all PEs. The location of a k-mer is
determined using modular hashing. A k-mer is packed
into an integer Ik by mapping the bases {A, C, G, T} to
the numerical values {0, 1, 2, 3}. The index of the PE
that owns this k-mer is computed as Ik % NPE. This distributed spectrum reduces the number of k-mers in a
single spectrum by a factor of the number of PEs. Thus,
we are able to keep an acceptable probability of false
positives of T(G) with no need for a vast amount of
device memory in a single GPU. Using this distributed
spectrum, for the membership lookup of a k-mer, all
PEs must simultaneously conduct the membership
lookup of the k-mer in their local spectrums, and then
perform collective operations to gain the final result.
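The packing and modular-distribution rule is small enough to state directly; the Python sketch below (an illustration, not DecGPU's CUDA code) packs a k-mer at 2 bits per base and derives the owning PE. Reverse complements, which DecGPU also distributes, are omitted for brevity.

```python
BASE = {"A": 0, "C": 1, "G": 2, "T": 3}

def pack_kmer(kmer):
    # Map bases to 2-bit values and pack them into one integer I_k.
    value = 0
    for base in kmer:
        value = (value << 2) | BASE[base]
    return value

def owner_pe(kmer, num_pes):
    # The PE owning this k-mer, per the modular rule I_k % N_PE.
    return pack_kmer(kmer) % num_pes

print(owner_pe("ACGTACGTACGTACGTACGTA", num_pes=4))  # -> 0 (last base A)
```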
For the distributed spectrum construction, intuitively,
the most effective approach is to allow each PE to build
its local spectrum on its GPU device, where thousands
of threads on the GPU device simultaneously calculate
hash values of k-mers and determine their destinations.
However, this approach requires support for device-level global memory consistency or atomic functions,
since different threads in the device might update the
counter value at the same address in the counting
Bloom filter. CUDA-enabled GPUs do not provide a
mechanism to ensure device-level global memory consistency for all threads in a kernel when the kernel is
running. CUDA does provide the support for atomic
functions, but they are not byte-addressable. If using an
integer for a bucket of a counting Bloom filter, the
memory space efficiency of the Bloom filter will be significantly lost. In this case, we choose the CPU + GPU
hybrid computing for the local spectrum construction of
each Pi (as shown in Figure 2). Since all input reads are
organized into batches, each Pi runs multiple iterations
to complete the spectrum construction with each iteration processing a read batch. In each iteration, the CPU
thread awaits the hash values of a read batch. When the
hash values of a read batch are available, the CPU
thread inserts k-mers, which are distributed to itself,
into its local spectrum using the corresponding hash
values. In the meantime, the GPU thread reads in
another batch of reads, calculates the hash values for
this batch, and then transfers the hash values as well as
the read batch to the CPU thread.
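The batch overlap can be pictured as a two-stage producer/consumer pipeline. The Python sketch below is a schematic stand-in, with `hash_batch` and `insert_batch` as hypothetical placeholders for the CUDA hashing kernel and the local spectrum update; it is not DecGPU's actual implementation.

```python
import threading
import queue

def build_spectrum(read_batches, hash_batch, insert_batch):
    handoff = queue.Queue(maxsize=1)  # one batch in flight at a time

    def producer():  # plays the role of the GPU thread
        for batch in read_batches:
            handoff.put((batch, hash_batch(batch)))
        handoff.put(None)  # sentinel: no more batches

    threading.Thread(target=producer, daemon=True).start()
    # The CPU thread consumes batch i while batch i+1 is being hashed.
    while (item := handoff.get()) is not None:
        batch, hashes = item
        insert_batch(batch, hashes)
```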
Using CUDA, one read is mapped to one thread, which computes the hash values of all k-mers in the read (and of their reverse complements) and determines their destination PEs. All reads of a batch are
stored in texture memory bound to linear memory.
Because a k-mer is frequently accessed while calculating
the hash values, the k-mer is loaded from texture memory to shared memory for improving performance. All
the following stages store and access reads and k-mers
in the same manner. A conversion table in constant
memory is used for the conversion of a nucleotide base
to its complement. The hash value arrays are allocated
in global memory using the coalesced global memory
allocation pattern [15].
Filtering out error-free reads
The core of our distributed filtering algorithm is
described as follows. For a specific read, each Pi simultaneously checks in its local spectrum T(G, Pi) the solidity
of each k-mer of the read. Since each k-mer corresponds
to a position in a read, Pi uses a local solidity vector SV(Pi) to record k-mer existence for the read. If a k-mer belongs to T(G, Pi), the corresponding position in SV(Pi) is set to 0, and to 1 otherwise. After completing
the solidity check of all k-mers, all PEs perform a logical
AND reduction operation on the solidity vectors {SV
(Pi)} to gain the final global solidity vector SV. The read
is error-free if all the positions in SV are 0 and erroneous otherwise. For each erroneous read, the values of
SV are stored into a file, along with the read, for the
future use of the fixing stage.
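In mpi4py-style pseudocode (an assumption; DecGPU itself uses C++/MPI), the distributed solidity check reduces the local vectors with an elementwise logical AND; `is_solid_locally` is a hypothetical lookup into the local counting Bloom filter.

```python
import numpy as np
from mpi4py import MPI

def global_solidity_vector(comm, read, k, is_solid_locally):
    n = len(read) - k + 1
    # 1 = the k-mer is absent from the *local* spectrum, 0 = present,
    # matching the SV(Pi) convention in the text.
    local = np.array([0 if is_solid_locally(read[i:i + k]) else 1
                      for i in range(n)], dtype=np.uint8)
    final = np.empty_like(local)
    # A k-mer is weak globally only if every PE reports it absent,
    # i.e. an elementwise logical-AND reduction over the local vectors.
    comm.Allreduce(local, final, op=MPI.LAND)
    return final  # the read is error-free iff final contains no 1s
```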
Figure 3 shows the workflow of each PE for filtering
out error-free reads. For each Pi, the CPU thread
receives the set {SV(Pi)} of a read batch from the GPU
thread, performs logical AND reduction operations on
{SV(Pi)} in parallel with the other PEs, and then processes the read batch in parallel with the other PEs to
filter out error-free reads. Meanwhile, the GPU thread
reads in a batch of reads, calculates {SV(Pi)} of the batch
using its local spectrum T(G, Pi), and then transfers {SV
(Pi)} to the CPU thread. From this workflow, the calculation time of the solidity vectors on the GPUs does not
scale with the number of PEs, but the execution time of
the reduction operations and the error-free reads determination scales well with the number of PEs. Using
CUDA, one read is mapped to one thread which builds
the solidity vector of the read using T(G, Pi). The solidity vectors are allocated in global memory in a coalesced
pattern.
Fixing erroneous reads
If a mutation error occurs at position j of a read of length l, this mutation creates up to min{k, j, l-j} erroneous k-mers that all point to the same sequencing error. The aim of our fixing algorithm is to transform these min{k, j, l-j} weak k-mers into solid ones. To do so, a voting algorithm is applied to correct the most likely erroneous bases that caused the weak k-mers. The voting algorithm attempts to find the correct base by substituting every possible base at each position of a weak k-mer and checking the solidity of the resulting k-mers.
The core of our distributed fixing algorithm is described as follows. For an erroneous read, each Pi checks in T(G) the existence of all k-mers of the read from left to right. Because each Pi does not hold a copy of T(G), the existence check is conducted using the solidity vectors {SV} produced and saved by the filtering stage. If a k-mer does not belong to T(G), each Pi invokes the voting algorithm to compute its local voting matrix VM(Pi) using its local spectrum T(G, Pi). After completing the voting matrix computation, all PEs perform an ADDITION reduction operation on the voting matrices {VM(Pi)} to obtain the final global voting matrix VM of the read. A fixing procedure is then performed using VM to correct the erroneous read. When the optional iterative policy is enabled, a starting position SPOS is saved for each erroneous read after the previous fixing iteration completes, indicating that every k-mer starting before SPOS is solid; in the current fixing iteration, the voting matrix computation starts from SPOS. Note that substituting an erroneous base with the voted (most likely) correct base might introduce new errors, even if there is really only one base error in the read. Hence, it is not necessarily the case that more fixing iterations correct more base errors. Figure 4 shows the pseudocode of the CUDA kernel of the voting algorithm; an illustrative serial version of the voting step is sketched below.
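This is a hedged serial rendering of the per-thread voting logic described above, not DecGPU's kernel: vm is a 4 x read-length voting matrix (rows indexed by base A, C, G, T) and is_solid is a stand-in for the spectrum membership query.

```cpp
#include <string>
#include <vector>

static const char BASES[4] = {'A', 'C', 'G', 'T'};

// Votes for one weak k-mer starting at kmer_start: try every alternative base
// at every position; if the mutated k-mer is solid, vote for that correction.
void vote_kmer(const std::string& read, int kmer_start, int k,
               bool (*is_solid)(const std::string&),
               std::vector<std::vector<int>>& vm /* [4][read length] */) {
    std::string kmer = read.substr(kmer_start, k);
    for (int p = 0; p < k; ++p) {
        const char original = kmer[p];
        for (int b = 0; b < 4; ++b) {
            if (BASES[b] == original) continue;
            kmer[p] = BASES[b];            // substitute an alternative base
            if (is_solid(kmer))
                ++vm[b][kmer_start + p];   // one vote for this (position, base)
        }
        kmer[p] = original;                // restore before the next position
    }
}
```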
Figure 5 shows the workflow of each PE for fixing erroneous reads. For each Pi, the CPU thread receives the voting matrices {VM(Pi)} of a read batch from the GPU thread, performs the ADDITION reduction operations on {VM(Pi)} in parallel with the other PEs, and then fixes the erroneous reads in parallel with the other PEs. The GPU thread computes its local voting matrices {VM(Pi)} of a read batch using T(G, Pi), and then transfers them to the CPU thread.
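Relative to the filtering stage, the reduction is essentially a one-line change: MPI_SUM instead of MPI_BAND. A sketch, assuming each voting matrix is flattened to 4 * read_length ints (names other than the MPI call are illustrative):

```cpp
#include <mpi.h>
#include <vector>

// Element-wise sum of the per-PE voting matrices, yielding the global VM.
std::vector<int> global_voting_matrix(const std::vector<int>& local_vm,
                                      MPI_Comm comm) {
    std::vector<int> global_vm(local_vm.size());
    MPI_Allreduce(local_vm.data(), global_vm.data(),
                  static_cast<int>(local_vm.size()), MPI_INT, MPI_SUM, comm);
    return global_vm;
}
```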
Using CUDA, one read is mapped to one thread, which performs the voting algorithm on the read to obtain its voting matrix. As Figure 4 suggests, the execution speed of the voting algorithm on GPUs highly depends on how frequently the threads in a warp diverge. The solidity vectors of the reads, used for checking k-mer existence in T(G), are stored in texture memory bound to linear memory. The voting matrices are allocated in global memory in a coalesced pattern.
Trimming erroneous reads
After fixing errors in erroneous reads, some reads are still not T-strings. In this case, a trimming procedure is performed on the fixed reads that remain erroneous. For such a read, all PEs cooperate to compute the solidity vector SV of the read using the same algorithm as in the filtering stage. After obtaining SV, the algorithm attempts to find the longest user-acceptable substring of the read in which all k-mers are solid. The read is trimmed to this substring if one is found, and discarded entirely otherwise. Each Pi runs the same workflow as in the filtering stage, except that after obtaining the solidity vectors {SV} of a read batch, the CPU thread instead performs the trimming procedure in parallel with the other PEs. The trimming decision is sketched below.
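A minimal sketch of the trimming decision, assuming sv[i] == 0 means k-mer i is solid and max_trim is the user-set maximum number of bases allowed to be trimmed (default 4). The longest run of zeros in SV covers run + k - 1 bases; the identifiers are ours, not DecGPU's.

```cpp
#include <utility>
#include <vector>

// Returns {start, length} of the kept substring, or {0, 0} to discard.
std::pair<int, int> trim_read(const std::vector<unsigned char>& sv,
                              int read_len, int k, int max_trim) {
    int best_start = 0, best_run = 0, run = 0, start = 0;
    for (int i = 0; i < static_cast<int>(sv.size()); ++i) {
        if (sv[i] == 0) {                       // k-mer i is solid
            if (run == 0) start = i;
            if (++run > best_run) { best_run = run; best_start = start; }
        } else {
            run = 0;                            // weak k-mer breaks the run
        }
    }
    if (best_run == 0) return {0, 0};           // no solid k-mer: discard
    int kept = best_run + k - 1;                // bases covered by the run
    if (read_len - kept > max_trim) return {0, 0};  // too much trimming: discard
    return {best_start, kept};
}
```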
Results
We have evaluated the performance of DecGPU from three perspectives: (1) the error correction quality on both simulated and real short read datasets; (2) the de novo assembly quality improvement after combining our algorithm with Velvet (version 1.0.17) and ABySS (version 1.2.1); and (3) the scalability with respect to different numbers of compute resources for the CPU-based and GPU-based versions, respectively. Six simulated short read datasets (the first six datasets in Table 1) and three real Illumina GA short read datasets (the last three datasets in Table 1, named after their accession numbers in the NCBI Sequence Read Archive [30]) are used to measure the accuracy of correction and the de novo assembly
quality. The six simulated datasets are generated from the E. coli K-12 MG1655 reference genome (NC_000913) with different read lengths, coverage, and error rates. Of the three real datasets, SRR001665 is paired-end and the other two are single-end. The SRR001665 dataset consists of about 20.8 million paired-end 36-basepair (bp) reads generated from a 200-bp insert size of an E. coli library (SRX000429), and has been used in [13] and [14] to assess the assembly qualities of various assemblers.

Table 1 Simulated and real short read datasets

| Dataset | Read length | Coverage | Error rate | No. of reads |
|---|---|---|---|---|
| D30X1.5 | 36 | 30 | 1.5% | 3866000 |
| D30X3.0 | 36 | 30 | 3.0% | 3860000 |
| D75X1.5 | 36 | 75 | 1.5% | 9666000 |
| D75X3.0 | 36 | 75 | 3.0% | 9666000 |
| D150X1.5 | 72 | 150 | 1.5% | 9666000 |
| D150X3.0 | 72 | 150 | 3.0% | 9666000 |
| SRR006331 | 36 | 69 | - | 1693848 |
| SRR016146 | 51 | 81 | - | 4438066 |
| SRR001665 | 36 | 162 | - | 20816448 |
All the following tests are conducted on a workstation computer and on a computing cluster with eight compute nodes connected by a high-speed InfiniBand switch. The workstation has two quad-core Intel Xeon E5506 2.13 GHz processors and 16 GB RAM running the Linux operating system (OS). Each compute node of the cluster consists of an AMD Opteron 2378 quad-core 2.4 GHz processor and 8 GB RAM running the Linux OS with the MVAPICH2 library [31]. Furthermore, two Tesla S1070 quad-GPU computing systems are installed and connected to four nodes of the cluster. A single Tesla T10 GPU of a Tesla S1070 system consists of 30 SMs comprising 240 SPs and 4 GB RAM. Unless specified otherwise, for all the following tests, DecGPU uses the default parameters (i.e., the k-mer length is set to 21, the multiplicity threshold M to 6, the maximum allowable number of bases to be trimmed to 4, and one fixing iteration), and hSHREC sets the strictness value to 5 for the first four simulated datasets and to 6 for the last two, using eight threads.
We have evaluated the performance of our algorithm using the simulated datasets in terms of: (1) the ability to detect reads as error-free or erroneous; and (2) the ability to correct erroneous reads. The detection of erroneous reads is a binary classification test, in which an input read is classified into either the error-free group or the erroneous group. Table 2 shows the corresponding definitions of true positive (TP), false positive (FP), true negative (TN), and false negative (FN).

Table 2 Definitions for the read binary classification test

| Classification | Erroneous read | Error-free read |
|---|---|---|
| Detected as erroneous | TP | FP |
| Detected as error-free | FN | TN |

The sensitivity and specificity measures are defined as

$$sensitivity = \frac{TP}{TP + FN} \quad (3)$$

$$specificity = \frac{TN}{TN + FP} \quad (4)$$

The results of the classification test are shown in Table 3 for the six simulated datasets, where the sensitivity and specificity values have been multiplied by 100.
Table 3 Summary of the classification test for simulated datasets

| Dataset | Algorithm | TP | FP | FN | TN | Sensitivity | Specificity |
|---|---|---|---|---|---|---|---|
| D30X1.5 | DecGPU | 1620660 | 349908 | 253 | 1895179 | 99.98 | 84.41 |
| | hSHREC | 1617685 | 13998 | 3228 | 2231089 | 99.80 | 99.38 |
| D30X3.0 | DecGPU | 2575411 | 660533 | 306 | 629750 | 99.99 | 48.81 |
| | hSHREC | 2571520 | 31367 | 4197 | 1258916 | 99.84 | 97.57 |
| D75X1.5 | DecGPU | 4053688 | 23 | 1024 | 5611265 | 99.97 | 100.00 |
| | hSHREC | 4053827 | 4990124 | 885 | 621164 | 99.98 | 11.07 |
| D75X3.0 | DecGPU | 6435328 | 3481 | 1621 | 3225570 | 99.97 | 99.89 |
| | hSHREC | 6436305 | 3129803 | 644 | 99248 | 99.99 | 3.07 |
| D150X1.5 | DecGPU | 6406078 | 2 | 5395 | 3254525 | 99.92 | 100.00 |
| | hSHREC | 6411346 | 3185858 | 127 | 68669 | 100.00 | 2.11 |
| D150X3.0 | DecGPU | 8578176 | 1 | 8651 | 1079172 | 99.90 | 100.00 |
| | hSHREC | 8586743 | 1056392 | 84 | 22781 | 100.00 | 2.11 |
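As a quick sanity check of Equations 3 and 4 against Table 3, the D30X1.5 DecGPU row gives

$$sensitivity = \frac{1620660}{1620660 + 253} \approx 99.98\%, \qquad specificity = \frac{1895179}{1895179 + 349908} \approx 84.41\%$$

in agreement with the reported values.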
From the sensitivity measure, DecGPU and hSHREC achieve comparable performance on all datasets, with a sensitivity > 99.80% in each case, meaning that only very few erroneous reads remain undetected. However, as for the specificity measure, the performance of hSHREC degrades very quickly as dataset size and coverage increase. For each of the last four simulated datasets, the specificity of DecGPU is > 99.80%, clearly outperforming hSHREC. For the two low-coverage datasets D30X1.5 and D30X3.0, DecGPU gives poorer specificity than hSHREC; however, after setting the multiplicity threshold M to 3 and 2, respectively, instead of the default 6, DecGPU yields specificities of 99.52% and 99.32% for the two datasets, better than hSHREC.
The performance of correcting erroneous reads is evaluated using the simulated datasets from two aspects. The first aspect is to compare the error rates before and after error correction. The error rates are calculated by a base-by-base comparison with the respective original (error-free) reads. A corrected read may not have the same length as its original read; in this case, the shorter read is mapped without gaps to the longer one by iteratively changing the starting position. We choose the mapping with the minimal number of base errors, and then add the number of bases of the shorter read to the total number of bases for the subsequent calculation of error rates. For DecGPU, we vary the number of fixing iterations with the intention of finding and correcting more than one erroneous base in a single read. We have compared the accuracy and execution time of DecGPU to hSHREC (see Table 4) on the above workstation with eight CPU cores. Table 4 shows that DecGPU significantly reduces the error rates of all datasets (in particular, reducing the error rate of D75X1.5 from 1.500% to 0.248% and that of D75X3.0 from 3.000% to 0.988%), clearly outperforming hSHREC. Furthermore, on the dual quad-core workstation, the CPU-based DecGPU version runs up to 22× faster with one fixing iteration and up to 19× faster with two fixing iterations compared to hSHREC. For DecGPU, the error rates are further reduced for all datasets when using two fixing iterations instead of one; however, we found that further increasing the number of iterations does not significantly reduce the error rates. As for the execution time, the second fixing iteration does not result in a large increase, since it only corrects the remaining erroneous reads.
The second aspect is to evaluate the correct correction
rate, incorrect correction rate, and the rate of newly
introduced errors, relative to the total number of original base errors. When performing error correction, correction operations will result in the following four cases:
- Correct Corrections (CC): meaning that original
erroneous bases have been changed to the correct
ones;
- Incorrect Corrections (IC): meaning that original
erroneous bases have been changed to other wrong
ones;
- Errors Unchanged (EU): meaning that original
erroneous bases remain the same;
- Errors Introduced (EI): meaning that original correct bases have been changed to be incorrect, thus
introducing new base errors.
In this paper, we define three measures relative to the total number of original base errors: the correct correction rate RCC, the incorrect correction rate RIC, and the correction error rate REI, to facilitate the comparison of error correction accuracy. RCC indicates the proportion of the original erroneous bases that have been corrected, RIC indicates the proportion of the original erroneous bases that have been changed to other wrong bases, and REI indicates the number of original correct bases that have been changed to be incorrect, relative to the total number of original base errors.
Table 4 The error rates and execution time comparison for DecGPU and hybrid SHREC

| Dataset | Original error rate (%) | DecGPU, one fixing (%) | DecGPU, two fixing (%) | hSHREC (%) | DecGPU, one fixing (s) | DecGPU, two fixing (s) | hSHREC (s) |
|---|---|---|---|---|---|---|---|
| D30X1.5 | 1.498 | 0.426 | 0.341 | 0.713 | 125 | 145 | 2721 |
| D30X3.0 | 3.003 | 1.773 | 1.625 | 2.014 | 164 | 217 | 2882 |
| D75X1.5 | 1.500 | 0.347 | 0.248 | 3.936 | 288 | 348 | 4380 |
| D75X3.0 | 3.000 | 1.262 | 0.988 | 4.058 | 375 | 473 | 5079 |
| D150X1.5 | 1.500 | 0.579 | 0.348 | 3.233 | 981 | 1118 | 11047 |
| D150X3.0 | 3.001 | 1.781 | 1.241 | 4.082 | 1254 | 1489 | 12951 |
For RCC, a larger value means better performance; for RIC and REI, a smaller value means better performance. The RCC, RIC, and REI measures are calculated as

$$R_{CC} = \frac{CC}{CC + IC + EU} \quad (5)$$

$$R_{IC} = \frac{IC}{CC + IC + EU} \quad (6)$$

$$R_{EI} = \frac{EI}{CC + IC + EU} \quad (7)$$
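For example, plugging the D30X1.5 DecGPU row of Table 5 into Equation 5 reproduces the reported correct correction rate:

$$R_{CC} = \frac{1275967}{1275967 + 191 + 809207} = \frac{1275967}{2085365} \approx 61.19\%$$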
In this test, for DecGPU, we do not trim the fixed reads that remain erroneous, and we use two fixing iterations. For hSHREC, we only use the reads that have the same lengths as their original reads after correction, because the correspondence between bases is difficult to determine for two reads of different lengths. Table 5 shows the performance comparison between DecGPU and hSHREC in terms of the three measures, where the values of RCC, RIC, and REI have been multiplied by 100. For RCC, hSHREC yields better performance on the first two datasets and DecGPU performs better on the last four. Moreover, hSHREC degrades very rapidly (down to 5.73%) as coverage and original error rate increase, while DecGPU remains relatively consistent. For RIC and REI, DecGPU clearly outperforms hSHREC on every dataset: DecGPU miscorrected ≤ 0.04% of bases and introduced ≤ 0.08% new base errors, whereas hSHREC miscorrected ≥ 0.30% (up to 0.73%) of bases and introduced ≥ 6.95% (up to 47.67%) new base errors.
Furthermore, we have measured the error correction quality of DecGPU in terms of mapped reads after aligning the reads to their reference genome, varying the maximum allowable number of mismatches in a single read (or seed) to observe how the proportion of mapped reads changes. The SRR001665 dataset and the Bowtie (version 0.12.7) [32] short read alignment algorithm are used for this evaluation. For Bowtie, the default parameters are used except for the maximum allowable number of mismatches, and for hSHREC, we have set the strictness value to 7. The proportion of mapped reads is calculated in three cases: exact match, ≤ one mismatch, and ≤ two mismatches (see Figure 6). After error correction with DecGPU, the proportion of mapped reads is higher than for the original reads in each case. However, after error correction with
Table 5 Performance comparison with respect to the RCC, RIC, and REI measures

| Dataset | Algorithm | CC | IC | EU | EI | RCC | RIC | REI |
|---|---|---|---|---|---|---|---|---|
| D30X1.5 | DecGPU | 1275967 | 191 | 809207 | 893 | 61.19 | 0.01 | 0.05 |
| | hSHREC | 1736112 | 10960 | 214851 | 125381 | 88.49 | 0.56 | 6.95 |
| D30X3.0 | DecGPU | 1611459 | 344 | 2567906 | 2932 | 38.55 | 0.01 | 0.08 |
| | hSHREC | 2983112 | 27448 | 764097 | 326466 | 79.03 | 0.73 | 9.38 |
| D75X1.5 | DecGPU | 3373714 | 388 | 1844213 | 530 | 64.65 | 0.01 | 0.02 |
| | hSHREC | 1431267 | 27988 | 3256061 | 2219648 | 30.35 | 0.59 | 47.67 |
| D75X3.0 | DecGPU | 5425615 | 746 | 5013497 | 1122 | 51.97 | 0.01 | 0.02 |
| | hSHREC | 757454 | 29924 | 9248234 | 1250738 | 7.55 | 0.30 | 12.76 |
| D150X1.5 | DecGPU | 7242425 | 2913 | 3196883 | 1004 | 69.36 | 0.03 | 0.04 |
| | hSHREC | 741722 | 37618 | 9034830 | 3345778 | 7.56 | 0.38 | 34.47 |
| D150X3.0 | DecGPU | 11221669 | 7593 | 9655700 | 2121 | 53.73 | 0.04 | 0.05 |
| | hSHREC | 1152718 | 71504 | 18896523 | 3136637 | 5.73 | 0.36 | 15.94 |
[Figure 6 compares the percentage of mapped reads for the original, DecGPU-corrected, and hSHREC-corrected reads under three criteria: exact match, ≤ 1 mismatch, and ≤ 2 mismatches.]
Figure 6 Percentage of mapped reads as a function of the maximum number of mismatches.
hSHREC, the proportion of mapped reads goes down in each case. This might be caused by the fact that some reads become very short after error correction with hSHREC.
Error correction prior to assembly is important for short read assemblers based on the de Bruijn graph approach. To demonstrate how our algorithm affects de novo assembly quality, we have assessed the assembly quality before and after using our algorithm to correct errors for two popular assemblers: Velvet (version 1.0.17) and ABySS (version 1.2.1). Neither assembler internally incorporates error correction prior to assembly. We have carefully tuned the parameters with the intention of obtaining the highest assembly quality for the stand-alone Velvet and ABySS assemblers, and we compare the assemblers in terms of N50, N90, and maximum contig or scaffold sizes using the three real datasets. The N50 (N90) contig or scaffold size is calculated by ordering all assembled sequences by length and then adding the lengths from the largest to the smallest until the summed length exceeds 50% (90%) of the reference genome size. For these calculations, we use reference genome sizes of 877438, 2801838, and 4639675 bps for the datasets SRR006331, SRR016146, and SRR001665, respectively. For the calculation of scaffold sizes, the intra-scaffold gaps are included. To see the difference in assembly quality before and after error correction, we use the same sets of parameters as the stand-alone assemblers for our resulting DecGPU-Velvet (D-Velvet) and DecGPU-ABySS (D-ABySS) assemblers (assembly results are shown in Table 6), where DecGPU uses two fixing iterations. From Table 6, D-Velvet yields superior N50 contig sizes to Velvet for all datasets, though not always higher N90 and maximum contig sizes. D-ABySS gives N50, N90, and maximum contig sizes comparable to ABySS for all datasets. When scaffolding the paired-end SRR001665, D-ABySS produces a larger N50 scaffold size than ABySS, but D-Velvet fails to outperform Velvet. However, after further tuning of the assembly parameters, D-Velvet yields a superior N50 scaffold size to Velvet for SRR001665 (see Table 7). Moreover, larger N50 contig sizes are produced by D-ABySS on SRR006331 and SRR001665, which are better than the outcomes of ABySS. All these results suggest that our algorithm has the potential to improve de novo assembly quality for de-Bruijn-graph-based assemblers.
Table 6 Assembly quality and parameters for different assemblers

| Dataset | Type | Assembler | N50 | N90 | MAX | #Seq | Parameters |
|---|---|---|---|---|---|---|---|
| SRR006331 | Contig | Velvet | 6229 | 1830 | 21166 | 288 | k = 23, cov_cutoff = auto |
| | | D-Velvet | 7411 | 1549 | 17986 | 282 | |
| | | ABySS | 5644 | 1505 | 15951 | 334 | k = 24 |
| | | D-ABySS | 4789 | 1216 | 12090 | 371 | |
| SRR016146 | Contig | Velvet | 34052 | 7754 | 112041 | 301 | k = 31, cov_cutoff = auto |
| | | D-Velvet | 34898 | 7754 | 134258 | 292 | |
| | | ABySS | 34124 | 7758 | 112038 | 297 | k = 33 |
| | | D-ABySS | 34889 | 7916 | 134314 | 297 | |
| SRR001665 | Contig | Velvet | 17900 | 4362 | 73058 | 601 | k = 29, cov_cutoff = auto |
| | | D-Velvet | 18484 | 4687 | 73058 | 586 | |
| | | ABySS | 18161 | 4364 | 71243 | 603 | k = 30 |
| | | D-ABySS | 18161 | 4604 | 73060 | 595 | |
| | Scaffold | Velvet | 95486 | 26570 | 268283 | 179 | k = 31, exp_cov = auto, cov_cutoff = auto |
| | | D-Velvet | 95429 | 26570 | 268084 | 175 | |
| | | ABySS | 96308 | 25780 | 268372 | 124 | k = 33, n = 10 |
| | | D-ABySS | 96904 | 27002 | 210775 | 122 | |
Table 7 Assembly quality and parameters after further tuning for some datasets

| Dataset | Type | Assembler | N50 | N90 | MAX | #Seq | Parameters |
|---|---|---|---|---|---|---|---|
| SRR006331 | Contig | D-ABySS | 6130 | 1513 | 16397 | 311 | k = 24, c = 7 |
| SRR001665 | Contig | D-ABySS | 20068 | 5147 | 73062 | 565 | k = 31, c = 12 |
| | Scaffold | D-Velvet | 101245 | 30793 | 269944 | 146 | k = 31, exp_cov = 36, cov_cutoff = 13 |
The number of assembled sequences (the "#Seq" column in Tables 6 and 7) only counts sequences of length ≥ 100 bps; the assembly output can be obtained from Additional file 1.
The execution speed of DecGPU is evaluated using the three real datasets in terms of: (1) the scalability of the CPU-based and GPU-based versions with respect to different numbers of compute resources; and (2) the execution time of the GPU-based version compared to that of CUDA-EC (version 1.0.1) on a single GPU. Both assessments are conducted on the computing cluster described above. In addition to the absolute execution time, we use another measure, Million Bases Processed per Second (MBPS), to indicate execution speed and make the evaluation more independent of the datasets. Table 8 gives the execution time (in seconds) and MBPS of the two versions on different numbers of CPU cores and GPUs, respectively. On a quad-core CPU, DecGPU achieves a performance of up to 1.7 MBPS for the spectrum construction (the "Spectrum" rows in the table) and up to 2.8 MBPS for the error correction part (the "EC" rows). On a single GPU, our algorithm achieves up to 2.9 MBPS for the spectrum construction and up to 8.0 MBPS for the error correction part. However, it can also be seen that our algorithm does not show good runtime scalability with respect to the number of compute
Table 8 Execution time and MBPS of DecGPU on different numbers of compute resources

| Dataset | Stage | Measure | 4 cores | 8 cores | 16 cores | 32 cores | 1 GPU | 2 GPUs | 4 GPUs | 8 GPUs |
|---|---|---|---|---|---|---|---|---|---|---|
| SRR006331 | Spectrum | Time (s) | 36 | 19 | 11 | 7 | 21 | 15 | 9 | 9 |
| | | MBPS | 1.7 | 3.2 | 5.5 | 8.7 | 2.9 | 4.1 | 6.8 | 6.8 |
| | EC | Time (s) | 35 | 38 | 41 | 42 | 9 | 11 | 18 | 23 |
| | | MBPS | 1.7 | 1.6 | 1.5 | 1.5 | 6.8 | 5.5 | 3.4 | 2.7 |
| SRR016146 | Spectrum | Time (s) | 194 | 96 | 51 | 30 | 121 | 86 | 46 | 48 |
| | | MBPS | 1.2 | 2.4 | 4.4 | 7.5 | 1.9 | 2.6 | 4.9 | 4.7 |
| | EC | Time (s) | 194 | 168 | 175 | 206 | 63 | 53 | 43 | 45 |
| | | MBPS | 1.2 | 1.3 | 1.3 | 1.1 | 3.6 | 4.3 | 5.3 | 5.0 |
| SRR001665 | Spectrum | Time (s) | 473 | 247 | 136 | 86 | 297 | 231 | 133 | 137 |
| | | MBPS | 1.6 | 3.0 | 5.5 | 8.7 | 2.5 | 3.2 | 5.6 | 5.5 |
| | EC | Time (s) | 266 | 223 | 251 | 306 | 94 | 85 | 85 | 99 |
| | | MBPS | 2.8 | 3.4 | 3.0 | 2.4 | 8.0 | 8.8 | 8.8 | 7.6 |
resources for either version. This is because our algorithm is designed to overcome the memory constraint problem for large-scale HTSR datasets: it requires combining results from the distributed spectrums through collective reduction operations over all reads, which limits its runtime scalability. Subsequently, we compared the execution speed of our algorithm with that of CUDA-EC on a single Tesla T10 GPU (see Figure 7), where CUDA-EC sets the k-mer length to 21 and the minimum multiplicity to 5. DecGPU runs on average about 2.4× faster than CUDA-EC, with a maximum of about 2.8×.
As mentioned above, DecGPU achieves memory efficiency through the use of a counting Bloom filter. From Equation 1, the FPP of a counting Bloom filter depends on the values of h and a. DecGPU uses eight hash functions (i.e., h = 8) and has a maximal NB of 2^32. Thus, for specific values of a and FPP, we can calculate the maximal value of NE. Table 9 shows the FPP and the maximal NE of a counting Bloom filter for some representative values of a. In the following, we discuss how to estimate the maximal size of a short read dataset that can be processed with a fixed FPP by NPE MPI processes (i.e., using NPE counting Bloom filters on NPE compute nodes). Following [11], the expected number of times a unique k-mer in a genome is observed in a short read dataset with coverage C and read length L can be estimated as

$$E(N_{kmer}) = \frac{C(L - k + 1)}{L} \quad (8)$$
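For example, with the default k = 21 and a dataset with coverage C = 75 and read length L = 36, each unique genomic k-mer is expected to be observed

$$E(N_{kmer}) = \frac{75 \times (36 - 21 + 1)}{36} = \frac{1200}{36} \approx 33.3$$

times in the dataset.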
Table 9 FPP and maximal NE for representative values of a

| a | FPP | Maximal NE |
|---|---|---|
| 1 | 2.5 × 10^-2 | 536870912 |
| 0.5 | 5.7 × 10^-4 | 268435456 |
| 0.25 | 5.7 × 10^-6 | 134217728 |
| 0.125 | 3.6 × 10^-8 | 67108864 |
Thus, the number of reads NR in the dataset that can be processed with a fixed FPP by NPE MPI processes can be estimated as

$$N_R = N_{PE} \times \frac{N_E \times E(N_{kmer})}{L - k + 1} = N_{PE} \times \frac{C \times N_E}{L} \quad (9)$$
From Equation 9, we can see that NR is directly proportional to NPE, i.e., the maximal number of reads scales linearly with the number of compute nodes. Next, we use an example to illustrate how the memory consumption of our algorithm scales with the number of reads. For an example dataset with C = 75 and L = 36, when NPE = 8, the maximal NR is estimated as 2.24 billion reads (80.5 billion bases) for a = 0.25 and as 4.47 billion reads (161.1 billion bases) for a = 0.5. Because each bucket takes 4 bits and the maximal NB is 2^32, the peak memory consumption of a counting Bloom filter is 2 GB. Hence, the maximal total memory consumption is only 2 GB × NPE = 16 GB for such a large dataset. DecGPU uses a = 0.25 by default.

The above observations and discussions demonstrate that DecGPU has superior capabilities in both error correction quality and execution speed compared to existing error correction algorithms. Even though our algorithm does not show good parallel scalability with respect to different numbers of computing resources, its distributed design does provide a feasible and flexible solution to the error correction of large-scale HTSR datasets.

Conclusions

In this paper, we have presented DecGPU, the first parallel and distributed error correction algorithm for large-scale HTSR using a hybrid combination of the CUDA and MPI parallel programming models. Our algorithm is designed based on the SAP approach and uses a counting Bloom filter data structure to gain space efficiency. DecGPU provides two versions: a CPU-based version and a GPU-based version. The CPU-based version employs coarse-grained and fine-grained parallelism using the MPI and OpenMP parallel programming models. The GPU-based version takes advantage of the CUDA and MPI programming models, and employs a hybrid CPU + GPU computing model to maximize performance by overlapping the CPU and GPU computation. Compared to hSHREC, our algorithm shows superior error correction quality on both simulated and real datasets. On a workstation with two quad-core CPUs, our CPU-based version runs up to 22× faster than hSHREC. On a single GPU, the GPU-based version runs up to 2.8× faster than CUDA-EC. Furthermore, the resultant D-Velvet and D-ABySS assemblers demonstrate that our algorithm has the potential to improve de novo assembly quality, through prior-assembly error correction, for de-Bruijn-graph-based assemblers. Although our algorithm does not show good parallel runtime scalability with respect to the number of computing resources, the distributed characteristic of DecGPU provides a feasible and flexible solution to the memory scalability problem for the error correction of large-scale datasets.
Availability and requirements
- Project name: DecGPU
[• Project home page: http://decgpu.sourceforge.net](http://decgpu.sourceforge.net)
- Operating system: 64-bit Linux
- Programming language: C++, CUDA, and MPI 2.0
- Other requirements: CUDA SDK and Toolkits 2.0
or higher
- Licence: GNU General Public License (GPL) version 3
Additional material
[Additional file 1: Assembled sequences of different assemblers. This](http://www.biomedcentral.com/content/supplementary/1471-2105-12-85-S1.ZIP)
file contains the assembled sequences (contigs or scaffolds) for the
assemblers Velvet, ABySS, DecGPU-Velvet and DecGPU-ABySS for the
three real datasets.
List of abbreviations
CPU: Central Processing Unit; CUDA: Compute Unified Device Architecture; FPP: False Positive Probability; GPU: Graphics Processing Unit; HTSR: High-Throughput Short Reads; MBPS: Million Bases Processed per Second; MPI: Message Passing Interface; NGS: Next-Generation Sequencing; OpenMP: Open Multi-Processing; OS: Operating System; PBSM: Per-Block Shared Memory; SAP: Spectral Alignment Problem; SIMT: Single Instruction, Multiple Thread; SM: Streaming Multiprocessor; SP: Scalable Processor; PE: Processing Element.
Acknowledgements
The authors thank Dr. Shi Haixiang for his helpful discussions on the short read error correction problem, Dr. Zheng Zejun for his help in searching for short read datasets, and Dr. Liu Weiguo for his help in providing the experimental environments.
Authors’ contributions
YL conceptualized the study, carried out the design and implementation of
the algorithm, performed benchmark tests, analyzed the results and drafted
the manuscript; BS conceptualized the study, participated in the algorithm
optimization and analysis of the results and contributed to the revising of
the manuscript; DLM conceptualized the study, participated in the analysis
of the results, and contributed to the revising of the manuscript. All authors
read and approved the final manuscript.
Received: 9 July 2010 Accepted: 29 March 2011
Published: 29 March 2011
References
1. Havlak P, Chen R, Durbin KJ, Egan A, Ren Y, Song XZ, Weinstock GM,
[Gibbs RA: The Atlas genome assembly system. Genome Res 2004,](http://www.ncbi.nlm.nih.gov/pubmed/15060016?dopt=Abstract)
14(4):721-732.
2. Batzoglou S, Jaffe DB, Stanley K, Butler J, Gnerre S, Mauceli E, Berger B,
[Mesirov JP, Lander ES: ARACHNE: a whole-genome shotgun assembler.](http://www.ncbi.nlm.nih.gov/pubmed/11779843?dopt=Abstract)
Genome Res 2002, 12(1):177-189.
3. Myers EW, Sutton GG, Delcher AL, Dew IM, Fasulo DP, Flanigan MJ,
Kravitz SA, Mobarry CM, Reinert KH, Remington KA, Anson EL, Bolanos RA,
Chou HH, Jordan CM, Halpern AL, Lonardi S, Beasley EM, Brandon RC,
Chen L, Dunn PJ, Lai Z, Liang Y, Nusskern DR, Zhan M, Zhang Q, Zheng X,
[Rubin GM, Adams MD, Venter JC: A whole-genome assembly of](http://www.ncbi.nlm.nih.gov/pubmed/10731133?dopt=Abstract)
[Drosophila. Science 2000, 287(5461):2196-2204.](http://www.ncbi.nlm.nih.gov/pubmed/10731133?dopt=Abstract)
4. [Huang X, Wang J, Aluru S, Yang SP, Hillier L: PCAP: a whole-genome](http://www.ncbi.nlm.nih.gov/pubmed/12952883?dopt=Abstract)
[assembly program. Genome Res 2003, 13(9):2164-2170.](http://www.ncbi.nlm.nih.gov/pubmed/12952883?dopt=Abstract)
5. [Warren RL, Sutton GG, Jones SJ, Holt RA: Assembling millions of short](http://www.ncbi.nlm.nih.gov/pubmed/17158514?dopt=Abstract)
[DNA sequences using SSAKE. Bioinformatics 2007, 23(4):500-501.](http://www.ncbi.nlm.nih.gov/pubmed/17158514?dopt=Abstract)
6. [Dohm JC, Lottaz C, Borodina T, Himmelbauer H: SHARCGS, a fast and](http://www.ncbi.nlm.nih.gov/pubmed/17908823?dopt=Abstract)
[highly accurate short-read assembly algorithm for de novo genomic](http://www.ncbi.nlm.nih.gov/pubmed/17908823?dopt=Abstract)
[sequencing. Genome Res 2007, 17(11):1697-1706.](http://www.ncbi.nlm.nih.gov/pubmed/17908823?dopt=Abstract)
7. Jeck WR, Reinhardt JA, Baltrus DA, Hickenbotham MT, Magrini V, Mardis ER,
[Dangl JL, Jones CD: Extending assembly of short DNA sequences to](http://www.ncbi.nlm.nih.gov/pubmed/17893086?dopt=Abstract)
[handle error. Bioinformatics 2007, 23(21):2942-2944.](http://www.ncbi.nlm.nih.gov/pubmed/17893086?dopt=Abstract)
8. [Schmidt B, Sinha R, Beresford-Smith B, Puglisi SJ: A fast hybrid short read](http://www.ncbi.nlm.nih.gov/pubmed/19535537?dopt=Abstract)
[fragment assembly algorithm. Bioinformatics 2009, 25(17):2279-2280.](http://www.ncbi.nlm.nih.gov/pubmed/19535537?dopt=Abstract)
9. [Pevzner PA, Tang H, Waterman MS: An Eulerian path approach to DNA](http://www.ncbi.nlm.nih.gov/pubmed/11504945?dopt=Abstract)
[fragment assembly. Proc Natl Acad Sci USA 2001, 98(17):9748-9753.](http://www.ncbi.nlm.nih.gov/pubmed/11504945?dopt=Abstract)
10. [Chaisson MJ, Pevzner PA: Short read fragment assembly of bacterial](http://www.ncbi.nlm.nih.gov/pubmed/18083777?dopt=Abstract)
[genomes. Genome Res 2008, 18(2):324-330.](http://www.ncbi.nlm.nih.gov/pubmed/18083777?dopt=Abstract)
11. [Zerbino DR, Birney E: Velvet: algorithms for de novo short read assembly](http://www.ncbi.nlm.nih.gov/pubmed/18349386?dopt=Abstract)
[using de Bruijn graphs. Genome Res 2008, 18(5):821-829.](http://www.ncbi.nlm.nih.gov/pubmed/18349386?dopt=Abstract)
12. Butler J, MacCallum I, Kleber M, Shlyakhter IA, Belmonte MK, Lander ES,
[Nusbaum C, Jaffe DB: ALLPATHS: de novo assembly of whole-genome](http://www.ncbi.nlm.nih.gov/pubmed/18340039?dopt=Abstract)
[shotgun microreads. Genome Res 2008, 18(5):810-820.](http://www.ncbi.nlm.nih.gov/pubmed/18340039?dopt=Abstract)
13. [Simpson JT, Wong K, Jackman SD, Schein JE, Jones SJ, Birol I: ABySS: a](http://www.ncbi.nlm.nih.gov/pubmed/19251739?dopt=Abstract)
[parallel assembler for short read sequence data. Genome Res 2009,](http://www.ncbi.nlm.nih.gov/pubmed/19251739?dopt=Abstract)
19(6):1117-1123.
14. Li R, Zhu H, Ruan J, Qian W, Fang X, Shi Z, Li Y, Li S, Shan G, Kristiansen K,
[Li S, Yang H, Wang J, Wang J: De novo assembly of human genomes](http://www.ncbi.nlm.nih.gov/pubmed/20019144?dopt=Abstract)
[with massively parallel short read sequencing. Genome Res 2010,](http://www.ncbi.nlm.nih.gov/pubmed/20019144?dopt=Abstract)
20(2):265-272.
15. [Salmela L: Correction of sequencing errors in a mixed set of reads.](http://www.ncbi.nlm.nih.gov/pubmed/20378555?dopt=Abstract)
Bioinformatics 2010, 26(10):1284-1290.
16. [Schröder J, Schröder H, Puglisi SJ, Sinha R, Schmidt B: SHREC: a short read](http://www.ncbi.nlm.nih.gov/pubmed/19542152?dopt=Abstract)
[error correction method. Bioinformatics 2009, 25(17):2157-2163.](http://www.ncbi.nlm.nih.gov/pubmed/19542152?dopt=Abstract)
17. [Liu Y, Maskell DL, Schmidt B: CUDASW++: optimizing Smith-Waterman](http://www.ncbi.nlm.nih.gov/pubmed/19416548?dopt=Abstract)
[sequence database searches for CUDA-enabled graphics processing](http://www.ncbi.nlm.nih.gov/pubmed/19416548?dopt=Abstract)
[units. BMC Research Notes 2009, 2:73.](http://www.ncbi.nlm.nih.gov/pubmed/19416548?dopt=Abstract)
18. [Liu Y, Schmidt B, Maskell DL: CUDASW++ 2.0: enhanced Smith-Waterman](http://www.ncbi.nlm.nih.gov/pubmed/20370891?dopt=Abstract)
[protein database search on CUDA-enabled GPUs based on SIMT and](http://www.ncbi.nlm.nih.gov/pubmed/20370891?dopt=Abstract)
[virtualized SIMD abstractions. BMC Research Notes 2010, 3:93.](http://www.ncbi.nlm.nih.gov/pubmed/20370891?dopt=Abstract)
19. Liu Y, Schmidt B, Maskell DL: MSA-CUDA: multiple sequence alignment
on graphics processing units with CUDA. 20th IEEE International
Conference on Application-specific Systems, Architectures and Processors 2009,
121-128.
20. Liu Y, Schmidt B, Liu W, Maskell DL: CUDA-MEME: accelerating motif
discovery in biological sequences using CUDA-enabled graphics
processing units. Pattern Recognition Letters 2010, 31(14):2170-2177.
21. [Shi H, Schmidt B, Liu W, Müller-Wittig W: A parallel algorithm for error](http://www.ncbi.nlm.nih.gov/pubmed/20426693?dopt=Abstract)
[correction in high-throughput short-read data on CUDA-enabled](http://www.ncbi.nlm.nih.gov/pubmed/20426693?dopt=Abstract)
[graphics hardware. J Comput Biol 2010, 17(4):603-615.](http://www.ncbi.nlm.nih.gov/pubmed/20426693?dopt=Abstract)
22. Bloom BH: Space/time trade-offs in hash coding with allowable errors.
Commun ACM 1970, 13(7):422-426.
23. Shi H, Schmidt B, Liu W, Müller-Wittig W: Quality-score guided error
correction for short-read sequencing data using CUDA. Procedia
Computer Science 2010, 1(1):1123-1132.
24. [Message Passing Interface (MPI) tutorial. [https://computing.llnl.gov/](https://computing.llnl.gov/tutorials/mpi)
[tutorials/mpi].](https://computing.llnl.gov/tutorials/mpi)
25. [OpenMP tutorial. [https://computing.llnl.gov/tutorials/openMP].](https://computing.llnl.gov/tutorials/openMP)
26. Fan L, Cao P, Almeida J, Broder AZ: Summary cache: a scalable wide-area web cache sharing protocol. IEEE/ACM Transactions on Networking 2000, 8(3):281-293.
27. Nickolls J, Buck I, Garland M, Skadron K: Scalable parallel programming
with CUDA. ACM Queue 2008, 6(2):40-53.
28. Lindholm E, Nickolls J, Oberman S, Montrym J: NVIDIA Tesla: A unified
graphics and computing architecture. IEEE Micro 2008, 28(2):39-55.
29. NVIDIA: Fermi: NVIDIA’s Next Generation CUDA Compute Architecture.
[[http://www.nvidia.com/content/PDF/fermi_white_papers/](http://www.nvidia.com/content/PDF/fermi_white_papers/NVIDIA_Fermi_Compute_Architecture_Whitepaper.pdf)
[NVIDIA_Fermi_Compute_Architecture_Whitepaper.pdf].](http://www.nvidia.com/content/PDF/fermi_white_papers/NVIDIA_Fermi_Compute_Architecture_Whitepaper.pdf)
30. [NCBI homepage. [http://www.ncbi.nlm.nih.gov].](http://www.ncbi.nlm.nih.gov)
31. [MVAPICH2 homepage. [http://mvapich.cse.ohio-state.edu/overview/](http://mvapich.cse.ohio-state.edu/overview/mvapich2)
[mvapich2].](http://mvapich.cse.ohio-state.edu/overview/mvapich2)
32. [Langmead B, Trapnell C, Pop M, Salzberg SL: Ultrafast and memory-efficient alignment of short DNA sequences to the human genome.](http://www.ncbi.nlm.nih.gov/pubmed/19261174?dopt=Abstract) Genome Biology 2009, 10:R25.
| 16,006
|
{
"disclaimer": "Notice: Paper or abstract available at https://pmc.ncbi.nlm.nih.gov/articles/PMC3072957, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "CCBY",
"status": "GOLD",
"url": "https://bmcbioinformatics.biomedcentral.com/counter/pdf/10.1186/1471-2105-12-85"
}
| 2,011
|
[
"JournalArticle"
] | true
| 2011-03-29T00:00:00
|
[
{
"paperId": "acfc2001b5ee7cd14f951102701902eadb7eb48a",
"title": "CUDA-MEME: Accelerating motif discovery in biological sequences using CUDA-enabled graphics processing units"
},
{
"paperId": "dc8e57be1c061dcc4ab16680f14ab94d9f6eeb39",
"title": "Quality-score guided error correction for short-read sequencing data using CUDA"
},
{
"paperId": "29d9f0e586c7de5dfddfebe90d456343e354f2cc",
"title": "Correction of sequencing errors in a mixed set of reads"
},
{
"paperId": "353e3d6a197d9d68a6e48f9b3476b780ff2fd5bf",
"title": "A Parallel Algorithm for Error Correction in High-Throughput Short-Read Data on CUDA-Enabled Graphics Hardware"
},
{
"paperId": "58f770443c4e9038f04f800cc7bedf7548ddd8b3",
"title": "CUDASW++2.0: enhanced Smith-Waterman protein database search on CUDA-enabled GPUs based on SIMT and virtualized SIMD abstractions"
},
{
"paperId": "c6e319a023f932d6d2ff8897c2c91e56e872db8e",
"title": "De novo assembly of human genomes with massively parallel short read sequencing."
},
{
"paperId": "b85aebac12c806e3e87b0bc0e75b4ffe0a7d896b",
"title": "A fast hybrid short read fragment assembly algorithm"
},
{
"paperId": "7b931f06d3df8f7848685a170b24494db4298627",
"title": "SHREC: a short-read error correction method"
},
{
"paperId": "1f3eb3ea10886fdb43e0bea82a305bfb8802f8d5",
"title": "MSA-CUDA: Multiple Sequence Alignment on Graphics Processing Units with CUDA"
},
{
"paperId": "6782425b8f6589045104c1065d2fd2d6a008b3d1",
"title": "ABySS: a parallel assembler for short read sequence data."
},
{
"paperId": "dbab18be6e8cd820dbbe666a3feec509cac0cb71",
"title": "CUDASW++: optimizing Smith-Waterman sequence database searches for CUDA-enabled graphics processing units"
},
{
"paperId": "ebe875cf08dd398e0ed25f518502301c984a9afe",
"title": "Ultrafast and memory-efficient alignment of short DNA sequences to the human genome"
},
{
"paperId": "9cbccbe327e51dffd5c9b8b3199b6958d000b902",
"title": "ALLPATHS: de novo assembly of whole-genome shotgun microreads."
},
{
"paperId": "cccdadaba3b4b598d0d963b9c9d431cec24c4bd4",
"title": "Velvet: algorithms for de novo short read assembly using de Bruijn graphs."
},
{
"paperId": "356869aa0ae8d598e956c7f2ae884bbf5009c98c",
"title": "NVIDIA Tesla: A Unified Graphics and Computing Architecture"
},
{
"paperId": "101941da6afb2480153551802249ffe5a72640d1",
"title": "Short read fragment assembly of bacterial genomes."
},
{
"paperId": "fbd59e17f72c3aedb3cd06af2e50a91526ccd81a",
"title": "SHARCGS, a fast and highly accurate short-read assembly algorithm for de novo genomic sequencing."
},
{
"paperId": "78aeb46662833e9208a35fcdbbcb5badc45e3b62",
"title": "Extending assembly of short DNA sequences to handle error"
},
{
"paperId": "dbb41f80dc43d139761a863eb7d6da773a726261",
"title": "Assembling millions of short DNA sequences using SSAKE"
},
{
"paperId": "aaba05318171827c7228561fbcc83347ac3bcac2",
"title": "The Atlas genome assembly system."
},
{
"paperId": "0ddc53f698d85b9050bbff0f12c1f6894b405562",
"title": "PCAP: a whole-genome assembly program."
},
{
"paperId": "c9284fd6a3bb033bf5e21a2fc512dd2be958bfad",
"title": "An Eulerian path approach to DNA fragment assembly"
},
{
"paperId": "56ebfeddc20fd9c17e64c4699c68eea6402176ac",
"title": "Summary cache: a scalable wide-area web cache sharing protocol"
},
{
"paperId": "722179d4f64c606c705c71db7b1c03f8c48d6a46",
"title": "A whole-genome assembly of Drosophila."
},
{
"paperId": "f39a2c11983b21fd5054d5393614959bfbc4e50f",
"title": "Space/time trade-offs in hash coding with allowable errors"
},
{
"paperId": null,
"title": "Scalable parallel programming with CUDA"
},
{
"paperId": "88bfc94ad402b8bfd624c7d2691ec9012fd2f5ed",
"title": "ARACHNE: a whole-genome shotgun assembler."
},
{
"paperId": null,
"title": "Message Passing Interface (MPI) tutorial. [https://computing.llnl.gov/ tutorials/mpi"
}
] | 16,006
|
en
|
[
{
"category": "Business",
"source": "external"
},
{
"category": "Medicine",
"source": "external"
},
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Medicine",
"source": "s2-fos-model"
},
{
"category": "Engineering",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/000c351ffff4b7379817bf6a9c73c4d3617a1395
|
[
"Business",
"Medicine",
"Computer Science"
] | 0.92408
|
A Proof of Concept of a Mobile Health Application to Support Professionals in a Portuguese Nursing Home
|
000c351ffff4b7379817bf6a9c73c4d3617a1395
|
Italian National Conference on Sensors
|
[
{
"authorId": "144067094",
"name": "Márcia Esteves"
},
{
"authorId": "39549792",
"name": "Marisa Esteves"
},
{
"authorId": "145646334",
"name": "A. Abelha"
},
{
"authorId": "81807096",
"name": "José Machado"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"SENSORS",
"IEEE Sens",
"Ital National Conf Sens",
"IEEE Sensors",
"Sensors"
],
"alternate_urls": [
"http://nbn-resolving.de/urn/resolver.pl?urn=urn:nbn:ch:bel-142001",
"http://www.mdpi.com/journal/sensors",
"https://www.mdpi.com/journal/sensors"
],
"id": "3dbf084c-ef47-4b74-9919-047b40704538",
"issn": "1424-8220",
"name": "Italian National Conference on Sensors",
"type": "conference",
"url": "http://www.e-helvetica.nb.admin.ch/directAccess?callnumber=bel-142001"
}
|
Over the past few years, the rapidly aging population has been posing several challenges to healthcare systems worldwide. Consequently, in Portugal, nursing homes have been getting a higher demand, and health professionals working in these facilities are overloaded with work. Moreover, the lack of health information and communication technology (HICT) and the use of unsophisticated methods, such as paper, in nursing homes to clinically manage residents lead to more errors and are time-consuming. Thus, this article proposes a proof of concept of a mobile health (mHealth) application developed for the health professionals working in a Portuguese nursing home to support them at the point-of-care, namely to manage and have access to information and to help them schedule, perform, and digitally record their tasks. Additionally, clinical and performance business intelligence (BI) indicators to assist the decision-making process are also defined. Thereby, this solution aims to introduce technological improvements into the facility to improve healthcare delivery and, by taking advantage of the benefits provided by these improvements, lessen some of the workload experienced by health professionals, reduce time-waste and errors, and, ultimately, enhance elders’ quality of life and improve the quality of the services provided.
|
# sensors
_Article_
### A Proof of Concept of a Mobile Health Application to Support Professionals in a Portuguese Nursing Home
**Márcia Esteves *** **, Marisa Esteves** **, António Abelha** **and José Machado**
Algoritmi Research Center, University of Minho, Campus Gualtar, 4470 Braga, Portugal;
[email protected] (M.E.); [email protected] (A.A.); [email protected] (J.M.)
***** Correspondence: [email protected]
Received: 30 August 2019; Accepted: 10 September 2019; Published: 12 September 2019
**Abstract: Over the past few years, the rapidly aging population has been posing several challenges**
to healthcare systems worldwide. Consequently, in Portugal, nursing homes have been getting
a higher demand, and health professionals working in these facilities are overloaded with work.
Moreover, the lack of health information and communication technology (HICT) and the use of
unsophisticated methods, such as paper, in nursing homes to clinically manage residents lead to
more errors and are time-consuming. Thus, this article proposes a proof of concept of a mobile
health (mHealth) application developed for the health professionals working in a Portuguese nursing
home to support them at the point-of-care, namely to manage and have access to information
and to help them schedule, perform, and digitally record their tasks. Additionally, clinical and
performance business intelligence (BI) indicators to assist the decision-making process are also defined.
Thereby, this solution aims to introduce technological improvements into the facility to improve
healthcare delivery and, by taking advantage of the benefits provided by these improvements,
lessen some of the workload experienced by health professionals, reduce time-waste and errors, and,
ultimately, enhance elders’ quality of life and improve the quality of the services provided.
**Keywords: business intelligence; elders; health information and communication technology; health**
professionals; mobile health; nursing homes; smart health
**1. Introduction**
Over the past few years, the world has been witnessing a huge demographic change:
the population is aging at an alarming rate. In fact, the statistics regarding the aging population
are concerning since, compared to the growth of the whole population, it is estimated the elderly
population is growing twice as quickly [1]. Consequently, this problem has been a matter of concern
for many countries since it is posing several challenges to healthcare systems worldwide [1–3].
Thus, as a consequence of the rapidly aging population, the costs of elderly care and the number of
elders in nursing homes have been increasing [1,3].
The harsh reality is that many countries are experiencing a growth in the proportion of elders
and, consequently, an increase in the number of services required for them that, at the moment,
they are not able to meet. Thus, due to the high demand for more and better medical services for the
elderly, there is a need to evaluate the state of these services and assess the need for improvements.
Portugal is not an exception to this concern. In fact, Portugal is, at the moment, one of the countries
with the largest aging population in the world [4] and, similar to other countries, this situation has
been negatively affecting several aspects of elderly care. In this sense, one of the major challenges
resulting from this situation is the increasing number of elders in nursing homes. Over the past few
years, nursing homes vacancies have been filling up at a quick rate, making the search for a place in
one of these facilities a massive challenge for many elders and families in Portugal [5,6].
Additionally, health professionals working in nursing homes are, more often than not, overloaded with work since they are often few in number compared to the high number of elderly people [7,8]. In addition to the
aging population, one of the main factors causing this situation is the lack of investment and resources
in these facilities. In this context, nursing homes generally use unsophisticated and rudimentary
methods, namely paper, to record information and to clinically manage residents [9,10]. Naturally,
the paper-management of data is more error-prone and time-consuming since the risk of misplacing or
losing information is much higher. Moreover, health professionals constantly need to return to the
nursing stations to retrieve and record information, leading to a higher risk of forgetting information
or writing information in the wrong place.
Thus, there is a clear and current need to address the lack of resources and access to technology in nursing homes, in order to solve some of the problems faced by these facilities and ultimately improve the nursing care delivered.
In fact, nursing homes could greatly benefit from the introduction of technological advancements,
such as HICT. The use of HICT, which refers to any form of electronic solution that allows manipulating,
managing, exchanging, retrieving, and storing digital information in healthcare settings, has dramatically
and positively changed the medical practice and is certainly here to stay [11–13].
Technologies encompassed in HICT have rapidly become a natural and indispensable part
of healthcare settings due to their many advantages, namely to enhance the management, access,
and sharing of information; to improve the quality, safety, and efficiency of healthcare delivery and
its outcomes; to reduce the occurrence of errors and adverse events; to support the decision-making
process; to decrease time-waste; and to improve productivity in healthcare systems [9–15]. In fact,
the use of HICT in medical contexts enables turning traditional healthcare towards smart healthcare,
which consists in the use of technology to improve healthcare delivery and the quality of services.
Nevertheless, despite the well-known benefits of HICTs, nursing homes have been lagging behind
in adopting them due to the lack of investment and effort by these facilities to adapt to technological
improvements [9–11,15,16].
Thereby, considering all the above mentioned, this manuscript aims to describe and evaluate
a proof of concept of a mHealth application developed for health professionals, more specifically the
doctors and nurses working in a Portuguese nursing home. The solution was developed to introduce
technological improvements in the facility and to support the health professionals in their daily tasks
and at the point-of-care, namely to manipulate and have access to information as well as to schedule,
perform, and record their job-related tasks. Moreover, clinical and performance BI indicators were also
defined to help health professionals to make more informed and evidence-based decisions.
It is important to mention that a mobile solution was chosen since a single hand-held device,
which can be used anywhere and at any time, can allow accessing and manipulating information at
the point-of-care. In this sense, the novelty of this project resides in the need to solve some of the
challenges faced by a nursing home suffering from the consequences of the aging population and the
absence of HICT. Additionally, the lack of literature and an integrated body of knowledge on the use
of HICT in nursing homes shows that there is still much work that needs to be done in this area.
Regarding the structure of this document, Section 2 corresponds to the state of the art in which the
body of knowledge related to this project is described. Then, in Section 3, the research methodologies
that were selected to successfully conduct this project are approached. Afterwards, in Section 4,
the developments tools that were chosen to develop the mobile application, namely the database,
web services, and interfaces, and to create examples of the BI indicators are identified as well as their
advantages. Section 5 gives a brief description of the Portuguese nursing home, i.e., of the case study,
for which the solution was developed in order to have a better understanding of the main challenges
faced by the institution. Then, the results achieved regarding the database, web services, interfaces,
and BI indicators developed are presented in Section 6. A brief discussion of the results obtained
is presented in Section 7. Finally, in Section 8, the main conclusions and contributions achieved are
identified and future work is presented.
**2. State of the Art**
In this section, the general background related to the research area of this project is presented in
order to offer a deeper understanding about the novelty and relevance of this project, namely about
how mHealth and BI can positively impact and be beneficial for healthcare facilities, more specifically,
for the nursing home used as a case study in this study. Furthermore, the ethical issues associated with
the use of HICT in healthcare contexts are described since they were taken into account during all
stages of the development of this project. Finally, works related to the project carried out in this study
are also addressed.
_2.1. The Impact of Mobile Health in the Healthcare Industry_
In recent years, the rapid expansion of mobile technology, i.e., of technology that can be
used “on-the-move”, has been affecting several industries, and the healthcare industry is not an
exception [17–19]. In fact, the ubiquitous presence of mobile devices, such as smartphones and tablets,
and the rise in their adoption have led to the growth in the number of mobile applications. In this
sense, there is currently a wide range of mobile applications that offer a variety of features and,
more recently, mobile health applications have been expanding due to their potential to improve
healthcare delivery [18–20].
In this context, the use of mHealth, i.e., mobile devices and applications to support the
medical practice, has been transforming several aspects of the healthcare industry and proving
to be quite promising and beneficial for health professionals, namely to help them execute their
daily tasks, to manage and monitor patients, to access and manage clinical data, and to enhance
the decision-making process, among others [17,19,21–23]. However, mHealth has not only been
advantageous for healthcare providers but also for the consumers, allowing them to strengthen their
communication with healthcare organizations [20,24]. Therefore, the main benefits of mHealth are
as follows [17,20,21,24]:
- Convenient and faster accessibility to information since all data are gathered in a single source,
which can be used “on-the-move”;
- Reduction of time-waste since health professionals can manipulate information at the
point-of-care, not having to interrupt their workflow and go to another location to do so;
- Faster and better decision-making process, since health professionals can have access to up-to-date
information at the point-of-care, leading to more informed and based decisions;
- Faster and improved communication since mHealth helps connect all the professionals distributed
across the healthcare organization;
- Help healthcare organizations to strengthen their communication with healthcare consumers by
providing information to them at any given moment through appointment reminders, test result
notifications, diagnostics, and disease control, among others;
- Decrease errors and adverse events; and
- Improve quality of healthcare delivery and services.
It is important to mention that the use of mobile applications in healthcare settings is not
intended to replace desktop applications, which can be more powerful and less restrictive than mobile
applications, but to complement them and, especially, to enhance outcomes at the point-of-care [17].
In fact, in situations where rapid information exchange is needed, where information should be
entered at the point-of-care, and where health professionals are constantly on the move and have,
therefore, less time to spend on computers, mobile technology is highly beneficial compared to desktop
applications [21]. For instance, health professionals working in nursing homes could greatly benefit
from mobile technology since they are constantly in motion and have little time to spend on computers,
which are often located in nursing stations far away from the residents.
Therefore, the undeniable benefits of mHealth show that a higher investment should be done
in its adoption as it can improve the quality of healthcare delivery. However, mHealth applications
should only be developed after truly understanding the needs of the intended users in order to develop
high quality and accurate applications and avoid their underutilization [17,19,21,22].
_2.2. Business Intelligence Transforms Clinical Information into Valuable Information_
Business intelligence corresponds to a set of methodologies, applications, processes, technologies,
and analytical tools that enables to gather, store, manipulate, process, and analyze data in order to
gain new and relevant information used by organizations to make informed and evidence-based
decisions [13,25–27]. In the healthcare industry, BI tools are essential to analyze the clinical
data constantly generated in order to obtain new knowledge used as evidence to support the
decision-making process [25–28]. Thereby, BI has emerged as a solution to make use of the complex
and huge amounts of information gathered daily in organizations, offering analytical tools able to
turn these data into meaningful, useful, and valuable information and, thus, make faster, informed,
and evidence-based decisions [27,29–31].
Furthermore, through the knowledge obtained, organizations are able to gain a deeper
understanding and insight on their performance and highlight problem areas and opportunities,
enabling them to plan and perform improvements if necessary [25,26,28,30,32]. Regarding the
healthcare industry, applying BI technology to electronic health records (EHRs) helps improve
healthcare delivery and its outcomes; reduce the occurrence of errors, adverse events, and costs;
and give economic value to the large amounts of clinical data generated daily, which otherwise would
be a burden to healthcare organizations [25,26,28,30,32,33].
The general architecture of the business intelligence process is illustrated in Figure 1.
[Figure 1 depicts data sources (operational databases, external databases, and other external data sources) feeding an ETL process that loads a data warehouse, which in turn supports presentation through dashboards, ad hoc queries, and reporting.]
**Figure 1. General architecture of the business intelligence process (adapted from [34]).**
As shown in Figure 1, the components encompassed in the BI process include
the following [13,34–36]:
- Extract, transform, and load (ETL) process: enables extracting data from multiple sources, cleaning and normalizing these heterogeneous data to make them consistent and unambiguous, and loading the transformed data into a data warehouse (DW) (see the sketch after this list);
- Data warehousing process: enables building adequate DWs able to structure data and facilitate
their analysis; and
- Visualization, analysis, and interpretation of the data loaded into the DW: enables obtaining new
knowledge previously unknown to an organization. Thus, for this purpose, various analytical
tools and applications can be used, namely data mining tools and applications able to create
charts, reports, spreadsheets, and dashboards, among others.
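To make the ETL pattern just described concrete, the following is a minimal sketch in JavaScript (the language used elsewhere in this project); the source and warehouse interfaces (`fetchAll`, `insert`) and the field names are hypothetical and only illustrate the extract–transform–load flow.

```javascript
// Minimal ETL sketch (illustrative only): extract raw records from
// hypothetical operational sources, transform them into one consistent
// schema, and load them into a data-warehouse fact table.
function extract(sources) {
  // Extract: gather raw records from every operational source.
  return sources.flatMap((source) => source.fetchAll());
}

function transform(rawRecords) {
  // Transform: clean and normalize the heterogeneous records.
  return rawRecords
    .filter((record) => record.patientId != null)
    .map((record) => ({
      patientId: String(record.patientId).trim(),
      measure: String(record.measure).toLowerCase(),
      value: Number(record.value),
      recordedAt: new Date(record.recordedAt).toISOString(),
    }));
}

function load(warehouse, records) {
  // Load: append the consistent records to the warehouse table.
  records.forEach((record) => warehouse.insert('fact_measurements', record));
}

function runEtl(sources, warehouse) {
  load(warehouse, transform(extract(sources)));
}
```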
Despite the opportunities and positive effects BI brings to organizations, this technology has not
yet attained its full potential and maturity in the healthcare industry [13,30]. However, the benefits of
BI tools in healthcare settings are indisputable and have, thus, continuously been explored through
the years.
_2.3. Ethical Issues in Medicine_
Without any doubt, the use of HICT, mHealth, and BI in the healthcare industry has been greatly
beneficial and advantageous for healthcare organizations since these technologies have the potential
to enhance the quality of the care delivered. However, despite the many benefits and opportunities
offered by these technologies, they are not without flaws. In fact, challenges may arise from the
implementation and use of solutions based on them, more specifically, ethical issues.
Nowadays, healthcare organizations produce vast amounts of EHRs and other types of data related to both the patients and the organization on a daily basis. However, since these data are stored in health
information systems, patients are fearful that their confidentiality and privacy are compromised and
not guaranteed, since, compared to the traditional paper-based management of data, technological
advancements have made accessing data and violating privacy easier [22,37,38]. Additionally, the EHRs
of the patients can be consulted by various health professionals across the organization, which can
be problematic for patients who do not want their sensitive information shared and viewed by
other professionals [37,39].
In this sense, privacy issues and patient confidentiality should always be taken into account
and safeguarded while developing technological solutions. In fact, if the privacy and confidentiality
of the users are not protected and ensured, some of them may not want to use HICT solutions [39].
Furthermore, legal issues may arise if sensitive information of the users is disclosed without their
consent and if their privacy is lost. Therefore, it is important to define data access policies in order
to only give information access to authorized users [38,39]. Nonetheless, implementing security
protections remains a difficult task to perform, but it should always be taken into account and viewed
as a priority when developing HICT solutions [38].
On the other hand, regarding the introduction of mHealth solutions in healthcare settings,
some health professionals remain hesitant regarding their use despite the many advantages and
benefits provided by them. The main cause of this situation is the fact that many mHealth applications
are currently being used without having a complete understanding of their effectiveness, accuracy,
quality, and associated risks, which can, in extreme cases, impair healthcare delivery [17,22]. In this
sense, best-practice standards should be followed to ensure the quality, accuracy, and safety of
mHealth solutions during their design, development, and implementation [17,22,40]. Additionally,
these applications should go through a rigorous set of validation and evaluation methods to guarantee
their quality, accuracy, and safety in healthcare settings [17,22,40].
_2.4. Related Work_
Undeniably, the introduction of mobile devices and applications has been positively transforming
several aspects of the medical practice and providing many benefits to healthcare facilities, namely the
improvement of the quality of their services and healthcare delivery. Therefore, to shed light on the
potential of mHealth, some existing works are presented in this subsection.
Nowadays, the vast majority of mHealth applications focus on specific health dimensions and are,
thus, frequently oriented towards patients [41]. In this sense, there is currently an extensive amount of
mHealth applications available in the market and they are being used to monitor patients both at home
and in health facilities, to educate patients, to strengthen the communication between patients and health
facilities, and to offer better access to health services, diagnosis, and treatment, among others [42].
In this context, it is possible to highlight several examples such as the mHealth monitoring system
named iCare [43], which uses smartphones and wireless sensors to monitor elderly people in the
comfort of their homes. This system is of particular interest since it enables remotely monitoring the
elderly anywhere and at any time, providing different services according to the health conditions of
each individual. Moreover, this system also acts as an assistant offering reminders, alarms, and medical
guidance to the elderly. On the other hand, home-based telerehabilitation for people with multiple
sclerosis was also addressed by Thirumalai et al. [44] through the development of a therapeutic exercise
application named TEAMS, which provides different exercises and programs according to the multiple
sclerosis level of the individual.
Parmanto et al. [45] proposed a mHealth system called iMHere, which enables individuals with chronic conditions to perform preventive self-care tasks at home and to communicate remotely with clinicians without having to go to health facilities. Finally, Bastos et al. [46] developed
the SmartWalk project, which promotes healthy aging by enabling elderly people to have a more
active lifestyle while being remotely monitored by health professionals. This project involved the
development of a mobile application connected to sensors that collect data while the elderly user walks
on a predefined route provided by the application. The health professionals are then able to analyze
these data to suggest modifications to the route and, thus, improve the health of the elderly user.
However, despite the predominance of patient-centered mHealth solutions in the market,
applications are also available for the management of health facilities and healthcare information
and to assist health professionals. In this context, Doukas, Pliakas, and Maglogiannis [47] proposed
a mobile healthcare information management system named @HealthCloud that enables medical
experts as well as patients to manage healthcare information. Thus, by using this system, users are able
to retrieve, upload, and modify medical content, such as health records and medical images. Moreover,
the authors affirmed that the system enables managing healthcare data in a pervasive and ubiquitous
way, leading to the reduction of medical errors since medical experts can effectively communicate
between each other and have access to patient-related information during decision-making. Similarly,
Landman et al. [48] developed a mobile application called CliniCam that enables clinicians to securely
capture clinical images, annotate them, and finally store them in the EHR. Thus, this application
enables making the images available to all credentialed clinicians across the hospital in a secure way.
To this end, various security features were adopted, such as user authentication, data encryption,
and secure wireless transmission.
Despite the existence of a large amount of patient-centered mHealth applications,
the implementation of mobile technology for the management of health facilities, namely of nursing
homes, and to assist health professionals and medical experts in their daily tasks remains to be
properly addressed, whereby further research is needed. In this context, this project was performed in response to the lack of mobile solutions in nursing homes that focus primarily on assisting health professionals in their job-related tasks and on the management of the facility. Thus, due to the lack of applications similar to the one described in this manuscript, the health professionals working in the nursing home used as a case study were constantly consulted in order to develop a solution that answers their needs. Furthermore, information gathered from the literature, namely from Landman et al. [48], was also essential to inform the security features adopted.
**3. Research Methodologies**
This project was sustained by a set of well-defined steps with the intention of ensuring its success
and having an organized path to follow. In this context, the design science research (DSR) methodology
was used since it is suitable for HICT research projects. Additionally, this methodology was used
since the developed solution meets the needs of the health professionals working in the nursing
home and is able to solve the problems faced by them. In fact, by introducing the solution into the
nursing home, it is possible to substitute the paper-based management of information, support the
decision-making process, reduce time-waste and the occurrence of errors and adverse events, and,
consequently, lessen the work overload experienced by health professionals as well as improve the
nursing care delivered.
The main purpose of the DSR methodology is to create and evaluate objects known as artifacts or, more specifically, solutions developed to solve and address organizational problems [49–51]. In other words, the DSR methodology corresponds to a rigorous research method that encompasses a set of techniques, principles, and procedures followed to design and develop successful solutions that are useful and effective in facing the problems at hand [49–51].
In this sense, the DSR methodology can be divided into six distinct steps, as illustrated in Figure 2.
[Figure 2 depicts the possible research entry points (problem-centered initiation, objective-centered solution, design- and development-centered initiation, and client/context initiated) feeding into the six steps of the methodology: (1) problem identification and motivation; (2) definition of the solution's objectives; (3) design and development of the solution; (4) demonstration; (5) evaluation; and (6) communication, with iteration between the steps.]
**Figure 2. Schematic representation of the steps encompassed in the DSR methodology (adapted from [50]).**
Therefore, since the DSR methodology was used for the development of this project, the problems
and challenges faced by the health professionals working in the nursing home used as a case study
had to be identified in order to motivate the development of the solution. Thus, focus groups,
semi-structured interviews, and questionnaires were conducted with the professionals working for the
nursing home as well as for the hospital that manages the facility in order to gather valuable
information capable of identifying and understanding the main challenges encountered by the
health professionals. It is important to mention that the focus groups, semi-structured interviews,
and questionnaires were performed with a group of ten participants, including nurses working in
the nursing home as well as information and communication technology (ICT) professionals and
other professionals working for both the nursing home and the hospital that manages the facility.
The participants were selected based on their availability and because they were the most suitable to provide information concerning the challenges faced by the nursing home and the use of HICT
in the facility. Furthermore, an observation of the case study was also performed to have a better
understanding of the conditions of the nursing home.
Consequently, the objectives of the solution were defined according to the problems identified
and, afterwards, the features and architecture of the solution were designed and developed. Once the
solution was developed, it had to be demonstrated and evaluated through the execution of a proof of
concept, which included a strengths, weaknesses, opportunities, and threats (SWOT) analysis and the technology acceptance model 3 (TAM3), in order to assess its usefulness, feasibility, and potential and to determine whether improvements and changes were needed. Additionally, this study also involved the communication
of the problem and the solution to an audience, namely through the presentation of the solution to the
health professionals and the writing of scientific papers.
A proof of concept was performed in order to carry out a thorough evaluation of the solution and to demonstrate its usefulness and potential. A proof of concept enables demonstrating in practice the concepts, methodologies, and technologies encompassed in the development of a solution. Additionally, it allows validating the developed solution with the target audience and ensuring that the solution provides all of the requirements initially proposed. On the other hand, besides assessing the usefulness, potential, and benefits of a solution, a proof of concept is also capable of identifying potential issues and threats associated with it.
Thus, the demonstration of the potential and feasibility of the mobile application involved the execution of a SWOT analysis to identify its strengths, weaknesses, opportunities, and threats. To this end, the TAM3 model was followed to elaborate and design a questionnaire, which was answered by the health professionals working in the nursing home in order to assess their acceptance of the solution, and the results obtained were used as a basis for the SWOT analysis.
Briefly, the TAM3 corresponds to a tool capable of predicting the acceptance of an information
technology (IT) solution by users in an organization as well as the likelihood of this technology being
adopted by them. To this end, the model considers that the acceptance and use of technology are
affected by the internal beliefs, attitudes, and intentions of users and that their satisfaction towards
IT results from the combination of the feelings and attitudes regarding a set of factors linked to
the adoption of the technology [52–54]. Therefore, the attitudes and acceptance of users towards
an IT solution influence and affect its successful implementation and use in an organization [55].
Thus, analyzing the acceptance of users towards a new IT solution is quite essential since the more
accepting they are, the more willing they are to make changes and spend their time and effort to use
the solution [55]. Organizations can then use the factors that affect the opinion of users towards the
acceptance of a new IT solution and manipulate these factors to promote its successful use.
**4. Development Tools**
In this section, the development tools and technologies used to develop the solution are described
as well as the reasons behind their selection and main advantages.
_4.1. MySQL Relational Database Management System_
Naturally, the development of any mobile application should include the definition and creation
of a database, if one does not already exist, to store and manipulate data. In this sense, the database
designed and developed for this project was created with MySQL.
MySQL is a relational database management system (RDBMS), meaning that it uses the relational
model, in which several tables are logically related to each other through relations existing between
them, as its database model [28,56]. Additionally, since it is a database management system (DBMS),
MySQL enables defining, modifying, and creating a database as well as inserting, updating, deleting,
and retrieving data from the database [56]. In addition, a DBMS offers controlled access to the database, namely: a security system that blocks unauthorized users when they try to access the database; an integrity system that allows maintaining the consistency of data; a concurrency control system that allows shared access to data; a recovery control system that resets the database to its previous state in the case of a failure; and a catalog accessed by users to consult the descriptions of the data stored in the database [56].
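As a concrete illustration of these operations, the sketch below issues the define, insert, and retrieve steps against a MySQL server from Node.js using the mysql2 client; the connection settings, table, and data are hypothetical and do not correspond to the schema actually deployed in this project.

```javascript
// Minimal sketch of the core DBMS operations against MySQL using the
// mysql2 client (hypothetical connection settings, table, and data).
const mysql = require('mysql2/promise');

async function demo() {
  const conn = await mysql.createConnection({
    host: 'localhost',
    user: 'app_user',
    password: 'secret',
    database: 'nursing_home',
  });

  // Define: create a table if it does not exist yet.
  await conn.query(`
    CREATE TABLE IF NOT EXISTS residents (
      process_number INT PRIMARY KEY,
      full_name      VARCHAR(255) NOT NULL,
      admission_date DATE NOT NULL
    )`);

  // Insert: add a row with a parameterized statement.
  await conn.execute(
    'INSERT INTO residents (process_number, full_name, admission_date) VALUES (?, ?, ?)',
    [1001, 'Maria Silva', '2019-06-01']
  );

  // Retrieve: read the row back.
  const [rows] = await conn.execute(
    'SELECT full_name FROM residents WHERE process_number = ?',
    [1001]
  );
  console.log(rows);

  await conn.end();
}

demo().catch(console.error);
```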
For the development of this project, MySQL was chosen to define and create the database since it is an RDBMS as well as an open-source, fast, secure, reliable, and easy-to-use DBMS [28,57]. Additionally,
the server in which the database had to be deployed and implemented, which belongs to the hospital
that manages the nursing home, was already configured for this type of database, thus making MySQL
the most appropriate choice.
_4.2. PHP RESTful Web Services_
The communication and interaction between the mobile application and the MySQL database
was possible through the creation of RESTful web services, which were created using PHP.
RESTful web services are based on the representational state transfer (REST) architecture, which is a client–server-based architecture that relies on the hypertext transfer protocol (HTTP) to convey messages [58,59]. Thus, the REST architecture offers a set of principles on how data should be transferred over a network. RESTful web services are identified by uniform resource identifiers,
be transferred over a network. RESTful web services are identified by uniform resource identifiers,
which enable the interaction and exchange of messages with the web services over a network [58,59].
Moreover, by taking advantage of the specific features of HTTP, RESTful web services are able to GET, PUT, DELETE, and POST data.
Thus, the web services were created to enable the mobile application to send requests to the database (via queries) and to receive responses in the JavaScript object notation (JSON) format. The web services created enable selecting data from the database as well as updating and inserting data. Consequently, to allow the communication between the mobile application and the web services, an Apache server was used, which is an HTTP server capable of receiving and sending HTTP messages.
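From the application side, the request/response cycle just described can be sketched as follows; the endpoint names and the response shapes are hypothetical, and `fetch` is the HTTP client available in React Native.

```javascript
// Sketch of the app-to-web-service interaction: a GET that returns
// JSON and a POST that inserts a new record (hypothetical endpoints).
async function getResidents(baseUrl) {
  const response = await fetch(`${baseUrl}/residents.php`);
  if (!response.ok) {
    throw new Error(`Request failed: ${response.status}`);
  }
  return response.json(); // e.g. [{ processNumber: 1001, fullName: '...' }]
}

async function addNursingNote(baseUrl, note) {
  const response = await fetch(`${baseUrl}/nursing_notes.php`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(note),
  });
  return response.json(); // e.g. { success: true }
}
```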
PHP was chosen to develop the web services since it is an open-source, fast, and easy-to-use language. On the other hand, the server on which the web services had to be implemented was already configured for this programming language since other applications had been developed for the hospital that manages the nursing home using PHP. Thus, taking into account the reasons mentioned above and to avoid maintenance and integration issues in the future, PHP proved to be the most appropriate choice.
_4.3. React Native JavaScript Framework_
The interfaces of the mobile application were created using React Native, which is a JavaScript
framework developed by Facebook for building native mobile applications, i.e., applications built for
specific mobile platforms [60–62].
React Native was released in 2015 and is based on React, which is a JavaScript library used to
build user interfaces and targets the web. However, React Native targets mobile platforms and enables
developers to simultaneously develop and maintain one application that can be deployed to both iOS
and Android [60,61]. Thus, developers do not need to develop distinct applications in order to target
these two platforms. It is important to note that, although the mobile application built in this study
was developed only for Android devices, choosing a cross-platform framework was still essential to
allow its quick and easy development for iOS devices in the future.
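As a minimal illustration of how a single React Native code base targets both platforms, consider the sketch below; the component and its fields are illustrative and are not taken from the project's actual code.

```javascript
// Minimal React Native sketch of a screen listing the daily tasks;
// the component name and task fields are hypothetical.
import React from 'react';
import { FlatList, Text, View } from 'react-native';

export default function DailyTasksScreen({ tasks }) {
  return (
    <View>
      <Text>Daily tasks</Text>
      <FlatList
        data={tasks}
        keyExtractor={(task) => String(task.id)}
        renderItem={({ item }) => (
          <Text>{`${item.time} - ${item.description}`}</Text>
        )}
      />
    </View>
  );
}
```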
In recent years, React Native has proven to have great potential as a cross-platform framework, enabling developers to build native applications with high performance. On the other hand, React Native provides many other benefits, such as [63]:
- It is an open-source and free framework, making the development of mobile applications much easier since all documentation is freely available and it is community driven;
- A huge variety of third-party plugins and libraries exists to help and facilitate mobile development;
- A hot reload feature allows developers to see updates without recompiling their application, while preserving its state;
- A live reload feature allows developers to instantly reload their application without recompiling it;
- It is straightforward and easy to use since it has a modular and intuitive architecture; and
- It performs well on mobile devices since it makes use of the graphics processing unit.
Thereby, all of the reasons mentioned above made React Native the most suitable choice to develop the interfaces of the mobile application. Furthermore, at the time of the development of this project, other applications were being developed for the hospital that manages the nursing home using React and React Native. Thus, React Native proved to be the obvious choice to avoid maintenance and integration issues in the future.
_4.4. Power BI Business Analytics Platform_
One of the objectives of this project was to identify and define clinical and performance
indicators in order to make the decision-making process more evidence-based and accurate. However,
it is important to mention that these indicators have not been created since the database does not
have real data yet. Furthermore, in the future, it is envisioned to introduce them in a web application.
Thus, to this end, Power BI was used to create examples of the clinical and performance indicators
defined with fictitious data.
Power BI is a business analytics platform released in 2013 by Microsoft Corporation that provides users with BI tools to collect, analyze, visualize, and share data [64]. Thus, by aggregating data from various data sources, such as Excel, MySQL databases, and CSV files, among others, Power BI is capable of creating charts, reports, and graphs to obtain visuals and a better insight into the data [64].
On the other hand, Power BI is available in a desktop application, which is only executable on
Windows, and in a cloud service [64]. Whereas the desktop application is used to model data and
create reports, graphs, and charts, the cloud service is used to share and visualize them as well as
create them. Therefore, when users need to perform data modeling, the desktop application is the best
choice. However, to share dashboards, users need to use the cloud service.
Thus, the Power BI desktop application was used to create visual examples of the clinical and
performance indicators defined. This BI platform was chosen because it is a free, easy-to-use, and intuitive tool that enables quickly creating charts and graphs without much effort and visualizing them in a simple and explicit way.
**5. Case Study: A Portuguese Nursing Home**
As already stated, this study consisted of designing and developing a mobile application for health professionals working in a Portuguese nursing home in order to assist them at the point-of-care, e.g., to schedule, perform, and record tasks; to access, record, consult, and manipulate information; and to help them clinically manage the residents. It is important to mention that the
nursing home used as a case study for this project is managed by a Portuguese hospital. Therefore, the
professionals working for both the nursing home and the hospital were consulted throughout this project.
To have a better understanding of the relevance and motivation of this project, it was essential
to identify the main issues and challenges faced by the health professionals and the nursing home.
Therefore, focus groups, semi-structured interviews, and questionnaires were conducted with the
professionals working for both the nursing home and the hospital in order to obtain valuable
information that could enlighten the main challenges faced by the nursing home. On the other hand,
the case study was also subjected to observation so as to have a better understanding of its conditions.
Thus, the following challenges were identified:
- HICT or any other form of technological progress is not used in the nursing home. Although there
is a computer in the nursing station, it is not used to record clinical information of the residents or
even to schedule tasks. Therefore, there are no EHRs and health professionals use handwritten
charts and medical records. Thus, since the information is stored on paper, the management of information is a lot more time-consuming, especially at the point-of-care, as the professionals have to consistently go back to the nursing station to manipulate information. Additionally, this situation can lead to a higher risk of losing, misplacing, or forgetting information as well as documenting information in the wrong place.
- The job-related tasks of the health professionals are scheduled and documented in handwritten
charts or boards. This situation is particularly problematic since it is more error-prone, confusing,
and less organized.
- The nursing home does not have access to a wireless Internet connection. The health professionals
can only have access to an Internet connection in the nursing station where the computer is
located. This situation is especially challenging since it complicates the implementation of any
kind of mHealth solution.
- The number of health professionals compared to the high number of elderly people is low.
Consequently, at times, the health professionals are overloaded with work.
- There was a failed attempt to implement a web application. The web application aimed to
shift from the paper-based to the computer-based management of data, allowing the health
professionals to schedule tasks, document them, and record clinical information. However,
the application was abandoned as it was time-consuming and not user-friendly.
In addition to the above mentioned, this project was also motivated by the fact that the health
professionals revealed their need for a solution that would allow them to perform their daily tasks
anywhere in the nursing home and in a more organized and faster way. Consequently, the need to
design and develop a solution that could assist the health professionals at the point-of-care by allowing
them to manipulate information anywhere in the facility was obvious. In this sense, a proof of concept was conducted of a mobile application designed and developed to enhance the care delivered and the elders' quality of life, reduce the occurrence of errors and time-waste, and ease some of the workload experienced by the professionals.
**6. Results**
As mentioned above, the interfaces of the mobile application were developed using React Native,
which is a JavaScript framework that enables building native mobile applications. It is important
to state that, although React Native allows using the same code to deploy to both iOS and Android
devices, the mobile application was only deployed for Android since Android devices are more
affordable and common and are, therefore, more likely to be provided by the nursing home when the
application is used in the future. However, if needed and after small modifications, the application can
be quickly and easily deployed to iOS devices.
On the other hand, the MySQL RDBMS was also used to define and create the database.
In this sense, SQL was the language used to manipulate and access the data stored in the database.
Furthermore, to enable the communication and transfer of data between the mobile application and
the database, RESTful web services were created using PHP. Therefore, the solution is divided into
three distinct elements, each with a different purpose. Figure 3 illustrates the architecture and different
interactions existing between the various elements of the mobile application.
[Figure 3 depicts the health professionals interacting with the mHealth application, which exchanges data requests and responses with the PHP RESTful API, which in turn exchanges API requests and responses with the MySQL database.]
**Figure 3. Schematic illustration of the architecture of the mobile application.**
At this point in time, the mobile application is fully developed, and the web services and the
database are deployed in the server of the hospital that manages the nursing home. However,
the solution is still being evaluated and tested by the health professionals. Moreover, the mobile
application is not being used since the requirements, such as mobile devices and a reliable wireless
Internet connection, have not yet been provided to the nursing home. Nevertheless, until the
requirements are available to the nursing home, it is envisioned to continue improving the solution
through the opinions and knowledge continuously provided by the professionals.
Finally, it must be mentioned that, during all stages of the design and development of this project,
ethical issues were taken into account and safeguarded to guarantee that confidentiality issues do not
arise as well as the quality, accuracy, and safety of the solution. In this sense, the health professionals
were constantly consulted throughout the design and development of the solution in order to develop
an accurate and high quality mobile application. Furthermore, data privacy and confidentiality were
promoted with the implementation of a login through which only authorized users, namely the nurses
and doctors, with encrypted login credentials, can have access to the information contained in the
solution. On the other hand, the solution will only be accessed by being connected to an Intranet
connection, i.e., the private network of the institution.
_6.1. Database and RESTful Web Services Definition and Implementation_
As mentioned above, the nursing home uses handwritten medical records and resorts to paper
to manipulate information. Consequently, the facility did not have any database implemented
prior to the development of this project. Therefore, before designing the interfaces of the mHealth
application, a database had to be defined in order to allow the application to have access and store data.
Thus, a MySQL relational database was defined and created taking into account the data that needed
to be stored. Then, the database was deployed and implemented in the server of the hospital that
manages the nursing home. However, it must be mentioned that the database remains to be populated
with data related to the residents and the health professionals.
In this sense, a database composed of 49 tables was designed and created, allowing the storage of the following (a schema sketch for one of these tables is given after the list):
- Data related to the users of the mobile application: personal information of the health
professionals (their full name, email, profile picture, telephone and mobile phone numbers,
date of birth, institution identification number, and gender, among others) is stored as well as
their login credentials.
- Personal data related to the residents (their full name, institution process number, bed and
bedroom numbers, admission date, date of birth, profile picture, telephone and mobile phone
numbers, and national health service number, among others) is stored.
- Personal data related to the informal caregivers and personal contacts of the residents (their full
name, telephone and mobile phone numbers, relationship with the resident, and observations,
among others) is stored.
- Clinical notes written by the doctors: The content of the note, the institution identification number
of the professional who wrote the note, the resident’s institution process number, and the date
and time of the creation of the note are stored.
- Nursing notes written by the nurses: Similar to the clinical notes of the doctors, the content of the
note, the institution identification number of the professional who created the note, the resident’s
institution process number, and the date and time of the creation of the note are stored.
- Clinical information related to the residents, namely their general evaluation (e.g., alcohol and
tobacco consumption), usual medication, clinical history (e.g., existence of diabetes, diseases,
allergies, and past surgeries and fractures), physical assessment (e.g., weight, height, blood
pressure, heart rate, skin integrity, turgidity, and color, vision, and hearing), nutritional and
eating patterns (e.g., type of diet, dentition, and use of a nasogastric tube), bowel and bladder
elimination patterns (e.g., use of adult diapers or of a urinary catheter), physical activity patterns
(e.g., strength of the limbs), sleeping patterns (e.g., insomnia problems and number of hours
of sleep during the day and night), and general assessment made by the health professionals
(e.g., emotional state or autonomy level) is stored.
- Data related to the wounds of the residents, namely the type of wound, pictures of the wound,
and its location, treatments, and start and finish dates are stored. The evolution of the wounds
is also documented through photos and observations provided by the health professionals.
Additionally, the various treatments used throughout the evolution of the wound are stored.
- Periodic evaluations recorded by the health professionals (blood pressure, weight, heart rate,
and axillary temperature) are stored. In this context, the date and time of the evaluation,
the institution identification number of the professional who made the evaluation, and the
resident’s institution process number are stored.
- Periodic evaluations of the capillary blood glucose of residents with diabetes are stored. Again,
the date and time of the evaluation, the institution identification number of the professional who
made the evaluation, and the resident’s institution process number are stored.
- The history of the medical and inpatient reports of the residents: The date, type, and a brief
description of the report, among others, are stored.
- The nursing interventions scheduled by the health professionals through the identification of the
type of nursing intervention, the scheduled and realization dates of the intervention, the resident’s
institution process number, the institution identification numbers of the professionals who
scheduled and performed the nursing intervention, and the state of the intervention, i.e., if the
intervention was performed or not, are stored.
- Data related to the nursing home, namely the name of the institution and the bedroom and bed
numbers existing in the nursing home, are stored.
- Technical data on the types and sizes of urinary catheters and nasogastric tubes available and
types of wounds, injectable medications, nursing interventions, wounds location, and medical
and inpatient reports, among others, are stored.
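To illustrate, the sketch below shows one possible definition for the table holding the scheduled nursing interventions described above, embedded as a JavaScript string as it might be issued to MySQL; the table and column names are hypothetical, and the actual 49-table schema may differ.

```javascript
// Sketch of a possible definition for the scheduled nursing
// interventions table (hypothetical names and columns).
const createNursingInterventions = `
  CREATE TABLE nursing_interventions (
    id                INT AUTO_INCREMENT PRIMARY KEY,
    intervention_type INT NOT NULL,  -- FK to the intervention types table
    resident_process  INT NOT NULL,  -- resident's institution process number
    scheduled_by      INT NOT NULL,  -- professional's institution id
    performed_by      INT NULL,
    scheduled_date    DATETIME NOT NULL,
    realization_date  DATETIME NULL,
    state             ENUM('pending', 'performed', 'cancelled') NOT NULL
  )`;
```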
Afterwards, RESTful web services written in PHP with SQL queries were developed to allow
the sharing of data between the frontend (the mobile application) and the backend (the database).
In this sense, numerous web services were created to allow users to manipulate data from the database,
namely to insert, update, and select data. Finally, similar to the database, the web services were
deployed in the server of the hospital.
_6.2. Mobile Application Features_
After designing and developing the database and the web services, the interfaces and the features
of the mobile application had to be designed and developed. For this purpose, React Native was
chosen, as stated above.
At first, when the user, i.e., the health professional, launches the mobile application, he needs to sign up for an account if he does not have one. In this context, the user is requested to provide his login credentials and personal data and to specify whether he is a nurse or a doctor, since these two user types have access to different features once signed in to the application. Once the user has provided his login credentials and his personal data, the data are stored in the database.
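A minimal sketch of such a sign-up request, as the application might send it to one of the web services, is given below; the endpoint and payload fields are hypothetical, and the credentials are expected to be encrypted on the server side and to travel only over the institution's private network, as described above.

```javascript
// Sketch of the sign-up request sent by the app (hypothetical endpoint
// and fields); the server is expected to protect the credentials.
async function signUp(baseUrl, account) {
  const response = await fetch(`${baseUrl}/signup.php`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      email: account.email,
      password: account.password, // encrypted/hashed server-side
      fullName: account.fullName,
      userType: account.userType, // 'nurse' or 'doctor'
    }),
  });
  return response.json(); // e.g. { success: true }
}
```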
Alternatively, if the user already has an account, he can directly sign in to the mobile application with his login credentials. Finally, if his login credentials match the ones stored in the database, the user is successfully signed in to the application, having access to the following features (a sketch of this role-based gating is given after the list):
- Daily tasks: the user can consult the nursing interventions/tasks planned for the day and confirm
or cancel their execution. Furthermore, the user is also able to consult the tasks that were
already executed or cancelled. This feature is only available for nurses since, through interviews
performed with the health professionals, it was concluded that doctors do not schedule tasks
when present in the nursing home.
- Scheduled tasks: The user is able to consult the pending tasks, the cancelled tasks, and the
finished tasks scheduled in the future, i.e., after the current date. Additionally, he can also cancel
or confirm the execution of a task. For the same reasons mentioned above, this feature is only
available for nurses.
- Plan of the nursing home: Both user types can consult the list of bedrooms existing in the nursing
home. Then, by choosing one of the bedrooms, the user has access to the following information:
the number of beds available and the name of the residents living in the bedroom. For each
resident, the bed number is specified as well as the number of pending tasks associated with the
resident for the day.
- Management of the residents: If the user is a nurse, he is able to manage the residents living in
the nursing home. He can also view and edit their personal data as well as add new residents or
disable a given resident if needed. Additionally, the user can view and edit the informal caregivers
and personal contacts of each resident as well as add and remove contacts. However, if the user is
a doctor, he is only able to view the personal data of the residents and the informal caregivers of
each resident. Thus, doctors cannot insert new residents and informal caregivers, disable them,
and edit their personal data.
- Clinical notes: If the user is a doctor, he is able to create new clinical notes and consult the clinical
notes’ history of each resident. However, nurses are only able to view the clinical notes’ history of
each resident since clinical notes can only be written by doctors.
- Nursing notes: If the user is a nurse, he is able to create new nursing notes and consult the
nursing notes’ history of each resident. However, doctors are only able to consult the nursing
notes’ history of each resident since nursing notes can only be written by nurses.
- Management of the clinical information of the residents: If the user is a nurse, he can manage,
i.e., edit and view, the clinical information of the residents. However, doctors can only view the
clinical information of the residents.
- Management of wounds: If the user is a nurse, he can manage the wounds of the residents and
consult the wound history of each resident. More specifically, the user can insert new wounds
for each resident as well as consult and record their evolution through photos and observations.
Additionally, it is also possible to consult the history of the treatments used throughout the
evolution of a wound and modify the current treatment if needed. Moreover, the user can also
download a PDF file of the evolution of a given wound. However, doctors can only consult
the wounds’ history of each resident, the evolution of each wound and of the treatments used,
and download the PDF file of the evolution of the wound.
- Periodic evaluations: This feature is available to both users and allows them to add new periodic
evaluations and consult the periodic evaluations’ history of each resident.
- Periodic evaluations of the capillary blood glucose: This feature is available to both users, enabling
them to add new periodic evaluations of the capillary blood glucose for residents with diabetes.
It is also possible to consult the history of the periodic evaluations of the capillary blood glucose
of each resident with diabetes.
- Inpatient reports: This feature is available to both users and allows them to add new inpatient
reports and consult the inpatient reports’ history of each resident.
- Medical reports: This feature is available to both users, allowing them to add new medical reports
and consult the medical reports’ history of each resident.
- Planning of nursing interventions: This feature is only available for nurses, enabling them to
schedule nursing interventions for each resident.
- Profile: This feature is available to both users, allowing them to have access and edit their
personal data.
- Sign out: This feature is available to both users and allows them to sign out of their accounts.
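The role-based gating of the features listed above can be summarized as a simple role-to-features map, sketched below; the feature keys are shorthand for the items above, not the project's actual identifiers.

```javascript
// Sketch of role-based feature gating matching the list above
// (shorthand, hypothetical feature keys).
const FEATURES_BY_ROLE = {
  nurse: [
    'dailyTasks', 'scheduledTasks', 'facilityPlan', 'manageResidents',
    'viewClinicalNotes', 'writeNursingNotes', 'manageClinicalInfo',
    'manageWounds', 'periodicEvaluations', 'bloodGlucose',
    'inpatientReports', 'medicalReports', 'planInterventions',
    'profile', 'signOut',
  ],
  doctor: [
    'facilityPlan', 'viewResidents', 'writeClinicalNotes',
    'viewNursingNotes', 'viewClinicalInfo', 'viewWounds',
    'periodicEvaluations', 'bloodGlucose', 'inpatientReports',
    'medicalReports', 'profile', 'signOut',
  ],
};

function canUse(role, feature) {
  return (FEATURES_BY_ROLE[role] || []).includes(feature);
}
```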
_6.3. Clinical and Performance Business Intelligence Indicators_
To analyze and gain a deeper understanding of the overall performance of the nursing home and
its health professionals as well as to improve the nursing care delivered and its outcomes, clinical
and performance indicators were defined. However, at the moment, these indicators have not yet
been created since the database does not have real data. Moreover, to create meaningful and valuable
indicators, data should be gathered over a relatively long period of time, which is not the case at the
moment. Furthermore, to have a better visualization and control over the indicators, it is envisioned to
implement them in a web application and not in the mobile solution.
Thereby, in the future, when enough data are gathered, it is envisioned to create, at least, the following clinical and performance indicators (a query sketch for the first indicator is given after the list):
- Percentage of nursing interventions realized per nurse: Pie chart indicator of the percentage
of nursing interventions realized per nurse over a time horizon, for instance, per month and
year. Thus, this indicator would enable highlighting if the nursing interventions are performed
proportionately among the nurses working in the nursing home and if a certain health professional
has a higher workload compared to others. Consequently, with the information obtained through
this indicator, measures could be implemented to achieve a better distribution of the nursing interventions among the nurses.
- Total of realized and unrealized nursing interventions per month: Stacked column chart indicator
of the total of realized and unrealized (neither realized nor cancelled) nursing interventions.
This indicator would help identify abnormalities in the number of unrealized nursing interventions
as well as the months in which more tasks are performed or unrealized. Consequently, regarding
the former, if too many nursing interventions are unrealized, it may suggest that the nurses are not
performing their job as well as they should. For instance, it may shed light on whether the nurses
are overloaded with work, not having enough time to perform all of their tasks. On the other hand,
regarding the latter, if some specific months are busier than others, more nurses could be present for
each shift in order for the nursing interventions to be realized as scheduled.
- Variation of the capillary blood glucose of a given resident over time: Line chart indicator of the variation of the capillary blood glucose of a given resident over time. Thus, the health professionals would be able to have a better visualization of the variation of the capillary blood glucose and, consequently, more rapidly detect abnormalities and act on them. Additionally,
this indicator could also be extended to other types of evaluations, namely to analyze the
variation of the weight, blood pressure, heart rate, oxygen saturation, and axillary temperature
of a given resident over time.
- Percentage of wounds per resident: Bar chart indicator of the percentage of wounds per resident
over a time horizon, for instance, per month or year. Consequently, with this clinical indicator,
the health professionals would be able to identify the residents with an abnormal amount of
wounds and, thus, supervise them more closely so as to avoid and reduce the occurrence of
wounds for these residents.
- Percentage of wounds per wound type: Donut chart indicator of the percentage of wounds per
wound type over a time horizon, for instance, per month or year. Thus, through this clinical
indicator, the health professionals would be able to identify if certain wound types occur more
frequently than others. Consequently, according to the results obtained, further research and
improvements could be realized so as to identify and reduce wound-causing factors.
- Percentage of nursing interventions realized annually per type of nursing intervention: Bar chart
indicator of the percentage of nursing interventions realized annually per type of nursing
intervention. Therefore, through this indicator, the health professionals would be able to
identify and be aware of the nursing interventions that are not realized with the expected
frequency. Hence, with this knowledge, the health professionals could perform these nursing
interventions more frequently.
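As an illustration, the following SQL query, embedded as a JavaScript string and assuming the hypothetical schema sketched in Section 6.1 (and MySQL 8's window functions), could feed the first indicator for a given month.

```javascript
// Sketch of a query feeding the "percentage of nursing interventions
// realized per nurse" indicator for January 2019 (hypothetical schema;
// the window function requires MySQL 8 or later).
const realizedPerNurseSql = `
  SELECT performed_by AS nurse,
         COUNT(*) * 100.0 / SUM(COUNT(*)) OVER () AS percentage
  FROM nursing_interventions
  WHERE state = 'performed'
    AND realization_date >= '2019-01-01'
    AND realization_date <  '2019-02-01'
  GROUP BY performed_by`;
```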
Figures 4–6 illustrate examples of some of the indicators mentioned above. Power BI was used
with fictitious data.
**Figure 4.** Indicator of the percentage of nursing interventions realized per nurse (created with
fictitious data).
[Figure 5 is a bar chart showing, for 1174 realized nursing interventions, the annual percentage per intervention type: wound care, 53.83%; nasogastric tube insertion, 18.14%; urinary catheter insertion, 9.63%; administration of injectable medication, 7.58%; periodic evaluation, 5.54%; and evaluation of the capillary blood glucose, 5.28%.]
**Figure 5. Indicator of the percentage of nursing interventions realized annually per type of nursing**
intervention (created with fictitious data).
**Figure 6. Indicator of the percentage of wounds per wound type (created with fictitious data).**
**7. Discussion**
After the development of the mobile application, a proof of concept was performed to validate the usability, feasibility, and usefulness of the solution with the target audience and to ensure that the solution provides all of the requirements initially proposed. Therefore, a SWOT analysis was elaborated to identify the strengths, weaknesses, opportunities, and threats related to the solution. To this end, a questionnaire based on the TAM3 was conducted with the health professionals working in the nursing home in order to assess their acceptability, i.e., how they accept and receive the mobile application, and its results were used as a basis for the SWOT analysis. Furthermore, this analysis was also based on personal opinion as well as on valuable information obtained through semi-structured interviews and focus groups conducted with the professionals working for both the nursing home and hospital.
It must be mentioned that the survey questionnaire was conducted with only a few health professionals; thus, the results obtained were not sufficient to be presented. However, in the future, it is intended to evaluate the mobile application with more health professionals and, thus, obtain a more complete evaluation.
The SWOT analysis performed is presented hereafter. The following strengths were identified:
- Decrease of time-waste and, consequently, an increase in productivity since the health professionals can access and record information at the point-of-care, i.e., they do not need to constantly return to the nursing station;
- Decrease of the occurrence of errors since the solution reduces the risk of misplacing, losing,
or forgetting information;
- Enhancement of the nursing care delivered and elders’ quality of life due to the decrease of errors
and time-waste;
- Easier access and manipulation of information;
- Timely sharing and centralization of information;
- Optimization of the various processes occurring in the nursing home;
- Answer to the needs of the health professionals;
- Less confusing and more organized scheduling of tasks compared to handwritten boards;
- Reduction of the amount of paper generated daily with handwritten charts due to the shift from the paper-based to the computer-based management of data;
- Evidence-based and more accurate decision-making process since the health professionals can
have access to information at the point-of-care;
- High usability since the mobile application has a simple, user-friendly, and intuitive design with
well-defined paths and organized information;
- High adaptability since the solution can easily be implemented in other nursing homes; and
- High scalability since new features can easily be added and the mobile application can easily
be maintained.
The following weaknesses can be pointed out:
- Need for a wireless Internet connection, which is not currently available in the nursing home;
- Need for mobile devices, namely mobile phones and tablets, in order to use the solution;
- Need to populate the database with real data, namely information of the residents and health
professionals, which will require time resources;
- Need to train the health professionals before using the solution; and
- Need to wait a relatively long period of time before creating the clinical and performance indicators.
The opportunities of the solution are as follows:
- Introduction and implementation of the mobile application in other nursing homes;
- Enhancement of other processes due to the technological improvement of the nursing home; and
- Creation of clinical and performance indicators due to the elimination of the paper-based
management of data and the storage of information in a database.
Finally, the following threats can be highlighted:
- Issues may emerge if reliable wireless Internet connectivity is not available; and
- New systems and competition may arise due to the novelty of the solution, which approaches
recent problems.
In light of the above mentioned, it is possible to affirm how beneficial and influential mHealth
and BI are in healthcare organizations, namely to enhance the various processes occurring in them
and, consequently, to improve the care delivered and patients’ quality of life. In fact, through the
use of mobile applications, such as the solution described in this manuscript, the medical practice
can be completely transformed as they allow rapid and convenient access to and manipulation of
information at the point-of-care. Thus, for professionals constantly on the move, which is the case with
the health professionals working in the nursing home used as a case study, a mHealth solution such
as the one developed allows reducing time-waste since they do not need to interrupt their workflow,
decreasing the occurrence of errors since the likelihood of forgetting or misplacing information is
lower, and making faster and better decisions since they can have access to up-to-date information
at the point-of-care. Furthermore, through BI tools, it is possible to use
and analyze the huge amounts of data gathered daily in organizations in order to turn these data into
valuable knowledge. In fact, the clinical and performance indicators defined in this research project
enable highlighting problem areas and opportunities existing in the nursing home and shed light on
the overall performance of the facility and its professionals.
Finally, regarding the ethical issues associated with the implementation of HICT in healthcare
contexts, they were safeguarded through the inclusion and consultation of the health professionals
during all stages of the design and development of the solution in order to develop an accurate, high-quality mHealth application that actually meets the needs of its users. Additionally, privacy and
confidentiality issues were also taken into account since only authorized users, i.e., the nurses and
doctors working in the nursing home, can have access to the information displayed in the solution.
Moreover, data regarding login credentials were encrypted and the solution would only be available
through an Intranet connection, i.e., a private network. However, since implementing data security
protections is a difficult task to achieve, there is still some work that remains to be done in order
to respond completely to the privacy requirements that are constantly emerging. In this context,
it is planned to continuously improve the solution over time, through the encryption of all the data
stored in the database.
**8. Conclusions and Future Work**
The project described in this manuscript aimed to introduce HICT in a Portuguese nursing home
suffering from the consequences of the aging population and the usage of rudimentary methods and,
subsequently, take advantage of the benefits provided by HICT in order to improve elders’ quality
of life and the nursing care delivered. Therefore, considering the issues and challenges faced by the
nursing home used as a case study, a mobile application was designed and developed for the health
professionals working in the facility in order to help them manage the residents and assist them at
the point-of-care.
In the long-term, the research team foresees that the mobile application will allow easier and faster access to and manipulation of the information by the health professionals compared to the paper-based management of data, since paper-based records grow to several pages over time and become harder to consult.
Additionally, it will help reduce time-waste and errors and, hence, improve elders’ quality of life
and the nursing care delivered as well as reduce some of the work overload experienced by health
professionals. Furthermore, it will help improve the overall performance of the nursing home and the
health professionals as well as optimize some of the processes occurring in the facility.
Regarding future work, it is planned to provide the necessary resources to the nursing home since,
without them, the health professionals are not able to use the solution. Thus, it is intended to provide
mobile devices, such as tablets and mobile phones, and a reliable wireless Internet connection, namely
wireless Intranet, in order for the mobile application to be used. Afterwards, it is intended to populate
the database with real data related to the health professionals and the residents. It is important to
mention that the database already contains technical data (e.g., the sizes and types of urinary catheters
and nasogastric tubes available and types of wounds, among others) since this information was
gathered through the help of the health professionals.
On the other hand, the research team envisions designing and developing a web application to
complement the mobile application. The web application will integrate most of the features of the
mobile application, allowing the health professionals to manage the residents from a computer if they
prefer to do so. Additionally, it is intended to integrate into the web application a module to manage
the users of the applications and another containing the clinical and performance indicators mentioned
previously. However, these indicators will only be available when enough data are gathered since,
otherwise, the knowledge acquired would not be meaningful and valuable. Furthermore, it is intended
to continue the expansion of the mobile application through
the addition of new and relevant features. Therefore, considering the above, the research
team envisions encouraging the continuous maintenance, growth, and expansion of the solution.
**Author Contributions: Conceptualization, M.E. (Márcia Esteves) and M.E. (Marisa Esteves); Investigation, M.E.**
(Márcia Esteves) and M.E. (Marisa Esteves); Methodology, M.E. (Márcia Esteves) and M.E. (Marisa Esteves);
Project administration, M.E. (Márcia Esteves), M.E. (Marisa Esteves), A.A. and J.M.; Resources, A.A. and J.M.;
Software, M.E. (Márcia Esteves); Supervision, M.E. (Marisa Esteves), A.A. and J.M.; Validation, M.E. (Márcia
Esteves) and M.E. (Marisa Esteves); Writing—original draft, M.E. (Márcia Esteves); and Writing—review and
editing, M.E. (Marisa Esteves).
**Funding: This research received no external funding.**
**Acknowledgments: This work has been supported by FCT – Fundação para a Ciência e Tecnologia within the**
Project Scope: UID/CEC/00319/2019.
**Conflicts of Interest: The authors declare no conflict of interest.**
| 20,991
|
{
"disclaimer": "Notice: Paper or abstract available at https://pmc.ncbi.nlm.nih.gov/articles/PMC6767027, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "CCBY",
"status": "GOLD",
"url": "https://www.mdpi.com/1424-8220/19/18/3951/pdf"
}
| 2019
|
[
"JournalArticle"
] | true
| 2019-09-01T00:00:00
|
[
{
"paperId": "9e6a7a7fb7c6ac98864e4e22cd86f73e247d5eb4",
"title": "SmartWalk Mobile - A Context-Aware m-Health App for Promoting Physical Activity Among the Elderly"
},
{
"paperId": "f003ec23e5008413911b16726976853ca97b1a80",
"title": "Smart Mobile Computing in Pregnancy Care"
},
{
"paperId": "59376ef8883ae72dee8d7a597a78bcc86b3f4dac",
"title": "A data mining approach to classify serum creatinine values in patients undergoing continuous ambulatory peritoneal dialysis"
},
{
"paperId": "2e6199133a283eb5d2d2ffff1826617fb69ed97a",
"title": "The development of a pervasive Web application to alert patients based on business intelligence clinical indicators: a case study in a health institution"
},
{
"paperId": "7e1867d205ed994926992f2a5ab8d07dfc37b9de",
"title": "Criteria for assessing the quality of mHealth apps: a systematic review"
},
{
"paperId": "d09c3987b426534c61a5beda4bf794cdf4f6a3b7",
"title": "Nursing Home Implementation of Health Information Technology: Review of the Literature Finds Inadequate Investment in Preparation, Infrastructure, and Training"
},
{
"paperId": "71b6982ab54beef304073f3642463d45d8926e54",
"title": "TEAMS (Tele-Exercise and Multiple Sclerosis), a Tailored Telerehabilitation mHealth App: Participant-Centered Development and Usability Study"
},
{
"paperId": "7d5f4a312d200517e8d1b11848050fef2b6d5921",
"title": "m-Health: Lessons Learned by m-Experiences"
},
{
"paperId": "2bfb5fca933bea8d70ca6b5c88820714c54278fb",
"title": "Nursing Information Flow in Long-Term Care Facilities"
},
{
"paperId": "5199b63227a1668f33b3f33ae49661e0083ba148",
"title": "Mobile Collaborative Augmented Reality and Business Intelligence: A System to Support Elderly People's Self-care"
},
{
"paperId": "4b4b63405efd22a96cc45b22c08124d62a475d6f",
"title": "Big healthcare data: preserving security and privacy"
},
{
"paperId": "6fe4e3772b98aaf5cae0ad3b1081a50550ac51e0",
"title": "The Impact of mHealth Interventions: Systematic Review of Systematic Reviews"
},
{
"paperId": "cf21a38be4caf47dd9fcfe23c4f41a2ef8dde4de",
"title": "Examining the infusion of mobile technology by healthcare practitioners in a hospital setting"
},
{
"paperId": "3552cd8b4488180e7f12d114ec5e55a5a5ccdb68",
"title": "Innovation and technology for the elderly: Systematic literature review"
},
{
"paperId": "56813bad070d166493e2f8452355394a185415bd",
"title": "A Benchmarking Analysis of Open-Source Business Intelligence Tools in Healthcare Environments"
},
{
"paperId": "807fc4a0b7dfb6fe3f799ca37716e76b02167673",
"title": "Using information and mobile technology improved elderly home care services"
},
{
"paperId": "7922ed12b57c907dae93675ce4b58856d8f7d59d",
"title": "The impact model of business intelligence on decision support and organizational benefits"
},
{
"paperId": "bfd242055cebf10c80c4c6d49f8bef5f2d86031e",
"title": "Learning React Native: Building Native Mobile Apps with JavaScript"
},
{
"paperId": "7a182168da0ce5e81b8a16a4aa709ac5448084d6",
"title": "Security and privacy for mobile healthcare networks: from a quality of protection perspective"
},
{
"paperId": "e692bcf21422b70759624e4d01e3959ea563a136",
"title": "Mobile-health: A review of current state in 2015"
},
{
"paperId": "bc6ba6fe0bd02df8f0ab7f65d2920b77ea780c4e",
"title": "Impacts of information and communication technologies on nursing care: an overview of systematic reviews (protocol)"
},
{
"paperId": "a12e480627f9c9f7e1bf3b9b993fd9d5bf1dbff7",
"title": "Is Patient Confidentiality Compromised With the Electronic Health Record?: A Position Paper"
},
{
"paperId": "c984162b6ab51165cf3951d23f076f90efb4920b",
"title": "Health information technology in hospitals: current issues and future trends"
},
{
"paperId": "d64f6a0f66c3fd9e082750405f9788f7720b654d",
"title": "A Mobile App for Securely Capturing and Transferring Clinical Images to the Electronic Health Record: Description and Preliminary Usability Study"
},
{
"paperId": "0e19c864d269544f33eb8bbd3e62ed4a1bbca432",
"title": "Web services composition: A decade's overview"
},
{
"paperId": "908886f43d9bcf1d0d16435742bdacef9bbf4b19",
"title": "Mobile devices and apps for health care professionals: uses and benefits."
},
{
"paperId": "b7d2d7fe8307d2b3d1a021bca769f4738d5b6e8a",
"title": "iMHere: A Novel mHealth System for Supporting Self-Care in Management of Complex and Chronic Conditions"
},
{
"paperId": "f92fdba47f5f8d0eb250e2b454c0fce322c653f2",
"title": "Medical Applications for Pharmacists Using Mobile Devices"
},
{
"paperId": "7c6c4d295cb4357c0c66a80a142c30c304c4a06b",
"title": "Health Information Technology: A New World of Nursing Homes"
},
{
"paperId": "abd230425cdd8c44e51aed7c13709140e3b0f318",
"title": "Medical application use and the need for further research and assessment for clinical practice: creation and integration of standards for best practice to alleviate poor application design."
},
{
"paperId": "83d6bbf97d756c8e39071e9b54730b48405d9054",
"title": "Applicability of Business Intelligence in Electronic Health Record"
},
{
"paperId": "a814b2e3dfbf6a545913765457c6c1c5217ec6ef",
"title": "Towards business intelligence systems success: Effects of maturity and culture on analytical decision making"
},
{
"paperId": "a08c8c6330be2e65b02c62885a7e4ea249cd18f8",
"title": "Systematic Review of Factors Influencing the Adoption of Information and Communication Technologies by Healthcare Professionals"
},
{
"paperId": "e536ba7e37bedc07d5cf665738d53616d758c856",
"title": "An overview of business intelligence technology"
},
{
"paperId": "dcfd50b82e5853860f6bf4f5a78cf2d679bf7fd7",
"title": "iCare: A Mobile Health Monitoring System for the Elderly"
},
{
"paperId": "b3bb6b1dbe07e67458c832c2ce74d4074e4f2efe",
"title": "Mobile healthcare information management utilizing Cloud Computing and Android OS"
},
{
"paperId": "f1284c769c4bb85ae07d0a8155021a5a01fc242b",
"title": "Assessing Benefits of Business Intelligence Systems – A Case Study"
},
{
"paperId": "e2d8597c45c6e4a9e9d11291614803067455d3af",
"title": "O processo ETL em sistemas data warehouse"
},
{
"paperId": "de270dcf3a03059af50f6da3a89ff1ba5631eb04",
"title": "Does the technology acceptance model predict actual use? A systematic literature review"
},
{
"paperId": "ba35f5411c493c651d98e5afc2ad4864f45bf679",
"title": "Review Paper: The Impact of Mobile Handheld Technology on Hospital Physicians' Work Practices and Patient Care: A Systematic Review"
},
{
"paperId": "af44cfcd29dfb753a47aa597548668b9c3aeebb0",
"title": "Information technology sophistication in nursing homes."
},
{
"paperId": "259b6e2968c76044be23d4e617f0b78b688dc6e1",
"title": "Design science in the information systems discipline: an introduction to the special issue on design science research"
},
{
"paperId": "186e57ffadbe45a30de067b0259357cded8db968",
"title": "A Design Science Research Methodology for Information Systems Research"
},
{
"paperId": "8a1a51f1777bd2e461b70b4074130dacb459d972",
"title": "Consumer acceptance of online banking: an extension of the technology acceptance model"
},
{
"paperId": "ad0494f2a46b4272530a79f274bab28ae6269acc",
"title": "DataBase Systems: A Practical Approach to Design, Implementation and Management (4th Edition) (International Computer Science)"
},
{
"paperId": "0ee5a26a6dc64d3089c8f872bd550bf1eab7051d",
"title": "Design Science in Information Systems Research"
},
{
"paperId": "97042899c410e7097e6d8613f670a05f63ee777d",
"title": "MySQL reference manual - documentation from the source"
},
{
"paperId": "75f0adfa7cb41a90a02dc77604e8ef466a74effc",
"title": "Pervasive Business Intelligence Platform to Support the Decision-Making Process in Waiting Lists"
},
{
"paperId": "a3b5d562a50192b8d4ff7006a637cda40cad93ff",
"title": "Health care expenditures, age, proximity to death and morbidity: Implications for an ageing population."
},
{
"paperId": "16575f23ff879e6353a55bbfbbcc54e27606bfc5",
"title": "Big data analytics: Understanding its capabilities and potential benefits for healthcare organizations"
},
{
"paperId": "1a52cc590c048164a3877162242c8b2ea5147378",
"title": "Evaluating performance of a React Native feature set"
},
{
"paperId": "8e317e75b10427617264d5917ca979a7347b3323",
"title": "m-Health adoption by healthcare professionals: a systematic review"
},
{
"paperId": "8af32f8e56336c8b94604b8aeb7ccc3f42b21d60",
"title": "Effects on performance and usability for cross-platform application development using React Native"
},
{
"paperId": "db73ab29491d151e03074e2671c6ffcc7d82310f",
"title": "Intelligent decision support in Intensive Care : towards technology acceptance"
},
{
"paperId": "6d1b2672761bb697c55f775a07ca1cf30f56a1c7",
"title": "Why do people use information technology? A critical review of the technology acceptance model"
},
{
"paperId": null,
"title": "UNITED NATIONS DEPARTMENT OF ECONOMIC AND SOCIAL AFFAIRS"
},
{
"paperId": "1c1f5288338bc899cb664c0722d1997ec7cda32d",
"title": "International Journal of Information Management towards an Implementation Framework for Business Intelligence in Healthcare"
},
{
"paperId": null,
"title": "This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license"
},
{
"paperId": null,
"title": "PHP Web Services: APIs for the Modern Web"
},
{
"paperId": null,
"title": "Os Centros de Dia Atraem Cada Vez Menos Idosos, mas os Lares Estão Cheios"
}
] | 20,991
|
en
|
[
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Engineering",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/0010110e322b5ed622e9a57ff2aed1b092b3cf9e
|
[] | 0.863191
|
An Attribute-Based Access Control for IoT Using Blockchain and Smart Contracts
|
0010110e322b5ed622e9a57ff2aed1b092b3cf9e
|
Sustainability
|
[
{
"authorId": "35854526",
"name": "S. Zaidi"
},
{
"authorId": "35191617",
"name": "M. A. Shah"
},
{
"authorId": "31328150",
"name": "Hasan Ali Khattak"
},
{
"authorId": "152981613",
"name": "C. Maple"
},
{
"authorId": "1387447444",
"name": "Hafiz Tayyab Rauf"
},
{
"authorId": "1403052336",
"name": "Ahmed M. El-Sherbeeny"
}
] |
{
"alternate_issns": null,
"alternate_names": null,
"alternate_urls": [
"http://mdpi.com/journal/sustainability",
"http://nbn-resolving.de/urn/resolver.pl?urn=urn:nbn:ch:bel-172127"
],
"id": "8775599f-4f9a-45f0-900e-7f4de68e6843",
"issn": "2071-1050",
"name": "Sustainability",
"type": "journal",
"url": "http://www.e-helvetica.nb.admin.ch/directAccess?callnumber=bel-172127"
}
|
With the opportunities brought by the Internet of Things (IoT), it is quite a challenge to maintain concurrency and privacy when a huge number of resource-constrained distributed devices are involved. Blockchain has become popular for its benefits, including decentralization, persistence, immutability, auditability, and consensus. The construction of distributed file systems for the IoT has received great attention worldwide, and a new generation of IoT-based distributed file systems, such as Swarm and the Interplanetary File System, has been proposed with the integration of blockchain technology. With the use of the IoT, new technical challenges are arising, such as credibility, harmonization, large-volume data, heterogeneity, and constrained resources. Centralized access control technologies do not provide the credibility needed to ensure data security in the IoT. In this work, we propose an attribute-based access control model for the IoT in which the system does not require an access control list for each device, which makes access management more effective. Moreover, we use blockchain technology to record attributes, avoid data tampering, and eliminate single points of failure at edge computing devices. IoT devices control the user's environment as well as his or her private data collection; therefore, the exposure of the user's personal data to non-trusted private and public servers may result in privacy leakage. To automate the system, smart contracts are used for data access, whereas Proof of Authority is used to enhance the system's performance and optimize gas consumption. Through smart contracts, ciphertext can be stored on a blockchain by the data owner, and data can only be decrypted within a valid access period, whereas the trace function is achieved by storing the invocation and creation of smart contracts on the blockchain. Scalability issues can also be resolved by using a multichain blockchain. Eventually, it is concluded from the simulation results that the proposed system is efficient for the IoT.
|
# An Attribute-Based Access Control for IoT Using Blockchain and Smart Contracts
**Syed Yawar Abbas Zaidi** **[1]** **, Munam Ali Shah** **[1]** **, Hasan Ali Khattak** **[2,]*** **, Carsten Maple** **[3]** **,**
**Hafiz Tayyab Rauf** **[4]** **, Ahmed M. El-Sherbeeny** **[5]** **and Mohammed A. El-Meligy** **[5]**
1 Department of Computer Science, COMSATS University Islamabad, Islamabad 44500, Pakistan;
[email protected] (S.Y.A.Z.); [email protected] (M.A.S.)
2 School of Electrical Engineering and Computer Science, National University of Sciences and Technology
(NUST), Islamabad 44500, Pakistan
3 Secure Cyber Systems Research Group (SCSRG), University of Warwick, Coventry CV4 7AL, UK;
[email protected]
4 Department of Computer Science, Faculty of Engineering & Informatics, University of Bradford,
Bradford BD7 1DP, UK; [email protected]
5 Industrial Engineering Department, College of Engineering, King Saud University, P.O. Box 800,
Riyadh 11421, Saudi Arabia; [email protected] (A.M.E.-S.); [email protected] (M.A.E.-M.)
***** Correspondence: [email protected]
**Abstract:** With the opportunities brought by the Internet of Things (IoT), it is quite a challenge to maintain concurrency and privacy when a huge number of resource-constrained distributed devices are involved. Blockchain has become popular for its benefits, including decentralization, persistence, immutability, auditability, and consensus. The construction of distributed file systems for the IoT has received great attention worldwide, and a new generation of IoT-based distributed file systems, such as Swarm and the Interplanetary File System, has been proposed with the integration of blockchain technology. With the use of the IoT, new technical challenges are arising, such as credibility, harmonization, large-volume data, heterogeneity, and constrained resources. Centralized access control technologies do not provide the credibility needed to ensure data security in the IoT. In this work, we propose an attribute-based access control model for the IoT in which the system does not require an access control list for each device, which makes access management more effective. Moreover, we use blockchain technology to record attributes, avoid data tampering, and eliminate single points of failure at edge computing devices. IoT devices control the user's environment as well as his or her private data collection; therefore, the exposure of the user's personal data to non-trusted private and public servers may result in privacy leakage. To automate the system, smart contracts are used for data access, whereas Proof of Authority is used to enhance the system's performance and optimize gas consumption. Through smart contracts, ciphertext can be stored on a blockchain by the data owner, and data can only be decrypted within a valid access period, whereas the trace function is achieved by storing the invocation and creation of smart contracts on the blockchain. Scalability issues can also be resolved by using a multichain blockchain. Eventually, it is concluded from the simulation results that the proposed system is efficient for the IoT.
**Keywords: IoT; multichain; smart contract; interplanetary file system; access control**
**1. Introduction**
IoT has become one of the most promising technologies in industry and academia. Among the
aims of the IoT are enabling the sharing and collection of data anonymously from home appliances,
vehicles, and physical and intelligent devices. In 2017, more than 8.4 billion devices joined
this worldwide network, an increase of 31% over 2016 [1]. Meanwhile, Gartner [2] forecasts that this
number will reach 25 billion by 2021 and that, by 2023, the buying
and selling of IoT data will become an essential part of many IoT systems. With a large
number of devices involved, storage-related challenges also arise, and along with that, data
protection and large-scale efficient data storage are significant issues [3].
New challenges and security risks keep increasing with the growing number of
connected devices, as shown in Figure 1. Devices are becoming vulnerable to
privacy-threatening attacks launched by malicious users, and these attacks make it
difficult to fully control the widely distributed IoT devices. To control data
leakage from IoT devices, an authorized access mechanism is needed to protect sensitive
and valuable information [4].
Technology for sharing users' data between enterprises is advancing rapidly, and data
sharing applications are improving user experiences in terms of functionality. Approaches based
on standard security techniques for sharing user data without using any trusted authority have
been addressed by Sherstha et al. [5]. The questions regarding what type of data is shared,
and when and with whom, have been discussed by Meadows et al. [6], and data sharing with increasing
incentives is a matter of intense research. For personal data storage, certain privacy and security
issues, such as data theft and breaches, are present. Under a centralized authority, the deletion of
user data and the failure to deliver users' data are major problems [7].
Various technologies for collecting and sharing user data have been deployed using cloud
computing, federated learning [8], and RFID (Radio Frequency Identification). Under strong privacy
legislation, e.g., the GDPR, the data owner's consent must be obtained, and consent for data sharing
and its use must be renewed, which calls for meaningful incentives [9].
An access control system is one of the most important and useful technologies for providing
effective protection against unauthorized access. Discretionary access control (DAC), known as
traditional access control, and identity-based access control (IBAC) both fail to provide an
appropriate basis for implementing access control in IoT systems, since building an access control
list for every unknown identity in an IoT system is almost impossible. Mandatory access control
(MAC) is another technique, but it suffers from a single point of failure due to its reliance on a
central administrator [10].
**Figure 1. IoT security and privacy requirements.**
_1.1. Attribute-Based Access Control (ABAC)_
Attribute-based access control (ABAC) provides a new type of dynamic, fine-grained, and flexible
access control, in which attribute authorities map identities or roles to sets of attributes;
therefore, maintaining a separate access control list for every entity present in the system is not
required. It effectively simplifies access management because the number of attributes is smaller
than the number of users in the system [11].
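As a minimal illustration of this model, the sketch below evaluates a boolean attribute policy against a requester's attribute set; the attribute names, policy encoding, and helper function are hypothetical, not the scheme proposed in this paper.

```python
# Minimal conceptual sketch of an ABAC decision (illustrative only; the
# attribute names and policy format are hypothetical assumptions).

def is_satisfied(policy: dict, attributes: set) -> bool:
    """A policy is a boolean tree of attribute requirements."""
    op = policy["op"]
    if op == "attr":
        return policy["name"] in attributes
    children = [is_satisfied(p, attributes) for p in policy["children"]]
    return all(children) if op == "and" else any(children)

# Example: access requires (doctor AND cardiology) OR admin.
policy = {
    "op": "or",
    "children": [
        {"op": "and", "children": [
            {"op": "attr", "name": "role:doctor"},
            {"op": "attr", "name": "dept:cardiology"},
        ]},
        {"op": "attr", "name": "role:admin"},
    ],
}

print(is_satisfied(policy, {"role:doctor", "dept:cardiology"}))  # True
print(is_satisfied(policy, {"role:doctor", "dept:oncology"}))    # False
```

Because a single policy covers all users who hold the required attributes, no per-entity access control list is needed, which is the simplification the paragraph above describes.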
The costs associated with storage devices have been decreasing due to advances in storage
technology, while, compared to blockchains, the cost of centralized cloud storage services is
gradually increasing. From this point of view, the future requires a decentralized storage system,
independent of third-party interference, that honestly stores and transmits users' data. After the
advent of Bitcoin, its underlying blockchain technology has provided a kind of decentralized storage
facility [12,13]. The implementation of distributed file systems is expected to become a promising
research field because of peer-to-peer systems such as Napster [14], Morpheus [15], Gnutella [16],
and Kazaa [17]; Bitcoin [18] is one of the most popular P2P network systems and supports up to
100 million users. Blockchain is a hot topic for the business community and technology giants [19].
In such a network, system clients and storage resources are dispersed to form a distributed file
system in which every user is both a consumer and a creator of stored data.
The expectation of ensuring trust and reducing overhead for IoT systems [20,21]
has made the combination of blockchain technology with the IoT a promising trend,
through which a publicly verifiable, decentralized, and credible database can be established,
and a distributed trust among billions of connected things can also be achieved. In our daily
lives, the involvement of electronic devices is increasing day by day; a coffee machine
automatically placing a repair order, the identification of parking lot usage,
and the detection of rubbish bin fullness are all everyday examples [22].
_1.2. Paper Contributions_
In our proposed work, we propose a blockchain-based architecture similar to the
one proposed in [23] for enhancing IoT security and privacy and for overcoming the
authentication and access control issues present in existing IoT systems. Moreover, the main
contributions are as follows:
- We propose a blockchain-based network for reliable data sharing between resource-constrained IoT devices.
- To store the huge amount of data generated by IoT devices, a distributed file system, i.e., IPFS or Swarm, is used.
- Proof of Authority (PoA) is used instead of Proof of Work (PoW), which increases throughput and reduces system latency.
- A smart-contract-based access control mechanism is implemented to share data securely.
- Through smart contracts, the data ciphertext can be stored in the blockchain by the data owner.
- Data can only be decrypted within a valid access period given by the data owner (see the sketch after this list).
- In blockchains, the trace function is achieved by storing the invocation and creation of smart contracts.
- To validate the effectiveness of CP-ABE and the access model, extensive simulations are performed in pylab; the performance parameters are total cost consumption and CPU utilization.
- To resolve scalability issues, different kinds of blockchains are used for data storing and data sharing.
- The simulation results show that our proposed scheme significantly reduces the execution and transaction costs as well as the transaction verification time in a blockchain.
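To make the access-period idea above concrete, the following is a minimal sketch of time-bound decryption, assuming symmetric encryption with Fernet from the `cryptography` package and modeling the on-chain policy record as a plain dictionary; it illustrates the concept only and is not the CP-ABE construction used in this work.

```python
# Minimal sketch of time-bound data access (assumption: symmetric
# encryption stands in for the paper's attribute-based scheme).
import time
from cryptography.fernet import Fernet

key = Fernet.generate_key()
f = Fernet(key)

# The data owner encrypts a record and publishes the ciphertext with an
# access window (hypothetical stand-in for on-chain metadata).
record = {
    "ciphertext": f.encrypt(b"blood pressure: 120/80"),
    "not_before": time.time(),
    "not_after": time.time() + 3600,  # valid for one hour
}

def read_record(record, fernet):
    """Decrypt only inside the owner-defined validity window."""
    now = time.time()
    if not (record["not_before"] <= now <= record["not_after"]):
        raise PermissionError("outside the valid access period")
    return fernet.decrypt(record["ciphertext"])

print(read_record(record, f))  # b'blood pressure: 120/80'
```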
The rest of the paper is organized as follows: Background information and the motivation behind the study are provided in Section 2. Preliminaries are discussed in Section 3.
Section 4 presents the literature review, whereas the system model and proposed methodology
are demonstrated in Section 5. Section 6 gives a description of our policy model.
The attacker model, security assumptions, and security features of the proposed model
are considered in Section 7. In Section 8, implementations related to the performance
evaluation are provided, and finally, future work and conclusions are provided in
Section 9.
**2. Background and Motivation**
Over the past few years, the efforts toward and interest in using sensors and devices in our
daily life have been increasing. The development of smart and socially skilled objects is
also increasing, which revolutionizes IoT [24] aspects such as social interaction modeling
research and human management investigations. To address these aspects, many architectures
have been proposed by researchers; the three latest are the social IoT (SIoT) [25], the multiple
IoT [26], and the multiple IoT environment [27]. The evolution of these architectures has caused
severe privacy and security issues, and to address them, different solutions have been proposed
in the last decade in terms of access control [27,28], intrusion detection [29,30], and privacy [31].
IoT’s privacy and security with interconnected internet cause particular challenges
in areas of the computing network. It means that at every moment, from everywhere, an
attack can be created on the internet resources. As a result, numerous threats, such as
denial of service, fabrication of identity, physical threats, communication channel targeting,
and many more, have emerged. The biggest challenge in this research field is power
resource consumption and computational overheads on IoT devices. Many solutions
have been proposed by researchers, where strategies based on blockchain, homomorphic
encryption with data collecting objects, and attribute-based encryption for achieving
integrity, are provided [32].
IoT devices play a huge role in different aspects of life, e.g., security, energy, safety,
healthcare, smart grids, VANETs, industry, and entertainment, and can directly impact the quality
of life. However, in terms of battery power, network protocols, high-level computation,
and their infrequent connectivity, they have fundamentally constrained resources. Due
to these constraints, sustaining user privacy impacts the applicability of advanced
technology. There is also a huge risk in interconnecting devices on the internet without
implementing any standard security scheme, from which security concerns,
such as data misuse, arise [33].
IoT devices collect users' personal information, such as their identity, contact number,
energy consumption, and location, and the leakage of such data is more dangerous than simple
security threats. These devices reveal information about users' daily activities (e.g., watching
movies, playing, home activities, and gatherings).
Recently, interest and efforts in IoT security have been growing. The IoT can offer
a variety of services, whether safety or non-safety applications. The most
important objective of enhanced safety in the IoT is to improve the user's security by providing
location privacy in a comfortable environment. From a non-safety perspective, many
applications and services, such as internet access, geo-location information, weather
forecasts for the user's convenience, and infotainment, are considered non-safety
services [34,35]. However, in terms of power consumption, network connectivity,
high-level computation, and their infrequent connectivity, IoT devices have fundamentally
constrained resources [36,37].
Due to these constraints, sustaining user privacy may impact the applicability of
advanced technologies [38]. The huge risk of interconnecting devices on the internet without
any standard security scheme is data misuse [39,40]. The challenging task for
researchers in this domain is the power resource consumption and computational
overheads of IoT devices [41,42]. Many solutions to these challenges have been
proposed [43,44]; however, solutions based on blockchains, homomorphic encryption with
data-collecting objects, and attribute-based encryption for achieving integrity are dominant.
We address the user transparency, security, privacy, and data sharing incentive issues by
proposing a new smart-contract-based technique that relies on data sharing and user-controlled
privacy policies [32].
_2.1. Existing Access Control IoT Architectures and Related Challenges_
The integration of physical objects in a constrained environment requires the application of
lightweight security mechanisms. However, solutions designed around current access control
and security standards do not meet the requirements of these nascent ecosystems. Lightness,
interoperability, end-to-end security, and scalability issues have recently attracted researchers'
attention. Existing IoT architectures are outlined below.
2.1.1. Centralized Architecture
This approach involves a trusted third party that provides outsourced access control
operations. The devices are managed by a gateway or back-end server known as the Policy
Decision Point (PDP). The server analyzes access requests against stored access policies,
as shown in Figure 2.
**Figure 2. Central vs. blockchain architectures.**
To access the end device’s data, the requesters should ask to pass by those trusted
third parties. This architecture relieves the processing burden of constrained IoT devices
(actuators, sensors, etc.). However, major disadvantages are seen in the context of IoT
architecture. By the use of a trusted third party, its end-to-end security drops. In the
decision-making process, the IoT devices role is strictly limited. The authorization requests
of users and resource owner (RO) access control policies are revealed by the trusted third
party. The privacy of the resource requester or owner is corrupted due to these conditions.
2.1.2. Trust Entity with Decentralized Architecture
In this approach, IoT devices partially participate in access control decisions. Contextual
information gathered by the IoT devices from the surrounding environment (e.g., power level,
location) is sent to a trusted third party. The trusted third party makes decisions based on
the access control requests, pre-defined policies, and the contextual information collected by
the smart objects, as shown in Figure 3. Additional security measures are required to transfer
this information over a secure communication channel between the end devices and the trusted
third party. This approach is not suitable for real-time scenarios, such as healthcare, because
of the delay introduced by transferring contextual information; thus, it does not support
real-time access decisions. The privacy of the requester and data owner
is also not considered.
**Figure 3. Existing access control architectures.**
2.1.3. Distributed Architecture
On the device side, access control decisions are processed in a distributed
manner. Due to the absence of a trusted third party, it shows impressive advantages
regarding the privacy of the requester and resource owner. With its edge intelligence principle,
end users obtain more power in defining their own policies and access control decisions,
and real-time smart access control decisions are also possible. Managing the data generated
by IoT devices is also less expensive, because a cloud back-end is not provided for
each device. The devices transmit information only when necessary, and the achievement of
end-to-end security makes this approach more secure than the previous ones.
_2.2. Issues Faced by the Present Architectures_
As shown in Figure 4, cloud-based servers, which have large storage capacities and
processing power, are connected with trusted entities under either decentralized
or centralized approaches. Authentication and identification techniques for IoT devices are
discussed in [45] and are useful for small-scale IoT networks. However, they are not suitable
for large IoT networks, for the following reasons [46,47].
- Cost: IoT solutions are expensive for two main reasons:
- Infrastructure cost: Billions of connected IoT devices generate and store a huge amount
of data, and servers are required for their interconnected communication and analytical processing.
- High maintenance: Updating the software of millions of IoT devices in a centralized
cloud architecture, along with extensive network equipment, requires a high
maintenance cost.
- Scalability: The huge amount of data generated and processed by IoT devices (big
data) creates a bottleneck when scaling centralized IoT architectures, whose application
platforms must handle data acquisition, transmission, and storage.
- Single point of failure: In critical healthcare systems, it is very important to collect
data in a timely manner; however, with cloud servers, a single point of failure may cause
the whole network to shut down.
- Lack of transparency: A transparent security architecture needs to be developed because
of the irrefutable lack of trust in service providers that collect data from millions of IoT
devices in centralized models.
- Insufficient security: The huge number of insecure connected devices on the internet is
a major challenge for IoT privacy and security, as shown by recent DoS attacks [48].
**Figure 4. Centralized vs. decentralized networks.**
**3. Preliminaries**
_3.1. Blockchain_
In its simplest form, a blockchain is a distributed and decentralized ledger. Blockchain
is a distributed-ledger technology initially developed for cryptocurrencies such
as Bitcoin. Satoshi Nakamoto introduced blockchain technology in 2008, and it has gained
attention over the years for its decentralized data sharing and distributed computing
network [49].
A blockchain consists of three main components: nodes, miners, and blocks, as shown
in Figure 5. Each block contains a nonce, a hash, and data, but there is no fixed
block limit. To secure the blockchain transactions, the nonce is combined with the data to
compute the hash. A block is added after the mining process, in which miners solve a complex
mathematical problem to find the nonce. Hacking the blockchain requires high computational
power, which is difficult for attackers to obtain, and due to its distributed nature, the chain
becomes more and more secure as the number of blocks increases. The genesis
block is the first block of every blockchain. Through the consensus mechanism, blocks are
added to the blockchain network with the approval of the majority of nodes.
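As a minimal illustration of the mining process just described, the sketch below searches for a nonce whose hash over the previous hash, the block data, and the nonce itself starts with a run of zeros; the block fields and the difficulty value are illustrative assumptions, not the parameters of any real chain.

```python
# Minimal proof-of-work sketch: combine the previous hash, block data,
# and a nonce, and search for a hash below a difficulty target.
import hashlib

def mine(prev_hash: str, data: str, difficulty: int = 4) -> tuple[int, str]:
    """Find a nonce whose SHA-256 digest starts with `difficulty` zero hex digits."""
    nonce = 0
    target = "0" * difficulty
    while True:
        digest = hashlib.sha256(f"{prev_hash}{data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

# Genesis block, then one mined block chained to it via the previous hash.
genesis_nonce, genesis_hash = mine(prev_hash="0" * 64, data="genesis")
nonce, block_hash = mine(prev_hash=genesis_hash, data="tx1;tx2")
print(nonce, block_hash)
```

Raising `difficulty` increases the expected number of hash attempts exponentially, which is why hacking a long chain requires the high computational power mentioned above.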
_3.2. Multichain_
Multichain is a platform for the creation and deployment of private blockchains
between organizations. It aims to overcome the control and privacy obstacles present in
the deployment of blockchain structures. For easy integration with existing systems, it works
with Windows and UNIX servers through a simple command line
and a simple API interface.
**Figure 5. The structure of a blockchain.**
A multichain’s three main objectives to solve the problems of openness via the integrated management of user permissions, privacy, and mining are:
- To permit the selected transactions only;
- To permit the selected participants to see the blockchain’s activities;
- To conduct mining securely and without the associated costs of proof of work.
To resolve the scalability issues, multichain allows the users to set all the parameters
and the maximum block size of the blockchain in a configuration file [50]. Because the
blockchain contains the participant’s selected transactions that are of interest, it contains
hash up to 1 GB of off-chain data with auto delivery in the peer-to-peer network. The
genesis block’s miner can automatically receive administrative privileges, including the
management of other users and their accessing permissions.
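The sketch below models this permissioning idea in plain Python: the genesis miner holds admin rights and grants connect/send/receive permissions to other participants. It is a conceptual illustration of MultiChain-style permission management under assumed data structures, not the MultiChain API itself.

```python
# Conceptual model of multichain-style permissioning: only an admin (here,
# the genesis miner) may grant permissions to other addresses. Addresses
# and permission sets are illustrative assumptions.
ADMIN = "1GenesisMinerAddress"
permissions = {ADMIN: {"admin", "mine", "connect", "send", "receive"}}

def grant(granter: str, address: str, perms: set[str]) -> None:
    """Grant permissions, enforcing that the granter holds admin rights."""
    if "admin" not in permissions.get(granter, set()):
        raise PermissionError("only an admin may grant permissions")
    permissions.setdefault(address, set()).update(perms)

def can(address: str, perm: str) -> bool:
    return perm in permissions.get(address, set())

grant(ADMIN, "1NewParticipantAddress", {"connect", "send", "receive"})
print(can("1NewParticipantAddress", "send"))  # True
print(can("1NewParticipantAddress", "mine"))  # False
```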
_3.3. Smart Contracts_
Computer programs and code that can execute autonomously are known as smart
contracts. In a public blockchain network, all participating nodes have the privilege of
deploying a smart contract without any specific requirements; for this functionality,
the network participants pay a certain fee and agree on explicit conditions. In Ethereum,
the Solidity language is used for creating the contracts, Metamask [51] is used for ID
creation, and Remix IDE [52] is used for online demonstration and application
results. The banking, supply chain, IoT, and insurance industries are deploying permissioned
smart contracts. A smart contract is also considered an agreement or consensus between
two parties. Users cannot alter or delete a smart contract once it is published on
the blockchain network, and no central authority is needed for the validation of
tasks; the results computed by the vehicles and nodes are free from interference from
outside the network. Through smart contracts, mobility services and smart transportation
are implemented on IPFS, defined by Benet et al. [53], in which an infrastructure based
on distributed ledger technology (DLT) with distributed data management technologies
has been used for data sharing and smart services. On IPFS, an Ethereum smart contract
and an IOTA-based architecture for authenticity have been proposed by Zichichi et al. [54],
in which the coordination of entities, access authorization, and user privacy have been
achieved; zero-knowledge proofs were used to provide privacy, together with a proof-of-location
guarantee. The rules stored by a smart contract include the following.
- The negotiation of terms;
- Automatic verification;
- The execution of agreed terms.
A smart contract consists of different kinds of functions, which might be invoked from
other smart contracts or from outside the blockchain. The combination of smart contracts and
blockchain technology can remove the transaction parties' reliance on a central system.
All the parties present in the blockchain network have a copy of the stored
smart contracts, and the execution of the agreed terms present in a smart contract is triggered
by an authorized event. An audit trail of every transaction's events is stored, and all the
parties present in the network can detect changes to a transaction or contract. Therefore,
it creates a large, secure system without the trust, cost, and risk issues of a
centralized model.
The smart contracts are written in the Solidity programming language due to
its lightweight coding conventions. Each operation in the contracts is represented in
Ethereum Virtual Machine code. The message data, together with an amount in Wei, are sent
in the transaction, and a byte array is returned as output. The Truffle framework is used for
testing and deploying Ethereum-based smart contracts.
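For illustration, the following hedged sketch shows how a client might call a deployed access-control contract from Python using the web3.py library; the node URL, contract address, ABI, and function names (`grantAccess`, `hasAccess`) are hypothetical placeholders rather than the contract developed in this work.

```python
# Hedged sketch: interacting with a hypothetical access-control smart
# contract via web3.py. Assumes a local Ethereum node with unlocked
# accounts and an already deployed contract; the ABI is a placeholder.
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("http://127.0.0.1:8545"))

CONTRACT_ADDRESS = "0x0000000000000000000000000000000000000000"  # placeholder
ABI = [
    {"name": "grantAccess", "type": "function",
     "inputs": [{"name": "user", "type": "address"}],
     "outputs": [], "stateMutability": "nonpayable"},
    {"name": "hasAccess", "type": "function",
     "inputs": [{"name": "user", "type": "address"}],
     "outputs": [{"name": "", "type": "bool"}], "stateMutability": "view"},
]

contract = w3.eth.contract(address=CONTRACT_ADDRESS, abi=ABI)
owner, requester = w3.eth.accounts[0], w3.eth.accounts[1]

# The data owner grants access on-chain; the requester's right is then read.
tx_hash = contract.functions.grantAccess(requester).transact({"from": owner})
w3.eth.wait_for_transaction_receipt(tx_hash)
print(contract.functions.hasAccess(requester).call())
```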
**4. Related Work**
With the significant growth in the number of IoT devices, it has become a challenge to
store IoT data and an even bigger challenge to protect those data from unauthorized access
and harm. Another issue is trust, since centralized servers are not always honest. These issues
are addressed in [55], where, to remove central servers from the system, the authors
used blockchain and certificateless cryptography for storing and protecting the data.
Edge computing has been used for data storage management, and an un-validated IoT
framework has been presented [22].
_4.1. Ethereum-Based Existing Access Control Schemes_
In [56], a scheme is proposed for data storage and sharing using Ethereum blockchains
and attribute-based encryption, with a keyword search utility provided through a smart
contract. An attribute-based access control mechanism is designed in [57] for IoT to simplify
access management; a blockchain is deployed to avoid a single point of failure and the loss of
integrity, and the access control mechanism supports low-cost computation across IoT systems.
A scheme providing availability and a keyword search over blockchains is proposed
in [66]. This keyword search function differs from that of [56] in that permission for the
keyword search is granted by the data owner.
An attribute-based encryption scheme for encryption, key generation, and decryption
with verified outsourcing is proposed by Wang et al. [58]. The ciphertext complexity and
size increase with the number of attributes in the access policy. The scheme successfully
reduces execution time but suffers from a high communication cost because the computationally
expensive operations are performed by an encryption proxy server.
Many access control solutions designed for IoT follow a centralized model [59–61].
Adopting a centralized system raises many issues, such as low scalability, no transparency
of user information, and a lack of built-in interoperability. Access to a distant centralized
server usually requires connectivity, and access control decisions are moved away from
the edge nodes. Many of these issues are resolved by the decentralized approaches
presented in Table 1, while Table 2 lists recent proposals for decentralized access control
systems in IoT that use blockchain technology.
**Table 1. Existing blockchain techniques.**

| Ref | Technology Used | Contributions | Addressed Problems |
|---|---|---|---|
| [15] | IoT and blockchain | Blockchain-based simple mechanism for databases in IoT applications | Database |
| [62] | IoT, smart contract, and blockchain | A blockchain, smart contract, and IoT combination is used for identifying solutions | Complex process automation |
| [63] | Blockchain and edge/fog computing | Edge/fog working relationship with blockchain | Blockchain-enabled fog applications |
| [64] | IoT and blockchain | Traceability and revocability in IIoT with a blockchain-based access control system | Malicious user tracking and revocation |
| [65] | IoT, smart contract, and blockchain | Web interface for controlling entities' information with smart contracts | Identity, interoperability, and security of IoT |
**Table 2. Overview of existing literature.**

| Ref | Author | Description of Research | Techniques | Contributions | Evaluation Criteria | Limitations |
|---|---|---|---|---|---|---|
| [55] | Li et al. 2018 | Blockchain for large-scale IoT data storage and protection | Distributed hash tables (DHTs) and edge computing | Security, accountability, and traceability | Transaction verification and distributed data storage | User authentication not provided; will not work with a complicated access control scheme |
| [56] | Wang et al. 2018 | Blockchain-based fine-grained decentralized storage scheme | IPFS, Ethereum, and attribute-based encryption (ABE) technologies | Secure access control policies achieved; keyword search function; solves wrong results in traditional cloud storage | IoT authentication and attribute-based AC | Attribute revocation is not considered |
| [57] | Ding et al. 2019 | Blockchain-based access control scheme | Elliptic curve digital signature algorithm and AKA protocol | Scalability, robustness, IoT consensus independence, low computation and communication overhead | Security authentication in the decentralized AC | No real-time scenario is considered |
| [66] | Do et al. 2017 | Blockchain-based private keyword searching scheme | Proof of storage and distributed encrypted data storage | Data integrity and enforcing proof-of-retrievability off-chain | Anonymous access control and Boolean keywords | Outsourced data storage; does not support credential revocation |
| [67] | Zhang et al. 2018 | Blockchain/cloud-based data storage scheme | Cloud and Hyperledger Fabric | Identity management, fine-grained access control, scalability, and distant access | Data chain and behavior chain permission levels | Single-system restriction and authentication |
| [68] | Steichen et al. 2018 | Blockchain-based decentralized access control for IPFS | Smart contract, IPFS, and Ethereum | Sharing of large sensitive files | Fixed gas amount | Authentication of nodes; more time-consuming |
| [69] | Sifah et al. 2018 | Chain-based big data access control infrastructure | ECDSA and PoC | Off-chain sovereign blockchain | Security, data mismanagement, and execution time | Inefficient in industries |
| [70] | Zhang et al. 2018 | Smart-contract-based access control for IoT | Ethereum smart contract platform | Distributed and trustworthy access control for IoT systems | Gas price and timing | Overhead and capital cost |
_4.2. Cipher-Text-Policy-Based Attribute-Based Encryption (CP-ABE) Schemes_
A CP-ABE-based outsourced ABE scheme was proposed by Nguyen et al. [71]. In this
scheme, the user only specifies the access policy before passing it to the delegatee (DG),
the key generation center is responsible for generating the delegation key, and the delegatee
encrypts the data under the access policy.
By storing pre-computed tuples, the authors in [72] speed up the encryption process.
The number of tuples created during pre-computation is directly proportional to the
number of attributes in the access policy, so extra memory is required and pre-computation
must be re-run after modifying the access policy. The size of the ciphertext also increases
with the number of attributes.
Hiding the access policy in CP-ABE is an active research area. CP-ABE supports many
kinds of access policies, such as tree-based [73], threshold-based [74], AND-gate-based [75],
and linear secret-sharing scheme (LSSS) matrix [76] policies. Access policy hiding in CP-ABE
was first introduced by Nishide et al. [75], using multi-valued AND gates, which have a
limited range of expression. Schemes based on AND gates have been proposed in [77,78]
to reduce the ciphertext size while hiding the access policy.
Sarhan and Carr proposed a distributed cryptographic agent-based secure multiparty
computation (ADB-SMC) access control, in which secure multiparty computation and active
data bundles are combined with ABE. Instead of a blockchain infrastructure, distributed
hash tables are used, which affects the infrastructure costs but does not reduce the
communication and computation overheads.
Cipher-text-policy-based attribute-based encryption (CP-ABE) enforces policies in an
encrypted form, which is useful for sensitive information. Most existing CP-ABE schemes
generate large ciphertexts and secret keys: ciphertext and key sizes are linear in the number
of involved attributes, and the number of bilinear pairings is directly proportional to the
attribute count. Because ABE relies on bilinear pairings, its use is challenging for IoT
devices, whose storage and computation capacities are small. Our work uses CP-ABE
with timely and minimal bilinear pairings to keep the access control computation affordable.
Therefore, a comparison with other access control schemes, illustrating the features of our
model, is presented in Table 3.
**Table 3. Comparison with other models.**

| Ref No | Blockchain | Scalability | Adaptability | Cost-Effective | Privacy Efficiency | Access Period |
|---|---|---|---|---|---|---|
| [79] | - | x | - | - | - | x |
| [70] | - | x | - | - | - | x |
| [71] | - | - | - | x | x | |
| [55] | - | x | x | - | x | x |
| our model | - | - | - | - | - | |
**5. System Model and Proposed Methodology**
In our solution, we propose an attribute-based access control mechanism for IoT
devices. By using blockchain technology with cipher-text-policy-based attribute-based
encryption (CP-ABE), we avoid data tampering and eliminate the single point of failure.
For lightweight authentication and high efficiency in IoT, we optimize the access control
process by creating smart contracts. We use two kinds of blockchain networks: a public
blockchain for authenticating the IoT devices and attribute servers and for storing the
user-defined policies, as shown in Figure 6; conversely, in the consortium blockchain, the
hashes of transactions are stored after the validation of users and devices.
A typical IoT scenario is depicted in Figure 6. In an IoT system, three entities are
involved: IoT devices, attribute servers, and the gateway. Devices such as mobile phones,
computers, and smartwatches can easily use a direct wired or WiFi connection. Conversely,
certain lightweight devices require a dedicated gateway. The registration server is
responsible for the collection and authorization of IoT devices and users. After server
authentication, numerous data access requests and exchanges can be performed by
these entities.
In our blockchain network, each node has its own account through which trade
transactions are performed. A pair of public and private keys is assigned by the registration
server for signing and addressing transactions, which proves the identity of the user and
cannot be altered by any entity. Smart contracts and transactions are recorded at unique
addresses on the distributed blockchain. Therefore, every interaction of a user is treated
as a transaction and recorded in the blockchain, which provides transparent user access
and traceability. To resolve scalability issues, multiple blockchains are used for generating
transactions and deploying smart contracts.

Access control management and authorization are provided through smart contracts.
Every device requires its own credentials, and a user may own one or more IoT devices,
so each device would otherwise have to be authenticated individually. This would create
an authentication overhead; instead, using smart contracts, users together with their
devices can be registered in a public Ethereum blockchain. Through a wallet address, the
user and its devices can be verified by the attribute authority, after which transactions are
performed. With access control smart contracts, a single authorization server can be
replaced by a distributed authorization server.
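A minimal Solidity sketch of this registration idea follows; the contract and function names (DeviceRegistry, registerUser, registerDevice) are illustrative assumptions, not the paper's ABIs:

```solidity
// SPDX-License-Identifier: MIT
// Sketch: a user registers once with a wallet address and then binds several
// devices to it, so each device does not need a separate authentication round.
pragma solidity ^0.8.0;

contract DeviceRegistry {
    mapping(address => bool) public registeredUsers;
    mapping(bytes32 => address) public deviceOwner; // device ID -> wallet

    event UserRegistered(address user);
    event DeviceRegistered(address indexed user, bytes32 deviceId);

    function registerUser() external {
        registeredUsers[msg.sender] = true;
        emit UserRegistered(msg.sender);
    }

    function registerDevice(bytes32 deviceId) external {
        require(registeredUsers[msg.sender], "register user first");
        require(deviceOwner[deviceId] == address(0), "device taken");
        deviceOwner[deviceId] = msg.sender;
        emit DeviceRegistered(msg.sender, deviceId);
    }

    // The attribute authority verifies a device through its owner's wallet.
    function isVerified(bytes32 deviceId) external view returns (bool) {
        return registeredUsers[deviceOwner[deviceId]];
    }
}
```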
[Figure 6 depicts the registration server, the public and consortium blockchains, and the attribute servers that verify devices after registration through the public blockchain.]

**Figure 6. The system model of our proposed architecture.**
_5.1. Smart Contract System_
The mechanism consists of four smart contracts implemented on the Ethereum
blockchain, as shown in Figure 7. The access control contract (ACC) relies on the object
attribute management contract (OAMC) and the subject attribute management contract
(SAMC), whereas the policy management contract (PMC) holds the policies of each subject
and object with their specified actions. The addition and deletion of attributes is handled
through the ACC and PMC.
**Figure 7. Building blocks of the access control mechanism.**
_5.2. Access Control Contract (ACC)_
In IoT systems, requests from subjects to objects are controlled by the ACC. A subject
executes the ACC by sending the required request information in a transaction. After
successful authentication by the PMC, OAMC, and SAMC, the ACC retrieves the subject
and object attributes together with the relevant policy information and verifies the result.
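The following hedged Solidity sketch shows this check flow; the interfaces and function names (ISAMC, IOAMC, IPMC, accessControl, attributesOf, isAllowed) are our simplifications, and attribute sets are reduced to single hashes for brevity:

```solidity
// SPDX-License-Identifier: MIT
// Sketch of the ACC flow: retrieve subject/object attributes, ask the PMC.
pragma solidity ^0.8.0;

interface ISAMC { function attributesOf(address subject) external view returns (bytes32); }
interface IOAMC { function attributesOf(address object)  external view returns (bytes32); }
interface IPMC  { function isAllowed(bytes32 subjAttrs, bytes32 objAttrs, bytes32 action)
                      external view returns (bool); }

contract ACC {
    ISAMC public samc;
    IOAMC public oamc;
    IPMC  public pmc;

    event AccessResult(address subject, address object, bytes32 action, bool granted);

    constructor(ISAMC _samc, IOAMC _oamc, IPMC _pmc) {
        samc = _samc; oamc = _oamc; pmc = _pmc;
    }

    // The subject executes the ACC by sending its request as a transaction;
    // the ACC retrieves both attribute sets and asks the PMC for a verdict.
    function accessControl(address object, bytes32 action) external returns (bool granted) {
        bytes32 subjAttrs = samc.attributesOf(msg.sender);
        bytes32 objAttrs  = oamc.attributesOf(object);
        granted = pmc.isAllowed(subjAttrs, objAttrs, action);
        emit AccessResult(msg.sender, object, action, granted);
    }
}
```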
_5.3. Subject Attribute Management Contract (SAMC)_
The SAMC is deployed for the management and storage of the IoT system's subject
attributes. Only subject administrators have the authority to execute this smart contract;
for example, the administrators are the owners in the case of IoT devices, whereas for
citizens, the city office acts as the administrator. Each subject is represented by a unique
identifier in the system; in our paper, we use an Ethereum account as the ID of a subject.
Multiple attributes are associated with each subject ID, as shown in Figure 7. In addition,
deleting and updating subject attributes are also handled by the SAMC.
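A minimal SAMC sketch under these assumptions follows; subjectAdd() and subjectDelete() are our illustrative names (the OAMC in the next subsection mirrors this contract for objects through its objectadd()/objectdelete() ABIs):

```solidity
// SPDX-License-Identifier: MIT
// Sketch: the subject administrator maps each Ethereum account (the subject
// ID) to a set of name/value attributes, and can add, update, or delete them.
pragma solidity ^0.8.0;

contract SAMC {
    address public admin; // e.g., device owner, or the city office for citizens
    mapping(address => mapping(string => string)) public attrs; // subject -> name -> value

    constructor() { admin = msg.sender; }

    modifier onlyAdmin() { require(msg.sender == admin, "admin only"); _; }

    function subjectAdd(address subject, string calldata name, string calldata value)
        external onlyAdmin
    {
        attrs[subject][name] = value; // add or update an attribute
    }

    function subjectDelete(address subject, string calldata name) external onlyAdmin {
        delete attrs[subject][name];
    }
}
```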
_5.4. Object Attribute Management Contract (OAMC)_
Object administrators manage and store the attributes of an object with the execution of
the OAMC. Multiple attributes are associated with uniquely identified Ethereum accounts.
Table 4 shows the attributes involved in our model.
In addition, deleting and updating object attributes can also be handled by the OAMC
through the application binary interfaces (ABIs) of objectdelete() and objectadd().
_5.5. Policy Management Contract (PMC)_
Attribute-based access control policies are managed and executed by the policy
management contract (PMC). Only policy administrators have the authority to execute
the policies. A policy is a combination of subject and object attributes with their specified
actions, as shown in Table 5 (a minimal PMC sketch is given after the table). For example,
if the subject attributes are Department=B:Organization=C and the object attributes are
Department=B:Organization=C, then Policy=Read only states that the user has read
access only.
**Table 4. Subject and Object Attributes.**

| Subject Attributes | Object Attributes |
|---|---|
| Name | Name |
| Dept | Dept |
| Org | Org |
| Role | Place |
| Others | Others |
**Table 5. Subject and Object Attributes with Actions.**

| Subject Attributes | Object Attributes | Actions |
|---|---|---|
| Name | Name | Read: True |
| Dept=IS | Dept=IS | Write: False |
| Org=COMSATS | Org=COMSATS | Execute: False |
| Role | Place | |
| Others | Others | |
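The PMC sketch below stores policies of the Table 5 form; the contract layout and the names policyAdd() and canRead() are our illustrative assumptions:

```solidity
// SPDX-License-Identifier: MIT
// Sketch: a policy pairs subject- and object-attribute strings with
// read/write/execute flags, keyed by the hash of the attribute pair.
pragma solidity ^0.8.0;

contract PMC {
    struct Policy { bool read; bool write; bool exec; bool exists; }

    address public policyAdmin;
    mapping(bytes32 => Policy) public policies;

    constructor() { policyAdmin = msg.sender; }

    function policyAdd(string calldata subjAttrs, string calldata objAttrs,
                       bool read, bool write, bool exec) external {
        require(msg.sender == policyAdmin, "admin only");
        policies[keccak256(abi.encodePacked(subjAttrs, objAttrs))] =
            Policy(read, write, exec, true);
    }

    // e.g., subjAttrs = "Dept=IS:Org=COMSATS", objAttrs = "Dept=IS:Org=COMSATS"
    function canRead(string calldata subjAttrs, string calldata objAttrs)
        external view returns (bool)
    {
        Policy storage p = policies[keccak256(abi.encodePacked(subjAttrs, objAttrs))];
        return p.exists && p.read;
    }
}
```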
_5.6. Data Sharing Model_
In this section, as shown in Figure 8, a user can upload data after verification by the
public blockchain through the attribute server, which enforces attribute-related policies.
Once this is complete, a user can extract the information if he/she satisfies the predefined
conditions. The contract also provides policy updating, policy revocation, and ownership
transfer. The contract managing the attribute-based access control system is written in
Solidity and compiled with compiler version 0.4.20. We use MultiChain to resolve the
scalability issues present in blockchain technology: as noted earlier, it allows users to set
all parameters and the maximum block size of a blockchain in a configuration file [50], a
blockchain containing only the participants' selected transactions can reference up to 1 GB
of off-chain data by hash with automatic peer-to-peer delivery, and the genesis block's
miner automatically receives administrative privileges, including the management of other
users and their access permissions. The detailed terminology is defined in the policy model;
on the request of the data user, encryption based on the cipher text policy is performed in a
timely manner.
**Figure 8. The data sharing model.**
**6. Policy Model**
In our scheme, encrypted files are stored using smart contracts. By running the
encryption and decryption algorithms, data owners can store and retrieve their data
through smart contracts. Every contract call is recorded on the blockchain; therefore, the
information exchanged between the data owner and the user is tamper-proof and
non-repudiable. Our model involves four entities: the data owner, the data retriever, IPFS,
and the Ethereum blockchain.
1. **Data owner:** Uploads encrypted data with assigned attribute sets and access control policies and is responsible for the creation and deployment of smart contracts.
2. **Data user:** Accesses the encrypted data stored on IPFS. After satisfying the access control policies and attribute sets, the user obtains the secret key, which decrypts the encrypted data.
3. **IPFS:** Used by data owners for the storage of encrypted data.
4. **Ethereum:** Smart contracts are deployed on the Ethereum blockchain to store and retrieve the data.
The process in Figure 8 is as follows (a minimal on-chain sketch is given after the list):
- After the device and user registration process using blockchain technology [65], the data owner uploads the encrypted data with access control policies in smart contracts.
- The returned contract address, together with the encrypted data hash, is stored on IPFS.
- The path of the data at its IPFS location is returned to the data owner.
- The encrypted data key is stored in ciphertext format on Ethereum.
- When the data retriever sends an access request using the timely CP-ABE, the data owner adds the policies under the effective period, encrypts the secret key, and stores it in a smart contract.
- A data retriever that satisfies the access policies within the effective period downloads the data and obtains the secret key from the contract.
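The following hedged Solidity sketch captures the on-chain half of this flow (the contract name DataSharing and all function names are ours, not the paper's ABIs); the CP-ABE operations themselves run off-chain:

```solidity
// SPDX-License-Identifier: MIT
// Sketch: the owner records the IPFS content hash and the CP-ABE-wrapped
// symmetric key together with an effective access period; a retriever can
// fetch the ciphertext key only inside that window, then decrypts off-chain.
pragma solidity ^0.8.0;

contract DataSharing {
    struct Record {
        address owner;
        string  ipfsHash;     // location of the AES-encrypted data on IPFS
        bytes   encryptedKey; // symmetric key wrapped under a CP-ABE policy
        uint256 validFrom;
        uint256 validUntil;
    }

    mapping(bytes32 => Record) public records; // keyed by a data identifier

    function publish(bytes32 id, string calldata ipfsHash, bytes calldata encryptedKey,
                     uint256 validFrom, uint256 validUntil) external {
        records[id] = Record(msg.sender, ipfsHash, encryptedKey, validFrom, validUntil);
    }

    // The owner updates the period, e.g., to revoke access after the window.
    function updatePeriod(bytes32 id, uint256 validFrom, uint256 validUntil) external {
        require(records[id].owner == msg.sender, "owner only");
        records[id].validFrom = validFrom;
        records[id].validUntil = validUntil;
    }

    function fetchKey(bytes32 id) external view returns (bytes memory) {
        Record storage r = records[id];
        require(block.timestamp >= r.validFrom && block.timestamp <= r.validUntil,
                "outside effective period");
        return r.encryptedKey;
    }
}
```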
_6.1. Attribute-Based Encryption_
We have implemented cipher-text-policy-based attribute-based encryption (CP-ABE):
ciphertexts are attached to access policies, and attribute sets are associated with secret
keys. The secret key recovers the ciphertext if its attribute set satisfies the access policy.
In attribute-based encryption, data are encrypted under an access policy over certain
attributes, and in CP-ABE the ciphertext carries part of the access policy. In classic
public-key cryptography, by contrast, data are encrypted for one specific entity using that
entity's public key, so the sender must know the receiver and his public key; as
collaborators are continually added and removed in such constructions, every encrypted
dataset must be re-encrypted for every legitimate identity. Hybrid schemes have been
proposed for such cases, but they are limited in handling a growing number of participants.
CP-ABE allows a user to encrypt data based on attributes instead of knowing the
individuals holding those attributes. A salient feature of attribute-based encryption is that
this cryptographic mechanism solves the trust issues of traditional access control systems:
only legitimate users can decrypt and access the publicly stored data. Individually
generated private keys and attribute assignment must be performed by the key
management authority. However, a major drawback of existing schemes is the absolute
trust required in a key server to issue private keys only to legitimate users and to revoke
a user's key; transparency of access rights is also not provided. We address these issues in
this paper. An example of encryption is presented below in which only a user whose
attributes satisfy the policy can decrypt the data.
_6.2. Access Policy_
- An access policy P is a rule in ABE that returns either 0 or 1.
- An attribute set is A = (A1, A2, ..., Am).
- Only if P returns 1 on A do we say that A satisfies P.
- The fact that A satisfies P is usually denoted A ⊨ P.
- The case that A does not satisfy P is denoted A ⊭ P.
- We consider the AND-gate policy in our construction.
- If Ai = Pi or Pi = * for all 1 <= i <= m, we say A ⊨ P; otherwise, A ⊭ P.
- Note that the wildcard * in P plays the role of a "do not care" value.
For example, consider:
- access policy P = (Clinic:1; physician; *; Pakistan);
- attribute set A1 = (Clinic:1; physician; male; Pakistan);
- attribute set A2 = (Clinic:1; Nurse; male; Pakistan).
Then A1 ⊨ P and A2 ⊭ P.
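A minimal Solidity sketch of this AND-gate membership test follows (the library name AndGatePolicy and the function satisfies are our illustrative names):

```solidity
// SPDX-License-Identifier: MIT
// Sketch of the AND-gate test: A |= P iff every policy slot either matches
// the corresponding attribute or is the wildcard "*". Pure, no storage.
pragma solidity ^0.8.0;

library AndGatePolicy {
    bytes32 internal constant WILDCARD = keccak256(bytes("*"));

    // e.g., attrs = ("Clinic:1", "physician", "male", "Pakistan");
    //       policy = ("Clinic:1", "physician", "*",    "Pakistan").
    function satisfies(string[] memory attrs, string[] memory policy)
        internal pure returns (bool)
    {
        if (attrs.length != policy.length) return false;
        for (uint256 i = 0; i < policy.length; i++) {
            bytes32 p = keccak256(bytes(policy[i]));
            if (p != WILDCARD && p != keccak256(bytes(attrs[i]))) {
                return false; // A does not satisfy P once one slot mismatches
            }
        }
        return true; // A |= P
    }
}
```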
Combining the ABAC model with the data generated by IoT devices, the flowchart of
our model's access control policy is given in Figure 9, where Policy (P) = (AS, AO, AP, AE):
- Attribute Subject (AS) = (userId, role, group);
- Attribute Object (AO) = (deviceId, MAC);
- Attribute Permission (AP) = (1 = allow, 0 = deny);
- Attribute Environment (AE) = (createTime, endTime, allowed).
Access control with data storage is composed of the following algorithms:
1. **Setup (PK, SK):**
The data owner executes this algorithm with the universal attribute set A and the security
parameter P as inputs, resulting in a public/secret key pair. Afterwards, the data are
encrypted with the AES encryption algorithm and hashed using the SHA-256 algorithm
as H(data). The encrypted data are uploaded along with these attributes, and the IPFS
server returns the address (path) of the data.
2. **Encrypt (PK, T, sek) -> CT**
The public key, the symmetric encryption key, and the access tree structure are used as
inputs, and the generated ciphertext is stored in a smart contract.
3. **KeyGen (sk, A) -> PrK**
The data owner executes the key generation algorithm after collecting an access request
from the data retriever. The data owner assigns the data retriever a set of attributes with
an effective period of time. Given the secret key sk and the attribute set A, the algorithm
outputs the private key PrK and stores it in the smart contract.
4. **Decrypt (PK, sk, CT) -> sek**
The data retriever executes the decryption algorithm after obtaining the effective access
period from the smart contract; decryption can only be performed within the valid access
period. Taking the ciphertext CT and the secret key from the smart contract, together with
its public key PK, the data retriever obtains the symmetric encryption key sek only if its
attributes satisfy the access policy T. The data can then be decrypted with this key;
otherwise, the data owner changes the policy, and no one can access the information.
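In summary, using the ⊨ notation of Section 6.2, the four algorithms compose as follows (with Decrypt additionally required to run inside the valid access period):

$$\mathrm{Setup}(P, \mathcal{A}) \to (PK, SK), \qquad \mathrm{Encrypt}(PK, T, sek) \to CT,$$
$$\mathrm{KeyGen}(sk, A) \to PrK, \qquad \mathrm{Decrypt}(PK, PrK, CT) \to sek \iff A \models T.$$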
**Figure 9. Flowchart of the access policy.**
**7. Security Assumptions and Attacker Model**
In data access and sharing among IoT devices, privacy and security are the main
weaknesses of existing models, so the prevention of privacy leakage and potential security
threats is the central concern of our model. For our proposed model, the following security
assumptions and attacker model are considered.
- **Consistency of blockchain over nodes and timing:** Blockchain transactions are
accepted by the nodes present in the network.
- **Growth of the blockchain: The eventual integration of valid transactions into the**
blockchain.
- **Reliable and trustworthy gateway: The gateway is assumed to be trustworthy and accessible.**
- **Trusted entities: Attribute servers and certification authority are trusted.**
- **Security of keys in the blockchain: Keys are secure and cannot be lost or stolen.**
- **Strong cryptographic measures: Cryptographic primitives, hashes, and signatures**
are not broken.
The attacker model for our research is given below.
- **Privilege Elevation:** The attacker convinces a device that he is an authenticated
attribute entity, promoting a fake attribute-issuing entity. He may also replay a valid
transaction previously performed by a genuine attribute entity.
- **Identity Revealing Attack: A malicious entity targets authorized devices to reveal their real identities and to collect personal data.**
- **Man-in-the-Middle Attack: A malicious node between the IoT nodes and the attribute servers intercepts the shared data and tampers with them.**
- **Forgery Attack: A malicious attribute server holds fake keys and signatures of an authentic user and transfers them to other entities to harm the network.**
Security features of our proposed model are given below.
- **Privacy preservation: Crypto IDs are used for communication between the entities in our model. To enhance privacy, we do not use a device's real identification number as its identity. All transactions are performed in an encrypted format that preserves the identities of the users and devices.**
- **Data Confidentiality: Communication between the IoT devices is encrypted using symmetric-key encryption, which enhances security and prevents tampering with communication data.**
- **Data Integrity: Data generated by IoT devices are encrypted using symmetric-key encryption and stored in IPFS (a distributed file system). To provide data integrity, we encrypt the data under a certain cipher-text-policy-based encryption, and its hashes are stored in the blockchain so that data tampering is not possible.**
- **Single Point of Failure: A distributed file system and multiple attribute servers are used in our model, which eliminates the single point of failure. The attribute servers only interact with the devices of their associated identities, which enhances the system security.**
**8. Performance Evaluation**
To analyze the performance and feasibility of our model, the prototype was
implemented on an Ubuntu 16.04 system with 4 GB RAM and an Intel Core i3 processor.
Smart contracts were written in Solidity, with supporting code in C++, and their simulation
was performed using Remix IDE. Ganache [80] provides virtual accounts and executes the
smart contracts, while Metamask, a Chrome browser extension, connects Remix and
Ganache. The PBC library is used for computing pairings. For testing, we use Truffle for
smart contract testing at the development level and test networks, e.g., Ganache (a local
blockchain) and Ropsten (online), for free smart contract deployment. To validate the
analysis of the ABE program, we implement the ciphertext policy in attribute-based
encryption using the cpabe toolkit: the algebraic operations are performed with the PBC
library, libbswabe implements the cryptographic operations, and cpabe provides the user
interface and high-level functions.
_8.1. An Attribute-Based Access Control Model for IoTs_
The costs of user and device registration, of storing data on a distributed file system
such as IPFS, and of managing data and information under the specified policies are
shown in Figure 10.
[Figure 10 charts the execution cost and the transaction cost (in gas, up to roughly 1,200,000) of the attribute-based access control operations.]

**Figure 10. Attribute-based access control model details.**
In Ethereum, gas is the unit that measures the computational work of a transaction;
its cost is deducted from the user's account when the transaction is performed. Figure 11
shows the gas consumption of the smart contract's operations. Four different functions of
the smart contract are used in the proposed work: (a) grant, (b) init, (c) policy, and
(d) request. Gas consumption depends on the complexity of the smart contract, and
deploying a smart contract is an expensive operation in Ethereum.

The transaction cost is incurred when sending the smart contract to the Ethereum
network, while the execution cost depends on the operations executed as a result of the
transaction. Therefore, the execution cost is included in the transaction cost.
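This relationship can be summarized as follows (the 21,000-gas intrinsic cost of a transaction is standard Ethereum behavior, stated here as background rather than a figure from our measurements):

$$\text{fee (Wei)} = \text{gasUsed} \times \text{gasPrice}, \qquad \text{transaction cost} = \underbrace{21{,}000 + \text{calldata gas}}_{\text{intrinsic cost}} + \text{execution cost}.$$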
**Figure 11. Gas per operation.**
Figure 12 shows the processing time of the encryption and decryption operations of
our scheme. In our scheme, each user's private key is associated with a group of attributes
that represent the user's capabilities. Decryption is only performed once a policy
requirement is satisfied, which is why it takes less time than encryption. As the number of
attributes increases, the processing time also increases.
**Figure 12. Operation time with the number of attributes.**
We used three attributes in the simulation, with an AND-gate-based access structure
enforcing each attribute. The execution details of our system, including the details of
sharing data with the owner, are shown in Figure 13. In the registration setup, the web3j
library is used for access control. If the port combination containing the device name and
service name under the specified policies successfully verifies the transaction, the receiver
can access the data.
**Figure 13. Access verification.**
_8.2. Cost Evaluation and Comparison_
For the deployment of smart contracts on the blockchain, users must pay an execution
fee for executing the contracts' ABIs. On Ethereum, gas units measure the amount of work
an operation performs, and more complex tasks consume more gas. Gas prices on Ethereum
change over time, and the total cost of performing a task depends on the gas price and the
amount of gas consumed. We set the gas price to 5 Gwei, where 1 ETH = 1 × 10^9
(1,000,000,000) Gwei. For example, a transaction consuming 20,000 gas costs
20,000 × 5 = 100,000 Gwei (0.0001 ETH). When we accessed the exchange rate in
September 2020, 1 Ether was worth approximately USD 357.84, and the price of Ether has
risen considerably since then.
For evaluation, we compare our proposed model with [55,70]. The comparison charts are
given in Figures 14 and 15.
**Figure 14. Cost comparison with [55,70].**
Instead of the RC and JC contracts of [70] and the DR and VT contracts of [55], we
calculate the deployment cost of the ACC, PMC, OAMC, and SAMC. The actual access
cost of the proposed scheme is 262,531 gas, which is almost USD 4.54325. The chart in
Figure 14 shows that our model consumes more gas than [55,70], although the gap in US
dollars is modest. In [55,70], one access control handles only one-to-one subject–object
pairing, so as the number of subjects and objects grows, the monetary cost of those systems
also grows. Our model, by contrast, achieves many-to-many subject–object pairing in
access control; thus, when subject and object pairs increase in [55,70], their gas consumption
rises and ends up costing more than our model.
**Figure 15. ACC time comparison with [55,70].**
Due to the attribute-based encryption and the complex interaction between the access
control contract and the other contracts on Ethereum, our scheme takes more time than
the other schemes, as shown in Figure 15. The timing also depends on various factors,
such as the computational power of the system; moreover, computation time on Ethereum
varies from time to time, so the time of mining affects the results, and the network
architecture also affects system performance. To evaluate the performance of our proposal,
we compare simple access control against cipher-text-policy-based attribute-based
encryption and against its implementation with blockchain.
The verification costs of access control with respect to the number of attributes used
in the policy are shown in Figure 16. We evaluated three architectures: centralized
verification of access control with timely CP-ABE; decentralized access control with timely
CP-ABE and blockchain; and a timely access control list using blockchain technology. The
results show that the decentralized architecture incurs an additional cost. The timely
access control list is less efficient than timely CP-ABE and increases the verification cost;
unlike the access control list, CP-ABE provides decentralized access management in a
more efficient way.
**Figure 16. Cpabe performance comparison.**
**9. Conclusions**
We propose an attribute-based access control mechanism for IoT that provides local
access, client authorization, privacy, and interoperability by using smart-contract-based
data sharing and user-controlled encoded policies. Users own their data and have the
authority to share them with other users. No existing scheme fulfills all the requirements
of our proposed model. We adopt the ABAC model for its high compatibility and
expressiveness.

We overcome the issues present in [55,70], namely, the high computational time and
overhead of deploying additional smart contracts for every new user, a single point of
failure, and the lack of authentication of existing users, by using blockchain for
authentication and smart contracts for the data access process in our mechanism. To
remove the communication assumptions related to data transfer, a secure data storage
mechanism has been introduced. We also bind each user to its own devices through an
ownership contract to enhance the privacy of our model. It is not feasible for actual user
data to be exposed by any entity in our blockchain architecture: the off-chain data are
stored in an encrypted format, which makes data tampering impossible, and only a
consumer who meets the specified policies can access the data after invoking the smart
contracts. In the future, we will work on the security and privacy of IoT data against
unauthenticated edge nodes. Although blockchain provides reliability and decentralization,
it has a few drawbacks, namely scalability and monetary cost issues; we will consider these
scalability and reliability aspects using IOTA.
**Author Contributions: Conceptualization, M.A.S. and H.A.K.; Formal analysis, A.M.E.-S. and**
M.A.E.-M.; Funding acquisition, A.M.E.-S. and M.A.E.-M.; Investigation, M.A.S.; Methodology,
H.A.K. and H.T.R.; Resources, C.M.; Supervision, H.A.K.; Validation, H.T.R.; Writing—original draft,
S.Y.A.Z. and H.A.K. All authors have read and agreed to the published version of the manuscript.
**Funding: The authors extend their appreciation to King Saud University for funding this work**
through the researchers supporting project number (RSP-2021/133), King Saud University, Riyadh,
Saudi Arabia.
**Institutional Review Board Statement: Not applicable.**
**Informed Consent Statement: Not applicable.**
**Data Availability Statement: Not applicable.**
**Acknowledgments: The authors extend their appreciation to King Saud University for funding this**
work through the researchers supporting project number (RSP-2021/133), King Saud University,
Riyadh, Saudi Arabia.
**Conflicts of Interest: The authors declare no conflict of interest.**
**References**
1. Tung, L. IoT Devices Will Outnumber the World's Population This Year for the First Time; ZDNet (A Red Ventures Company).
[Available online: https://www.zdnet.com/article/iot-devices-will-outnumber-the-worlds-population-this-year-for-the-first-time/](https://www.zdnet.com/article/iot-devices-will-outnumber-the-worlds-population-this-year-for-the-first-time/)
(accessed on 7 February 2017).
2. Gartner. Gartner Identifies Top 10 Strategic IoT Technologies and Trends. [Available online: https://www.gartner.com/en/newsroom/press-](https://www. gartner. com/en/newsroom/press-releases/2018-11-07-gartner-identifies-top-10-strategic-iot-technologies-and-trends)
[releases/2018-11-07-gartner-identifies-top-10-strategic-iot-technologies-and-trends (accessed on 4 September 2019).](https://www. gartner. com/en/newsroom/press-releases/2018-11-07-gartner-identifies-top-10-strategic-iot-technologies-and-trends)
3. Ekbatanifard, G. An Energy Efficient Data Dissemination Scheme for Distributed Storage in the Internet of Things. Comput.
_Knowl. Eng. 2018, 1, 1–8._
4. Ahmad, I.; Shah, M.A.; Khattak, H.A.; Ameer, Z.; Khan, M.; Han, K. FIViz: Forensics Investigation through Visualization for
[Malware in Internet of Things. Sustainability 2020, 12, 7262. [CrossRef]](http://doi.org/10.3390/su12187262)
5. Shrestha, A.; Vassileva, J. Towards decentralized data storage in general cloud platform for meta-products. In Proceedings of the
International Conference on Big Data and Advanced Wireless Technologies, Blagoevgrad, Bulgaria, 10 November 2016; pp. 1–7.
6. [Meadows, A. To Share or Not to Share? That Is the (Research Data) Question. Available online: https://scholarlykitchen.sspnet.](https://scholarlykitchen.sspnet.org/2014/11/11/to-share-or-not-to-share-that-is-the-research-data-question)
[org/2014/11/11/to-share-or-not-to-share-that-is-the-research-data-question (accessed on 9 September 2021).](https://scholarlykitchen.sspnet.org/2014/11/11/to-share-or-not-to-share-that-is-the-research-data-question)
7. Kiran, S.; Khattak, H.A.; Butt, H.I.; Ahmed, A. Towards Efficient Energy Monitoring Using IoT. In Proceedings of the 2018 IEEE
21st International Multi-Topic Conference (INMIC), Karachi, Pakistan, 1–2 November 2018; pp. 1–4.
8. McMahan, B.; Ramage, D. Federated Learning: Collaborative Machine Learning without Centralized Training Data. Google
[AI Blog. Available online: https://ai.googleblog.com/2017/04/federated-learning-collaborative.html (accessed on 20 Septem-](https://ai.googleblog.com/2017/04/federated-learning-collaborative.html)
ber 2021).
9. Asghar, A.; Abbas, A.; Khattak, H.A.; Khan, S.U. Fog Based Architecture and Load Balancing Methodology for Health Monitoring
[Systems. IEEE Access 2021, 9, 96189–96200. [CrossRef]](http://dx.doi.org/10.1109/ACCESS.2021.3094033)
10. Andaloussi, Y.; Ouadghiri, M.; Maurel, Y.; Bonnin, J.; Chaoui, H. Access control in IoT environments: Feasible scenarios. Procedia
_[Comput. Sci. 2018, 130, 1031–1036. [CrossRef]](http://dx.doi.org/10.1016/j.procs.2018.04.144)_
11. Deebak, B.D.; Al-Turjman, F.M. Privacy-preserving in smart contracts using blockchain and artificial intelligence for cyber risk
measurements. J. Inf. Secur. Appl. 2021, 58, 102749.
12. [Gray, C. Storj vs. Dropbox: Why Decentralized Storage Is the Future. 2014. Available online: https://bitcoinmagazine.com/](https://bitcoinmagazine.com/articles/storjvs-Dropboxdecentralized)
[articles/storjvs-Dropboxdecentralized (accessed on 15 September 2021).](https://bitcoinmagazine.com/articles/storjvs-Dropboxdecentralized)
13. Šarac, M.; Pavlovi´c, N.; Bacanin, N.; Al-Turjman, F.; Adamovi´c, S. Increasing privacy and security by integrating a Blockchain
Secure Interface into an IoT Device Security Gateway Architecture. Energy Rep. 2021, 78, 1–8.
14. Ripeanu, M. Peer-to-peer architecture case study: Gnutella network. In Proceedings of the First International Conference on
Peer-to-Peer Computing, Linkoping, Sweden, 27–29 August 2001; pp. 99–100.
15. Tseng, H.; Zhao, Q.; Zhou, Y.; Gahagan, M.; Swanson, S. Morpheus: Creating application objects efficiently for heterogeneous
[computing. ACM SIGARCH Comput. Archit. News 2016, 44, 53–65. [CrossRef]](http://dx.doi.org/10.1145/3007787.3001143)
16. Giesler, M.; Pohlmann, M. The Anthropology of File Sharing: Consuming Napster as a Gift; Association for Consumer Research,
University of Minnesota Duluth: Duluth, MN, USA, 2003; Volume 3.
17. Good, N.; Krekelberg, A. Usability and privacy: A study of Kazaa P2P file-sharing. In Proceedings of the SIGCHI Conference on
Human Factors in Computing Systems, Ft. Lauderdale, FL, USA, 5 April 2003; pp. 137–144.
18. Pouwelse, J.; Garbacki, P.; Epema, D.; Sips, H. The bittorrent p2p file-sharing system: Measurements and analysis. In International
_Workshop on Peer-to-Peer Systems; Springer: Berlin/Heidelberg, Germany, 2005; pp. 205–216._
19. Queiroz, M.; Telles, R.; Bonilla, S. Blockchain and supply chain management integration: A systematic review of the literature.
_[Supply Chain Manag Int. J. 2019. 25, 241–254. [CrossRef]](http://dx.doi.org/10.1108/SCM-03-2018-0143)_
20. Rehiman, K.; Veni, S. A trust management model for sensor enabled mobile devices in iot. In Proceedings of the 2017 International
Conference on I-SMAC (IoT in Social; Analytics and Cloud), Palladam, India, 10–11 February 2017; pp. 807–810.
21. Yuan, J.; Li, X. A reliable and lightweight trust computing mechanism for IoT edge devices based on multi-source feedback
[information fusion. IEEE Access 2018, 6, 23626–23638. [CrossRef]](http://dx.doi.org/10.1109/ACCESS.2018.2831898)
22. Shahid, H.; Shah, M.A.; Almogren, A.; Khattak, H.A.; Din, I.U.; Kumar, N.; Maple, C. Machine Learning-based Mist Computing
[Enabled Internet of Battlefield Things. ACM Trans. Internet Technol. (TOIT) 2021, 21, 1–26. [CrossRef]](http://dx.doi.org/10.1145/3418204)
23. Ouaddah, A. A Blockchain Based Access Control Framework for the Security and Privacy of IoT with Strong Anonymity Unlinkability and
_Intractability Guarantees, 1st ed.; Elsevier Inc.: London, UK, 2018; Volume 115._
24. Zhang, C.; Zhu, L.; Xu, C.; Sharif, K.; Du, X.; Guizani, M. LPTD: Achieving lightweight and privacy-preserving truth discovery in
[CIoT. Futur. Gener. Comput. Syst. 2019, 90, 175–184. [CrossRef]](http://dx.doi.org/10.1016/j.future.2018.07.064)
25. Atzori, L.; Iera, A.; Morabito, G. Siot: Giving a social structure to the internet of things. IEEE Commun. Lett. 2011, 15, 1193–1195.
[[CrossRef]](http://dx.doi.org/10.1109/LCOMM.2011.090911.111340)
26. Baldassarre, G.; Giudice, P.; Musarella, L.; Ursino, D. The MIoT paradigm: Main features and an ad-hoc crawler. Futur. Gener.
_[Comput. Syst. 2019, 92, 29–42. [CrossRef]](http://dx.doi.org/10.1016/j.future.2018.09.015)_
27. Baldassarre, G.; Giudice, P.; Musarella, L.; Ursino, D. A paradigm for the cooperation of objects belonging to different IoTs. In
Proceedings of the 22nd International Database Engineering & Applications Symposium, Villa San Giovanni, Italy, 18 June 2018;
pp. 157–164.
28. Liu, J.; Xiao, Y.; Chen, C. Authentication and access control in the internet of things. In Proceedings of the 2012 32nd International
Conference on Distributed Computing Systems Workshops, Macau, China, 18–21 June 2012; pp. 588–592.
29. Aloqaily, M.; Otoum, S.; Ridhawi, I.; Jararweh, Y. An intrusion detection system for connected vehicles in smart cities. Ad Hoc
_[Netw. 2019, 90, 101842. [CrossRef]](http://dx.doi.org/10.1016/j.adhoc.2019.02.001)_
30. Otoum, S.; Kantarci, B.; Mouftah, H. On the feasibility of deep learning in sensor network intrusion detection. IEEE Netw. Lett.
**[2019, 1, 68–71. [CrossRef]](http://dx.doi.org/10.1109/LNET.2019.2901792)**
31. Al-Turjman, F.; Zahmatkesh, H.; Shahroze, R. An overview of security and privacy in smart cities’ IoT communications. Trans.
_[Emerg. Telecommun. Technol. 2019, 1, e3677. [CrossRef]](http://dx.doi.org/10.1002/ett.3677)_
32. Nawaz, A.; Ahmed, S.; Khattak, H.A.; Akre, V.; Rajan, A.; Khan, Z.A. Latest Advances in Interent Of Things and Big Data with
Requirments and Taxonomy. In Proceedings of the 2020 Seventh International Conference on Information Technology Trends
(ITT), Abu Dhabi, United Arab Emirates, 25–26 November 2020; pp. 13–19.
33. Henna, S.; Davy, A.; Khattak, H.A.; Minhas, A.A. An Internet of Things (IoT)-Based Coverage Monitoring for Mission Critical
Regions. In Proceedings of the 2019 10th IFIP International Conference on New Technologies, Mobility and Security (NTMS),
Canary Islands, Spain, 24–26 June 2019; pp. 1–5.
34. Tewari, A.; Gupta, B. Security, privacy and trust of different layers in Internet-of-Things (IoTs) framework. Futur. Gener. Comput.
_[Syst. 2020, 108, 909–920. [CrossRef]](http://dx.doi.org/10.1016/j.future.2018.04.027)_
35. Rault, T.; Bouabdallah, A.; Challal, Y. Energy efficiency in wireless sensor networks: A top-down survey. Comput. Netw. 2014,
_[67, 104–122. [CrossRef]](http://dx.doi.org/10.1016/j.comnet.2014.03.027)_
36. Islam, S.; Kwak, D.; Kabir, M.; Hossain, M.; Kwak, K. The internet of things for health care: A comprehensive survey. IEEE Access
**[2015, 3, 678–708. [CrossRef]](http://dx.doi.org/10.1109/ACCESS.2015.2437951)**
37. Lee, I.; Lee, K. The Internet of Things (IoT): Applications, investments, and challenges for enterprises. _Bus. Horiz. 2015,_
_[58, 431–440. [CrossRef]](http://dx.doi.org/10.1016/j.bushor.2015.03.008)_
38. Zhang, Z.K.; Cho, M.; Wang, C.W.; Hsu, C.W.; Chen, C.K.; Shieh, S. IoT security: Ongoing challenges and research opportunities.
In Proceedings of the 2014 IEEE 7th International Conference on Service-Oriented Computing and Applications, Matsue, Japan,
17–19 November 2014; pp. 230–234.
39. Baccelli, E.; Hahm, O.; Günes, M.; Wählisch, M.; Schmidt, T. RIOT OS: Towards an OS for the Internet of Things. In Proceedings
of the 2013 IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS), Turin, Italy, 14–19 April 2013;
pp. 79–80.
40. Dunkels, A.; Gronvall, B.; Voigt, T. Contiki—A lightweight and flexible operating system for tiny networked sensors. In
Proceedings of the 29th Annual IEEE International Conference on Local Computer Networks, Tampa, FL, USA, 16–18 November
2004; pp. 455–462.
41. Alromaihi, S.; Elmedany, W.; Balakrishna, C. Cyber security challenges of deploying IoT in smart cities for healthcare applications.
In Proceedings of the 2018 6th International Conference on Future Internet of Things and Cloud Workshops (FiCloudW),
Barcelona, Spain, 6–8 August 2018; pp. 140–145.
42. Alsaadi, E.; Tubaishat, A. Internet of things: Features, challenges, and vulnerabilities. Int. J. Adv. Comput. Sci. Inf. Technol. 2015,
_4, 1–13._
43. Abomhara, M.; Køien, G.M. Security and privacy in the Internet of Things: Current status and open issues. In Proceedings of the
2014 International Conference on Privacy and Security in Mobile Systems (PRISMS), Aalborg, Denmark, 11–14 May 2014; pp. 1–8.
44. Al-Turjman, F.; Baali, I. Machine learning for wearable IoT-based applications: A survey. In Transactions on Emerging Telecommuni_cations Technologies; Bernabe, J., Hernández, J., Moreno, M., Gomez, A., Eds.; Willey: Hoboken, NJ, USA, 2019._
45. Roman, R.; Zhou, J.; Lopez, J. On the features and challenges of security and privacy in distributed internet of things. Comput.
_[Netw. 2013, 57, 2266–2279. [CrossRef]](http://dx.doi.org/10.1016/j.comnet.2012.12.018)_
46. Cirani, S. A scalable and self-configuring architecture for service discovery in the internet of things. IEEE Internet Things J. 2014,
_[1, 508–521. [CrossRef]](http://dx.doi.org/10.1109/JIOT.2014.2358296)_
47. Tsai, C.W.; Lai, C.F.; Vasilakos, A. Future internet of things: Open issues and challenges. Wirel. Netw. 2014, 20, 2201–2217.
[[CrossRef]](http://dx.doi.org/10.1007/s11276-014-0731-0)
48. Antonakakis, M.; April, T.; Bailey, M.; Bernhard, M.; Bursztein, E.; Cochran, J.; Durumeric, Z.; Halderman, J.A.; Invernizzi, L.;
Kallitsis, M.; et al. Understanding the mirai botnet. In Proceedings of the 26th {USENIX} Security Symposium ({USENIX}
Security 17), Vancouver, BC, Canada, 16–18 August 2017; pp. 1093–1110.
49. Khattak, H.A.; Tehreem, K.; Almogren, A.; Ameer, Z.; Din, I.U.; Adnan, M. Dynamic pricing in industrial internet of things:
[Blockchain application for energy management in smart cities. J. Inf. Secur. Appl. 2020, 55, 102615. [CrossRef]](http://dx.doi.org/10.1016/j.jisa.2020.102615)
50. Kan, L.; Wei, Y.; Muhammad, A.; Siyuan, W.; Linchao, G.; Kai, H. A multiple blockchains architecture on inter-blockchain
communication. In Proceedings of the 2018 IEEE International Conference on Software Quality, Reliability and Security
Companion (QRS-C), Lisbon, Portugal, 16–20 July 2018; pp. 139–145.
51. Lee, W.M. Using the metamask chrome extension. In Beginning Ethereum Smart Contracts Programming; Apress: Berkeley, CA,
USA, 2019; pp. 93–126.
52. Ta¸s, R.; Tanrıöver, Ö.Ö. Building a decentralized application on the Ethereum blockchain. In Proceedings of the 2019 3rd
International Symposium on Multidisciplinary Studies and Innovative Technologies (ISMSIT), Ankara, Turkey, 11–13 October
2019; pp. 1–4.
53. Benet, J. Ipfs-content addressed, versioned, p2p file system. arXiv 2014, arXiv:1407.3561.
54. Zichichi, M.; Ferretti, S.; D’Angelo, G. A distributed ledger based infrastructure for smart transportation system and social good.
In Proceedings of the 2020 IEEE 17th Annual Consumer Communications & Networking Conference (CCNC), Las Vegas, NV,
USA, 10–13 January 2020; pp. 1–6.
55. Li, R.; Song, T.; Mei, B.; Li, H.; Cheng, X.; Sun, L. Blockchain for Large-Scale Internet of Things Data Storage and Protection. IEEE
_[Trans. Serv. Comput. 2019, 12, 762–771. [CrossRef]](http://dx.doi.org/10.1109/TSC.2018.2853167)_
56. Wang, S.; Zhang, Y.; Zhang, Y. A blockchain-based framework for data sharing with fine-grained access control in decentralized
[storage systems. IEEE Access 2018, 6, 38437–38450. [CrossRef]](http://dx.doi.org/10.1109/ACCESS.2018.2851611)
57. Ding, S.; Cao, J.; Li, C.; Fan, K.; Li, H. A Novel Attribute-Based Access Control Scheme Using Blockchain for IoT. IEEE Access
**[2019, 7, 38431–38441. [CrossRef]](http://dx.doi.org/10.1109/ACCESS.2019.2905846)**
58. Wang, H.; He, D.; Shen, J.; Zheng, Z.; Zhao, C.; Zhao, M. Verifiable outsourced ciphertext-policy attribute-based encryption in
[cloud computing. Soft Comput. 2017, 21, 7325–7335. [CrossRef]](http://dx.doi.org/10.1007/s00500-016-2271-2)
59. Fernández, F.; Alonso, A.; Marco, L.; Salvachúa, J. A model to enable application-scoped access control as a service for IoT using
OAuth 2.0. In Proceedings of the 2017 20th Conference on Innovations in Clouds, Internet and Networks (ICIN), Paris, France,
7–9 March 2017; pp. 322–324.
60. Hummen, R.; Shafagh, H.; Raza, S.; Voig, T.; Wehrle, K. Delegation-based Authentication and Authorization for the IP-based
Internet of Things. In Proceedings of the 2014 Eleventh Annual IEEE International Conference on Sensing, Communication, and
Networking (SECON), Singapore, 30 June–3 July 2014; pp. 284–292.
61. Gusmeroli, S.; Piccione, S.; Rotondi, D. A capability-based security approach to manage access control in the internet of things.
_[Math. Comput. Model 2013, 58, 1189–1205. [CrossRef]](http://dx.doi.org/10.1016/j.mcm.2013.02.006)_
62. Biswas, K.; Muthukkumarasamy, V. Securing smart cities using blockchain technology. In Proceedings of the 2016 IEEE 18th
International Conference on High Performance Computing and Communications; IEEE 14th International Conference on Smart
City; IEEE 2nd International Conference on Data Science and Systems (HPCC/SmartCity/DSS), Sydney, NSW, Australia,
12–14 December 2016; pp. 1392–1393.
63. Rehan, M.; Rehmani, M. Blockchain-Enabled Fog and Edge Computing: Concepts, Architectures and Applications: Concepts, Architectures
_and Applications; CRC Press: Boca Raton, FL, USA, 2020._
64. Yu, K.P.; Tan, L.; Aloqaily, M.; Yang, H.; Jararweh, Y. Blockchain-enhanced data sharing with traceable and direct revocation in
[IIoT. IEEE Trans. Ind. Inform. 2021, 17, 7669–7678. [CrossRef]](http://dx.doi.org/10.1109/TII.2021.3049141)
65. Šimuni´c, S. Upotreba Blockchain Tehnologije za Registraciju i Upravljanje IoT Ure ¯dajima; Department of Computer, Faculty of
Engineering, University of Rijeka: Rijeka, Croatia, 2018.
66. Do, H.; Ng, W. Blockchain-Based System for Secure Data Storage with Private Keyword Search. In Proceedings of the 2017 IEEE
World Congress on Services (SERVICES), Honolulu, HI, USA, 25–30 June 2017.
67. Zhang, G.; Li, T.; Li, Y.; Hui, P.; Jin, D. Blockchain-Based Data Sharing System for AI-Powered Network Operations. J. Commun.
_[Inf. Netw. 2018, 3, 1–8. [CrossRef]](http://dx.doi.org/10.1007/s41650-018-0024-3)_
68. Steichen, M.; Fiz, B.; Norvill, R.; Shbair, W.; State, R. Blockchain-Based, Decentralized Access Control for IPFS. In Proceedings
of the 2018 IEEE International Conference on Internet of Things (iThings) and IEEE Green Computing and Communications
(GreenCom) and IEEE Cyber, Physical and Social Computing (CPSCom) and IEEE Smart Data (SmartData), Halifax, NS, Canada,
30 July–3 August 2018; pp. 1499–1506.
69. [Sifah, E. Chain-based big data access control infrastructure. J. Supercomput. 2018, 74, 4945–4964. [CrossRef]](http://dx.doi.org/10.1007/s11227-018-2308-7)
70. Zhang, Y.; Kasahara, S.; Shen, Y.; Jiang, X.; Wan, J. Smart contract-based access control for the internet of things. IEEE Internet
_[Things J. 2019, 6, 1594–1605. [CrossRef]](http://dx.doi.org/10.1109/JIOT.2018.2847705)_
71. Nguyen, K.; Oualha, N.; Laurent, M. Securely outsourcing the ciphertext-policy attribute-based encryption. World Wide Web 2018,
_[21, 169–183. [CrossRef]](http://dx.doi.org/10.1007/s11280-017-0473-x)_
72. Oualha, N.; Nguyen, K. Lightweight attribute-based encryption for the internet of things. In Proceedings of the 2016 25th
International Conference on Computer Communication and Networks (ICCCN), Waikoloa HI, USA, 1–4 August 2016; pp. 1–6.
73. Hur, J.; Kang, K. Secure data retrieval for decentralized disruption-tolerant military networks. IEEE/ACM Trans. Netw. 2012,
_[22, 16–26. [CrossRef]](http://dx.doi.org/10.1109/TNET.2012.2210729)_
74. Bethencourt, J.; Sahai, A.; Waters, B. Ciphertext-policy attribute-based encryption. In Proceedings of the 2007 IEEE Symposium
on Security and Privacy (SP’07), Berkeley, CA, USA, 20–23 May 2007.
75. Nishide, T.; Yoneyama, K.; Ohta, K. Attribute-based encryption with partially hidden encryptor-specified access structures. In
Proceedings of the International Conference on Applied Cryptography and Network Security, New York, NY, USA, 3–6 June
2008; pp. 111–129.
76. Khan, F.; Li, H.; Zhang, L.; Shen, J. An expressive hidden access policy CP-ABE. In Proceedings of the 2017 IEEE Second
International Conference on Data Science in Cyberspace (DSC), Shenzhen, China, 26–29 June 2017; pp. 178–186.
77. Zhou, Z.; Huang, D.; Wang, Z. Efficient privacy-preserving ciphertext-policy attribute based-encryption and broadcast encryption.
_[IEEE Trans. Comput. 2013, 64, 126–138. [CrossRef]](http://dx.doi.org/10.1109/TC.2013.200)_
78. Phuong, T.; Yang, G.; Susilo, W. Hidden ciphertext policy attribute-based encryption under standard assumptions. IEEE Trans.
_[Inf. Forensics Secur. 2015, 11, 35–45. [CrossRef]](http://dx.doi.org/10.1109/TIFS.2015.2475723)_
79. Hammi, M.; Bellot, P.; Serhrouchni, A. BCTrust: A decentralized authentication blockchain-based mechanism. In Proceedings of
the 2018 IEEE Wireless Communications and Networking Conference (WCNC), Barcelona, Spain, 15–18 April 2018; pp. 1–6.
80. [Ganache. Trufflesuite. Ganache ONE CLICK BLOCKCHAIN SOLUTION. Available online: https://www.trufflesuite.com/](https://www.trufflesuite.com/ganache)
[ganache (accessed on 15 September 2021).](https://www.trufflesuite.com/ganache)
"title": "Smart Contract-Based Access Control for the Internet of Things"
},
{
"paperId": "220a7eed5c859f596a0d9dbc194034d170a6af51",
"title": "Understanding the Mirai Botnet"
},
{
"paperId": "f2e2b0f7636773b64374895c0ef374a60201a935",
"title": "Securely outsourcing the ciphertext-policy attribute-based encryption"
},
{
"paperId": "7457ef01faf6e496dde840572947decdbb0a68c0",
"title": "Blockchain-Based System for Secure Data Storage with Private Keyword Search"
},
{
"paperId": "90284bef123436c89bac5b8ae8738b2b7fce3fbf",
"title": "An Expressive Hidden Access Policy CP-ABE"
},
{
"paperId": "fba03edb5cda1b7f93d6a7bca0d6f4d05919fa60",
"title": "A model to enable application-scoped access control as a service for IoT using OAuth 2.0"
},
{
"paperId": "6c176b3fa4cfebca9d0984fabcb5374da83d8838",
"title": "An Energy Efficient Data Dissemination Scheme for Distributed Storage in the Internet of Things"
},
{
"paperId": "f620f8f3ef922d3981a344733eb7f0f6d158276b",
"title": "A trust management model for sensor enabled mobile devices in IoT"
},
{
"paperId": "6a063f13e3a891d14ecf43aa92396cb781ec4e4b",
"title": "Securing Smart Cities Using Blockchain Technology"
},
{
"paperId": "f14be26821cfb472e63d36995772e29e94de3480",
"title": "Towards decentralized data storage in general cloud platform for meta-products"
},
{
"paperId": "6e0a55ecc79a46840a538871eac1ae0d4198890c",
"title": "Lightweight Attribute-Based Encryption for the Internet of Things"
},
{
"paperId": "9994ef865aaaad161bbf3dfd40387ecb916477b1",
"title": "Verifiable outsourced ciphertext-policy attribute-based encryption in cloud computing"
},
{
"paperId": "30c8e2773243262fc5e5bee8a3538f5acc2ec72c",
"title": "Morpheus: Creating Application Objects Efficiently for Heterogeneous Computing"
},
{
"paperId": "441b480d2b2d8d59aa28f0911b57d4e7d2c2f57b",
"title": "The Internet of Things (IoT): Applications, investments, and challenges for enterprises"
},
{
"paperId": "cddb22908f28a1636cbbdeb3a4f0e00f9cef05a9",
"title": "The Internet of Things for Health Care: A Comprehensive Survey"
},
{
"paperId": "8e2f74143b5a21e80649d68db27dbfb3e5d1121b",
"title": "Delegation-based authentication and authorization for the IP-based Internet of Things"
},
{
"paperId": "84c4fc22fec1656633e8dabe583bb3de23a09ea6",
"title": "IoT Security: Ongoing Challenges and Research Opportunities"
},
{
"paperId": "b17920b215c930968c0338a20d63c272bcc55df8",
"title": "A Scalable and Self-Configuring Architecture for Service Discovery in the Internet of Things"
},
{
"paperId": "40ede9054b10ded75742f64ce738c2e5595b03bc",
"title": "IPFS - Content Addressed, Versioned, P2P File System"
},
{
"paperId": "1fcf4d24a04405a2f7323d1a4c78f7f8e97fb0bb",
"title": "Energy efficiency in wireless sensor networks: A top-down survey"
},
{
"paperId": "71271e751e94ede85484250d0d8f7fc444423533",
"title": "Future Internet of Things: open issues and challenges"
},
{
"paperId": "4280c910d835f4f8d32fcaef5ffe4519da4224ec",
"title": "Security and privacy in the Internet of Things: Current status and open issues"
},
{
"paperId": "8552ab48bd3d0b4f54a513c1dbcb4159a7be04a8",
"title": "A capability-based security approach to manage access control in the Internet of Things"
},
{
"paperId": "9c50d15fdaca21569d46335fdd0108012a00e2a0",
"title": "On the features and challenges of security and privacy in distributed internet of things"
},
{
"paperId": "96e056b4fc55e52a6f1b3ec0d254f675259b6707",
"title": "RIOT OS: Towards an OS for the Internet of Things"
},
{
"paperId": "3309ecd8c0675d438aec2128fb4ebf2f814364a6",
"title": "Authentication and Access Control in the Internet of Things"
},
{
"paperId": "bf8f19ca0b08882290fc3aea3b56483732a07abd",
"title": "SIoT: Giving a Social Structure to the Internet of Things"
},
{
"paperId": "b58dde48df67dff3ea98a74b6306782a4d583976",
"title": "Attribute-Based Encryption with Partially Hidden Encryptor-Specified Access Structures"
},
{
"paperId": "7279bad1acc1eefe02ba58d678e190c668075fa8",
"title": "Ciphertext-Policy Attribute-Based Encryption"
},
{
"paperId": "196cbaf7ecdaa89b1a49637be3e3099faad37853",
"title": "The Bittorrent P2P File-Sharing System: Measurements and Analysis"
},
{
"paperId": "8f0ee1195ae7e74101505222498a0a882b96f53f",
"title": "Contiki - a lightweight and flexible operating system for tiny networked sensors"
},
{
"paperId": "64209551e11a231f87fea3060010f469196db8b7",
"title": "Usability and privacy: a study of Kazaa P2P file-sharing"
},
{
"paperId": "ff1dc94b030b6f1b605bc051275b88e85648338c",
"title": "Peer-to-peer architecture case study: Gnutella network"
},
{
"paperId": "88fd48294c1a78f3f839c493100b1ec5f3492082",
"title": "Fog Based Architecture and Load Balancing Methodology for Health Monitoring Systems"
},
{
"paperId": null,
"title": "Ganache ONE CLICK BLOCKCHAIN SOLUTION"
},
{
"paperId": null,
"title": "To Share or Not to Share? That Is the (Research Data) Question"
},
{
"paperId": null,
"title": "Trufflesuite"
},
{
"paperId": "49ab60f20c40a78afc651479c2770059ad0f5b31",
"title": "Using the MetaMask Chrome Extension"
},
{
"paperId": "0c001947cb734e8645979e6390d991e1f552c3fb",
"title": "Chapter Eight - A blockchain based access control framework for the security and privacy of IoT with strong anonymity unlinkability and intractability guarantees"
},
{
"paperId": null,
"title": "Strategic IoT Technologies and Trends"
},
{
"paperId": "2c41eedbab56fa15d90010369f6ad9c6c1c12340",
"title": "Access control in IoT environments: Feasible scenarios"
},
{
"paperId": "775b628207505dca33bccd2c69ccf5e140456e00",
"title": "Hidden Ciphertext Policy Attribute-Based Encryption Under Standard Assumptions"
},
{
"paperId": "de7a379812812ee427c898b718113eb23299da6e",
"title": "Secure Data Retrieval for Decentralized Disruption-Tolerant Military Networks"
},
{
"paperId": "b3a19e3a76c2e243f75d946aa88283856c3a657c",
"title": "Efficient Privacy-Preserving Ciphertext-Policy Attribute Based-Encryption and Broadcast Encryption"
},
{
"paperId": null,
"title": "Storj vs. Dropbox: Why Decentralized Storage Is the Future"
},
{
"paperId": "5bec02311d73c0108d4d70dd31b53f63bd1bd980",
"title": "Internet of Things: Features, Challenges, and Vulnerabilities"
},
{
"paperId": "604e160fd9dd32ce85d549c529dd9b50950cf0e9",
"title": "The Anthropology of File Sharing: Consuming Napster As a Gift"
},
{
"paperId": null,
"title": "IoT Devices Will Outnumber the World ’ s Population This Year for the First Time ; ZDNet , A RED VENTURES COMPANY ; Volume 1"
}
] | 20,556
|
en
|
[
{
"category": "Medicine",
"source": "external"
},
{
"category": "Biology",
"source": "s2-fos-model"
},
{
"category": "Environmental Science",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/00112bc246d0ad07bf4c6ce0c2ec39f30c3015ca
|
[
"Medicine"
] | 0.868287
|
Genome-Wide Analysis of the Auxin/Indoleacetic Acid Gene Family and Response to Indole-3-Acetic Acid Stress in Tartary Buckwheat (Fagopyrum tataricum)
|
00112bc246d0ad07bf4c6ce0c2ec39f30c3015ca
|
International Journal of Genomics
|
[
{
"authorId": "2156127749",
"name": "Fan Yang"
},
{
"authorId": "2141813753",
"name": "Xiuxia Zhang"
},
{
"authorId": "2135901450",
"name": "Ruifeng Tian"
},
{
"authorId": "2143434134",
"name": "Liwei Zhu"
},
{
"authorId": "2170733414",
"name": "Fang Liu"
},
{
"authorId": "2189842824",
"name": "Qingfu Chen"
},
{
"authorId": "82603222",
"name": "Xuanjie Shi"
},
{
"authorId": "3842964",
"name": "D. Huo"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"Int J Genom"
],
"alternate_urls": null,
"id": "ce1c5634-a0e1-4bb5-9ba6-82858adb8743",
"issn": "2314-436X",
"name": "International Journal of Genomics",
"type": "journal",
"url": "https://www.hindawi.com/journals/ijg/"
}
|
Auxin/indoleacetic acid (Aux/IAA) family genes respond to the hormone auxin, which have been implicated in the regulation of multiple biological processes. In this study, all 25 Aux/IAA family genes were identified in Tartary buckwheat (Fagopyrum tataricum) by a reiterative database search and manual annotation. Our study provided comprehensive information of Aux/IAA family genes in buckwheat, including gene structures, chromosome locations, phylogenetic relationships, and expression patterns. Aux/IAA family genes were nonuniformly distributed in the buckwheat chromosomes and divided into seven groups by phylogenetic analysis. Aux/IAA family genes maintained a certain correlation and a certain species-specificity through evolutionary analysis with Arabidopsis and other grain crops. In addition, all Aux/IAA genes showed a complex response pattern under treatment of indole-3-acetic acid (IAA). These results provide valuable reference information for dissecting function and molecular mechanism of Aux/IAA family genes in buckwheat.
|
Hindawi
International Journal of Genomics
Volume 2021, Article ID 3102399, 14 pages
[https://doi.org/10.1155/2021/3102399](https://doi.org/10.1155/2021/3102399)
# Research Article Genome-Wide Analysis of the Auxin/Indoleacetic Acid Gene Family and Response to Indole-3-Acetic Acid Stress in Tartary Buckwheat (Fagopyrum tataricum)
## Fan Yang,[1] Xiuxia Zhang,[2] Ruifeng Tian,[3] Liwei Zhu,[2] Fang Liu,[2] Qingfu Chen,[2]
Xuanjie Shi,[1,4] and Dongao Huo 2,3
1Henan Academy of Agricultural Sciences, Zhengzhou 450002, China
2Guizhou Normal University, Guiyang 550025, China
3College of Plant Science & Technology, Huazhong Agricultural University, Wuhan 430070, China
4Zhengzhou University, Zhengzhou 450001, China
Correspondence should be addressed to Dongao Huo; [email protected]
Received 2 June 2021; Revised 17 August 2021; Accepted 24 September 2021; Published 26 October 2021
Academic Editor: Monica Marilena Miazzi
[Copyright © 2021 Fan Yang et al. This is an open access article distributed under the Creative Commons Attribution License,](https://creativecommons.org/licenses/by/4.0/)
which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Auxin/indoleacetic acid (Aux/IAA) family genes respond to the hormone auxin, which have been implicated in the regulation of
multiple biological processes. In this study, all 25 Aux/IAA family genes were identified in Tartary buckwheat (Fagopyrum
tataricum) by a reiterative database search and manual annotation. Our study provided comprehensive information of
Aux/IAA family genes in buckwheat, including gene structures, chromosome locations, phylogenetic relationships, and
expression patterns. Aux/IAA family genes were nonuniformly distributed in the buckwheat chromosomes and divided into
seven groups by phylogenetic analysis. Aux/IAA family genes maintained a certain correlation and a certain species-specificity
through evolutionary analysis with Arabidopsis and other grain crops. In addition, all Aux/IAA genes showed a complex
response pattern under treatment of indole-3-acetic acid (IAA). These results provide valuable reference information for
dissecting function and molecular mechanism of Aux/IAA family genes in buckwheat.
## 1. Introduction
Tartary buckwheat (Fagopyrum tataricum), also known as bitter buckwheat or kuqiao, is an annual eudicot plant belonging to the genus Fagopyrum [1]. It originated in southwest China and is currently grown in western China, Japan, South Korea, Canada, and Europe, because it exhibits strong abiotic stress resistance in harsh eco-climatic environments [2, 3]. Buckwheat is considered an important medicinal and edible food crop that is rich in protein, with a balance of essential amino acids as well as beneficial phytochemicals [4–6]. Its flavonoid levels, especially rutin, are significantly higher than those of other crops; these compounds have antifatigue and anti-inflammatory activity and can be used to treat microangiopathy [7]. Studying the mechanisms behind these important metabolites can effectively promote the use of buckwheat. In addition, studying the resistance mechanisms of buckwheat is not only beneficial for buckwheat production under stress but can also yield useful resistance genes for other crops. Auxin plays an important role in controlling a multitude of vital processes [8–11] and in stress tolerance [12–14], so it is significant to study the response of buckwheat to hormones.
The classical plant hormones, including auxins, cytokinins, gibberellins, abscisic acid, and ethylene, were discovered several decades ago. Recently, a number of additional
molecules have been identified that might also be classified
as plant hormones. While a considerable amount is known
about the biosynthesis and distribution of these hormones
in plants, the receptors and signal transduction pathways
of plant hormones are only beginning to be unraveled.
Auxin has many roles in plant growth and development. It
mediates elongation of stems and roots, enlargement of fruits and tubers, and promotion of cell proliferation, through regulating cell division, expansion, differentiation, and patterning [15, 16]. In an attempt to understand the
molecular mechanism of auxin action, six auxin-responsive gene families have been identified and characterized in different species: the auxin response factor (ARF) gene family [17], the small auxin-up RNA (SAUR) gene family [18–20], the Gretchen Hagen 3 (GH3) gene family [21, 22], the auxin influx carrier (AUX1) gene family [23], the transport inhibitor response 1 (TIR1) gene family [24], and the auxin/indoleacetic acid (Aux/IAA) gene family [25, 26].
Dynamic spatial and temporal changes in auxin levels
can trigger gene reprogramming precisely and rapidly,
which requires auxin early response genes, such as the Aux/IAA, ARF, SAUR, and GH3 families. Among these genes,
auxin/indole-3-acetic acid (Aux/IAA) family members have been identified as short-lived nuclear proteins that represent a class of primary auxin-responsive genes and play a pivotal role in the perception and signaling of the plant hormone auxin [27, 28]. At high auxin levels, Aux/IAA proteins can be ubiquitinated by interacting with TIR1/AFB receptors and are subsequently degraded via the 26S proteasome [29, 30]; the differing stability of individual Aux/IAA proteins results in distinct auxin-sensing effects in different tissues and developmental phases [31, 32], thereby regulating the processes of plant growth and development in a precise manner.
The first isolated Aux/IAA genes were the PS-IAA4/5
and PS-IAA6 genes from pea [33, 34]. Subsequently, 14
Aux/IAA genes were isolated from Arabidopsis based on
their homology to the pea genes [35]. With the advent of genome sequencing, the Aux/IAA gene family has been identified in more than 30 plant species by genome-wide analysis [36–39]. Over the past two decades, members of
this family have been intensely studied in Arabidopsis and
shown to have distinct functions in plant growth and development processes. The mechanism by which the Aux/IAA
gene family responds to auxin stimulation has been effectively analyzed [40]. Aux/IAA genes encode short-lived
nuclear proteins, comprising four highly conserved domains
[41], namely, domains I and II, which are located at
the N-terminus, and domains III and IV located at
the C-terminus. Domain I has the amphiphilic motif
LXLXLX that is associated with ethylene response factors,
can bind to corepressors, and is required for the transcriptional inhibitory function of Aux/IAA proteins [40, 42].
The domain II core sequence VGWPP is the target of Aux/IAA protein ubiquitination for degradation [43–45].
Domains III and IV are sites that bind to the auxin response
factor, and their secondary structure can fold into a helix-loop-helix motif. Domain IV may also contribute to dimerization. Furthermore, domains II and IV generally contain two nuclear localization signals (NLS) [46]. In addition, a phytochrome phosphorylation site between domains I and II suggests that Aux/IAA proteins could link the auxin and light signaling pathways through phosphorylation by phytochrome [47].
Sequences derived from large-scale sequencing projects
are informative in functional genomics research, providing
an opportunity to scan gene families. Since the first publication of the buckwheat genome sequence, understanding of
the genome information of buckwheat has been greatly
enhanced [3]. In this study, we identified at least 25 putative
members of the buckwheat Aux/IAA gene family using an Aux/IAA domain-specific hidden Markov model (HMM) search of the whole genome. We then performed bioinformatics analyses, including phylogenetic, gene structure, and motif composition analyses, and determined the chromosomal locations of the genes. Subsequently, phylogenetic comparisons
with Arabidopsis and other crops were performed. This
study contributes to the clarification of the functions of
Aux/IAA proteins and provides a foundation for further
comparative genomic studies in Tartary buckwheat.
## 2. Results
2.1. Identification and Annotation of the Aux/IAA Genes in
Tartary Buckwheat. A total of 25 genes (shown in Table 1) were identified by Basic Local Alignment Search Tool (BLAST) searches using the conserved sequences generated from the HMM profile in Pfam, based on the 261-aa conserved sequence of Aux/IAA proteins from the potential orthologs in Arabidopsis. The genes were confirmed to contain the conserved domains of Aux/IAA proteins, and the transcripts with the lowest E-value in the domain examination were named FtAux/IAA genes. Sequence analysis of the 25 FtAux/IAAs showed that the predicted protein lengths ranged from 160 to 890 aa, and the CDS lengths varied from 540 bp to 2673 bp. Moreover, the theoretical isoelectric point (pI) ranged from 5.4 to 9.15, and the molecular weight (MW) from 20280.1 Da to 99377.01 Da.
2.2. Chromosomal Locations of FtAux/IAA. The FtAux/IAA
gene sequences were initially mapped onto the Tartary buckwheat genome, and all 25 FtAux/IAA genes were separately
mapped onto the eight chromosomes. Most FtAux/IAA genes were observed at the top and bottom arms of the chromosomes, and clusters were distributed across different chromosomes (Figure 1). Four genes (16%) were located on Chr. 1, and three genes on Chr. 2, comprising 12% of the total number of genes. Chr. 3 had six FtAux/IAA genes, the highest number on a single chromosome. The lowest proportion of genes (4% each) was on Chr. 4, Chr. 5, and Chr. 8, which contained one gene each. There were four (16%) and five (20%) genes on Chr. 6 and Chr. 7, respectively. In terms of distribution, the genes remained relatively clustered, with most of them occurring in clusters of two or three. In addition, the genes FtAux/IAA 01 and FtAux/IAA 02 were located adjacent to each other on the first chromosome and showed tight linkage. The same observation was made on Chr. 2, Chr. 3, Chr. 6, and Chr. 7, where there were two, four, two, and two closely linked genes, respectively. These data suggest that the distribution of some FtAux/IAA genes in the buckwheat genome probably results from either inverted or direct tandem duplication.
Table 1: Aux/IAA family in buckwheat.
Gene ID Chromosome CDS (bp) Introns No. of aa pI MW (Da)
FtPinG0008442000.01.T01 Chr1 1005 5 334 8.07 36303.04
FtPinG0008443000.01.T01 Chr1 2250 13 749 5.4 83729.44
FtPinG0000387700.01.T01 Chr1 1809 14 602 6.03 67736.99
FtPinG0004315700.01.T01 Chr1 678 4 225 6.06 24894.23
FtPinG0005029300.01.T01 Chr2 744 4 247 7.52 26767.1
FtPinG0000809900.01.T01 Chr2 1071 5 356 6.77 38851.25
FtPinG0000807700.01.T01 Chr2 2613 13 870 5.4 96264.2
FtPinG0006568700.01.T01 Chr3 591 2 196 6.63 21774.63
FtPinG0001961200.01.T01 Chr3 558 1 185 6.38 20844.57
FtPinG0007273100.01.T01 Chr3 798 3 265 8.42 29805.87
FtPinG0005142100.01.T01 Chr3 573 2 190 8.29 21405.48
FtPinG0005142700.01.T01 Chr3 621 4 206 5.44 22602.56
FtPinG0004530400.01.T01 Chr3 738 3 245 7.69 27386.98
FtPinG0005535200.01.T01 Chr4 615 1 204 5.98 23337.24
FtPinG0005745300.01.T01 Chr5 540 2 179 5.37 20362.13
FtPinG0007581000.01.T01 Chr6 693 2 230 8.23 25518.44
FtPinG0007581100.01.T01 Chr6 603 1 200 6.81 22823.65
FtPinG0001971700.01.T01 Chr6 2673 13 890 5.66 99377.01
FtPinG0002984800.01.T01 Chr6 915 4 160 7.66 33443.35
FtPinG0002846500.01.T01 Chr7 585 3 194 9.15 21554.57
FtPinG0007414500.01.T01 Chr7 696 4 231 6.62 25370.15
FtPinG0007414000.01.T01 Chr7 543 2 180 6.75 20280.1
FtPinG0007012600.01.T01 Chr7 552 1 183 5.58 20986.88
FtPinG0009157200.01.T01 Chr7 702 4 233 6.2 25555.18
FtPinG0009368700.01.T01 Chr8 1077 5 358 8.4 38717.69
The information listed in Table 1 was obtained from the Tartary Buckwheat Genome Project. CDS: coding sequence; aa: amino acids; pI: isoelectric point; MW: molecular weight.
Genes in the same evolutionary group have a similar structure and tend to have similar functions, as has been shown in other species such as Arabidopsis and rice [48]. We analyzed the intron-exon structure of the FtAux/IAA gene sequences using the PLAZA database (https://bioinformatics.psb.ugent.be/plaza/versions/plaza) and full-length cDNAs (Figure 1). All FtAux/IAAs had different
numbers of exons and introns in the translated region; the
number of introns and exons varied from 1 to 14 and 2 to
15, respectively. Four genes (FtAux/IAA9, FtAux/IAA14,
FtAux/IAA17, and FtAux/IAA23) contained two exons and
one intron. FtAux/IAA8, FtAux/IAA15, and FtAux/IAA22
contained three exons and two introns. Genes with four
exons were FtAux/IAA10, FtAux/IAA13, and FtAux/IAA20.
There were eight genes, namely, FtAux/IAA1, FtAux/IAA4,
FtAux/IAA5, FtAux/IAA12, FtAux/IAA16, FtAux/IAA19,
FtAux/IAA21, and FtAux/IAA24, with five exons. FtAux/IAA6 and FtAux/IAA25 had six exons and five introns.
There were three genes (FtAux/IAA2, FtAux/IAA7, and
FtAux/IAA18) containing 14 exons, and FtAux/IAA3 contained the largest number of exons (15). In general, genes of the
FtAux/IAA family showed rich structural variation in buckwheat and may be involved in various metabolic regulatory
networks and developmental processes.
2.3. Gene Peptide Sequence and Motif Composition of the
FtAux/IAA Gene Family. The peptide sequences of all 25
FtAux/IAAs are shown in Figure 2; all the results were verified using DNAMAN. The overall identity of the various
proteins is low, similar to that of the Aux/IAA polypeptides previously characterized in other plants. To
examine in detail the domain organization of FtAux/IAA
proteins, multiple sequence alignments of the full-length
protein sequences were performed using the ClustalX program. Alignment of the amino acid sequences of FtAux/IAA
revealed four typical highly conserved domains [34].
According to the Pfam results for the protein sequences, most of the genes contained all four conserved domains, the exceptions being FtAux/IAA10 and FtAux/IAA17, which lacked domain I. Many of the sequence variations were shared among domains II, III, and IV. A pairwise analysis of the full-length FtAux/IAA protein sequences indicated that the overall identities ranged from 19% to 69%; within the conserved domains, however, the amino acid identity reached 90%. Domain I contained a leucine-rich region and was the least conserved among the family members, whereas the proline-rich domain II was comparatively more conserved. The classification of all the genes as Aux/IAA family members was confirmed by constructing a
phylogenetic tree based on the domain III and IV amino acid sequences of the 25 FtAux/IAAs and two representative proteins. This amino acid sequence analysis yielded the same results as the gene structure analysis, consistent with analyses of other gene families.

[Figure 1 (chromosome map and gene-structure panels omitted). The figure assigns the following names to the 25 loci: FtAux/IAA 01, FtPinG0008442000.01.T01; FtAux/IAA 02, FtPinG0008443000.01.T01; FtAux/IAA 03, FtPinG0000387700.01.T01; FtAux/IAA 04, FtPinG0004315700.01.T01; FtAux/IAA 05, FtPinG0005029300.01.T01; FtAux/IAA 06, FtPinG0000809900.01.T01; FtAux/IAA 07, FtPinG0000807700.01.T01; FtAux/IAA 08, FtPinG0006568700.01.T01; FtAux/IAA 09, FtPinG0001961200.01.T01; FtAux/IAA 10, FtPinG0007273100.01.T01; FtAux/IAA 11, FtPinG0005142100.01.T01; FtAux/IAA 12, FtPinG0005142700.01.T01; FtAux/IAA 13, FtPinG0004530400.01.T01; FtAux/IAA 14, FtPinG0005535200.01.T01; FtAux/IAA 15, FtPinG0005745300.01.T01; FtAux/IAA 16, FtPinG0007581000.01.T01; FtAux/IAA 17, FtPinG0007581100.01.T01; FtAux/IAA 18, FtPinG0001971700.01.T01; FtAux/IAA 19, FtPinG0002984800.01.T01; FtAux/IAA 20, FtPinG0002846500.01.T01; FtAux/IAA 21, FtPinG0007414500.01.T01; FtAux/IAA 22, FtPinG0007414000.01.T01; FtAux/IAA 23, FtPinG0007012600.01.T01; FtAux/IAA 24, FtPinG0009157200.01.T01; FtAux/IAA 25, FtPinG0009368700.01.T01.]
Figure 1: Distribution and gene structure of FtAux/IAA genes among the eight chromosomes. Constrictions on the chromosomes (vertical bars) indicate the positions of genes. The chromosome numbers and sizes (Mb) are indicated at the top of each bar. The UTR and exon-intron organization of the FtAux/IAA genes is also shown; UTRs/exons and introns are represented by boxes and lines, respectively.
2.4. Gene Structure and Motif Composition of the FtAux/IAA
Gene Family. To study the evolutionary relationship of the
buckwheat Aux/IAA family genes, a phylogenetic tree was
constructed using the amino acid sequences of the FtAux/IAA genes. The sequences of buckwheat Aux/IAA proteins
were further analyzed using the online software MEME to
understand the diversity and evolutionary relationships.
Figure 3 shows that the FtAux/IAA proteins are grouped into seven distinct clades, with each group containing between one and five members of the FtAux/IAA family. Group I comprised four members: FtAux/IAA14, FtAux/IAA23, FtAux/IAA09, and FtAux/IAA08. Group II contained three members: FtAux/IAA22, FtAux/IAA15, and FtAux/IAA17. Group III, the largest, contained five genes: FtAux/IAA12, FtAux/IAA04, FtAux/IAA24, FtAux/IAA16, and FtAux/IAA21. In group IV, four members, FtAux/IAA05, FtAux/IAA25, FtAux/IAA01, and FtAux/IAA06, shared one branch. Group V contained only one gene, FtAux/IAA11. FtAux/IAA10, FtAux/IAA19, FtAux/IAA13, and FtAux/IAA20 comprised group VI, and four genes (FtAux/IAA03, FtAux/IAA18, FtAux/IAA02, and FtAux/IAA07) comprised group VII. Across the groups, seven sister gene pairs were found to be relatively closely related to other FtAux/IAA family members in the evolutionary tree. These results indicate that the functions of the FtAux/IAA genes in different groups are diverse.
Figure 2: Multiple sequence alignment of the full-length FtAux/IAA proteins obtained with DNAMAN. Conserved domains of FtAux/IAA proteins are underlined. Gene IDs are given on the left of each sequence and amino acid positions on the right.
The motifs and their functional domain distributions were highly conserved among the family genes, although there were significant differences (Figure 3(a)). In general, these genes can be divided into two categories: 20 genes carry the same three motifs in the shared 5-2-1 order, and one gene carries four motifs, that is, an additional motif 6 alongside the same 5-2-1 sequence. In addition, FtAux/IAA02, FtAux/IAA03, FtAux/IAA07, and FtAux/IAA18 contained nine motifs, in the order 6-9-4-3-7-10-8-2-1. These results are similar to those reported in a previous study, suggesting that these motifs may contribute to the specific functions of these genes [49]. Gene domains with different functions are shown in Figure 3(b): 14 genes contained only the Aux/IAA domain, and seven genes contained only the Herpes_BLLF1 superfamily domain, with all of these genes belonging to groups I to VI. The FtAux/IAA03, FtAux/IAA18, and FtAux/IAA02 genes in group VII had three domains (B3, Auxin_resp, and Aux/IAA superfamily), and FtAux/IAA07 had four domains, in the order B3, Auxin_resp, Herpes_BLLF1 superfamily, and Aux/IAA superfamily.
[Figure 3 (motif maps and domain maps for the 25 FtAux/IAA proteins omitted)]
Figure 3: Gene motif patterns and gene domains of FtAux/IAA genes from Tartary buckwheat. (a) The conserved motifs of the FtAux/IAA proteins are denoted by rectangles with different colors. (b) Gene domains with different functions are shown in different colored boxes.

2.5. Phylogenetic Analysis of the FtAux/IAA Genes in Maize, Arabidopsis, Rice, and Sorghum. To analyze the phylogenetic organization, we performed a phylogenetic analysis of 25 buckwheat Aux/IAAs and 36 Arabidopsis Aux/IAAs by generating a phylogenetic tree based on the neighbor-joining (NJ) method using MEGA [50]. Based on their phylogenetic relationships, we divided these Aux/IAAs into 10 groups, designated as groups I to X (Figure 4(a)). The family genes showed stronger clustering between buckwheat and Arabidopsis, and the nodes at the base of the larger clades
were not well supported, but the nodes at the base of many
smaller clades were robust. Buckwheat genes were concentrated in groups I, VI, VII, VIII, IX, and X. Genes in group
I were all buckwheat, and groups II, III, IV, and V contained
only Arabidopsis genes. In the other groups, the genes were
distributed in both buckwheat and Arabidopsis. A further phylogenetic analysis was performed using 30 rice (blue), 28 maize (green), 26 sorghum (gray), and 25 buckwheat (red) genes (Figure 4(b)). Interestingly, these phylogenetic analyses suggested that some Aux/IAA genes form species-specific clades or subclades that arose after the divergence of these species.
2.6. The Expression of the Aux/IAA Gene Family in Tartary Buckwheat. To explore the physiological roles of the FtAux/IAA genes and their response to auxin, we examined their expression in the roots, stems, and leaves at the two-leaf stage. The quantitative reverse transcription-polymerase chain reaction (qRT-PCR) results showing the expression of FtAux/IAA family genes in Tartary
buckwheat in different tissues are presented in Figure 5.

[Figure 4 (phylogenetic tree panels (a) and (b) omitted)]
Figure 4: Phylogenetic relationship of Aux/IAA proteins. (a) The tree was reconstructed using Aux/IAA sequences of Arabidopsis thaliana (gray) and buckwheat (blue). Evolutionary distances were computed using the p-distance method and are expressed in units of the number of amino acid substitutions per site. (b) The tree was reconstructed using Aux/IAA sequences of Oryza sativa (blue), Sorghum bicolor (gray), Zea mays (green), and buckwheat (red). Evolutionary distances were computed using the p-distance method and are expressed in units of the number of amino acid substitutions per site.

[Figure 5 (25 bar-chart panels omitted, one per FtAux/IAA gene, each showing relative expression in leaf, stem, and root)]
Figure 5: Expression of FtAux/IAA genes in different tissues from Tartary buckwheat. qRT-PCR was used to assess FtAux/IAA gene transcript levels in total RNA samples extracted from the leaves, stems, and roots of seedling plants at the two-leaf stage.
Overall, all genes except FtAux/IAA09 and FtAux/IAA23 were expressed in all three tissues. In the leaves, the expression levels of different genes varied greatly, with relative expression levels ranging from 1 to 3; among all genes, FtAux/IAA02 had the highest expression level in this tissue. However, the expression levels of most genes in the leaves were significantly lower than those in the stem and root tissues. The expression levels of FtAux/IAA01, FtAux/IAA03, FtAux/IAA07, FtAux/IAA13, FtAux/IAA18, FtAux/IAA20, and FtAux/IAA25 in stem tissue were lower than those in leaf tissue, although other genes had higher expression levels in the stem. In the roots, FtAux/IAA09 and FtAux/IAA23 were not expressed, FtAux/IAA11 and FtAux/IAA13 were only slightly expressed, and the remaining genes were clearly expressed. Overall, the tissue expression results showed that 18 genes were expressed at high levels in the stem, and four genes, FtAux/IAA08, FtAux/IAA09, FtAux/IAA11, and FtAux/IAA14, had significantly higher expression in the stem than in the leaf and root. FtAux/IAA02 had higher expression in the leaves than in the roots and stems. FtAux/IAA01, FtAux/IAA03, FtAux/IAA07, FtAux/IAA13, FtAux/IAA20, and FtAux/IAA25 had their highest expression in the roots. In addition, some genes showed significantly higher expression than other family members, and their relatively high expression across tissues suggests that they might play a role in seedling growth; for these genes, expression levels consistently fell in the mid-to-upper range across tissues. These results are similar to those of previous functional studies in soybean [51] and Arabidopsis thaliana [52].
As an important gene family responding to auxin signaling, Aux/IAA is the family most directly regulated by exogenous IAA. The expression patterns of FtAux/IAAs in plantlets after IAA treatment were investigated using qRT-PCR. After treatment with 10 μmol L−1 IAA for 3, 6, 9, and 12 h, the expression of Aux/IAA genes was consistently upregulated compared with that of the control (Figure 6). The expression levels of all 25 FtAux/IAAs displayed a similar pattern in response to IAA treatment and were upregulated in all tissues. In addition, we found that the expression levels of FtAux/IAAs increased to different degrees under short-term IAA treatment, similar to the results of previous studies [53]. After IAA treatment for 1, 2, and 3 days, gene expression showed diverse trends: the expression of genes such as FtAux/IAA04, FtAux/IAA07, FtAux/IAA14, and FtAux/IAA24 was significantly upregulated over time, whereas the expression of FtAux/IAA01, FtAux/IAA02, FtAux/IAA06, FtAux/IAA10, FtAux/IAA12, FtAux/IAA16, FtAux/IAA17, FtAux/IAA21, and FtAux/IAA25 was first upregulated and then downregulated, although none of these genes fell below control levels upon long-term treatment. In general, different genes showed different trends upon treatment for longer
periods of time. The expression of some genes was also different in the different tissues.

[Figure 6 (heat map omitted; one row per FtAux/IAA gene, with a color scale running from 4.00 to 1024.00 on a log scale)]
Figure 6: The pattern of transcript levels of 25 FtAux/IAA genes in buckwheat after IAA treatment compared with that of the control in different tissues. qRT-PCR was used to assess FtAux/IAA gene transcript levels in total RNA samples extracted from the leaves, stems, and roots after IAA treatment at the two-leaf stage.
## 3. Discussion
Auxin signaling is a key signaling pathway in many plant
biological processes, such as growth, organogenesis, and
response to a variety of environmental changes [54–56].
Among the six auxin-related gene families (Aux/IAA, ARF,
GH3, SAUR, AUX1, and TIR1), Aux/IAA is very important,
representing a class of primary auxin-responsive genes,
which are rapidly induced by auxin [57]. Therefore, studies
on the function of the Aux/IAA gene family are beneficial
for the analysis of plant development, stress resistance, and
other biological processes, as a gene family directly responding to IAA treatment [52, 58, 59]. In recent years, a large
number of Aux/IAA genes that regulate auxin signal transduction and auxin degradation have been identified in various plants [25, 39, 52, 60] by the comprehensive application
of physiological, genetic, molecular, and biochemical
methods [15]. The complete genomic sequence has opened
new avenues for understanding the plant genome and identifying gene families in Tartary buckwheat [3].
The comprehensive identification and subsequent characterization of the Tartary buckwheat Aux/IAA gene family
members described here provide new insights into the
potential role of some Aux/IAA genes in mediating plant
responses to auxin, their putative function, and their mode
of action. In this study, 25 FtAux/IAA genes were identified,
and the number of FtAux/IAA members from Tartary buckwheat was found to be comparable to that of Arabidopsis
[52, 61], rice [25], maize [39], tomato [36], cucumber [37],
hybrid aspen [60], chickpea, and soybean [62, 63], although
their genome sizes are quite different. These results indicate
that the Aux/IAA gene family exists widely in the plant
kingdom. Phylogenetic comparison of Aux/IAA proteins
between Tartary buckwheat and Arabidopsis thaliana
showed that there were genes similar to Arabidopsis thaliana
genes in all but two branches. In addition, Tartary buckwheat had two independent branches, which had no corresponding Arabidopsis thaliana genes. The same trends
were observed in the comparisons with rice, maize, sorghum, and other species. As an illustration of the wide
diversification of Aux/IAA proteins in higher plants, the two
clades are also expanded in Populus trichocarpa [38] and
Solanum lycopersicum [36]. This diversification is also
reflected by the important structural variations found within
the Aux/IAA proteins. This partially accounts for the Aux/IAA conservation in these species during the evolutionary
process ([25, 39, 64, 65]. Twelve of the 25 FtAux/IAA loci
formed six sister pairs in the NJ reconstructions, four of
which had strong bootstrap support, indicating that Aux/IAA genes in Tartary buckwheat may play nonredundant
roles during plant development. Considering that their
expression pattern is apparently restricted to narrow developmental stages and their atypical long-lived features, the
buckwheat noncanonical Aux/IAA proteins may have a specific function in mediating auxin responses during well-defined plant developmental events.
Gene structure analysis showed that the genes of this
family contained 2–15 exons and 1–14 introns. Eighteen of the genes had UTRs at both ends, and the other seven lacked UTRs at one or both ends. According to motif structure, the family genes can be divided into two groups: one group had nine motifs in a consistent order, although motif positions and gene lengths differed, while in the other group, 21 genes showed 3–4 motifs. These conserved motifs comprised several major conserved structures in the Aux/IAA family, such
as the Aux/IAA superfamily, Aux/IAA, and Herpes BLLF1
segments. These results suggest that a large proportion of Aux/IAA genes was produced by gene duplication events, segmental, tandem, or both, in the course of evolution [62, 66], and that the expanded Aux/IAA gene complement in land plants creates functional redundancy and may be associated with new functions for adapting to environmental changes [63, 67, 68].
Gene expression patterns in Tartary buckwheat seedlings
and responses to short- and long-term hormonal stimuli
were identified using qRT-PCR analysis, providing new
insights regarding their potential roles in mediating plant
responses to auxin. Transcript abundance in particular
organs at a given time is an important prerequisite for the
subsequent elucidation of the corresponding proteins
required for proper execution of developmental, metabolic,
and signaling processes. Virtually all 25 FtAux/IAA genes were expressed in all organs/tissues analyzed, but their expression levels varied considerably; that is, these genes are differentially expressed across tissues. There
were higher expression levels in the stem, and the expression
of these genes tended to be upregulated after IAA treatment.
The expression of FtAux/IAAs suggests that these genes could
be involved in the regulation of buckwheat growth and development. This study will pave the way for further functional
verification of the Aux/IAA gene family in buckwheat.
## 4. Materials and Methods
4.1. Plant Material and Hormone Treatments. Tartary buckwheat (Fagopyrum tataricum) seeds were sterilized, rinsed with sterile water, and sown in an improved Hoagland solution. Plants were grown under standard greenhouse conditions, and the culture chambers were set as follows: 14 h day/10 h night cycle, 25/20 °C day/night temperatures, 80% relative humidity, and 250 μmol m−2 s−1 light intensity. Roots, stems, and leaves at the seedling stage were collected for tissue-specific expression analysis of the buckwheat auxin response gene family. Seedlings at the same growth stage were treated with 10 μmol L−1 IAA for 24 h in Hoagland liquid medium. All tissues and organs were stored at −80 °C until RNA extraction.
4.2. Identification of the Auxin Response Gene Family in Buckwheat. The Tartary buckwheat genome was downloaded from the Tartary Buckwheat Genome Project (TBGP; available online: http://www.mbkbase.org/Pinku1/). The FtAux/IAA gene family members were identified using BLASTp searches; two complementary BLASTp strategies were used so that the maximum number of Aux/IAA genes was recovered. First, all known Arabidopsis Aux/IAA proteins were used as queries against the initial protein set on the TBGP website, and candidate genes were identified using a BLASTp search with a score value ≥ 100 and an E-value ≤ 1 × 10^-10. Second, the HMM file corresponding to the Aux/IAA domain (PF02519) was downloaded from the Pfam protein family database (http://pfam.sanger.ac.uk/), and Aux/IAA genes were retrieved from the Tartary buckwheat genomic database using HMMER 3.0 with the default parameter cutoff set to 0.01. The existence of the Aux/IAA core sequences was verified with the Pfam and SMART programs, and all candidate genes from the HMMER results that might contain the Aux/IAA domain were further verified. The sequence length, molecular weight, isoelectric point, and subcellular localization of the Aux/IAA proteins were determined using the ExPASy ProtParam website (available online: http://web.expasy.org/protparam/) [69, 70].
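As a rough sketch of this two-step screen, the fragment below chains hmmsearch (HMMER's standard search program, with the 0.01 cutoff quoted above) with Biopython's ProtParam module for the length, MW, and pI statistics. The file names proteome.fasta, PF02519.hmm, and auxiaa_hits.tbl are hypothetical placeholders, not files from this study.

```python
# Minimal sketch, not the authors' exact pipeline: scan a predicted
# proteome with the Pfam Aux/IAA profile (PF02519), then compute
# ProtParam-style statistics for the hits. File names are hypothetical.
import subprocess
from Bio import SeqIO
from Bio.SeqUtils.ProtParam import ProteinAnalysis

# Step 1: profile search with HMMER; -E 0.01 mirrors the cutoff above.
subprocess.run(
    ["hmmsearch", "-E", "0.01", "--tblout", "auxiaa_hits.tbl",
     "PF02519.hmm", "proteome.fasta"],
    check=True,
)

# Step 2: collect hit IDs from the tabular output (comment lines start '#').
hit_ids = set()
with open("auxiaa_hits.tbl") as tbl:
    for line in tbl:
        if not line.startswith("#"):
            hit_ids.add(line.split()[0])

# Step 3: report length, molecular weight (Da), and isoelectric point.
for record in SeqIO.parse("proteome.fasta", "fasta"):
    if record.id in hit_ids:
        seq = str(record.seq).replace("*", "")  # drop stop symbols
        stats = ProteinAnalysis(seq)
        print(record.id, len(seq),
              round(stats.molecular_weight(), 2),
              round(stats.isoelectric_point(), 2))
```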
4.3. Chromosomal Distribution Analysis of Aux/IAA Family Genes. All FtAux/IAA genes were mapped to the chromosomes using the physical location information obtained from the Tartary buckwheat genomic database and visualized with Circos [71]. The Multiple Collinearity Scan toolkit (MCScanX) was used to analyze gene duplication events with the default parameters [72]. To reveal the synteny relationships of orthologous Aux/IAA genes between Tartary buckwheat and the other selected species, syntenic analysis maps were constructed using the Dual Synteny Plotter software (available online: https://github.com/CJ-Chen/TBtools) [73]. The nonsynonymous (Ka) and synonymous (Ks) substitution rates for each duplicated Aux/IAA gene pair were calculated using KaKs_Calculator 2.0 [74].
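For the Ka/Ks step, a thin driver like the one below can run the calculation and flag the conventional interpretation (Ka/Ks < 1, purifying selection; > 1, positive selection). It assumes KaKs_Calculator 2.0's usual -i/-o/-m command-line options and a prepared pairwise coding alignment in AXT format; pairs.axt and pairs.kaks are hypothetical file names.

```python
# Minimal Ka/Ks driver sketch; assumes KaKs_Calculator 2.0 is on PATH
# and accepts its usual -i/-o/-m options. File names are hypothetical.
import csv
import subprocess

subprocess.run(
    ["KaKs_Calculator", "-i", "pairs.axt", "-o", "pairs.kaks", "-m", "YN"],
    check=True,
)

# The output is tab-delimited with header columns that include
# "Sequence", "Ka", "Ks", and "Ka/Ks".
with open("pairs.kaks") as fh:
    for row in csv.DictReader(fh, delimiter="\t"):
        try:
            ratio = float(row["Ka/Ks"])
        except (KeyError, ValueError, TypeError):
            continue  # skip pairs where Ka/Ks is undefined (e.g., Ks = 0)
        mode = "purifying" if ratio < 1 else "positive/neutral"
        print(row["Sequence"], ratio, mode)
```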
4.4. Gene Structure and Motif Characterization of FtAux/IAA Genes. Multiple sequence alignments of the FtAux/IAAs were performed over the highly conserved domains using DNAMAN [24], and the structures of the FtAux/IAA genes were explored using Clustal W with default parameters [70]. In addition, structural differences between FtAux/IAA proteins were predicted by comparing several conserved motif sequences with MEME Suite [75]. Motifs were evaluated using the Gene Structure Display Server (GSDS; http://gsds.cbi.pku.edu.cn/) with the following parameters: an optimum motif width of 6–200 and a maximum of 20 motifs [76].
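A MEME invocation matching the parameters above (protein input, motif width 6–200, up to 20 motifs) might look like the following sketch; ftauxiaa_proteins.fasta is a hypothetical input file holding the 25 FtAux/IAA protein sequences.

```python
# Minimal MEME invocation sketch; assumes the MEME Suite command-line
# tools are installed. The input file name is hypothetical.
import subprocess

subprocess.run(
    ["meme", "ftauxiaa_proteins.fasta",
     "-protein",          # protein alphabet
     "-nmotifs", "20",    # maximum number of motifs, as in the text
     "-minw", "6",        # minimum motif width
     "-maxw", "200",      # maximum motif width
     "-oc", "meme_out"],  # (over)write results to meme_out/
    check=True,
)
# Motif coordinates can then be read from meme_out/meme.txt or
# meme_out/meme.xml and drawn along each gene model.
```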
Table 2: Primer sequences of FtAux/IAA genes for qRT-PCR.
Name Forward primer (5′->3′) Reverse primer (5′->3′)
FtAux/IAA 01 ATGGTGCTCCATATCTGCGG CAATAGCGTCAGCGCCTTTC
FtAux/IAA 02 GAGCAAAGCGTCAGCAAACA CTGGGTACCGTGAACTGCTT
FtAux/IAA 03 CCCTATTTCCTGCCAAGCCA GGTCAACACCGAACAAACGG
FtAux/IAA 04 AGAAAAACGGCGATGTCCCT CGAGTCCTATGGCTTCCGAC
FtAux/IAA 05 TGAGAACGATGTGGGAACCG ACATCTTCTCCAAAGCCGCA
FtAux/IAA 06 GACTGGATGCTTGTGGGTGA AATGGCGTCAGAGCCTTTCA
FtAux/IAA 07 ATTGCCCCAAGTAGGAAGCC CCACGTGTTGTCGTGCAAAT
FtAux/IAA 08 GCTGTCCAAGAAGAACCCGA CCATCCCACAATCTGTGCCT
FtAux/IAA 09 CGGGTTAATGGATCCGGGTT ACGAACATCTCCCACGGAAC
FtAux/IAA 10 CGCAGCCTCCAAATCAATCG AGACGCGCAACCTCTTTACA
FtAux/IAA 11 GGCCTCCAGTTTGCTCGTAT CGAACGCTTTCGGTTCTTCC
FtAux/IAA 12 AGACAGAGCTCACTCTCGGT GGCGACCAGAGAGGTTCAAA
FtAux/IAA 13 GCCGGTGAACTCATTCCGTA AGCCGCTTTACGGTCGATAG
FtAux/IAA 14 CCAACCGACGACCACAAGTA TATAGGATTGAACCGGCGGC
FtAux/IAA 15 TTCAATGGGGTCAACCTCCG ACGAGCATCCAATCTCCGTC
FtAux/IAA 16 GGCCACCAGTGAGGTCATAC ATCGCCGTCTTTGTCTTCGT
FtAux/IAA 17 GCACTTCTTCCGATGCAAGC TGGTGGCCATCCAACAACTT
FtAux/IAA 18 CTCAGGGTCACAGTGAGCAG AGTCGGACTAGCCCTTGGAT
FtAux/IAA 19 GAAGCTCCAAGCACCAATGC TTTGAGCGGCAAGAAGACCT
FtAux/IAA 20 GTCACTGAACTCGCAAGGGA CTCGCTTCCACATGCAAAGG
FtAux/IAA 21 AGAGGCTTCTCTGAGACCGT TTCTCCGCGACCATTGACTC
FtAux/IAA 22 ACAACGTTGATGCCTCCGAA ATAAGGTGCTCCGTCCATGC
FtAux/IAA 23 AAAAGACCCGAGAGCGATCC CCCACGGAACATCTCCTACG
FtAux/IAA 24 GCCGTCCAAAAGAGTTGCAG GACCAACATCCAATCCCCGT
FtAux/IAA 25 TTAAGGCTTGGACTGCCTGG ATGGCGTCGGAACCTTTCAT
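As a quick sanity check on primer pairs such as those above, melting temperature and GC content can be estimated with Biopython's nearest-neighbor model; the sketch below uses the first Table 2 pair, and no acceptance thresholds are implied, since the paper states none.

```python
# Minimal primer-property sketch using Biopython's nearest-neighbor
# melting-temperature model; values are estimates, not assay results.
from Bio.SeqUtils import MeltingTemp

primers = {
    "FtAux/IAA 01 F": "ATGGTGCTCCATATCTGCGG",
    "FtAux/IAA 01 R": "CAATAGCGTCAGCGCCTTTC",
}

for name, seq in primers.items():
    tm = MeltingTemp.Tm_NN(seq)  # default nearest-neighbor parameters
    gc = 100 * (seq.count("G") + seq.count("C")) / len(seq)
    print(f"{name}: Tm ~ {tm:.1f} C, GC {gc:.0f}%")
```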
4.5. Analysis of Phylogenetic Relationships. Phylogenetic analysis of all complete FtAux/IAA protein sequences was performed with the MEGA 7 program using the NJ method [69]. The phylogenetic trees were divided into groups according to the conserved domains, and a bootstrap test was carried out with 1000 iterations [77, 78]. The same methods were applied to analyze the evolutionary relationships between buckwheat and Arabidopsis, as well as between buckwheat and rice, maize, and sorghum.
4.6. RNA Isolation and qRT-PCR Analysis. Total RNA was extracted using a total RNA extraction kit (Sangon, Shanghai, China; SK1321), and genomic DNA was removed by RNase-free DNase I treatment [12]. First-strand cDNA was generated by reverse transcription using M-MLV reverse transcriptase (TaKaRa, Dalian, China) according to the manufacturer's protocol. The housekeeping gene histone 3 (GenBank ID: HM628903) of Tartary buckwheat was used as the endogenous control [79]. The gene-specific primers are summarized in Table 2, and the qRT-PCR reactions were performed in a total volume of 20 μL (2 μL diluted cDNA, 1 μL each of forward and reverse primer, 10 μL SYBR Premix Ex Taq, and 6 μL ddH2O). The qPCR program was as follows: 95 °C for 3 min, followed by 30 cycles of 95 °C for 15 s, 60 °C for 30 s, and 72 °C for 20 s. Gene expression was calculated using the 2^-ΔΔCt method [80], and the mean of three biological replicates indicated the relative expression levels.
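The 2^-ΔΔCt calculation [80] reduces to a few lines; in the sketch below, histone 3 plays the role of the reference gene, and the Ct values are illustrative numbers only, not measurements from this study.

```python
# Minimal 2^-ΔΔCt sketch; the Ct values below are illustrative only.

def relative_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """Fold change of a target gene versus a reference gene (e.g.,
    histone 3), relative to a calibrator sample (e.g., the control)."""
    delta_ct_sample = ct_target - ct_ref              # normalize sample
    delta_ct_calibrator = ct_target_cal - ct_ref_cal  # normalize control
    delta_delta_ct = delta_ct_sample - delta_ct_calibrator
    return 2 ** (-delta_delta_ct)

# Example: Ct 22.1 (target) vs 18.0 (reference) in the treated sample,
# and 25.3 vs 18.2 in the control, gives 2^3 = 8-fold induction.
print(relative_expression(22.1, 18.0, 25.3, 18.2))  # -> 8.0
```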
## Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
## Conflicts of Interest
The authors declare no conflict of interest.
## Authors’ Contributions
DH and XS conceived and designed the experiments. FY, XZ, and RT wrote the manuscript. DH, FY, LZ, QC, and FL performed the experiments and analyzed the data. XS and DH revised the manuscript. All authors read and gave final approval for publication. Fan Yang, Xiuxia Zhang, and Ruifeng Tian contributed equally to the paper. Xuanjie Shi and Dongao Huo contributed equally to the paper.
## Acknowledgments
This work was supported by the National Key R&D Program of China (2019YFD1001300, 2019YFD1001303), the National Natural Science Foundation of China (31960415), the Henan Postdoctoral Research Project (001702029), the Henan Academy of Agricultural Sciences Special Fund for Scientific Research Development (2020YQ12), the Guizhou Provincial Science and Technology Foundation ([2019]1232), the Qiankehe Platform Talent program ([2020] 31960415), and the Teaching Content and Curriculum System Reform Project of Higher Education Institutions in Guizhou Province (2019202, 2020035).
## References
[1] Y. Wang and C. G. Campbell, “Tartary buckwheat breeding
(Fagopyrum tataricum L. Gaertn.) through hybridization with
its Rice-Tartary type,” Euphytica, vol. 156, no. 3, pp. 399–405,
2007.
[2] M. Zhou, C. Wang, L. Qi, X. Yang, Z. Sun, and Y. Tang,
“Ectopic expression of Fagopyrum tataricum FtMYB12
improves cold tolerance in Arabidopsis thaliana,” Journal of
Plant Growth Regulation, vol. 34, no. 2, pp. 362–371, 2015.
[3] L. J. Zhang, X. X. Li, B. Ma et al., “The Tartary buckwheat
genome provides insights into rutin biosynthesis and abiotic
stress tolerance,” Molecular Plant, vol. 10, no. 9, pp. 1224–
1237, 2017.
[4] G. Bonafaccia, M. Marocchini, and I. Kreft, “Composition and
technological properties of the flour and bran from common
and Tartary buckwheat,” Food Chemistry, vol. 80, no. 1,
pp. 9–15, 2003.
[5] N. Fabjan, J. Rode, I. J. Kosir, Z. Wang, Z. Zhang, and I. Kreft,
“Tartary buckwheat (Fagopyrum tataricum Gaertn.) as a
source of dietary rutin and quercitrin,” Journal of Agricultural
and Food Chemistry, vol. 51, no. 22, pp. 6452–6455, 2003.
[6] H. Wang, R. F. Chen, T. Iwashita, R. F. Shen, and J. F. Ma,
“Physiological characterization of aluminum tolerance and
accumulation in tartary and wild buckwheat,” New Phytologist,
vol. 205, no. 1, pp. 273–279, 2015.
[7] M. Nishimura, T. Ohkawara, Y. Sato et al., “Effectiveness of
rutin-rich Tartary buckwheat (Fagopyrum tataricum Gaertn.)
“Manten-Kirari” in body weight reduction related to its antioxidant properties: a randomised, double-blind, placebo-controlled
study,” Journal of Functional Foods, vol. 26, pp. 460–469, 2016.
[8] S. Abel and A. Theologis, “Early genes and auxin action,” Plant
Physiology, vol. 111, no. 1, pp. 9–17, 1996.
[9] R. Kumar, A. K. Tyagi, and A. K. Sharma, “Genome-wide analysis of auxin response factor (ARF) gene family from tomato
and analysis of their role in flower and fruit development,”
Molecular Genetics and Genomics, vol. 285, no. 3, pp. 245–
260, 2011.
[10] W. D. Teale, I. A. Paponov, and K. Palme, “Auxin in action:
signalling, transport and the control of plant growth and
development,” Nature Reviews Molecular Cell Biology, vol. 7,
no. 11, pp. 847–859, 2006.
[11] A. W. Woodward and B. Bartel, “Auxin: regulation, action and
interaction,” Annals of Botany, vol. 95, no. 5, pp. 707–735,
2005.
[12] D. Du, R. Jin, J. Guo, and F. Zhang, “Infection of embryonic
callus with Agrobacterium enables high-speed transformation
of maize,” International Journal of Molecular Sciences, vol. 20,
no. 2, p. 279, 2019.
[13] M. Hoffmann, M. Hentrich, and S. Pollmann, “Auxin-oxylipin
crosstalk: relationship of antagonists,” Journal of Integrative
Plant Biology, vol. 53, no. 6, pp. 429–445, 2011.
[14] K. Ljung, “Auxin metabolism and homeostasis during plant
development,” Development, vol. 140, no. 5, pp. 943–950,
2013.
[15] K. D. Liu, C. C. Yuan, H. L. Li et al., “Genome-wide identification and characterization of auxin response factor (ARF) family genes related to flower and fruit development in papaya
(Carica papaya L),” BMC Genomics, vol. 16, no. 1, p. 901, 2015.
[16] Y. J. Wang, D. X. Deng, Y. T. Shi, N. Miao, Y. L. Bian, and Z. T.
Yin, “Diversification, phylogeny and evolution of auxin
response factor (ARF) family: insights gained from analyzing
maize ARF genes,” Molecular Biology Reports, vol. 39, no. 3,
pp. 2401–2415, 2012.
[17] D. L. Remington, T. J. Vision, T. J. Guilfoyle, and J. W. Reed,
“Contrasting modes of diversification in the Aux/IAA and
ARF gene families,” Plant Physiology, vol. 135, no. 3,
pp. 1738–1752, 2004.
[18] M. N. Markakis, A. K. Boron, B. Van Loock et al., “Characterization of a small auxin-up RNA (SAUR)-like gene involved in
Arabidopsis thaliana development,” PLoS One, vol. 8, no. 11,
article e82596, 2013.
[19] N. Stortenbeker and M. Bemer, “The SAUR gene family: the
plant’s toolbox for adaptation of growth and development,”
Journal of Experimental Botany, vol. 70, no. 1, pp. 17–27, 2019.
[20] D. Weijers and J. Friml, “SnapShot: auxin signaling and transport,” Cell, vol. 136, no. 6, pp. 1172–1172.e1, 2009.
[21] G. Hagen and T. Guilfoyle, “Auxin-responsive gene expression: genes, promoters and regulatory factors,” Plant Molecular Biology, vol. 49, no. 3/4, pp. 373–385, 2002.
[22] E. Pierdonati, S. J. Unterholzner, E. Salvi et al., “Cytokinindependent control of GH3 group II family genes in the Arabidopsis root,” Plants, vol. 8, no. 4, p. 94, 2019.
[23] K. Hoyerova, P. Hosek, M. Quareshy et al., “Auxin molecular
field maps define AUX1 selectivity: many auxin herbicides
are not substrates,” New Phytologist, vol. 217, no. 4,
pp. 1625–1639, 2018.
[24] N. Dharmasiri, S. Dharmasiri, and M. Estelle, “The F-box protein TIR1 is an auxin receptor,” Nature, vol. 435, no. 7041,
pp. 441–445, 2005.
[25] M. Jain, N. Kaur, R. Garg, J. K. Thakur, A. K. Tyagi, and J. P.
Khurana, “Structure and expression analysis of early auxinresponsive Aux/IAA gene family in rice (Oryza sativa),” Functional Integrative Genomics, vol. 6, no. 1, pp. 47–59, 2006.
[26] J. Luo, J. J. Zhou, and J. Z. Zhang, “Aux/IAA gene family in
plants: molecular structure, regulation, and function,” International Journal of Molecular Sciences, vol. 19, no. 1, p. 259, 2018.
[27] P. W. Oeller, J. A. Keller, J. E. Park, J. E. Silbert, and
A. Theologis, “Structural characterization of the early indoleacetic acid-inducible genes, PS-IAA4/5 and PS-IAA6, of pea
(Pisum sativum L.),” Journal of Molecular Biology, vol. 233,
no. 4, pp. 789–798, 1993.
[28] M. Yamamoto and K. T. Yamamoto, “Differential effects of 1naphthaleneacetic acid, indole-3-acetic acid and 2,4-dichlorophenoxyacetic acid on the gravitropic response of roots in an
auxin-resistant mutant of arabidopsis, auxl,” Plant and Cell
Physiology, vol. 39, no. 6, pp. 660–664, 1998.
[29] N. Dharmasiri, S. Dharmasiri, D. Weijers et al., “Plant development is regulated by a family of auxin receptor F box proteins,” Developmental Cell, vol. 9, no. 1, pp. 109–119, 2005.
[30] X. Tan, L. I. A. Calderon-Villalobos, M. Sharon et al., “Mechanism of auxin perception by the TIR1 ubiquitin ligase,”
Nature, vol. 446, no. 7136, pp. 640–645, 2007.
[31] M. Lavy and M. Estelle, “Mechanisms of auxin signaling,”
Development, vol. 143, no. 18, pp. 3226–3229, 2016.
[32] J. Trenner, Y. Poeschl, J. Grau, A. Gogol-Döring, M. Quint,
and C. Delker, “Auxin-induced expression divergence between
Arabidopsis species may originate within the TIR1/AFBAUX/IAA-ARF module,” Journal of Experimental Botany,
vol. 68, p. 539, 2017.
[33] W. M. Ainley, J. C. Walker, R. T. Nagao, and J. L. Key,
“Sequence and characterization of two auxin-regulated genes
-----
International Journal of Genomics 13
from soybean.,” The Journal of Biological Chemistry, vol. 263,
no. 22, pp. 10658–10666, 1988.
[34] S. Abel, P. W. Oeller, and A. Theologis, “Early auxin-induced
genes encode short-lived nuclear proteins,” Proceedings of the
National Academy of Sciences, vol. 91, no. 1, pp. 326–330, 1994.
[35] T. W. Conner, V. H. Goekjian, P. R. LaFayette, and J. L. Key,
“Structure and expression of two auxin-inducible genes from
Arabidopsis,” Plant Molecular Biology, vol. 15, no. 4,
pp. 623–632, 1990.
[36] C. Audran-Delalande, C. Bassa, I. Mila, F. Regad, M. Zouine,
and M. Bouzayen, “Genome-wide identification, functional
analysis and expression profiling of the Aux/IAA gene family
in tomato,” Plant and Cell Physiology, vol. 53, no. 4, pp. 659–
672, 2012.
[37] D. Gan, D. Zhuang, F. Ding, Z. Yu, and Y. Zhao, “Identification and expression analysis of primary auxin-responsive
Aux/IAA gene family in cucumber (Cucumis sativus),” Journal
of Genetics, vol. 92, no. 3, pp. 513–521, 2013.
[38] U. C. Kalluri, S. P. DiFazio, A. M. Brunner, and G. A. Tuskan,
“Genome-wide analysis of Aux/IAA and ARF gene families in
Populus trichocarpa,” BMC Plant Biology, vol. 7, no. 1, p. 59,
2007.
[39] Y. Wang, D. Deng, Y. Bian, Y. Lv, and Q. Xie, “Genome-wide
analysis of primary auxin-responsive Aux/IAA gene family in
maize (Zea mays L.),” Molecular Biology Reports, vol. 37, no. 8,
pp. 3991–4001, 2010.
[40] S. B. Tiwari, G. Hagen, and T. J. Guilfoyle, “The roles of auxin
response factor domains in auxin-responsive transcription,”
The Plant Cell, vol. 15, no. 2, pp. 533–543, 2003.
[41] S. X. Wang, F. Y. Shi, X. X. Dong, Y. X. Li, Z. H. Zhang, and
L. I. He, “Genome-wide identification and expression analysis
of auxin response factor (ARF) gene family in strawberry
(Fragaria vesca),” Journal of Integrative Agriculture, vol. 18,
no. 7, pp. 1587–1603, 2019.
[42] H. Szemenyei, M. Hannon, and J. A. Long, “TOPLESS
mediates auxin-dependent transcriptional repression during
Arabidopsis embryogenesis,” Science, vol. 319, no. 5868,
pp. 1384–1386, 2008.
[43] N. A. Eckardt, “Auxin and the power of the proteasome in
plants,” The Plant Cell, vol. 13, no. 10, pp. 2161–2163, 2001.
[44] S. Kepinski and O. Leyser, “The Arabidopsis F-box protein
TIR1 is an auxin receptor,” Nature, vol. 435, no. 7041,
pp. 446–451, 2005.
[45] S. B. Tiwari, G. Hagen, and T. J. Guilfoyle, “Aux/IAA proteins
contain a potent transcriptional repression domain,” The Plant
Cell, vol. 16, no. 2, pp. 533–543, 2004.
[46] J. Kim, K. Harter, and A. Theologis, “Protein–protein interactions among the Aux/IAA proteins,” Proceedings of the
National Academy of Sciences, vol. 94, no. 22, pp. 11786–
11791, 1997.
[47] A. Colon-Carmona, R. You, T. Haimovitch-Gal, and
P. Doerner, “Spatio-temporal analysis of mitotic activity with
a labile cyclin-GUS fusion protein,” Plant Journal, vol. 20,
no. 4, pp. 503–508, 1999.
[48] P. J. Overvoorde, Y. Okushima, J. M. Alonso et al., “Functional
genomic analysis of the AUXIN/INDOLE-3-ACETICACID
gene family members in Arabidopsis thaliana,” Plant Cell,
vol. 17, no. 12, pp. 3282–3300, 2005.
[49] L. J. Zhao, Z. W. Zhang, L. I. Yu, and T. Y. Wang, “Genetic
diversity in tartary buckwheat based on ISSR markers,” Plant
Genetic Resources, vol. 7, no. 2, pp. 159–164, 2006.
[50] K. Tamura, D. Peterson, N. Peterson, G. Stecher, M. Nei, and
S. Kumar, “MEGA5: molecular evolutionary genetics analysis
using maximum likelihood evolutionary distance, and maximum parsimony methods,” Molecular Biology and Evolution,
vol. 28, no. 10, pp. 2731–2739, 2011.
[51] Y. N. Wang, K. X. Li, L. Chen et al., “MicroRNA167-directed
regulation of the auxin response factors GmARF8a and
GmARF8b is required for soybean nodulation and lateral root
development,” Plant Physiology, vol. 168, no. 3, pp. 984–999,
2015.
[52] K. A. Dreher, J. Brown, R. E. Saw, and J. Callis, “The Arabidopsis Aux/IAA protein family has diversified in degradation and
auxin responsiveness,” The Plant Cell, vol. 18, no. 3, pp. 699–
714, 2006.
[53] E. K. Yoon, J. H. Yang, J. Lim, S. H. Kim, S. K. Kim, and W. S.
Lee, “Auxin regulation of the microRNA390-dependent transacting small interfering RNA pathway in Arabidopsis lateral
root development,” Nucleic Acids Research, vol. 38, no. 4,
pp. 1382–1391, 2010.
[54] N. Dharmasiri and M. Estelle, “Auxin signaling and regulated
protein degradation,” Trends in Plant Science, vol. 9, no. 6,
pp. 302–308, 2004.
[55] S. Goldental-Cohen, A. Israeli, N. Ori, and H. Yasuor, “Auxin
response dynamics during wild-type and entire flower development in tomato,” Plant & Cell Physiology, vol. 58, no. 10,
pp. 1661–1672, 2017.
[56] E. Sundberg and L. Østergaard, “Distinct and dynamic auxin
activities during reproductive development,” Cold Spring Harbor Perspectives in Biology, vol. 1, 2009.
[57] A. Theologis, T. V. Huynh, and R. W. Davis, “Rapid induction
of specific mRNAs by auxin in pea epicotyl tissue,” Journal of
Molecular Biology, vol. 183, no. 1, pp. 53–68, 1985.
[58] T. J. Guilfoyle and G. Hagen, “Auxin response factors,” Current Opinion in Plant Biology, vol. 10, no. 5, pp. 453–460, 2007.
[59] D. Weijers, E. Benkova, K. E. Jäger et al., “Developmental specificity of auxin response by pairs of ARF and Aux/IAA transcriptional regulators,” EMBO Journal, vol. 24, no. 10,
pp. 1874–1885, 2005.
[60] R. Moyle, J. Schrader, A. Stenberg et al., “Environmental and
auxin regulation of wood formation involves members of the
Aux/IAA gene family in hybrid aspen,” The Plant Journal,
vol. 31, no. 6, pp. 675–685, 2002.
[61] J. W. Reed, “Roles and activities of Aux/IAA proteins in Arabidopsis,” Trends in Plant Science, vol. 6, no. 9, pp. 420–425, 2001.
[62] V. K. Singh and M. Jain, “Genome-wide survey and comprehensive expression profiling of Aux/IAA gene family in chickpea and soybean,” Trends in Plant Science, vol. 6, p. 918, 2015.
[63] V. K. Singh, M. Rajkumar, R. Garg, and M. Jain, “Genomewide identification and co-expression network analysis provide insights into the roles of auxin response factor gene family
in chickpea,” Scientific Reports, vol. 7, no. 1, p. 10895, 2017.
[64] X. Huang, X. Bai, T. Guo et al., “Genome-wide analysis of the
PIN auxin efflux carrier gene family in coffee,” Plants, vol. 9,
no. 9, p. 1061, 2020.
[65] E. Liscum and J. W. Reed, “Genetics of Aux/IAA and ARF
action in plant growth and development,” Plant Molecular
Biology, vol. 49, no. 3/4, pp. 387–400, 2002.
[66] W. Wu, Y. Liu, Y. Wang et al., “Evolution analysis of the Aux/IAA gene family in plants shows dual origins and variable
nuclear localization signals,” International Journal of Molecular Sciences, vol. 18, no. 10, p. 2107, 2017.
-----
14 International Journal of Genomics
[67] K. Ishizaki, “Evolution of land plants: insights from molecular
studies on basal lineages,” Bioscience, Biotechnology, and Biochemistry, vol. 81, no. 1, pp. 73–80, 2017.
[68] D. Weijers and D. Wagner, “Transcriptional responses to the
auxin hormone,” Annual Review of Plant Biology, vol. 67,
no. 1, pp. 539–574, 2016.
[69] J. D. Thompson, D. G. Higgins, and T. J. Gibson, “CLUSTAL
W: improving the sensitivity of progressive multiple sequence
alignment through sequence weighting, position-specific gap
penalties and weight matrix choice,” Nucleic Acids Research,
vol. 22, no. 22, pp. 4673–4680, 1994.
[70] J. D. Thompson, T. J. Gibson, and D. G. Higgins, “Multiple
sequence alignment using ClustalW and ClustalX,” Current
protocols in bioinformatics, vol. 1, pp. 2-3, 2003.
[71] M. Liu, Z. Ma, A. Wang et al., “Genome-wide investigation of
the auxin response factor gene family in tartary buckwheat
(Fagopyrum tataricum),” International Journal of Molecular
Sciences, vol. 19, no. 11, pp. 3526–3544, 2018.
[72] Y. Wang, H. Tang, J. D. Debarry et al., “MCScanX: a toolkit for
detection and evolutionary analysis of gene synteny and collinearity,” Nucleic Acids Research, vol. 40, no. 7, article e49, 2012.
[73] C. Liu, T. Xie, C. Chen et al., “Genome-wide organization and
expression profiling of the R2R3-MYB transcription factor
family in pineapple (Ananas comosus),” BMC Genome,
vol. 18, no. 1, p. 503, 2017.
[74] D. Wang, Y. Zhang, Z. Zhang, J. Zhu, and J. Yu, “KaKs_Calculator 2.0: a toolkit incorporating gamma-series methods and
sliding window strategies,” Genomics, Proteomics & Bioinformatics, vol. 8, no. 1, pp. 77–80, 2010.
[75] W. Liu, Z. Zhang, W. Li et al., “Genome-wide identification
and comparative analysis of the 3-hydroxy-3-methylglutaryl
coenzyme a reductase (HMGR) gene family in Gossypium,”
Molecules, vol. 23, no. 2, pp. 193–211, 2018.
[76] T. L. Bailey, M. Boden, F. A. Buske et al., “MEME SUITE: tools
for motif discovery and searching,” Nucleic Acids Research,
vol. 37, pp. W202–W208, 2009.
[77] B. K. Karanja, L. Fan, X. Liang et al., “Genome-wide characterization of the WRKY gene family in radish (Raphanus sativus L.)
reveals its critical functions under different abiotic stresses,”
Plant Ccell Reports, vol. 36, no. 11, article 17571773, 2017.
[78] K. Tamura, G. Stecher, D. Peterson, A. Filipski, and S. Kumar,
“MEGA6: molecular evolutionary genetics analysis version
6.0,” Molecular Biology and Evolution, vol. 30, no. 12,
pp. 2725–2729, 2013.
[79] Y. C. Bai, C. L. Li, J. W. Zhang et al., “Characterization of two
tartary buckwheat R2R3-MYB transcription factors and their
regulation of proanthocyanidin biosynthesis,” Plant Physiology, vol. 152, no. 3, pp. 431–440, 2014.
[80] S. Chandra, A. Z. Kazmi, Z. Ahmed et al., “Genome-wide identification and characterization of NB-ARC resistant genes in
wheat (Triticum aestivum L.) and their expression during leaf
rust infection,” Plant Cell Reports, vol. 36, no. 7, pp. 1097–
1112, 2017.
-----
| 19,288
|
{
"disclaimer": "Notice: Paper or abstract available at https://pmc.ncbi.nlm.nih.gov/articles/PMC8564212, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "CCBY",
"status": "GOLD",
"url": "https://downloads.hindawi.com/journals/ijg/2021/3102399.pdf"
}
| 2021
|
[
"JournalArticle"
] | true
| 2021-10-26T00:00:00
|
[
{
"paperId": "12c5d877d5fb35d6db4aa848e7056979c78d59c2",
"title": "Genome-Wide Analysis of the PIN Auxin Efflux Carrier Gene Family in Coffee"
},
{
"paperId": "ac8092d76b8fbf6e90b61fd0ffb57fbdf8675e97",
"title": "Genome-wide identification and expression analysis of auxin response factor (ARF) gene family in strawberry (Fragaria vesca)"
},
{
"paperId": "c1e383c3169ab741097c2a156505190a573bbf40",
"title": "Cytokinin-Dependent Control of GH3 Group II Family Genes in the Arabidopsis Root"
},
{
"paperId": "92fdbacb11744d583e97efd95f522e10ed79c774",
"title": "Infection of Embryonic Callus with Agrobacterium Enables High-Speed Transformation of Maize"
},
{
"paperId": "c10df8d82c36a60116bdab50b00c2f7100e30dd6",
"title": "Genome-Wide Investigation of the Auxin Response Factor Gene Family in Tartary Buckwheat (Fagopyrum tataricum)"
},
{
"paperId": "9bef5039ddb6d4773f265e65a1f94aafaac99dcc",
"title": "The SAUR gene family: the plant's toolbox for adaptation of growth and development."
},
{
"paperId": "223c995803e0f120d82848524888bb758775f554",
"title": "Auxin molecular field maps define AUX1 selectivity: many auxin herbicides are not substrates."
},
{
"paperId": "380dd15012752b6659988849d083b978adf40011",
"title": "Genome-Wide Identification and Comparative Analysis of the 3-Hydroxy-3-methylglutaryl Coenzyme A Reductase (HMGR) Gene Family in Gossypium"
},
{
"paperId": "cd5956adf41ebdb0103072a3b273f1654df093ae",
"title": "Aux/IAA Gene Family in Plants: Molecular Structure, Regulation, and Function"
},
{
"paperId": "62831be1a67957f0ab142ddfea4172180ff055e1",
"title": "Auxin Response Dynamics During Wild-Type and entire Flower Development in Tomato"
},
{
"paperId": "4283148f081b914ce04ed155788669c201f7cd33",
"title": "Evolution Analysis of the Aux/IAA Gene Family in Plants Shows Dual Origins and Variable Nuclear Localization Signals"
},
{
"paperId": "a01db4e808355b181a428d0495f04f6cca2822a0",
"title": "The Tartary Buckwheat Genome Provides Insights into Rutin Biosynthesis and Abiotic Stress Tolerance."
},
{
"paperId": "40cf1f681ae01a5632a5a4e4e7109cbcaef41ec7",
"title": "Genome-wide identification and co-expression network analysis provide insights into the roles of auxin response factor gene family in chickpea"
},
{
"paperId": "6fde2e8a3e14bbe658c93e71e1ee91494f29a713",
"title": "Genome-wide characterization of the WRKY gene family in radish (Raphanus sativus L.) reveals its critical functions under different abiotic stresses"
},
{
"paperId": "02af0c8a4321d9eb81ae3e9cf3a25c59c9cebbc2",
"title": "Genome-wide organization and expression profiling of the R2R3-MYB transcription factor family in pineapple (Ananas comosus)"
},
{
"paperId": "4f8cee97a9a21c76ff6ef1270eab0a790e77d6fb",
"title": "Genome-wide organization and expression profiling of the R2R3-MYB transcription factor family in pineapple (Ananas comosus)"
},
{
"paperId": "6e7dc857d20c974fa0c276e6dda816f956612eb8",
"title": "Genome-wide identification and characterization of NB-ARC resistant genes in wheat (Triticum aestivum L.) and their expression during leaf rust infection"
},
{
"paperId": "b0dcbdede6b6ffe02b4bb2dced7d7341a642daa5",
"title": "Evolution of land plants: insights from molecular studies on basal lineages"
},
{
"paperId": "376a486d8009f9df2c666397381154d6bd7d6c9e",
"title": "Auxin-induced expression divergence between Arabidopsis species may originate within the TIR1/AFB–AUX/IAA–ARF module"
},
{
"paperId": "efa7c5891d74b322eed1d8112d51951d2df23882",
"title": "Effectiveness of rutin-rich Tartary buckwheat (Fagopyrum tataricum Gaertn.) ‘Manten-Kirari’ in body weight reduction related to its antioxidant properties: A randomised, double-blind, placebo-controlled study"
},
{
"paperId": "e7aa2306d5d5f7c63c16a9fced359e4df6050966",
"title": "Mechanisms of auxin signaling"
},
{
"paperId": "9f735fee72f121fa6067b79d293c63c1b6390758",
"title": "Transcriptional Responses to the Auxin Hormone."
},
{
"paperId": "257aea49a48a3fc1e60b2e159eaaf9b9343253b2",
"title": "Genome-wide identification and characterization of auxin response factor (ARF) family genes related to flower and fruit development in papaya (Carica papaya L.)"
},
{
"paperId": "5ffe4544398e15439e7828ac3b07590471307881",
"title": "Genome-wide survey and comprehensive expression profiling of Aux/IAA gene family in chickpea and soybean"
},
{
"paperId": "88afc2c020040cdc481063f81eededd0959cbdbb",
"title": "MicroRNA167-Directed Regulation of the Auxin Response Factors GmARF8a and GmARF8b Is Required for Soybean Nodulation and Lateral Root Development1[OPEN]"
},
{
"paperId": "01f1383074adbe9a7121e21d2d3876ce390e35cd",
"title": "Ectopic Expression of Fagopyrum tataricumFtMYB12 Improves Cold Tolerance in Arabidopsis thaliana"
},
{
"paperId": "3d8d9fbecfcb9e88594077383be1dfb3db23b2ab",
"title": "Characterization of two tartary buckwheat R2R3-MYB transcription factors and their regulation of proanthocyanidin biosynthesis."
},
{
"paperId": "3235abe0af534d371fb6c50796cde8f246c76682",
"title": "Identification and expression analysis of primary auxin-responsive Aux/IAA gene family in cucumber (Cucumis sativus)"
},
{
"paperId": "4c2b018392eca7e54588ae6ed8505985c878d8ba",
"title": "MEGA6: Molecular Evolutionary Genetics Analysis version 6.0."
},
{
"paperId": "c5db713ce15f21496ba4e2fc913073d7ef6fbea6",
"title": "Characterization of a Small Auxin-Up RNA (SAUR)-Like Gene Involved in Arabidopsis thaliana Development"
},
{
"paperId": "4067d0ad61859c22661165e017c356a98bc12ae0",
"title": "Auxin metabolism and homeostasis during plant development"
},
{
"paperId": "c7529895439b0cf694cf338e7b3a84a17bea49d6",
"title": "Genome-wide identification, functional analysis and expression profiling of the Aux/IAA gene family in tomato."
},
{
"paperId": "93a83472e1389fac3c6d794837308002f6e1112d",
"title": "MCScanX: a toolkit for detection and evolutionary analysis of gene synteny and collinearity"
},
{
"paperId": "c700eee5f49366deed6d15f625083709cd323485",
"title": "MEGA5: molecular evolutionary genetics analysis using maximum likelihood, evolutionary distance, and maximum parsimony methods."
},
{
"paperId": "47223d29c17e4f7445d0d0287549d518c3fc85f7",
"title": "Diversification, phylogeny and evolution of auxin response factor (ARF) family: insights gained from analyzing maize ARF genes"
},
{
"paperId": "a737f6bb07ad865b5989ddda65c7920f3a244085",
"title": "Auxin-oxylipin crosstalk: relationship of antagonists."
},
{
"paperId": "b054d523593a92f209c08499df5f1b56013ea906",
"title": "Genome-wide analysis of auxin response factor (ARF) gene family from tomato and analysis of their role in flower and fruit development"
},
{
"paperId": "3c11f5769160a2ad77eebfda770d364af177702c",
"title": "Genome-wide analysis of primary auxin-responsive Aux/IAA gene family in maize (Zea mays. L.)"
},
{
"paperId": "727448871e676908060345d8529f06b8c3d28045",
"title": "KaKs_Calculator 2.0: A Toolkit Incorporating Gamma-Series Methods and Sliding Window Strategies"
},
{
"paperId": "d4aa79d643d979fb9e349812540d7b2041038f41",
"title": "Auxin regulation of the microRNA390-dependent transacting small interfering RNA pathway in Arabidopsis lateral root development"
},
{
"paperId": "56c4f08aff376c07f3cb014f0ed104d279cc5bf3",
"title": "Distinct and dynamic auxin activities during reproductive development."
},
{
"paperId": "f804dfb886b1424571f0446df10074af63c57921",
"title": "MEME Suite: tools for motif discovery and searching"
},
{
"paperId": "3bdb170354c05c92940267bdb9a9ffd53980e860",
"title": "SnapShot: Auxin Signaling and Transport"
},
{
"paperId": "244307b4ad5efd72f69e084c12fd9454e48fc10a",
"title": "TOPLESS Mediates Auxin-Dependent Transcriptional Repression During Arabidopsis Embryogenesis"
},
{
"paperId": "102dea5c10f7a5cce8084b8509e22b75d2b42b32",
"title": "Genome-wide analysis of Aux/IAA and ARF gene families in Populus trichocarpa"
},
{
"paperId": "24494b4d26680a74f2f8a708f4277378d0db387e",
"title": "Mechanism of auxin perception by the TIR1 ubiquitin ligase"
},
{
"paperId": "86b102bd2d05287c5aa1aa5f98f87e4aab653c4d",
"title": "Tartary buckwheat breeding (Fagopyrum tataricum L. Gaertn.) through hybridization with its Rice-Tartary type"
},
{
"paperId": "a8ef6022ed810f10786faecd0f2188ac0b811e09",
"title": "Auxin in action: signalling, transport and the control of plant growth and development"
},
{
"paperId": "77a257cb704be6f776159de61fea8ded074e156f",
"title": "The Arabidopsis Aux/IAA Protein Family Has Diversified in Degradation and Auxin Responsiveness[W]"
},
{
"paperId": "a1904bbbb8969542543885951ac8dc5a27dcdad7",
"title": "Functional Genomic Analysis of the AUXIN/INDOLE-3-ACETIC ACID Gene Family Members in Arabidopsis thaliana[W]"
},
{
"paperId": "a044b5421862a1faeba4f460045747e91d5c21dd",
"title": "Plant development is regulated by a family of auxin receptor F box proteins."
},
{
"paperId": "50e9daf99c6c278ba3d1bfcaee29da2c7a9ee033",
"title": "The Arabidopsis F-box protein TIR1 is an auxin receptor"
},
{
"paperId": "c80f59cbe073dbdec3866035985232438580c3fe",
"title": "The F-box protein TIR1 is an auxin receptor"
},
{
"paperId": "ab3c9d317b572934d70164673acf0e29d57b0c85",
"title": "Developmental specificity of auxin response by pairs of ARF and Aux/IAA transcriptional regulators"
},
{
"paperId": "63544f859e4940b11509615c04a728eaa57e033a",
"title": "Auxin: regulation, action, and interaction."
},
{
"paperId": "7306fc3744c7cfe15256ef2b81034209bca259fd",
"title": "Contrasting Modes of Diversification in the Aux/IAA and ARF Gene Families1[w]"
},
{
"paperId": "764988a6cc9a3fbf781e1b7ca95baba0df0cd2f5",
"title": "Auxin signaling and regulated protein degradation."
},
{
"paperId": "95a56d13fe88a3ca6f9f627eb4a20a60a9463979",
"title": "Aux/IAA Proteins Contain a Potent Transcriptional Repression Domain"
},
{
"paperId": "94f8b3cc23e42dc23c3b5d7cae6b7e32b46967d4",
"title": "Tartary buckwheat (Fagopyrum tataricum Gaertn.) as a source of dietary rutin and quercitrin."
},
{
"paperId": "b7d6eebc6c247afaee69c0b91ac18b86d526bf25",
"title": "The Roles of Auxin Response Factor Domains in Auxin-Responsive Transcription Article, publication date, and citation information can be found at www.plantcell.org/cgi/doi/10.1105/tpc.008417."
},
{
"paperId": "86cd46e58a1618358b7e09e989f5b33aac4241d4",
"title": "Multiple Sequence Alignment Using ClustalW and ClustalX"
},
{
"paperId": "feb6bb344edbbf2da50ba7f839d0b14c43394813",
"title": "Environmental and auxin regulation of wood formation involves members of the Aux/IAA gene family in hybrid aspen."
},
{
"paperId": "b0a09fbc959f81ee15dd0a22842f87a177420041",
"title": "Genetics of Aux/IAA and ARF action in plant growth and development"
},
{
"paperId": "e2da859193e565f3a16afe8560edc46b33d90382",
"title": "Auxin-responsive gene expression: genes, promoters and regulatory factors"
},
{
"paperId": "50b8bd7fbd98a16d16d54d9dfd364d963bc04370",
"title": "Auxin and the Power of the Proteasome in Plants"
},
{
"paperId": "d1551371c30b49cc56931e9300e5e46d196c3d18",
"title": "Auxin Response Factors"
},
{
"paperId": "38d3c85e99fe8d849b81950550284073d9c7f4fb",
"title": "Roles and activities of Aux/IAA proteins in Arabidopsis."
},
{
"paperId": "0973d77b1fb1ffa5a84bad265433ac22cf9ffd29",
"title": "Differential effects of 1-naphthaleneacetic acid, indole-3-acetic acid and 2,4-dichlorophenoxyacetic acid on the gravitropic response of roots in an auxin-resistant mutant of arabidopsis, aux1."
},
{
"paperId": "f74430394687d3faf7b4a6926bc57ed7db41d9da",
"title": "Protein-protein interactions among the Aux/IAA proteins."
},
{
"paperId": "eb86b67afe91a972f9fe9e2bdb8a98bf36c9f3d6",
"title": "Early Genes and Auxin Action"
},
{
"paperId": "847eab3db971b86aa434f2a60ffeef7317be5d42",
"title": "CLUSTAL W: improving the sensitivity of progressive multiple sequence alignment through sequence weighting, position-specific gap penalties and weight matrix choice."
},
{
"paperId": "46957b90b0c10854e9e0f50d1b9427f9be1be6c7",
"title": "Early auxin-induced genes encode short-lived nuclear proteins."
},
{
"paperId": "7d705370124c570e3378f8355f8f137823217d5a",
"title": "Structural characterization of the early indoleacetic acid-inducible genes, PS-IAA4/5 and PS-IAA6, of pea (Pisum sativum L.)."
},
{
"paperId": "86c761cc4dc4facdff3cd7d3e687012f4a2961ac",
"title": "Structure and expression of two auxin-inducible genes from Arabidopsis"
},
{
"paperId": "12ec518ebadd28944568d9fe3a776b7e4bb7ad76",
"title": "Rapid induction of specific mRNAs by auxin in pea epicotyl tissue."
},
{
"paperId": "110fecd419e116b5f99eeff8788885a372fa2975",
"title": "Physiological characterization of aluminum tolerance and accumulation in tartary and wild buckwheat."
},
{
"paperId": "5204c91192f52470ab38f994ad0554d77e8b6ddf",
"title": "Genetic Diversity in Tartary Buckwheat Based on ISSR Markers"
},
{
"paperId": "4963e299611079b914a3bae1501b39bfff6c74c5",
"title": "Structure and expression analysis of early auxin-responsive Aux/IAA gene family in rice (Oryza sativa)"
},
{
"paperId": "d3359a339fbc9d2764d952f36c9bdfadc9cc532a",
"title": "Composition and technological properties of the flour and bran from common and tartary buckwheat"
},
{
"paperId": "78b0e99051f106c10454389ec1b6d8d2c6f1385e",
"title": "Technical advance: spatio-temporal analysis of mitotic activity with a labile cyclin-GUS fusion protein."
},
{
"paperId": null,
"title": "Sequence and characterization of two auxin-regulated genes 12 International Journal of Genomics from soybean"
}
] | 19,288
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Engineering",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/00159a43bf50d7133c490a38339afdd626c5a975
|
[
"Computer Science"
] | 0.854539
|
HPBS: A Hybrid Proxy Based Authentication Scheme in VANETs
|
00159a43bf50d7133c490a38339afdd626c5a975
|
IEEE Access
|
[
{
"authorId": "2377947592",
"name": "Hua Liu"
},
{
"authorId": "2109041752",
"name": "Haijiang Wang"
},
{
"authorId": "1405959012",
"name": "Huixian Gu"
}
] |
{
"alternate_issns": null,
"alternate_names": null,
"alternate_urls": [
"http://ieeexplore.ieee.org/servlet/opac?punumber=6287639"
],
"id": "2633f5b2-c15c-49fe-80f5-07523e770c26",
"issn": "2169-3536",
"name": "IEEE Access",
"type": "journal",
"url": "http://www.ieee.org/publications_standards/publications/ieee_access.html"
}
|
As a part of intelligent transportation, vehicle ad hoc networks (VANETs) have attracted the attention of industry and academia and have brought great convenience to drivers. As an open communication environment, any user can broadcast messages in the system. However, some of these users are malicious users and malicious users can broadcast false messages to interfere with the normal operation of the system. Therefore, we needed to authenticate the identity of the message sender. Currently, there are two main authentication methods in VANETs, one using public key infrastructure (PKI) to verify message integrity and sender identity, and the other using anonymous authentication schemes. Due to the high computational and transport overhead involved in validation, the certification efficiency of most existing schemes is not satisfactory. Therefore, these schemes are generally not applicable to real-world scenarios. In order to improve the efficiency of certification and satisfy the security requirements, in this paper, we proposed a hybrid proxy based authentication scheme (HPBS). In HPBS, by introducing the concept of agent vehicles and integrating identity-based and PKI-based hybrid authentication, we solved three problems in the VANETs environment: (1) improving the effectiveness of roadside units (RSUs) in terms of authenticating messages; (2) reducing the computational burden of RSUs; (3) protecting the privacy of users. The simulation results illustrate that the scheme not only ensures network security, but also greatly improves the efficiency of information verification.
|
Received August 18, 2020, accepted August 31, 2020, date of publication September 3, 2020, date of current version September 16, 2020.
_Digital Object Identifier 10.1109/ACCESS.2020.3021408_
# HPBS: A Hybrid Proxy Based Authentication Scheme in VANETs
HUA LIU, HAIJIANG WANG, AND HUIXIAN GU
School of Electronic and Information Engineering, Zhejiang University of Science and Technology, Hangzhou 310023, China
Corresponding author: Haijiang Wang ([email protected])
This work was supported in part by the Natural Science Foundation of Zhejiang Province under Grant LQ20F020010.
**ABSTRACT** As a part of intelligent transportation, vehicular ad hoc networks (VANETs) have attracted the attention of industry and academia and have brought great convenience to drivers. In such an open communication environment, any user can broadcast messages in the system; however, some users are malicious and can broadcast false messages to interfere with the normal operation of the system. Therefore, the identity of the message sender needs to be authenticated. Currently, there are two main authentication approaches in VANETs: one uses a public key infrastructure (PKI) to verify message integrity and sender identity, and the other uses anonymous authentication schemes. Due to the high computational and transmission overhead involved in verification, the authentication efficiency of most existing schemes is not satisfactory, so these schemes are generally not applicable to real-world scenarios. To improve authentication efficiency while satisfying the security requirements, in this paper we propose a hybrid proxy based authentication scheme (HPBS). In HPBS, by introducing the concept of proxy vehicles and integrating identity-based and PKI-based hybrid authentication, we solve three problems in the VANETs environment: (1) improving the effectiveness of roadside units (RSUs) in authenticating messages; (2) reducing the computational burden on RSUs; (3) protecting the privacy of users. The simulation results illustrate that the scheme not only ensures network security but also greatly improves the efficiency of message verification.
**INDEX TERMS Proxy vehicle, privacy, proxy based authentication, pseudonym, vehicular ad-hoc network.**
**I. INTRODUCTION**
With the rapid development of artificial intelligence, wireless
technology, automobiles and ad-hoc networks, the concepts
of Intelligent Traffic System (ITS) and smart city have
become more and more popular. In this context, the potential
of vehicular ad hoc networks (VANETs) which can provide
better driving services and road safety has attracted extensive
attention from the government, academia and the business
community. However, as an open communication environment, the security of VANETs communication has become
an urgent problem to be solved [1].
In VANETs, vehicle-to-vehicle communication (V2V) and vehicle-to-infrastructure communication (V2I) are carried out over an open wireless channel. If the communication is not properly protected [2], users' personal privacy (geographical location, identity information, personal interests, etc.) can easily be acquired by attackers. Therefore, a message authentication scheme is needed to solve this problem.
The associate editor coordinating the review of this manuscript and
approving it for publication was Fan Zhang.
Security issues in VANETs have been widely studied in the literature [3]–[6]. However, besides security, the efficiency of authentication should not be ignored; it is one of the key prerequisites for deploying VANETs. According to the dedicated short-range communication (DSRC) protocol, each vehicle needs to periodically broadcast a large amount of information, including traffic conditions, vehicle speed, and service requests [7]. A message authentication scheme therefore not only needs to satisfy the security requirements, but must also be able to authenticate a large number of messages in a relatively short period of time.
At present, existing authentication schemes [8]–[14] fall mainly into two categories: traditional public key infrastructure (PKI) schemes and identity-based schemes. Traditional PKI schemes place heavy demands on vehicle storage because enough pseudonyms and key pairs must be distributed by the certificate authority (CA). When vehicles send or receive messages, each message must be accompanied by a certificate, which greatly increases the transmission overhead. When a vehicle is deregistered, the CA needs to put all of the vehicle's pseudonymous certificates on the certificate revocation list (CRL). As the number of deregistered vehicles increases, the CRL grows without bound, which results in significant computational and transmission overhead.
Identity-based authentication schemes solve the certificate-management problem of PKI, but they greatly increase the computation and transmission costs of authentication [15]. In such a scheme, each car holds a large number of anonymous identities; when the vehicle needs to send a message, it selects a pseudonym with which to sign it. The vehicle therefore needs large storage space for the pseudonyms. At the same time, a user having multiple anonymous identities adds considerable computational overhead when the authority must trace a real identity in a communication dispute. To address this, Zhang et al. [9] proposed an efficient identity-based scheme that uses tamper-proof devices (TPDs) to generate dynamic anonymous identities, which avoids the need for vehicles to store a large number of anonymous identities; at the same time, the TPD login verification protects the user's personal privacy. In addition, this scheme lets the RSU perform batch authentication based on anonymous identities, which greatly reduces the computation and transmission costs of message authentication. However, the IBV scheme does not address V2V communication and is not resistant to replay attacks. Moreover, the IBV scheme aggregates messages and authenticates them at the RSU, which greatly increases the workload of the RSU and reduces its authentication efficiency.
To solve these problems, in this paper we propose a hybrid proxy based authentication scheme (HPBS), which combines a PKI scheme with an identity-based anonymous batch authentication scheme and introduces the concept of the proxy vehicle. During the system initialization phase, each proxy vehicle and RSU receives a unique long-term certificate from the CA. When a proxy vehicle enters the communication range of a new RSU, the proxy vehicle and the RSU mutually authenticate each other; at the end of this authentication, they jointly generate a group key. Within the group managed by the proxy vehicle, message authentication of ordinary vehicles is carried out using symmetric encryption with the group key. When a proxy vehicle node or RSU node is compromised, the CA revokes its unique certificate, and ordinary vehicles verify the validity of a proxy vehicle through its certificate. In V2I communication, we mainly use identity-based anonymous batch authentication twice: once for batch authentication of ordinary vehicles by the proxy vehicle, and once for batch authentication of proxy vehicles by the RSU.
Specifically, our main contributions are as follows.
(1) We propose a hybrid proxy based authentication scheme that satisfies the security and efficiency requirements of VANETs.
(2) Every RSU and proxy vehicle holds a long-term PKI-based certificate, which is used to verify the validity of the node. To send a message, a vehicle signs it with a locally generated pseudo-identity. The proxy vehicle and the RSU verify each other's certificates before they communicate and generate group keys; mutual authentication between vehicles can then be performed quickly with the group keys. The vehicle and the RSU use bilinear batch verification to authenticate messages.
(3) The CA manages revoked certificates through the RSU certificate revocation list (RCRL) and the proxy vehicle certificate revocation list (PVCRL). When a registered node is compromised, the CA can revoke its certificate. In view of the limited computing and storage resources of the RSU, we use proxy vehicles to offload the RSU.
The remainder of this paper is organized as follows. Section 2 reviews related work. Section 3 describes the system model and preliminaries in detail. Section 4 presents the proposed message authentication scheme. Section 5 proves the security of our scheme, and Section 6 analyzes and evaluates its performance in detail. The last section concludes the paper and discusses future work.
**II. RELATED WORK**
In VANETs, security authentication and privacy protection
are two problems that need to be solved urgently. To
solve these two problems, many anonymous authentication
schemes [16]–[18] have been proposed. Most of them sign
and authenticate messages based on PKI.
In order to protect the user’s real identity and personal
privacy, the concept of pseudonyms came into being. Chaum
[19] established a pseudonymous system that allows entities
to communicate effectively anonymously with other entities
through pseudonyms. The proposed system plays a great role
in protecting personal privacy. Fan et al. [20] solved the
privacy protection and message authentication problems in
vehicle communication systems, and proposed an efficient
pseudonymous public key infrastructure (EPPKI) scheme using
bilinear pairs. This scheme greatly improves the efficiency
of message authentication. However, this scheme cannot authenticate a large number of messages in a short time. In
order to improve the security of the authentication system,
Sun et al. [2] proposed an efficient anonymous authentication scheme based on bilinear pairings. However, the computational and transmission costs of this scheme are large.
Yue et al. [21] proposed an anonymous authentication scheme
based on group signature framework. The main advantage of
this scheme is to improve the security of VANETs. However,
the performance of this scheme still needs to be further
improved.
In recent years, Zhang et al. [22] proposed an extensible
vehicle anonymous batch authentication scheme that maintains the effectiveness of traditional schemes, reduces the size
of CRL, and does not require the preloading of the same
system private key. However, the scheme still requires large
overhead in computation and storage.
To improve the efficiency of certification, in [23], Li et al.
proposed a scheme for message authentication using secret
sharing. The scheme uses verifiable secret sharing to verify
each other and obtain a set of keys, and then uses this set of
keys to generate and verify messages. This scheme has some
advantages in performance. However, the scheme trusts the
third party too much, and a single point of failure will cause
the system to be completely destroyed.
Hasrouny et al. [24] proposed a group-based authentication
scheme using elliptic curve cryptography (ECC). The scheme
realizes the secure communication of V2V and reduces the
delay caused by security message. The cost of validation is
reduced because the recipient’s certificate does not need to
be validated. The scheme does not affect the efficiency of
certification as the number of vehicles increases. However,
the scheme does not take into account conditional privacy
protection and batch authentication of messages. In [25],
Shao et al. proposed an anonymous authentication scheme
using bilinear pairs in distributed entity groups. This scheme
adds the characteristics of threshold authentication on the
basis of traditional anonymous authentication. The whole
validation is based on batch authentication. However, for
high-speed moving vehicles, the scheme will incur a lot of
computing and communication costs, and the management
of the certificate also has some problems. Gao et al. [26]
proposed a virtual network privacy protection scheme based
on pseudonym ring in order to solve the problems of ring
establishment and ring member selection. The scheme has a
deep network structure and a trust model. Compared with the
traditional scheme, the scheme has stronger robustness and
efficiency. In [27], Liu et al. proposed a practical distributed
condition security authentication scheme. The scheme does
not need to rely on TPD and has a significant improvement
in security features. In [28], Mamun and Miyaji proposed
a scheme based on bilinear pairings.This scheme improves
batch authentication of identification-based Group Signature
(IBGS). The scheme improves the original scheme by batch
scheduling algorithm, which improves the performance of
authentication. However, performance results for the scheme
are not provided.
**III. SYSTEM MODEL AND PRELIMINARIES**
In this section, we describe our system model in detail and briefly review the basic theoretical background used in our scheme.
_A. SYSTEM MODEL_
At present, most studies [11], [29], [30] address the VANET authentication problem through a two-layer network model, consisting of a management layer and an application layer. The application layer is generally composed of vehicles and RSUs, which communicate with each other over the wireless DSRC channel; vehicles are divided into group-leader vehicles and ordinary vehicles. The management layer consists of the CA and the application server (AS), which communicate with RSUs via the Internet. The communication types can be divided into V2V and V2I, as shown in FIGURE 1.
**FIGURE 1. The system model of VANETs.**
(1) VT: On the road, many buses run a fixed route every day. We choose such buses, which have fixed routes and ample computing and storage resources, as our proxy vehicles; in FIGURE 1, VT denotes the chosen proxy vehicle. First, it authenticates with the RSU and generates an in-group key. Second, it is responsible for collecting and collating the authentication information of surrounding vehicles, verifying the timestamps, and finally integrating the verified information and handing it to the RSU for batch authentication.
(2) CA: The CA is the trusted authority of the entire system. It is responsible for assigning long-term certificates to proxy vehicle nodes, and all proxy vehicles and RSUs must register with the CA before joining the VANET. The CA also maintains the corresponding certificate revocation lists. We assume that the CA has sufficient computing and storage capacity for communication and cannot be breached by any adversary.
(3) RSU: The RSU connects the management layer to the application layer. On the one hand, the RSU is responsible for checking the validity of the certificate of any proxy vehicle entering its communication range and for providing the group key to the VT. On the other hand, the RSU performs pseudo-identity-based bilinear batch authentication on the group members' authentication information collated by the VT, and single pseudo-identity-based bilinear authentication for discrete ordinary vehicles that are not in any group.
(4) On-board unit (OBU): the OBU is a device built into the vehicle during production. An OBU can communicate not only with other OBUs but also with RSUs. In this scheme, we assume that each OBU is equipped with a TPD.
_B. BILINEAR MAPS_
Let G be a cyclic additive group and GM a cyclic multiplicative group, both of the same prime order q (|G| = |GM| = q), and let the point P ∈ G generate G. Let e : G × G → GM be a bilinear pairing satisfying the following three properties [32, 33].
(1) Bilinearity: for all P, T, S ∈ G, e(P + T, S) = e(P, S)e(T, S) and e(P, T + S) = e(P, T)e(P, S). In particular, for all a, b ∈ Zq*, e(aP, bP) = e(P, P)^ab = e(P, abP) = e(abP, P).
(2) Non-degeneracy: there exist two points P, T ∈ G such that e(P, T) ≠ 1, where 1 is the identity element of GM.
(3) Computability: there is an efficient algorithm to compute e(P, T) for all P, T ∈ G.
In bilinear groups equipped with such a map e, the DDH problem is easy to solve while the CDH problem is hard [33]. For example, for any x, y ∈ Zq* and given xP, yP, xyP ∈ G, there is an efficient algorithm to check whether e(xP, yP) = e(P, xyP).
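To make the algebra concrete, the following is a minimal runnable sketch of these pairing properties. It uses a toy instantiation in which a point aP is represented by the scalar a mod q and e(aP, bP) = g^(ab) mod p; the parameters p, q, g are illustrative toy values of ours (not an elliptic-curve pairing and not secure), chosen only so that the bilinearity and DDH checks above can be exercised.

```python
# Toy "pairing": G is modeled additively by scalars mod q (the point a*P is
# the integer a, with the generator P represented by 1), and
# e(aP, bP) = g^(a*b) mod p in the order-q multiplicative group mod p.
# Toy parameters only: q divides p - 1 and g has order q modulo p.
p, q, g = 23, 11, 2
P = 1  # the generator of G, represented by the scalar 1

def e(x, y):
    """Bilinear map e(xP, yP) = g^(x*y) mod p (toy instantiation)."""
    return pow(g, (x * y) % q, p)

# (1) Bilinearity: e(aP + bP, sP) = e(aP, sP) * e(bP, sP).
a, b, s = 3, 7, 5
assert e(a + b, s) == (e(a, s) * e(b, s)) % p

# Easy DDH check from the text: given xP, yP and a candidate zP,
# decide whether z = xy by testing e(xP, yP) == e(P, zP).
x, y = 4, 9
z = (x * y) % q
assert e(x * P, y * P) == e(P, z * P)
print("bilinearity and the DDH check both hold in the toy group")
```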
_C. SECURITY REQUIREMENTS_
The vehicle-to-everything (V2X) communication scenario must mainly meet three security requirements: identity privacy preservation, message authentication and traceability. We discuss these in more detail below.
Message authentication: in V2X communication, authentication must be performed to ensure that a message originates from a legitimate entity and has not been altered during delivery. In addition, on traffic-intensive routes, authentication must be efficient enough to avoid overloading the system.
Identity privacy preservation: because of the broadcast nature of V2X communication, messages tied to a specific identity can be monitored frequently. If an ordinary signature scheme is used, the identity of the individual is easily revealed [34]. Even if a pseudonym is used for signing, an attacker can still link signatures to a car by analyzing multiple signatures, which can lead to a loss of location privacy [35]. Therefore, identity privacy needs to be protected.
Traceability: When the signature is disputed or the message
content is forged, the CA should be able to retrieve the
vehicle’s real identity from the vehicle’s false identity.
**IV. A HYBRID PROXY BASED AUTHENTICATION SCHEME**
In this paper, we propose a hybrid proxy based authentication scheme that uses identity-based signatures and PKI-based certificates. Here, the certificates are mainly used to verify the identity of RSU nodes and VT nodes, while the identity-based signatures are mainly used for anonymous batch authentication of vehicles within a group and for single anonymous authentication of discrete vehicles outside a group. Our scheme mainly comprises five parts: the basic idea of the scheme, system initialization, group key generation, signature verification, and real-identity tracing. The symbols used in this article are listed in Table 1.
_A. BASIC IDEAS_
In this section, we introduce the basic idea of our scheme, as shown in FIGURE 2.
In VANETs, the CA is the only authority that registers and issues certificates. RSUs and VTs register with the CA for long-term certificates, which are stored in their devices. In particular, we let the CA manage revoked certificates for RSUs and vehicles separately: when an RSU or VT is revoked, its certificate is added to the corresponding CRL. When an RSU or VT needs to be authenticated, other entities can query the status of the certificate it presents through the Online Certificate Status Protocol (OCSP) and authenticate it with the public key in the certificate.
**TABLE 1. Notation.**
**FIGURE 2. The system model of VANETs.**
Both the RSU and VT periodically broadcast a hello message containing their public key, certificate, and related information. The RSU works as follows. When a vehicle enters the RSU's communication range and sends it a message, the RSU judges the message. If the communicating vehicle is a VT, the in-group key is generated after mutual authentication with the VT, and the messages of all group members collated by the VT are verified with anonymous-identity-based bilinear batch authentication. If the communicating vehicle is a normal vehicle, only a single bilinear authentication based on anonymous identity is performed on the message.
Each time a VT enters the communication range of an RSU, it first authenticates with the RSU and obtains the in-group key. The VT also collates messages from group members and sends them to the RSU. If an ordinary vehicle can find a VT within its communication range, it authenticates the VT and, after successful authentication, sends the message destined for the RSU to the VT. If no VT exists within the communication range, the vehicle authenticates the RSU directly and sends the message.
Our scheme also supports V2V communication. We divide V2V communication into two cases: communication between two different groups, and communication between vehicles within a group and discrete vehicles outside the group.
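As a reading aid, the following sketch captures the RSU's dispatch logic in Python. The message shape and the stub checks are our assumptions (the real primitives, certificate/OCSP verification, Algorithm 1, and the bilinear verifications, are specified later in this section); the sketch only shows how the RSU's behaviour branches on the sender type.

```python
from dataclasses import dataclass

# Hypothetical message shape; cert_ok stands in for the OCSP status query
# plus the certificate verification described above.
@dataclass
class Msg:
    from_proxy: bool   # True if the sender is a proxy vehicle V_T
    cert_ok: bool      # result of OCSP + certificate checks (stubbed)
    payload: str

def rsu_handle(msg: Msg) -> str:
    """Branching behaviour of the RSU on an incoming message."""
    if msg.from_proxy:
        if not msg.cert_ok:
            return "reject: revoked or invalid V_T certificate"
        # Mutual authentication, group-key generation (Algorithm 1), then
        # anonymous-identity-based bilinear *batch* verification of the
        # member messages collated by V_T.
        return "establish group key, then batch-verify members"
    # A discrete ordinary vehicle outside any group gets a *single*
    # anonymous-identity-based bilinear verification.
    return "single bilinear verification of: " + msg.payload

print(rsu_handle(Msg(from_proxy=True, cert_ok=True, payload="m1")))
print(rsu_handle(Msg(from_proxy=False, cert_ok=True, payload="m2")))
```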
_B. SYSTEM INITIALIZATION_
The CA initializes the system parameters and assigns certificates to each RSU node and VT node. The system initialization process is as follows.
1) SYSTEM PARAMETER GENERATION
The CA is a trusted institution that checks each vehicle's identity and generates and pre-distributes the vehicle's private key. During system initialization, the CA sets the following system parameters for each RSU and OBU:
(1) G is a cyclic additive group of order q generated by P, and GM is a cyclic multiplicative group of the same order. Let e : G × G → GM be a bilinear map.
(2) The CA selects a random number c ∈ Zq* as its private key SKCA, and then calculates the public key PKCA = SKCA·P.
(3) The CA randomly selects d1, d2 ∈ Zq* as two master private keys and calculates the corresponding public keys Ppub1 = d1·P and Ppub2 = d2·P. The CA puts the two private keys into each vehicle's TPD.
(4) Each RSU node and OBU node is equipped with the public parameters {G, GM, P, q, PKCA, Ppub1, Ppub2, h, H, e}, and each vehicle's TPD is additionally equipped with the parameters {d1, d2}.
(5) The RID and PWD are required for the vehicle to start the TPD: the RID is the unique identity of the vehicle, and the PWD is the password required to start the TPD.
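The following sketch runs steps (2)-(4) in the toy scalar model used in Section III-B (a point aP is just the scalar a mod q). It illustrates only the algebra of parameter generation; in this scalar model discrete logarithms are trivially visible, so it carries no security, and the group size q is an illustrative toy value of ours.

```python
import secrets

q = 11   # toy group order; a point a*P is represented by the scalar a mod q
P = 1    # the generator

def rand_zq_star() -> int:
    """Sample a uniform element of Zq*, i.e. 1..q-1."""
    return secrets.randbelow(q - 1) + 1

sk_ca = rand_zq_star()            # step (2): c in Zq*
pk_ca = (sk_ca * P) % q           # PK_CA = c * P

d1, d2 = rand_zq_star(), rand_zq_star()      # step (3): master secrets
Ppub1, Ppub2 = (d1 * P) % q, (d2 * P) % q    # P_pub1 = d1*P, P_pub2 = d2*P

public_params = {"P": P, "q": q, "PK_CA": pk_ca,
                 "Ppub1": Ppub1, "Ppub2": Ppub2}   # loaded into RSUs/OBUs
tpd_params = {"d1": d1, "d2": d2}                  # loaded into each TPD
print(public_params, tpd_params)
```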
2) RSU CERTIFICATE ISSUANCE
For each RSU, the certificate and the RSU key pair are generated when the RSU registers. The process is as follows:
(1) The CA randomly selects a number t ∈ Zq* as the RSU's private key SKR and calculates the RSU's public key PKR = t·P.
(2) The CA signs PKR, generates the certificate CertCA,R = {PKR, σCA} with σCA = signPKCA(PKR), and sends it to the RSU over a secure channel.
3) VT CERTIFICATE ISSUANCE
For each VT, the certificate and the VT key pair are generated when the VT registers. The process is as follows:
(1) The CA randomly selects a number l ∈ Zq* as VT's private key SKT and calculates VT's public key PKT = l·P.
(2) The CA signs PKT, generates the certificate CertCA,T = {PKT, σCA} with σCA = signPKCA(PKT), and sends it to VT over a secure channel.
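Since the paper does not fix the concrete signature algorithm behind signPKCA, the sketch below uses a textbook Schnorr signature over a small multiplicative group as a stand-in; the parameters and the choice of Schnorr are our assumptions. It shows the shape of certificate issuance (steps (1)-(2) above) and how any node later verifies CertCA,R or CertCA,T against the CA's public key.

```python
import hashlib
import secrets

# Toy Schnorr signature: q divides p - 1 and g has order q modulo p.
# These sizes are illustrative only; signPKCA is not specified in the paper,
# so Schnorr here is an assumption, not the authors' algorithm.
p, q, g = 23, 11, 2

def H(*parts) -> int:
    data = "|".join(str(x) for x in parts).encode()
    return int(hashlib.sha256(data).hexdigest(), 16) % q

def keygen():
    sk = secrets.randbelow(q - 1) + 1     # t (or l) in Zq*
    return sk, pow(g, sk, p)              # PK = g^sk in the toy group

def sign(sk, msg):
    k = secrets.randbelow(q - 1) + 1
    r = pow(g, k, p)
    return r, (k + H(r, msg) * sk) % q    # signature (r, s)

def verify(pk, msg, sig) -> bool:
    r, s = sig
    return pow(g, s, p) == (r * pow(pk, H(r, msg), p)) % p

sk_ca, pk_ca = keygen()                   # CA key pair
sk_r, pk_r = keygen()                     # RSU key pair (SK_R, PK_R)
cert_r = (pk_r, sign(sk_ca, pk_r))        # Cert_CA,R = {PK_R, sigma_CA}

pk, sigma = cert_r                        # any node checks the certificate
assert verify(pk_ca, pk, sigma)
print("Cert_CA,R verifies against the CA public key")
```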
_C. THE IDENTITY OF A GROUP GENERATION AND_
_ANONYMOUS IDENTITY GENERATION_
The RSU broadcasts within its communication range. When a vehicle communicates with it, the RSU detects whether the vehicle is a VT. If so, the RSU and the VT jointly generate the group key of VT's group. The details are shown in FIGURE 3.
**FIGURE 3. The identity of a group.**
1) THE IDENTITY OF A GROUP GENERATION
(1) The RSU broadcasts the message Mes0: {CertCA,R, σR, T0} within its communication range, where CertCA,R = {PKR, σCA}, σR = signPKR('hello') and T0 is a timestamp.
(2) After receiving Mes0, VT first checks the status of
_CertCA,R with OCSP, then checks the timestamp T0 and_
verifies the certificate CertCA,R and the signature σR. When
all validation is passed, VT generates a random number N1
and sends Mes1:{CertCA,T, EncPKR (N1), T1, σT } to the RSU.
And CertCA,T = {PKT, σCA}, σCA = signPKCA(PKT ).
(3) After receiving Mes1, RSU first checks the status of
_CertCA,T with OCSP, then checks the timestamp T1 and_
verifies the certificate CertCA,T and the signature σT . When
all validation is passed, RSU generates a random number
N2 and computes PSK = N1 ⊕ N2. The RSU sends the message Mes2: {EncPKT(N2, T2), EncPSK(N0)} to VT.
**Algorithm 1 The Identity of a Group Generation**
RSU broadcasts Mes0: {CertCA,R, σR, T0}
VT receives Mes0 and checks T0, CertCA,R, σR
**if T0, CertCA,R and σR are valid then**
  VT generates a random number N1
  VT sends Mes1: {CertCA,T, EncPKR(N1), T1, σT} to the RSU
  RSU receives Mes1 and checks T1, CertCA,T, σT
  **if T1, CertCA,T and σT are valid then**
    RSU generates a random number N2 and computes PSK = N1 ⊕ N2
    RSU sends Mes2: {EncPKT(N2, T2), EncPSK(N0)} to VT
    VT receives Mes2 and checks T2
    **if T2 is valid then**
      VT computes PSK = N1 ⊕ N2
      VT sends Mes3: {EncPSK(N0, T3)} to the RSU
      RSU receives Mes3 and checks N0, T3
      **if N0 and T3 are valid then**
        The group key generation ends
(4) VT checks T2. If the check passes, VT calculates PSK = N1 ⊕ N2 and N′ = N0, and sends Mes3 to the RSU. The RSU verifies T3 and N′; group key generation ends when the validation passes. The specific algorithm of group key generation is shown in Algorithm 1.
Here, the RSU and the proxy vehicle jointly generate a group identity for each proxy vehicle's group. The group identity is mainly used to distinguish inter-group communication in V2V communication; Section 4.4.2 gives the details.
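The essence of Algorithm 1, once the certificate checks have passed, is a two-nonce key agreement. The sketch below shows just that kernel; transport encryption (EncPKR, EncPKT, EncPSK) is elided, and the freshness window is an assumed value standing in for the T1/T2/T3 timestamp checks.

```python
import secrets
import time

MAX_SKEW = 0.3  # seconds; assumed freshness window for the timestamp checks

def fresh(t_sent: float) -> bool:
    """Stand-in for the T1/T2/T3 timestamp verification."""
    return abs(time.time() - t_sent) <= MAX_SKEW

# V_T picks N1 and sends it under Enc_PK_R; the RSU picks N2 and returns it
# under Enc_PK_T (encryption elided here).
n1 = secrets.randbits(128)
n2 = secrets.randbits(128)
assert fresh(time.time())

psk_vt = n1 ^ n2      # V_T's copy:  PSK = N1 xor N2
psk_rsu = n1 ^ n2     # RSU's copy
assert psk_vt == psk_rsu

# The final message Mes3 = Enc_PSK(N0, T3) proves to the RSU that V_T
# derived the same PSK, ending the handshake.
print("group key PSK agreed:", hex(psk_rsu)[:12] + "...")
```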
2) ANONYMOUS IDENTITY GENERATION
All vehicles use the parameters issued at registration with the CA, together with the TPD device, to generate their respective anonymous identities. The process is as follows.
In order to protect the privacy of users, we use the TPD to generate pseudo-identities and the corresponding private keys [31]. The TPD is mainly composed of the following parts: an authentication module, a pseudo-identity generation module, and a private key generation module. These three modules are described in detail below.
Authentication module: this is the access-control module of the TPD; the device can be started only with a valid RID and PWD, where the PWD is the CA's signature on the RID. Once this module's verification passes, control moves to the next module. Here, we assume that the TPD cannot be compromised.
Pseudo-identity generation module: this module generates pseudo-identities for RID; each pseudo-identity AID consists of AID1 and AID2. In this module, the ElGamal encryption algorithm [36] over ECC [37] is employed to generate pseudonyms, with AID1 = N·P and AID2 = RID ⊕ H(N·Ppub1), where N is a random nonce. Each pseudo-identity is guaranteed to be unique because N changes every time. Here, P and Ppub1 are public parameters preloaded by the CA. AID1 and AID2 are generated and passed to the next module.
Private key generation module: this module uses identity-based encryption [32]. It generates the private key SK, which consists of two parts, SK1 and SK2, where SK1 = d1·AID1 and SK2 = d2·H(AID1 ∥ AID2), respectively.
Finally, the vehicle obtains a list of pseudo-identities AID = (AID1, AID2) and the corresponding private keys SK = (SK1, SK2).
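The pseudonym and key derivation can be illustrated with a toy model in which a curve point x·P is represented by the scalar x modulo a prime group order q, so scalar multiplication becomes modular multiplication; a real deployment would use an actual elliptic-curve library, and the parameters below (q, the sample RID) are purely illustrative. The final assertion checks the CA traceability identity AID2 ⊕ H(d1·AID1) = RID used later in Section V.

```python
import hashlib
import secrets

q = 2**127 - 1        # toy prime group order; a point x*P is modelled by x mod q

def H(x: int) -> int:
    """Hash a 'point' to a 128-bit integer mask (stands in for H in the paper)."""
    return int.from_bytes(hashlib.sha256(str(x).encode()).digest()[:16], "big")

# CA master secrets and preloaded public parameters (the base point P is 1 here).
d1, d2 = secrets.randbelow(q), secrets.randbelow(q)
Ppub1, Ppub2 = d1 % q, d2 % q          # Ppub_j = d_j * P

# TPD pseudo-identity generation module for a real identity RID.
RID = 0xC0FFEE                         # illustrative identity value
N = secrets.randbelow(q)               # fresh nonce: a new N gives a new pseudonym
AID1 = N % q                           # AID1 = N * P
AID2 = RID ^ H(N * Ppub1 % q)          # AID2 = RID xor H(N * Ppub1)

# TPD private key generation module (identity-based).
HAID = H(AID1 ^ AID2)                  # stands in for H(AID1 || AID2)
SK1 = d1 * AID1 % q                    # SK1 = d1 * AID1
SK2 = d2 * HAID % q                    # SK2 = d2 * H(AID1 || AID2)

# CA traceability check from Section V: AID2 xor H(d1 * AID1) = RID.
assert AID2 ^ H(d1 * AID1 % q) == RID
```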
_D. SIGNATURE VERIFICATION_
1) MESSAGE SIGNING
According to the DSRC protocol, vehicles on the road need to periodically broadcast traffic-related information, since this information helps the traffic control center command traffic reasonably and judge the current traffic situation correctly. Therefore, the sent messages need to be signed anonymously to improve the security of communication: the sender protects its own privacy, while the recipient can verify the integrity and validity of the message from the signature. The specific algorithm process is shown in TABLE 2. Details of the signature are as follows.
(1) First, the vehicle Vi generates a traffic-related message mi.
(2) Vi selects an anonymous identity and the corresponding private key to sign the message Mi = mi ∥ Ti, producing the signature σi = SK1_i + h(Mi)·SK2_i.
(3) Vi broadcasts the message (AIDi, Mi, σi), where AIDi = (AID1_i, AID2_i).
(4) These steps are repeated every 100-300 ms according
to the DSRC [38].
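In the same toy scalar model as above, signing reduces to one modular expression. This sketch only shows the algebraic form of σi; it is not a secure implementation, and the key and message values are placeholders.

```python
import hashlib

q = 2**127 - 1   # same toy group order as in the pseudonym sketch

def h(msg: bytes) -> int:
    """Scalar message hash h(.) of the paper, reduced modulo the group order."""
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % q

def sign(SK1: int, SK2: int, m: bytes, T: bytes) -> int:
    """sigma_i = SK1_i + h(M_i) * SK2_i with M_i = m_i || T_i."""
    M = m + b"|" + T
    return (SK1 + h(M) * SK2) % q

sigma = sign(SK1=12345, SK2=67890, m=b"speed=62km/h", T=b"1599033600")
```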
2) MESSAGE VERIFICATION
In message authentication, we distinguish three cases: vehicles in a group communicating with the RSU, vehicles in the same group communicating with each other, and vehicles in different groups communicating with each other.
**TABLE 2. The specific algorithm of the scheme.**

(1) Vehicles in the group communicate with the RSU: here we use bilinear message authentication based on anonymous identity. Given the system public parameters {G, GM, P, q, PKCA, Ppub1_i, Ppub2_i, h, H, e} and the messages (AIDi, Mi, σi) sent by the vehicles Vi, each VT first batch-authenticates the messages (AIDi, Mi, σi) of its group members. VT needs to validate

e(Σ_{i=1}^{n} σi, P) = e(Σ_{i=1}^{n} AID1_i, Ppub1_i) · e(Σ_{i=1}^{n} h(Mi)·HAIDi, Ppub2_i),

where HAIDi = H(AID1_i ∥ AID2_i). This batch verification equation holds since

e(Σ_{i=1}^{n} σi, P)
= e(Σ_{i=1}^{n} (SK1_i + h(Mi)·SK2_i), P)
= e(Σ_{i=1}^{n} SK1_i, P) · e(Σ_{i=1}^{n} h(Mi)·SK2_i, P)
= e(Σ_{i=1}^{n} d1_i·AID1_i, P) · e(Σ_{i=1}^{n} d2_i·h(Mi)·HAIDi, P)
= e(Σ_{i=1}^{n} AID1_i, d1_i·P) · e(Σ_{i=1}^{n} h(Mi)·HAIDi, d2_i·P)
= e(Σ_{i=1}^{n} AID1_i, Ppub1_i) · e(Σ_{i=1}^{n} h(Mi)·HAIDi, Ppub2_i).

VT consolidates the messages that pass authentication and carry valid timestamps into MT = (Σ_{i=1}^{n} mi) ∥ TT, where TT is a timestamp, and sends (AIDT, MT, σT) with σT = SK1_T + h(MT)·SK2_T to the RSU. The RSU validates e(σT, P) = e(AID1_T, Ppub1_T) · e(h(MT)·H(AID1_T ∥ AID2_T), Ppub2_T), as verified below.

e(σT, P)
= e(SK1_T + h(MT)·SK2_T, P)
= e(SK1_T, P) · e(h(MT)·SK2_T, P)
= e(d1_T·AID1_T, P) · e(h(MT)·d2_T·H(AID1_T ∥ AID2_T), P)
= e(AID1_T, d1_T·P) · e(h(MT)·H(AID1_T ∥ AID2_T), d2_T·P)
= e(AID1_T, Ppub1_T) · e(h(MT)·H(AID1_T ∥ AID2_T), Ppub2_T)
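The batch-verification identity can be checked numerically with a toy symmetric pairing: model points as scalars (x stands for x·P) and define e(x·P, y·P) = g^(x·y) mod p, which is bilinear by construction. For the demonstration, d1 and d2 are treated as common system secrets so that the sums factor exactly as in the derivation above; the parameters are toy values with no security.

```python
import secrets

p, q = 607, 101                  # toy parameters: q is prime and divides p - 1
g = pow(2, (p - 1) // q, p)      # generator of the order-q subgroup of Z_p*
P = 1                            # the base point, modelled by the scalar 1

def e(x: int, y: int) -> int:
    """Toy symmetric pairing e(x*P, y*P) = g^(x*y); bilinear by construction."""
    return pow(g, (x * y) % q, p)

d1, d2 = secrets.randbelow(q), secrets.randbelow(q)   # system secrets
Ppub1, Ppub2 = d1, d2                                 # Ppub_j = d_j * P

n = 5
AID1 = [secrets.randbelow(q) for _ in range(n)]       # pseudo-identities AID1_i
HAID = [secrets.randbelow(q) for _ in range(n)]       # H(AID1_i || AID2_i) values
hM   = [secrets.randbelow(q) for _ in range(n)]       # message hashes h(M_i)

# sigma_i = SK1_i + h(M_i)*SK2_i with SK1_i = d1*AID1_i and SK2_i = d2*HAID_i.
sigma = [(d1 * AID1[i] + hM[i] * d2 * HAID[i]) % q for i in range(n)]

# Batch check: e(sum sigma_i, P) == e(sum AID1_i, Ppub1) * e(sum h(M_i)*HAID_i, Ppub2).
lhs = e(sum(sigma) % q, P)
rhs = e(sum(AID1) % q, Ppub1) * e(sum(a * b for a, b in zip(hM, HAID)) % q, Ppub2) % p
assert lhs == rhs
```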
(2) Vehicles in the same group communicate with each other: we use bilinear message authentication based on anonymous identity. One vehicle sends a message (AIDi, Mi, σi, PSKi) to another vehicle. If PSKi is the same as the receiver's own PSK, the message comes from a vehicle in the same group. The signature σi is valid if e(σi, P) = e(AID1_i, Ppub1_i) · e(h(Mi)·H(AID1_i ∥ AID2_i), Ppub2_i), as verified below.

e(σi, P)
= e(SK1_i + h(Mi)·SK2_i, P)
= e(SK1_i, P) · e(h(Mi)·SK2_i, P)
= e(d1_i·AID1_i, P) · e(h(Mi)·d2_i·H(AID1_i ∥ AID2_i), P)
= e(AID1_i, d1_i·P) · e(h(Mi)·H(AID1_i ∥ AID2_i), d2_i·P)
= e(AID1_i, Ppub1_i) · e(h(Mi)·H(AID1_i ∥ AID2_i), Ppub2_i)
(3) Vehicles that are not in the same group communicate with each other: here, we again use bilinear message authentication based on anonymous identity. One vehicle sends a message (AIDi, Mi, σi) to the other vehicle, and the signature σi is valid if e(σi, P) = e(AID1_i, Ppub1_i) · e(h(Mi)·H(AID1_i ∥ AID2_i), Ppub2_i), as verified below.

e(σi, P)
= e(SK1_i + h(Mi)·SK2_i, P)
= e(SK1_i, P) · e(h(Mi)·SK2_i, P)
= e(d1_i·AID1_i, P) · e(h(Mi)·d2_i·H(AID1_i ∥ AID2_i), P)
= e(AID1_i, d1_i·P) · e(h(Mi)·H(AID1_i ∥ AID2_i), d2_i·P)
= e(AID1_i, Ppub1_i) · e(h(Mi)·H(AID1_i ∥ AID2_i), Ppub2_i)
Through the above authentication methods, we have introduced the V2I and V2V message authentication used in our system.
First, in our scheme we use VT and the RSU to achieve batch authentication on dense traffic roads, which greatly reduces the authentication delay. We mix in the PKI scheme and use certificates to guarantee the identities of the RSU and VT, which improves the security of the whole system. We also use pseudonyms to protect users' privacy. We use VT to integrate the information and send a timestamp to the RSU for authentication, which not only prevents replay attacks, but also relieves the pressure on the RSU of authenticating and integrating the information at the same time.
In addition, for the authentication of intra-group communication, we use an authentication scheme based on a symmetric key, which greatly reduces the authentication time for intra-group information, improves the rate of intra-group communication, and guarantees the security of communication.
**V. SECURITY ANALYSIS**
This section analyzes the security of our proposed scheme. First, BAN logic is adopted to prove the correctness of the scheme. Second, we apply informal security analysis to illustrate the security requirements our solution meets.
_A. PROOF OF SAFETY_
In this section, we use the BAN logic of [39] to prove the logical correctness of the HPBS scheme. BAN logic is a formal logic widely used for reasoning about encryption and authentication protocols. It can be used to prove that a protocol implementation achieves its desired goals, and also to find defects in the scheme design.
The HPBS scheme has two main objectives. One is that, during authentication, VT and the RSU determine that they share a new session key. The other is that VT and the RSU obtain information from each other.
With X as VT, Y and Z as the RSU, MA and MB as P^a and P^b, DA as MsgVT, DB and DC as MsgR, KA and KA^-1 as PKT and SKT, KB and KB^-1 as PKR and SKR, TA1, TB, TA2 and TC as timestamps, and KAB as PSK, the messages in the HPBS scheme can be represented as follows:

VT → RSU:
X → Y: TA1, Y, X, {MA, DA}KB, {TA1, Y, X, {MA, DA}KB}KA^-1
RSU → VT:
Y → X: TB, X, Y, {MB, DB}KA, {TB, X, Y, {MB, DB}KA}KB^-1
VT → RSU:
X → Z: TA2, Z, X, {TA2, Z, X}KAB
RSU → VT:
Z → X: TC, X, Z, {TC, X, Z, DC}KAB
As a plaintext can easily be forged, the idealized messages in BAN logic are as follows:

VT → RSU:
X → Y: {MA, DA}KB, {TA1, Y, X, {MA, DA}KB}KA^-1
RSU → VT:
Y → X: {MB, DB}KA, {TB, X, Y, {MB, DB}KA}KB^-1
VT → RSU:
X → Z: {TA2, Z, X}KAB
RSU → VT:
Z → X: {TC, X, Z, DC}KAB
As both VT and the RSU use their IDs as their public keys and broadcast them to neighbors, it can be assumed that:

X |≡ (KA ↦ X),  X |≡ (KB ↦ Y),  Y |≡ (KB ↦ Y),
Y |≡ ♯(TA1),  X |≡ ♯(TB),  Z |≡ ♯(TA2),  X |≡ ♯(TC),
X |≡ ♯(MA),  Y |≡ ♯(MB),  Y |≡ ♯(DB),  Z |≡ ♯(DC)
Through the BAN message-meaning rule, we obtain:

Y |≡ (KA ↦ X), Y ◁ {TA1, Y, X, {MA, DA}KB}KA^-1
⟹ Y |≡ X |∼ (TA1, Y, X, {MA, DA}KB)

Applying the freshness rule with TA1, we derive:

Y |≡ ♯(TA1) ⟹ Y |≡ ♯(TA1, Y, X, {MA, DA}KB)

Furthermore, with the nonce-verification rule, we can infer:

Y |≡ ♯(TA1, Y, X, {MA, DA}KB), Y |≡ X |∼ (TA1, Y, X, {MA, DA}KB)
⟹ Y |≡ X |≡ (MA, DA)
From RSU → VT, via the message-meaning rule, we also obtain:

X |≡ (KB ↦ Y), X ◁ {TB, X, Y, {MB, DB}KA}KB^-1
⟹ X |≡ Y |∼ (TB, X, Y, {MB, DB}KA)

Applying the freshness rule with TB, we obtain:

X |≡ ♯(TB) ⟹ X |≡ ♯(TB, X, Y, {MB, DB}KA)

So, with the nonce-verification rule, we obtain:

X |≡ ♯(TB, X, Y, {MB, DB}KA), X |≡ Y |∼ (TB, X, Y, {MB, DB}KA)
⟹ X |≡ Y |≡ (MB, DB)
With KAB, we can obtain:

X |≡ Y |≡ (MB, DB), Y |≡ X |≡ (MA, DA)
⟹ X |≡ Y |≡ (X ↔KAB Y), Y |≡ X |≡ (X ↔KAB Y)

From the above derivation, we can see the mutual authentication between VT and the RSU, which means that the HPBS scheme meets the first security objective.
From VT → RSU, we obtain:

Z |≡ (X ↔KAB Z), Z ◁ {TA2, Z, X}KAB
⟹ Z |≡ X |∼ ({TA2, Z, X}KAB)

Applying the freshness rule with TA2, we also derive:

Z |≡ ♯(TA2) ⟹ Z |≡ ♯({TA2, Z, X}KAB)

Therefore, we can derive by the nonce-verification rule:

Z |≡ ♯({TA2, Z, X}KAB), Z |≡ X |∼ ({TA2, Z, X}KAB)
⟹ Z |≡ X |≡ (TA2, Z, X)
From RSU → VT, via the message-meaning rule, we obtain:

X |≡ (X ↔KAB Z), X ◁ {TC, X, Z, DC}KAB
⟹ X |≡ Z |∼ ({TC, X, Z, DC}KAB)

In addition, applying the freshness rule with TC, we get:

X |≡ ♯(TC) ⟹ X |≡ ♯({TC, X, Z, DC}KAB)

Finally, with the nonce-verification rule, we can derive:

X |≡ ♯({TC, X, Z, DC}KAB), X |≡ Z |∼ ({TC, X, Z, DC}KAB)
⟹ X |≡ Z |≡ (TC, X, Z, DC)
It can be determined from the above proof that the HPBS scheme also fulfills the second goal. Through the formal proof of the HPBS scheme, we conclude that the scheme guarantees the integrity of the exchanged information and the confidentiality of the recipient.
_B. THE INFORMAL SECURITY ANALYSIS_
In this section, we analyze the security of our scheme from four aspects: message authentication, user identity privacy preservation, resistance to replay attacks, and traceability by the CA.
1) THE MESSAGE AUTHENTICATION
Message authentication is a basic security requirement of VANETs. In our scheme, the signature σi = SK1_i + h(Mi)·SK2_i is in effect a one-time identity-based signature. It is impossible to forge a valid signature without knowing SK1_i and SK2_i. Because of the computational hardness of the Diffie-Hellman problem in G, it is difficult to derive the private keys SK1_i and SK2_i from AID1_i, Ppub1_i, P, and H(AID1_i ∥ AID2_i). On the other hand, σi = SK1_i + h(Mi)·SK2_i is a Diophantine equation in two unknowns, and knowing only σi and h(Mi), it is quite difficult to recover SK1_i and SK2_i.
On the other hand, the CA assigns long-term certificates to each registered RSU and VT. When VT and the RSU authenticate each other's messages, PKI-based certificate authentication is used, and a message can be authenticated by verifying the status of the certificate.
Therefore, we conclude that the one-time identity-based signature in our scheme provides secure message authentication.
2) THE USER IDENTITY PRIVACY PRESERVATION
In our scheme, two random pseudo-identities AID1_i and AID2_i are generated using the real identity RIDi of vehicle i and a random number N, where AID1_i = N·P and AID2_i = RIDi ⊕ H(N·Ppub1_i). Because the pseudo-identity pair (AID1_i, AID2_i) is an ElGamal-type ciphertext, it can resist chosen-plaintext attacks. Therefore, without knowing the key pair (d1_i, d2_i), no one can compute the real identity of vehicle i from the pseudo-identity pair. Moreover, each signature uses a different pseudonym pair (AID1_i, AID2_i). Therefore, personal privacy is protected.
3) THE RESISTANCE TO REPLAY ATTACKS
Because of the characteristics of wireless communication, transmitted information is easily captured. Although attackers cannot forge signatures to tamper with information or mount forged-message attacks, they can mount replay attacks. For example, suppose vehicle i has a traffic accident on a certain section of road and, so that the traffic control center can handle the incident and clear the road reasonably, sends a message Mi at time T1; both the attacker and the traffic center obtain Mi. The traffic center goes through a series of authentication steps to make sure the message is credible, and then makes reasonable arrangements. If the attacker re-sends the captured message Mi at a later time T2, the traffic center will still pass the authentication and take measures. It takes manpower and resources to discover that this is a hoax, and emergency traffic arrangements will make the traffic situation chaotic. With a large number of such messages, the whole system could crash.
In our scheme, we use private-key signatures over timestamps for individual authentication to prevent replay attacks. In batch authentication, VT collects the information, verifies the timestamp of each message, consolidates the messages that are not in question, signs the time with its own group key, and sends the consolidated information to the RSU. In intra-group authentication, the intra-group communication key is used to sign the timestamp, which is put into the sent message.
Therefore, our scheme successfully withstands replay attacks in communication.
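A minimal sketch of the freshness check underlying this replay defence follows; the window and the cache of recently seen signatures are illustrative design choices, not values from the paper.

```python
import time

DELTA = 0.3          # illustrative freshness window in seconds
seen = set()         # (signature, timestamp) pairs already accepted

def accept(sigma: bytes, T: float) -> bool:
    """Reject a message whose timestamp is stale or already seen."""
    if abs(time.time() - T) > DELTA:
        return False                 # stale: a replayed M_i fails here
    if (sigma, T) in seen:
        return False                 # duplicate within the window
    seen.add((sigma, T))
    return True

assert accept(b"sig", time.time())           # fresh message passes
assert not accept(b"sig", time.time() - 10)  # replay with an old T1 is rejected
```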
4) THE TRACEABILITY BY THE CA
In our scheme, in order to protect user privacy, messages are signed with different pseudonyms. As the only trusted authority, the CA can use the following formula to recover the true identity of the vehicle: AID2_i ⊕ H(d1_i·AID1_i) = RIDi ⊕ H(N·Ppub1_i) ⊕ H(d1_i·N·P) = RIDi.
The partial private key d1_i of vehicle i is known only by the CA, so other vehicles and RSUs cannot compute the real identity of the vehicle. When a vehicle i delivers false messages or performs illegal operations, the RSU can report it to the CA, which computes its real identity. This satisfies the traceability of the real identity of the vehicle.
**VI. PERFORMANCE EVALUATION**
In this section, we evaluate the performance of the HPBS scheme, primarily in terms of verification latency and transmission overhead, and compare it with related schemes, namely ECDSA [40] and LIAP [41], in terms of computation and transmission overheads. Since the ECDSA scheme is the signature algorithm adopted by the IEEE 1609.2 standard, we adopt it as a comparison scheme. LIAP is a local identity-based anonymous message authentication protocol. Our scheme shares the following points with LIAP: (1) both use a hybrid approach to design anonymous message authentication schemes; (2) both mix identity-based and PKI-based designs. The differences between our approach and LIAP are: (1) LIAP uses anonymous message authentication in part, while our scheme utilizes PKI-based ideas locally; (2) our scheme introduces proxy vehicles. Therefore, we use LIAP as our comparison object. Here, we only consider the communication overhead of V2V and V2I, and we do not analyze the communication between the CA and the RSU.
**TABLE 3. Comparisons of the speed of three signature schemes (ms).**
_A. COMPUTATION OVERHEAD ANALYSIS_
In this section, we calculate the computation cost of the proxy vehicles validating ordinary vehicles' messages and of the RSU validating the proxy vehicles' consolidated messages, respectively. We add the two together as the total message-validation computation overhead and compare it in detail with the other two schemes.
In the V2I communication phase, the computational overhead is mainly generated by message validation. The operations required to validate a message are as follows: Tmul denotes the time required to perform a point multiplication, Tmtp the time required to perform a MapToPoint hash operation, and Tpar the time required to perform a pairing operation. The experiments run on an Intel i7-9750 3 GHz machine; according to [28], Tmul is 0.39 ms, Tmtp is 0.09 ms, and Tpar is 4.5 ms.
TABLE 3 compares the computational overhead of the three schemes for validating a single message and n messages. The time required for the ECDSA scheme to validate one message is 4Tmul, and the time required to validate n messages is 4nTmul. The LIAP scheme takes Tmul + Tmtp + 3Tpar to validate one message and (n + 1)Tmul + nTmtp + 3Tpar to validate n messages.
First, we assume that the traffic density equals the number of messages to be verified during one cycle, with each vehicle sending one message per fixed cycle of 300 ms. We assume that, within the RSU communication range, the number of proxy vehicles is m and the number of messages to verify is n. Therefore, the average number of messages that each proxy vehicle needs to validate is ⌈n/m⌉. The time it takes to validate one message with our scheme is 2Tmul + 2Tmtp + 6Tpar, and the time it takes to validate n messages is (m + n/m)Tmul + (m + n/m)Tmtp + 6Tpar.
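With the timing constants quoted above (Tmul = 0.39 ms, Tmtp = 0.09 ms, Tpar = 4.5 ms), the closed-form validation delays can be reproduced directly; the snippet below simply evaluates the three cost expressions.

```python
T_MUL, T_MTP, T_PAR = 0.39, 0.09, 4.5   # ms, from [28]

def ecdsa(n: int) -> float:             # 4 * n * T_mul
    return 4 * n * T_MUL

def liap(n: int) -> float:              # (n + 1)*T_mul + n*T_mtp + 3*T_par
    return (n + 1) * T_MUL + n * T_MTP + 3 * T_PAR

def hpbs(n: int, m: int) -> float:      # (m + n/m)*(T_mul + T_mtp) + 6*T_par
    return (m + n / m) * (T_MUL + T_MTP) + 6 * T_PAR

for n in (10, 50, 100):
    print(n, ecdsa(n), liap(n), hpbs(n, m=2))
# The fixed 6*T_par term dominates HPBS for small n; as n grows, HPBS
# becomes the cheapest of the three (FIGURE 5 places the crossover near 50).
```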
FIGURE 4 illustrates the relationship between the number of messages and the number of proxy vehicles within an RSU's coverage area and the computation overhead of the RSU. We can see from the figure that the computation overhead increases as the number of messages and the number of proxy vehicles increase. For small numbers of messages, the computation cost of our scheme is higher than that of the other two schemes, owing to its fixed pairing cost. Below, we draw the comparison line diagrams of the three schemes for m = 2 and m = 3 proxy vehicles.
**FIGURE 4. Computation overhead vs. number of messages and number of proxy vehicles.**
**FIGURE 5. Computation overhead vs. number of messages, with m = 2 proxy vehicles.**
FIGURE 5 shows how the computational overhead of the three schemes changes with the number of messages when there are 2 proxy vehicles in the RSU communication range. From the figure, we can see that our scheme requires less computational overhead than the other two schemes once the number of messages exceeds 50, and that as the number of messages increases further, the computational overhead of our scheme remains smaller than that of the other two schemes.
From FIGURE 6, we can see that when there are three proxy vehicles in the communication range of the RSU, the computation cost of our scheme is less than that of the other two schemes as the number of messages increases. By comparing FIGURE 5 and FIGURE 6, we find that as the number of proxy vehicles in the RSU communication range increases, the delay required to validate messages decreases.
In the V2V communication phase, message authentication between vehicles is divided into two cases: authentication between vehicles within a group, and authentication between vehicles in different groups. Message authentication between vehicles in the same group only requires the computational overhead of verifying a symmetric signature using the group key. The computational overhead required for message authentication between vehicles that are not in the same group is one bilinear authentication operation, i.e., Tmul + Tmtp + 3Tpar.

**FIGURE 6. Computation overhead vs. number of messages, with m = 3 proxy vehicles.**
**TABLE 4. Comparisons of transmission overhead of three schemes (byte).**
_B. TRANSMISSION OVERHEAD ANALYSIS_
In this section, we analyze and compare the transmission overhead of ECDSA, LIAP, and HPBS. In our scheme, the transmission overhead consists of the overhead from the ordinary vehicles to the proxy vehicle and the overhead from the proxy vehicle to the RSU.
TABLE 4 shows the number of bytes that need to be transferred for one message and for n messages under each of the three schemes. Here, we do not count the message Mi itself as transmission overhead. Based on the authentication process in Section IV, the number of bytes of the messages (AIDi, σi) transmitted from the ordinary vehicles to the proxy vehicle is 21 + 42n. The information transferred from the proxy vehicles to the RSU is (AIDT, σT), with a transmission overhead of 21 + 42m, where m is the number of proxy vehicles. The total transmission cost is therefore (21 + 42n) + (21 + 42m).
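Evaluating the HPBS expression shows how mildly the overhead depends on the number of proxy vehicles m; the message counts chosen below are arbitrary.

```python
def hpbs_bytes(n: int, m: int) -> int:
    """Total HPBS transfer: (21 + 42n) vehicle-to-proxy plus (21 + 42m) proxy-to-RSU."""
    return 21 + 42 * n + 21 + 42 * m

for m in (2, 3):
    print(m, [hpbs_bytes(n, m) for n in (10, 50, 100)])
# Raising m from 2 to 3 adds only 42 bytes, so the overhead grows only
# slightly with the number of proxy vehicles (compare FIGURES 8 and 9).
```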
FIGURE 7 illustrates the relationship between the number of messages and the number of proxy vehicles within an RSU's coverage area and the transmission overhead of the RSU. From the figure, we can see that the number of transmitted bytes of all three schemes increases with the number of messages. The transmission overhead of ECDSA is the largest among the three schemes, and the transmission overhead of HPBS is much smaller than that of the other two.
FIGURE 8 clearly shows the comparison of the transmission overhead of the three schemes when there are two proxy vehicles in the communication range of the RSU.
**FIGURE 7. Transmission overhead vs. number of messages and number of proxy vehicles.**
**FIGURE 8. Transmission overhead vs. number of messages, with m = 2 proxy vehicles.**
**FIGURE 9. Transmission overhead vs. number of messages, with m = 3 proxy vehicles.**
We find that once the number of messages is greater than 3, our scheme has the lowest transmission cost among the three schemes, and the gap between the three becomes larger as the number of messages increases.
By comparing FIGURE 9 with FIGURE 8, we find that the transmission overhead of our scheme increases only slightly as the number of proxy vehicles increases, and it always remains much less than that of the other two schemes.
**VII. CONCLUSION**
In HPBS, we use the computing power of proxy vehicles to reduce the burden on the RSU: the proxy vehicle batch-authenticates messages from other vehicles, and the RSU is responsible for authenticating messages from the proxy vehicles. At the same time, we use group keys jointly generated by the proxy vehicle and the RSU to make intra-group V2V communication more efficient. In the event of an illegal operation by a node, HPBS can trace the node through the CA and obtain its true identity. In addition, HPBS is able to withstand replay attacks. HPBS was analyzed and compared with other schemes in terms of computational and transmission overhead.
In this work, we mainly propose a hybrid cryptographic scheme that takes buses and similar vehicles as proxy vehicles. Since the routes of these special vehicles are fixed and concentrated on roads with high traffic flow, the scheme is well suited to practical deployment. In the future, we will use trust extension to increase the number of proxy vehicles, which will further improve the efficiency of authentication. In addition, we will also use game theory to study incentive mechanisms to minimize redundant authentication events.
**REFERENCES**
[1] R. Lu, X. Lin, T. H. Luan, X. Liang, and X. Shen, ‘‘Anonymity analysis on
social spot based pseudonym changing for location privacy in VANETs,’’
in Proc. IEEE Int. Conf. Commun. (ICC), Kyoto, Japan, Jun. 2011, pp. 1–5.
[2] Y. Sun, R. Lu, X. Lin, X. Shen, and J. Su, ‘‘An efficient pseudonymous
authentication scheme with strong privacy preservation for vehicular communications,’’ IEEE Trans. Veh. Technol., vol. 59, no. 7, pp. 3589–3603,
Sep. 2010.
[3] M. Arshad, Z. Ullah, N. Ahmad, M. Khalid, H. Criuckshank, and Y. Cao,
‘‘A survey of local/cooperative-based malicious information detection
techniques in VANETs,’’ EURASIP J. Wireless Commun. Netw., vol. 2018,
no. 1, p. 62, Dec. 2018.
[4] Z. Ning, J. Huang, and X. Wang, ‘‘Vehicular fog computing: Enabling
real-time traffic management for smart cities,’’ IEEE Wireless Commun.,
vol. 26, no. 1, pp. 87–93, Feb. 2019.
[5] X. Wang, Z. Ning, M. Zhou, X. Hu, L. Wang, B. Hu, R. Y. K. Kwok,
and Y. Guo, ‘‘A privacy-preserving message forwarding framework for
opportunistic cloud of things,’’ IEEE Internet Things J., vol. 5, no. 6,
pp. 5281–5295, Dec. 2018.
[6] X. Wang, Z. Ning, and L. Wang, ‘‘Offloading in Internet of vehicles: A fogenabled real-time traffic management system,’’ IEEE Trans. Ind. Informat.,
vol. 14, no. 10, pp. 4568–4578, Oct. 2018.
[7] A. Boukerche, H. A. B. F. Oliveira, E. F. Nakamura, and A. A. F. Loureiro,
‘‘Vehicular ad hoc networks: A new challenge for localization-based systems,’’ Comput. Commun., vol. 31, no. 12, pp. 2838–2849, Jul. 2008.
[8] M. Raya and J. P. Hubaux, ‘‘Securing vehicular ad hoc networks,’’ J. Com_put. Secur., vol. 15, no. 1, pp. 39–68, Jan. 2007._
[9] C. Zhang, X. Lin, R. Lu, P.-H. Ho, and X. Shen, ‘‘An efficient message
authentication scheme for vehicular communications,’’ IEEE Trans. Veh.
_Technol., vol. 57, no. 6, pp. 3357–3368, Nov. 2008._
[10] D. He, C. Chen, S. Chan, and J. Bu, ‘‘Analysis and improvement of a
secure and efficient handover authentication for wireless networks,’’ IEEE
_Commun. Lett., vol. 16, no. 8, pp. 1270–1273, Aug. 2012._
[11] L. Zhang, Q. Wu, A. Solanas, and J. Domingo-Ferrer, ‘‘A scalable robust authentication protocol for secure vehicular communications,’’ IEEE Trans. Veh. Technol., vol. 59, no. 4, pp. 1606–1617,
May 2010.
[12] A. Wasef and X. Shen, ‘‘Efficient group signature scheme supporting batch
verification for securing vehicular networks,’’ in Proc. IEEE Int. Conf.
_Commun., Cape Town, South Africa, May 2010, pp. 1–5._
[13] C. Zhang, R. Lu, X. Lin, P.-H. Ho, and X. Shen, ‘‘An efficient identitybased batch verification scheme for vehicular sensor networks,’’ in Proc.
_IEEE INFOCOM 27th Conf. Comput. Commun., Phoenix, AZ, USA,_
Apr. 2008, pp. 246–250.
[14] C.-C. Lee and Y.-M. Lai, ‘‘Toward a secure batch verification with
group testing for VANET,’’ Wireless Netw., vol. 19, no. 6, pp. 1441–1449,
Jan. 2013.
[15] D. He, S. Zeadally, B. Xu, and X. Huang, ‘‘An efficient identity-based
conditional privacy-preserving authentication scheme for vehicular ad
hoc networks,’’ IEEE Trans. Inf. Forensics Securiry, vol. 10, no. 12,
pp. 2681–2691, Dec. 2015.
[16] J. Zhang, Z. Sun, S. Liu, and P. Liu, in Proc. Int. Conf. Secur., Privacy
_Anonymity Comput., Commun. Storage, Nov. 2016, pp. 145–155._
[17] L. Yao, C. Lin, G. Wu, T. Jung, and K. Yim, ‘‘An anonymous authentication scheme in data-link layer for VANETs,’’ Int. J. Ad Hoc Ubiquitous
_Comput., vol. 22, no. 1, pp. 1–13, May 2016._
[18] S. Jiang, X. Zhu, and L. Wang, ‘‘An efficient anonymous batch authentication scheme based on HMAC for VANETs,’’ IEEE Trans. Intell. Transp.
_Syst., vol. 17, no. 8, pp. 2193–2204, Aug. 2016._
[19] D. Chaum, ‘‘Security without identification: Transaction systems to make
big brother obsolete,’’ Commun. ACM, vol. 28, no. 10, pp. 1030–1044,
Oct. 1985.
[20] C.-I. Fan, R.-H. Hsu, and C.-H. Tseng, ‘‘Pairing-based message authentication scheme with privacy protection in vehicular ad hoc networks,’’
in Proc. Int. Conf. Mobile Technol., Appl., Syst. Mobility, Sep. 2008,
pp. 1–7.
[21] X. Yue, B. Chen, X. Wang, Y. Duan, M. Gao, and Y. He, ‘‘An efficient
and secure anonymous authentication scheme for VANETs based on the
framework of group signatures,’’ IEEE Access, vol. 6, pp. 62584–62600,
Oct. 2018.
[22] J. Zhang, H. Zhong, J. Cui, Y. Xu, and L. Liu, ‘‘An extensible and
effective anonymous batch authentication scheme for smart vehicular networks,’’ IEEE Internet Things J., vol. 7, no. 4, pp. 3462–3473,
Apr. 2020.
[23] X. Li, Y. Liu, and X. Yin, ‘‘An anonymous conditional privacy-preserving
authentication scheme for VANETs,’’ in Proc. IEEE 21st Int. Conf. High
_Perform. Comput. Commun. IEEE 17th Int. Conf. Smart City IEEE 5th_
_Int. Conf. Data Sci. Syst. (HPCC/SmartCity/DSS), Zhangjiajie, China,_
Aug. 2019, pp. 1763–1770.
[24] H. Hasrouny, C. Bassil, A. E. Samhat, and A. Laouiti, ‘‘Group-based
authentication in V2V communications,’’ in Proc. 5th Int. Conf. Digit.
_Inf. Commun. Technol. Appl. (DICTAP), Beirut, Lebanon, Apr. 2015,_
pp. 173–177.
[25] J. Shao, X. Lin, R. Lu, and C. Zuo, ‘‘A threshold anonymous authentication protocol for VANETs,’’ IEEE Trans. Veh. Technol., vol. 65, no. 3,
pp. 1711–1720, Mar. 2016.
[26] T. Gao, X. Deng, Q. Li, M. Collotta, and I. You, ‘‘APPAS: A privacypreserving authentication scheme based on pseudonym ring in VSNs,’’
_IEEE Access, vol. 7, pp. 69936–69946, Mar. 2019._
[27] Z. Liu, L. Xiong, T. Peng, D. T. Peng, and H. B. Liang, ‘‘A realistic distributed conditional privacy-preserving authentication scheme for
vehicular ad hoc networks,’’ IEEE Access, vol. 6, pp. 26307–26317,
May 2018.
[28] M. S. I. Mamun and A. Miyaji, ‘‘An optimized signature verification
system for vehicle ad hoc NETwork,’’ in Proc. 8th Int. Conf. Wire_less Commun., Netw. Mobile Comput., Shanghai, China, Sep. 2012,_
pp. 1–8.
[29] X. Lin, X. Sun, P.-H. Ho, and X. Shen, ‘‘GSIS: A secure and privacypreserving protocol for vehicular communications,’’ IEEE Trans. Veh.
_Technol., vol. 56, no. 6, pp. 3442–3456, Nov. 2007._
[30] L. Zhang, Q. Wu, B. Qin, and J. Domingo-Ferrer, ‘‘APPA: Aggregate
privacy-preserving authentication in vehicular ad hoc networks,’’ in Proc.
_Int. Conf. Informat. Secur., 2011, pp. 293–308._
[31] D. Boneh and M. Franklin, ‘‘Identity-based encryption from
the Weil pairing,’’ in _Proc._ _Annu._ _Int._ _Cryptol._ _Conf.,_ 2001,
pp. 213–229.
[32] A. Miyaji, M. Nakabayashi, and S. Takano, ‘‘New explicit conditions
of elliptic curve traces for FR-reduction,’’ IEICE Trans. Fundam.
_Electron., Commun. Comput. Sci., vol. 84, no. 5, pp. 1234–1243,_
May 2001.
[33] D. Boneh, B. Lynn, and H. Shacham, ‘‘Short signatures from
the Weil pairing,’’ _J._ _Cryptol.,_ vol. 17, no. 4, pp. 297–319,
Jul. 2004.
[34] K. Ren, W. Lou, K. Kim, and R. Deng, ‘‘A novel privacy preserving
authentication and access control scheme for pervasive computing environments,’’ IEEE Trans. Veh. Technol., vol. 55, no. 4, pp. 1373–1384,
Jul. 2006.
[35] K. Sampigethava, L. Huang, M. Li, R. Poovendran, K. Matsuura, and
K. Sezaki, ‘‘CARAVAN: Providing location privacy for VANET,’’ in Proc.
_Int. Workshop Veh. Ad Hoc Netw., 2006, pp. 1–15._
[36] T. Elgamal, ‘‘A public key cryptosystem and a signature scheme based
on discrete logarithms,’’ IEEE Trans. Inf. Theory, vol. IT-31, no. 4,
pp. 469–472, Jul. 1985.
[37] V. S. Miller, ‘‘Use of elliptic curves in cryptography,’’ in Proc. Conf. Theory
_Appl. Cryptograph. Techn., 1985, pp. 417–426._
[38] Dedicated _Short_ _Range_ _Communications_ _(DSRC)._ Accessed:
Jun. 16, 2011. [Online]. Available: https://ieeexplore.ieee.org/abstract/
document/5888501.html
[39] M. Burrows, M. Abadiand, and R. Needham, ‘‘A logic of authentication,’’ ACM Trans. Comput. System., vol. 8, no. 1, pp. 18–36,
Feb. 1990.
[40] IEEE Trial-Use Standard for Wireless Access in Vehicular Environments—
_Security Services for Applications and Management, IEEE Standard_
1609.2, 2006.
[41] S. Wang and N. Yao, ‘‘LIAP: A local identity-based anonymous message authentication protocol in VANETs,’’ Comput. Commun., vol. 112,
pp. 154–164, Nov. 2017.
HUA LIU is currently pursuing the master's degree with the Zhejiang University of Science and Technology. His research interests include wireless mesh network security, cryptography, and information theory.
HAIJIANG WANG received the M.S. degree from Zhengzhou University, in 2013, and the Ph.D. degree from Shanghai Jiao Tong University, in 2018. He is currently a Teacher with the School of Information and Electronic Engineering, Zhejiang University of Science and Technology. His research interests include cryptography and information security, in particular public key encryption, attribute-based encryption, and searchable encryption.
HUIXIAN GU is currently pursuing the master's degree with the Zhejiang University of Science and Technology. His research interests include edge caching, wireless communication, and information theory.
| 17,936
|
{
"disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.1109/ACCESS.2020.3021408?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1109/ACCESS.2020.3021408, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "CCBY",
"status": "GOLD",
"url": "https://ieeexplore.ieee.org/ielx7/6287639/8948470/09186126.pdf"
}
| 2,020
|
[
"JournalArticle"
] | true
| null |
[
{
"paperId": "ec9fdcc55e2f96bccb4c9af5e0b1cb0f9698fc88",
"title": "An Extensible and Effective Anonymous Batch Authentication Scheme for Smart Vehicular Networks"
},
{
"paperId": "a5f52fb6a74836afba9a92af5ad47864e7c8c98e",
"title": "An Anonymous Conditional Privacy-Preserving Authentication Scheme for VANETs"
},
{
"paperId": "679fc48629057d35e31f1261dbf18c906562009e",
"title": "Vehicular Fog Computing: Enabling Real-Time Traffic Management for Smart Cities"
},
{
"paperId": "04c99428b09877cad8758b6ecab9581a7e69d4e8",
"title": "A Privacy-Preserving Message Forwarding Framework for Opportunistic Cloud of Things"
},
{
"paperId": "74e6710188e3deeee13304c10e2e93dc1e807998",
"title": "Offloading in Internet of Vehicles: A Fog-Enabled Real-Time Traffic Management System"
},
{
"paperId": "52272a115125208e680fd55cbf4b710318480b43",
"title": "A survey of local/cooperative-based malicious information detection techniques in VANETs"
},
{
"paperId": "37925af1f7de754df3679a995fe5e62aeac9de93",
"title": "LIAP: A local identity-based anonymous message authentication protocol in VANETs"
},
{
"paperId": "6df58346516100e57dfff767e896cc4fcd9cf0e8",
"title": "An anonymous authentication scheme in data-link layer for VANETs"
},
{
"paperId": "022a27f8b733d7c550688e4ad699e1cfaf8f8ed7",
"title": "An Efficient Anonymous Batch Authentication Scheme Based on HMAC for VANETs"
},
{
"paperId": "6171d4e96d03a9ce6b9c99e6bda24a9af7b9487c",
"title": "A Threshold Anonymous Authentication Protocol for VANETs"
},
{
"paperId": "3bda998e88f6eab9c3e6a24134359bdd5e2a1d3a",
"title": "An Efficient Identity-Based Conditional Privacy-Preserving Authentication Scheme for Vehicular Ad Hoc Networks"
},
{
"paperId": "30634f8570c026b104c413dddccc731cd8ee6aa9",
"title": "Group-based authentication in V2V communications"
},
{
"paperId": "0891a0965bcfd3ab92d63e9b2cd7c192288776e7",
"title": "Toward a secure batch verification with group testing for VANET"
},
{
"paperId": "74b5160546a895922474eb910489d54092114f21",
"title": "An Optimized Signature Verification System for Vehicle Ad Hoc NETwork"
},
{
"paperId": "2aaba9b71be3ca0937eb857717ef73927f245fa2",
"title": "Analysis and Improvement of a Secure and Efficient Handover Authentication for Wireless Networks"
},
{
"paperId": "c5a016494d89aaae75148d3e69ff01271a67b291",
"title": "APPA: Aggregate Privacy-Preserving Authentication in Vehicular Ad Hoc Networks"
},
{
"paperId": "9818997f588f8c22dcb2a119b906e94920900539",
"title": "Anonymity Analysis on Social Spot Based Pseudonym Changing for Location Privacy in VANETs"
},
{
"paperId": "a0b7158cfb660bd973b4d60d362d75d1ebd6d4c4",
"title": "An Efficient Pseudonymous Authentication Scheme With Strong Privacy Preservation for Vehicular Communications"
},
{
"paperId": "923da51f72c9db0a20718699b1252aa6ef6b9bbc",
"title": "Efficient Group Signature Scheme Supporting Batch Verification for Securing Vehicular Networks"
},
{
"paperId": "3ee2a1327509dac486592a3f2abcb84baa20a9f0",
"title": "A Scalable Robust Authentication Protocol for Secure Vehicular Communications"
},
{
"paperId": "ac545dd14f623ec0afeaff9d876d8c6f6ba611ae",
"title": "Pairing-based message authentication scheme with privacy protection in vehicular ad hoc networks"
},
{
"paperId": "15e6ff6ffa4206601654f22ff83ed41e3ebc00dc",
"title": "Vehicular Ad Hoc Networks: A New Challenge for Localization-Based Systems"
},
{
"paperId": "b8700747f6830eb3d2708db347506ed4e0e90a41",
"title": "An Efficient Message Authentication Scheme for Vehicular Communications"
},
{
"paperId": "7dd9e7739633932bdcbf9e0e552b4f8f56e93a6a",
"title": "GSIS: A Secure and Privacy-Preserving Protocol for Vehicular Communications"
},
{
"paperId": "73f8a0544eb1570e62a06b31255b363a86bc81cd",
"title": "Securing Vehicular Ad Hoc Networks"
},
{
"paperId": "f5d63ef587780db5ec2df1ac25dc3c136edf61b4",
"title": "The use of elliptic curves in cryptography"
},
{
"paperId": "f066820ea7c7a6aa826a922f8432910a15753dd7",
"title": "A novel privacy preserving authentication and access control scheme for pervasive computing environments"
},
{
"paperId": "3c0c82f42172bc1da4acc36b656d12351bf53dae",
"title": "Short Signatures from the Weil Pairing"
},
{
"paperId": "a7a9f305ebc4d5e9d128b0505fed18e62a393f46",
"title": "Identity-Based Encryption from the Weil Pairing"
},
{
"paperId": "66f4416508bb55750221494923f1a62c2845af89",
"title": "New Explicit Conditions of Elliptic Curve Traces for FR-Reduction"
},
{
"paperId": "502935ba4c4be621106cd186610bdcb8ce935ea0",
"title": "A logic of authentication"
},
{
"paperId": "a6020d6bce5c69e476dfee15bdf63944e2a717b3",
"title": "Security without identification: transaction systems to make big brother obsolete"
},
{
"paperId": "06a1d8fe505a4ee460e24ae3cf2e279e905cc9b0",
"title": "A public key cryptosystem and a signature scheme based on discrete logarithms"
},
{
"paperId": "51e8ee17fb9537f4703847c752389c5d0f145ca6",
"title": "Dedicated Short-Range Communication (DSRC)"
},
{
"paperId": "d2f13ed8f20a7ed9e6d5c9c2a02fbdadd2968465",
"title": "APPAS: A Privacy-Preserving Authentication Scheme Based on Pseudonym Ring in VSNs"
},
{
"paperId": "45816ebccf6673f0347347b1b448893356b1ab6d",
"title": "A Realistic Distributed Conditional Privacy- Preserving Authentication Scheme for Vehicular Ad Hoc Networks"
},
{
"paperId": "90800404e8b54c5a31930799e87fb09218a8b30f",
"title": "An Efficient and Secure Anonymous Authentication Scheme for VANETs Based on the Framework of Group Signatures"
},
{
"paperId": null,
"title": "Use Standard for Wireless Access in Vehicular Environments-Security Services for Applications and Management"
},
{
"paperId": "fb10495488bfc72edaf63bd17bc7963b34b6cefe",
"title": "CARAVAN: Providing Location Privacy for VANET"
}
] | 17,936
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Engineering",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/00183d0d30904451be10a8ec7ceb6edf4a8f3637
|
[
"Computer Science"
] | 0.883614
|
Decentralized Hypothesis Testing in Wireless Sensor Networks in the Presence of Misbehaving Nodes
|
00183d0d30904451be10a8ec7ceb6edf4a8f3637
|
IEEE Transactions on Information Forensics and Security
|
[
{
"authorId": "2803419",
"name": "Erfan Soltanmohammadi"
},
{
"authorId": "48014844",
"name": "Mahdi Orooji"
},
{
"authorId": "1399257383",
"name": "M. Naraghi-Pour"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"IEEE Trans Inf Forensics Secur"
],
"alternate_urls": [
"http://ieeexplore.ieee.org/servlet/opac?punumber=10206",
"http://www.signalprocessingsociety.org/publications/periodicals/forensics/"
],
"id": "d406a3f4-dc05-43be-b1f6-812f29de9c0e",
"issn": "1556-6013",
"name": "IEEE Transactions on Information Forensics and Security",
"type": "journal",
"url": "http://www.ieee.org/organizations/society/sp/tifs.html"
}
| null |
# Decentralized Hypothesis Testing in Wireless Sensor Networks in the Presence of Misbehaving Nodes
## Erfan Soltanmohammadi, Student Member, IEEE, Mahdi Orooji, Student Member, IEEE, Mort Naraghi-Pour, Member, IEEE
**_Abstract—Wireless sensor networks are prone to node mis-_**
**behavior arising from tampering by an adversary (Byzantine**
**attack), or due to other factors such as node failure resulting from**
**hardware or software degradation. In this paper we consider the**
**problem of decentralized detection in wireless sensor networks**
**in the presence of one or more classes of misbehaving nodes.**
**Binary hypothesis testing is considered where the honest nodes**
**transmit their binary decisions to the fusion center (FC), while**
**the misbehaving nodes transmit fictitious messages. The goal of**
**the FC is to identify the misbehaving nodes and to detect the**
**state of nature. We identify each class of nodes with an operating**
**point (false alarm and detection probabilities) on the ROC**
**(receiver operating characteristic) curve. Maximum likelihood**
**estimation of the nodes’ operating points is then formulated and**
**solved using the expectation maximization (EM) algorithm with**
**the nodes’ identities as latent variables. The solution from the**
**EM algorithm is then used to classify the nodes and to solve**
**the decentralized hypothesis testing problem. Numerical results**
**compared with those from the reputation-based schemes show a**
**significant improvement in both classification of the nodes and**
**hypothesis testing results. We also discuss an inherent ambiguity**
**in the node classification problem which can be resolved if the**
**honest nodes are in majority.**
**_Index Terms—Wireless sensor networks, decentralized hypoth-_**
**esis testing, expectation maximization, sensor node classification,**
**Byzantine attack.**
I. INTRODUCTION
Wireless sensor networks (WSNs) consist of a large number
of tiny battery-powered sensors that are densely deployed
to sense their environment and report their findings to a
central processor (fusion center) over wireless links. Due
to size and energy constraints, sensor nodes have limited
processing, storage and communication capabilities. In a large
network of such sensors many nodes may fail due to hardware
degradation or environmental effects. While in some cases a
faulty node stops operating altogether, in other cases it may be
misbehaving and reporting false data as in the case of stuck-at
faults [1].
Sensor networks are also vulnerable to tampering. The networks are envisioned to be distributed over a large geographic
area with unattended sensor nodes which may be captured and
reprogrammed by an adversary. An adversary can also deploy
its own sensor nodes to transmit false data in order to confuse
Copyright (c) 2012 IEEE. Personal use of this material is permitted.
However, permission to use this material for any other purposes must be
obtained from the IEEE by sending a request to [email protected].
The authors are with the School of Electrical Engineering and Computer Science, Division of Electrical and Computer Engineering, Louisiana State University, Baton Rouge, LA 70803.
the fusion center (FC). Sensors under an adversary’s control
are often referred to as Byzantine nodes.
In binary hypothesis testing, in order to lower their bandwidth requirement and energy expenditures, the sensors often
make a local decision regarding the state of the hypothesis and
only send their binary decision to the FC. Having received the
messages from all the nodes, the FC will detect the hypothesis
using a judicious decision rule [2].
The problem of decentralized detection in the presence
of Byzantine nodes has been investigated by several authors
[3]–[6]. In [4], it is assumed that through collaboration, the
Byzantine nodes are aware of the true hypothesis. The authors
formulate the problem in the context of Kullback-Leibler
divergence and obtain optimal attacking distribution for the
Byzantine nodes using a water-filling procedure. In [5], the
authors consider data fusion schemes in a network under
Byzantine attack and propose techniques for identifying the
malicious users. In [6], the authors consider adding stochastic
resonance noise at the honest and/or Byzantines in order to
enhance the detection performance.
Cooperative spectrum sensing in cognitive radio networks
(CRN) is another example of decentralized hypothesis testing
where the secondary (unlicensed) users make a binary decision
on whether a channel is vacant of the primary (licensed)
user or not, and transmit that decision to the FC. The FC
then processes the received data from all the secondary users
and decides on the state of the channel. This problem is
identical to the classical decentralized detection and recently
several papers have considered cooperative spectrum sensing
in the presence of Byzantine attacks (spectrum sensing data
falsification) [7]–[13]. In [7], sequential probability ratio test
is modified via a reputation-based mechanism in order to filter
out the false data and only accept reliable messages. In [12],
the authors present a scheme for identifying the Byzantine
nodes and strategies for best fusion rule. In [14], a method
is presented to detect the Byzantine nodes based on how
their transmissions compare with those expected from honest
nodes. These approaches are often categorized as reputationbased fusion rules [12], [15]. We note that in cooperative
spectrum sensing we may also have more than one class
of unreliable nodes. While some malicious users may send
false data in order to gain unfair access to the channel, others
may be sending incorrect data due to the malfunctioning of
their sensing terminal. We should also point out that while
a collaborative CRN may consist of at most tens of radios,
a sensor network may comprise of hundreds or thousands of
nodes Therefore the proposed algorithms for CRNs ma not
-----
always be scalable for WSNs. However, the proposed method
in this paper is also applicable in the case of cooperative
spectrum sensing in CRNs.
In this paper we assume that there may be more than
one class of misbehaving nodes. We show that from the
point of view of the FC each class can be identified with
a (operating) point on the receiver operating characteristic
(ROC) that corresponds to the decision rule of the sensor
nodes in that class. We first estimate the operating points
of each class. For a fixed hypothesis vector, we formulate
this problem as a maximum likelihood estimation problem
with latent variables that correspond to the class identity of
the nodes. This problem is then solved using the expectation
maximization algorithm. Following this step we detect the
class identity of each node and also detect the hypothesis
vector.
The rest of this paper is organized as follows. The system
model is presented in Section II. In Section III, the proposed
node classifier is introduced, and in Section IV, the problem
of counterpart networks for node classification is presented.
Our performance metrics are introduced in Section V, and
numerical results are provided and conclusions are drawn in
Sections VI and VII, respectively.
II. SYSTEM MODEL
We consider a wireless sensor network consisting of L
nodes employed to detect the state of nature H ∈{H0, H1}.
It is assumed that there are K classes of nodes, C =
{c1, c2, · · ·, cK }, where c1 is the class of honest nodes and
c2, · · ·, cK denote the other K − 1 classes of (honest or
misbehaving) nodes. Each node samples the environment once
per unit time and makes a local decision on the state of H.
It then transmits its binary decision to the FC which, after
receiving a number of transmissions from the nodes, attempts
to classify the nodes and also decide on the state of H.
Denote by ht ∈ {H0, H1} the state of H at time t = 1, 2, . . . , T, and let rl,t ∈ {0, 1}, l = 1, 2, . . . , L, t = 1, 2, . . . , T, denote the decision of the lth node at time t regarding the state of ht. Since all the nodes in a class ck are identical, the probabilities of detection and false alarm for class ck are, respectively, given by

p̃d(k) = P(rl,t = 1 | ht = H1, l ∈ ck),  (1)

and

p̃f(k) = P(rl,t = 1 | ht = H0, l ∈ ck).  (2)
As in [4], [5], [12]–[14] we assume that the Byzantine
nodes do not collaborate. While collaboration can improve
the effectiveness of the adversary’s attack, it has its own
drawbacks. Collaboration requires additional infrastructure
such as a FC to coordinate the attacks, as well as increased
communication which can quickly deplete the resources of the
Byzantine nodes. In the absence of such collaboration, we can
assume that, given the hypothesis (H0 or H1), for any time t the sensor decisions rl,t, l = 1, 2, . . . , L, are conditionally independent [15]–[17]. In addition, we assume that the sensor decisions across time are conditionally independent given the hypothesis vector h = (h1, h2, . . . , hT ) [12], [14].¹ From these assumptions it follows that given the hypothesis vector h, the sensor decisions rl,t, l = 1, 2, . . . , L, t = 1, 2, . . . , T, are conditionally independent.
While an honest node l ∈ c1 will transmit its decision rl,t
to the FC, nodes in other classes may choose to do differently.
In particular, let dl,t ∈{0, 1} denote the message received at
the FC from node l at time t and define
ρ0(k) ≜ P (dl,t = 1|rl,t = 0, l ∈ ck), (3)
ρ1(k) ≜ P (dl,t = 1|rl,t = 1, l ∈ ck). (4)
Clearly for honest nodes, ρ0(1) = 0 and ρ1(1) = 1. Let
pd(k) ≜ P(dl,t = 1 | ht = H1, l ∈ ck) = ρ1(k) p̃d(k) + ρ0(k)(1 − p̃d(k)),  (5)

and

pf(k) ≜ P(dl,t = 1 | ht = H0, l ∈ ck) = ρ1(k) p̃f(k) + ρ0(k)(1 − p̃f(k)).  (6)
One may view pd(k) and pf (k) as the detection and false
alarm probabilities “perceived” by the FC for nodes in class
ck.
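Equations (5) and (6) are easy to evaluate numerically; for instance, a node that always flips its local decision (ρ0(k) = 1, ρ1(k) = 0) is perceived at (1 − p̃f(k), 1 − p̃d(k)), the case noted in Remark 1 below. A small sketch:

```python
def perceived(pd_true: float, pf_true: float, rho0: float, rho1: float):
    """Map a class's true ROC point (pf~, pd~) and reporting strategy
    (rho0, rho1) to the operating point (pf, pd) seen by the FC."""
    pd = rho1 * pd_true + rho0 * (1 - pd_true)   # eq. (5)
    pf = rho1 * pf_true + rho0 * (1 - pf_true)   # eq. (6)
    return pf, pd

print(perceived(0.9, 0.1, rho0=0.0, rho1=1.0))  # honest node: (0.1, 0.9)
print(perceived(0.9, 0.1, rho0=1.0, rho1=0.0))  # flipping node: (0.9, 0.1)
```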
Recently in [13], the authors consider the problem of
detecting statistical attacks in cognitive radios using belief
propagation. This approach is similar to the reputation-based
method of [12], [15]. The modeling assumptions in [13] are
similar but somewhat simpler than those presented here. In
particular two types of attackers are assumed. If node k is
of Type-1, then it attempts to confuse the FC only when
hypothesis H1 is detected by sending a 0 with probability
rk and a 1 with probability 1 − rk. On the other hand, if node
k is of Type-2, it tries to confuse the FC when the detected
hypothesis is H0 by sending a 1 with probability rk and a 0
with probability 1−rk. Note that rk = 0 corresponds to honest
nodes. It is also assumed that there is a subset of trusted nodes
whose identities are known to the FC. In contrast, we do not
assume that such prior information is available at the FC and
our attacker model is more general in that the malicious nodes
may try to confuse the FC under both hypotheses.
**Remark 1. In Section III, we present our method for esti-**
_mating (pf_ (k), pd(k)) for k = 1, 2, · · ·, K. Our approach
_does not depend on how these probabilities are arrived at. In_
_particular it includes the case that Byzantines, after detecting_
_the hypothesis, flip their decisions and send it to the FC. This_
_corresponds to pd(k) = 1 −_ p˜d(k) and pf (k) = 1 − p˜f (k).
_Furthermore, we have assumed error free channels between_
_the sensors and the FC. However, the model presented here_
_also includes noisy channel models between sensors and the_
_FC. The effect of the channel transition probabilities can be_
_included in the parameters ρ0(k) and ρ1(k)._
The receiver operating characteristic (ROC) of a node in
class ck is denoted by Uk, i.e., ˜pd(k) = Uk(˜pf (k)). In
the following we refer to the point (pf (k), pd(k)) as the
operating point of a node in class ck. For the honest nodes, pd(k) = p̃d(k) and pf(k) = p̃f(k), and so their operating point is (p̃f(k), Uk(p̃f(k))). We show in Appendix A that for the other nodes, the operating point lies in a region bounded by (p̃f(k), Uk(p̃f(k))) and (p̃f(k), Vk(p̃f(k))), where Vk(x) is the reflection of Uk(x) with respect to the point (0.5, 0.5), i.e., Vk(x) = 1 − Uk(1 − x). These nodes can achieve any operating point in this region by choosing appropriate values for ρ0(k) and ρ1(k).

¹This assumption holds, for example, when the sensors' observations across time are corrupted by white noise.
III. CLASSIFICATION OF THE NODES
Let Z = [zl,k], zl,k ∈{0, 1} for l = 1, 2, · · ·, L, k =
1, 2,, K denote the identification matrix of the nodes
- · ·
where zl,k = 1 if l ∈ ck and 0, otherwise. To identify the
nodes, the FC collects T messages from each node and stores
them in a matrix D = [d_{l,t}], l = 1, 2, · · ·, L, t = 1, 2, · · ·, T, subsequently referred to as the decision matrix. Using the decision matrix the FC must detect the identification matrix Z and the hypothesis vector h = (h1, h2, · · ·, hT).

The maximum likelihood detection rule for (Z, h) is given by

$$(\hat{Z}, \hat{h}) = \arg\max_{Z,h} P(D \mid Z, h). \tag{7}$$

Evaluation of (7) requires the likelihood function P(D | Z, h), which is computed below. For a given hypothesis vector h, denote the number of H0's and H1's in h by N and M = T − N, respectively. Also denote the number of correct decisions of the lth node on hypotheses H0 and H1 by nl and ml, 1 ≤ l ≤ L, respectively. In other words, out of N occurrences of H0 in h, node l correctly detects nl of them, and out of M occurrences of H1 in h, it correctly detects ml of them. We note that for a given hypothesis vector h, nl and ml can be calculated from the lth row of D. We have

$$P(D \mid Z, h) = \prod_{l=1}^{L} \prod_{k=1}^{K} \left[ p_c(k)^{n_l} \, (1 - p_c(k))^{N - n_l} \, p_d(k)^{m_l} \, (1 - p_d(k))^{M - m_l} \right]^{z_{l,k}}, \tag{8}$$

where pc(k) ≜ 1 − pf(k) is the probability of correct rejection.

It can be seen from (8) that the likelihood function P(D | Z, h) depends on the unknown parameters (pf(k), pd(k)) for k = 1, 2, · · ·, K. Therefore, for the detection problem in (7) the Bayesian or the Neyman-Pearson rule cannot be implemented. The generalized likelihood ratio test (GLRT) is often used in detection problems with unknown parameters [18]; however, for our problem the GLRT is not mathematically tractable. Therefore, in this paper we proceed as follows. For a given hypothesis vector h we first estimate the operating points (pf(k), pd(k)) for k = 1, 2, · · ·, K. Using the estimated operating points, we can implement the maximum a posteriori (MAP) classification rule for Z. The estimated operating points and identification matrix Z are then used to implement the maximum likelihood detection rule for the hypothesis vector h. We have not been able to prove the optimality of the proposed method due to its mathematical intractability. In Section VI our simulation results are compared with the Cramer-Rao lower bound and show a close match.

_A. Estimation of Class Parameters_

From (8), it is evident that to detect Z we need to first estimate the operating points (pf(k), pd(k)) for k = 1, 2, · · ·, K. Note that in the following it is assumed that the hypothesis vector h is fixed and all the probabilities are conditioned on h. For ease of notation, however, we drop this conditioning from our notation.

In addition to the operating points of each class, the FC is also unaware of the fraction of nodes in each class. Let πk = P(z_{l,k} = 1) denote the probability that node l belongs to class ck and define the matrix of class parameters, Θ, whose kth row is given by

$$\theta(k) \triangleq [p_c(k), p_d(k), \pi(k)]. \tag{9}$$

We would like to estimate the class parameters Θ from the observation matrix D. Since the conditional probability P(D | Θ) is not given, we may write the maximum likelihood estimate for Θ as

$$\Theta^{*} = \arg\max_{\Theta} \sum_{Z} P(D, Z \mid \Theta). \tag{10}$$

This may be viewed as a mixture model (with Z as the latent variables, since the nodes are not identified) and can be effectively solved using the iterative expectation maximization (EM) algorithm [19]. Let us define the log-likelihood function

$$L(\Theta; D, Z) \triangleq \log P(D, Z \mid \Theta). \tag{11}$$

Due to the fact that Z is latent, with EM we consider the conditional expectation of (11) under the posterior distribution of Z given D and Θ. This is the expectation step of EM. In the maximization step, this expectation is maximized with respect to Θ. Denote the current and the revised estimates of Θ by Θ^old and Θ^new, respectively. The two steps of the EM algorithm are described below.

_1) Expectation:_ Using the current estimate of the matrix of class parameters, Θ^old, find the posterior distribution of Z given D and Θ^old. Using this distribution, find the expectation of the log-likelihood function in (11) for an arbitrary Θ, given by

$$Q(\Theta; \Theta^{\text{old}}) \triangleq E_Z\left[L(\Theta; D, Z) \mid D, \Theta^{\text{old}}\right] = \sum_{Z} P(Z \mid D, \Theta^{\text{old}}) \, L(\Theta; D, Z). \tag{12}$$

_2) Maximization:_ Revise the estimate of the class parameters to maximize the expectation calculated in the previous step, i.e., let

$$\Theta^{\text{new}} = \arg\max_{\Theta} Q(\Theta; \Theta^{\text{old}}). \tag{13}$$

It has been shown that each update of the EM algorithm is guaranteed to increase the log-likelihood function [20]. This implies that the EM algorithm will converge regardless of the initial value of Θ [19], [21]. We now present the two steps of the EM algorithm for the problem at hand.
$$L(\Theta; D, Z) = \log P(D, Z \mid \Theta) = \log\left[P(D \mid Z, \Theta)\, P(Z \mid \Theta)\right] = \log \prod_{l=1}^{L} \prod_{k=1}^{K} \pi_k^{z_{l,k}} \left[ p_c(k)^{n_l} (1 - p_c(k))^{N - n_l} p_d(k)^{m_l} (1 - p_d(k))^{M - m_l} \right]^{z_{l,k}} \tag{14}$$

$$= \sum_{l=1}^{L} \sum_{k=1}^{K} z_{l,k} \left[ \log \pi_k + n_l \log p_c(k) + (N - n_l) \log(1 - p_c(k)) + m_l \log p_d(k) + (M - m_l) \log(1 - p_d(k)) \right].$$

To calculate Q(Θ; Θ^old) in (12) for the expectation step, one should find the conditional expectation of L(Θ; D, Z) with respect to Z. Hence,

$$Q(\Theta; \Theta^{\text{old}}) = \sum_{l=1}^{L} \sum_{k=1}^{K} E[z_{l,k} \mid D, \Theta^{\text{old}}] \left[ \log \pi_k + n_l \log p_c(k) + (N - n_l) \log(1 - p_c(k)) + m_l \log p_d(k) + (M - m_l) \log(1 - p_d(k)) \right]. \tag{15}$$

Denoting xl ≜ (nl, ml), 1 ≤ l ≤ L, we have

$$E(l, k) \triangleq E(z_{l,k} \mid D, \Theta^{\text{old}}) = P(z_{l,k} = 1 \mid x_l; \Theta^{\text{old}}) = \frac{\pi_k^{(\text{old})} P(x_l \mid z_{l,k} = 1; \theta^{(\text{old})}(k))}{\sum_{j=1}^{K} \pi_j^{(\text{old})} P(x_l \mid z_{l,j} = 1; \theta^{(\text{old})}(j))}, \tag{16}$$

where

$$P(x_l \mid z_{l,k} = 1; \theta^{(\text{old})}(k)) = \left[p_c^{(\text{old})}(k)\right]^{n_l} \left[1 - p_c^{(\text{old})}(k)\right]^{N - n_l} \left[p_d^{(\text{old})}(k)\right]^{m_l} \left[1 - p_d^{(\text{old})}(k)\right]^{M - m_l}, \tag{17}$$

and where θ^(old)(k) (the kth row of Θ^old) is the current vector of parameters for the kth class. The quantity E(l, k) can be interpreted as the probability that class ck is responsible for the decisions made by the lth node. So, the effective number of nodes assigned to class ck, denoted by Lk, is given by

$$L_k \triangleq \sum_{l=1}^{L} E(l, k). \tag{18}$$

We now need to perform the maximization step in (15). The estimates of the probability of correct rejection and the probability of detection for any 1 ≤ k ≤ K can be found by solving (13) as

$$\frac{\partial Q(\Theta; \Theta^{\text{old}})}{\partial p_c(k)} = \sum_{l=1}^{L} E(l, k) \left[ \frac{n_l}{p_c(k)} - \frac{N - n_l}{1 - p_c(k)} \right] = 0, \tag{19}$$

$$\frac{\partial Q(\Theta; \Theta^{\text{old}})}{\partial p_d(k)} = \sum_{l=1}^{L} E(l, k) \left[ \frac{m_l}{p_d(k)} - \frac{M - m_l}{1 - p_d(k)} \right] = 0, \tag{20}$$

which after some manipulations results in

$$p_c^{\text{new}}(k) = \frac{1}{L_k} \sum_{l=1}^{L} \frac{n_l}{N}\, E(l, k), \tag{21}$$

$$p_d^{\text{new}}(k) = \frac{1}{L_k} \sum_{l=1}^{L} \frac{m_l}{M}\, E(l, k). \tag{22}$$

Finally, we should maximize Q(Θ; Θ^old) with respect to πk under the constraint that Σ_{k=1}^{K} πk = 1. This can be achieved using the Lagrange multiplier method, by maximizing the Lagrangian

$$\tilde{Q}(\Theta, \nu; \Theta^{\text{old}}) \triangleq Q(\Theta; \Theta^{\text{old}}) + \nu \left[ \sum_{k=1}^{K} \pi_k - 1 \right]. \tag{23}$$

We have

$$\frac{\partial \tilde{Q}}{\partial \pi_k} = \sum_{l=1}^{L} \frac{E(l, k)}{\pi_k} + \nu = 0. \tag{24}$$

Multiplying both sides by πk and summing over k we get ν = −L, which results in

$$\pi_k^{\text{new}} = \frac{L_k}{L}. \tag{25}$$

Since the log(·) function is concave and E(l, k) ≥ 0, ∀l, k, it can be seen from (15) that Q(Θ; Θ^old) is a concave function of the πk's (in ℜ^+). This, together with the fact that the constraint Σ_{k=1}^{K} πk = 1 is linear in the πk's, implies that the Lagrange multiplier method in (24) achieves the optimal solution [22].
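For concreteness, the E-step (16) and the closed-form M-step updates (18), (21), (22) and (25) fit in a few lines of NumPy. The sketch below is ours rather than the authors' code: it assumes decisions are coded so that a 0 is the correct report under H0 and a 1 under H1, and it omits guards for degenerate inputs (N = 0, M = 0, or probabilities saturating at 0 or 1).

```python
import numpy as np

def em_step(D, h, p_c, p_d, pi):
    """One EM update for a fixed hypothesis vector h.

    D: (L, T) binary decision matrix; h: (T,) binary hypothesis vector;
    p_c, p_d, pi: length-K arrays holding the current rows of Theta.
    """
    N, M = np.sum(h == 0), np.sum(h == 1)        # occurrences of H0 and H1
    n = np.sum((D == 0) & (h == 0), axis=1)      # correct decisions under H0
    m = np.sum((D == 1) & (h == 1), axis=1)      # correct decisions under H1

    # E-step, eq. (16): responsibilities E(l, k), computed in log-space
    log_r = (np.log(pi)
             + np.outer(n, np.log(p_c)) + np.outer(N - n, np.log(1 - p_c))
             + np.outer(m, np.log(p_d)) + np.outer(M - m, np.log(1 - p_d)))
    log_r -= log_r.max(axis=1, keepdims=True)    # stabilize before exponentiating
    E = np.exp(log_r)
    E /= E.sum(axis=1, keepdims=True)

    # M-step: effective class sizes (18) and closed forms (21), (22), (25)
    L_k = E.sum(axis=0)
    p_c_new = E.T @ (n / N) / L_k
    p_d_new = E.T @ (m / M) / L_k
    pi_new = L_k / D.shape[0]
    return p_c_new, p_d_new, pi_new, E
```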
_B. Classification of the Nodes_
Let Θ* denote the matrix of class parameters estimated by the EM algorithm. Given Θ*, the conditional probability that node l belongs to class ck is given by

$$P(z_{l,k} = 1 \mid x_l; \theta^{*}(k)) = \frac{\pi_k^{*} P(x_l \mid z_{l,k} = 1; \theta^{*}(k))}{\sum_{j=1}^{K} \pi_j^{*} P(x_l \mid z_{l,j} = 1; \theta^{*}(j))}, \tag{26}$$

where θ*(k) is the kth row of Θ*. The denominator in (26) is independent of k. Therefore, the maximum a posteriori classification rule for node l (given Θ*) is given by

$$k^{*} = \arg\max_{k} \left\{ \pi_k^{*} P(x_l \mid z_{l,k} = 1; \theta^{*}(k)),\; k = 1, 2, \cdots, K \right\}, \tag{27}$$

and we set

$$z_{l,k}^{*} = \begin{cases} 1 & \text{for } k = k^{*} \\ 0 & \text{for } k \neq k^{*}. \end{cases} \tag{28}$$
_C. Estimation of the Hypothesis Vector_
In the previous section we showed how to estimate the class parameters Θ* and obtain the node identification matrix Z* for a given hypothesis vector h. Therefore, in the sequel we denote these parameters by Z*(h) = [z*_{l,k}(h)] and Θ*(h). Similarly, N, M, nl, and ml are substituted by N(h), M(h), nl(h), and ml(h), respectively. The maximum likelihood detection rule for h obtained from the observation matrix D given Z*(h) and Θ*(h) is now given by

$$\hat{h} = \arg\max_{h} P(D \mid Z^{*}(h); \Theta^{*}(h)), \tag{29}$$

where

$$P(D \mid Z^{*}(h); \Theta^{*}(h)) = \prod_{l=1}^{L} \prod_{k=1}^{K} \left[ p_c^{*}(k; h)^{n_l(h)} \left(1 - p_c^{*}(k; h)\right)^{N(h) - n_l(h)} p_d^{*}(k; h)^{m_l(h)} \left(1 - p_d^{*}(k; h)\right)^{M(h) - m_l(h)} \right]^{z^{*}_{l,k}(h)}, \tag{30}$$

and where [p*_c(k; h), p*_d(k; h), π*(k; h)] is the kth row of Θ*(h), denoting the estimated parameters of the kth class for the hypothesis vector h. The final estimate of all the network parameters is given by (ĥ, Ẑ, Θ̂) where Ẑ = Z*(ĥ) and Θ̂ = Θ*(ĥ). The entire procedure is summarized in Algorithm 1.
**Data: Decision matrix, D**
**Result: Estimates of the identification matrix Ẑ, the matrix of class parameters Θ̂, and the hypothesis vector ĥ**
**begin**
    **forall the possible hypothesis vectors h ∈ {0, 1}^T do**
        _Estimate the matrix of class parameters, Θ*(h), using the EM algorithm:_
        Assume an initial value for Θ^old;
        **while ‖Θ^new − Θ^old‖ ≥ ε do**
            E step: find E(l, k) using (16);
            M step: estimate Θ^new (p_c^new(k), p_d^new(k) and π_k^new) using (21), (22), and (25);
        **end**
        Classify the nodes by computing Z*(h): for each node l find k* using (27);
    **end**
    Detect the hypothesis vector ĥ from (29);
    Return (ĥ, Ẑ, Θ̂) where Ẑ = Z*(ĥ) and Θ̂ = Θ*(ĥ).
**end**
**Algorithm 1: Calculation of the identification matrix, the matrix of class parameters, and the hypothesis vector via the EM algorithm.**
**Remark 2. We have assumed that the FC is aware of the**
_number of classes K. The issue of how to select the number of_
_classes known as model order selection is a well known prob-_
_lem in classification. While criteria such as Akaike information_
_criterion (AIC) or Bayesian information criterion (BIC) have_
_been proposed, they do not always work satisfactorily and tend_
_to favor overly simple models [21]. The main issue in model_
_selection is under- or overfitting the data. However, in large_
_sensor networks this will not be an issue owing to the fact that_
_the expected number of classes K is much smaller than the_
_number of nodes L. Therefore, K can be overestimated and_
_yet be much smaller than L (in which case overfitting will not_
_occur). If the actual number of classes is smaller, the proposed_
_algorithm will not assign any nodes to the fictitious classes._
_In decentralized detection schemes such as ours, it is_
_assumed that the nodes only transmit a (binary) quantized_
_version of their measurement to the FC (instead of their_
_actual measurement). A question then arises as to how the_
_FC can identify the nodes. While the nodes can transmit_
_a label for identification, the overhead associated with this_
_approach may not be justified given the severely limited energy_
_and transmission capability of the sensors. We believe that_
_this issue can be resolved using the media access control_
_mechanism. Clearly the sensors need some form of arbitration_
_mechanism to access the channel. The information from that_
_mechanism can be used by the FC to identify the nodes and_
_determine which received bit corresponds to which node. For_
_example the FC may use round-robin scheduling to collect the_
_nodes’ messages. The information from the nodes’ turn in the_
_schedule can be used to identify them._
_D. Complexity_
For a given hypothesis vector, the EM algorithm is very
fast and converges in only a few steps. However, for a vector
of T decisions from the sensors the EM algorithm must be
performed 2^T times, corresponding to the 2^T possible hypothesis vectors. This increases the complexity of the algorithm
exponentially in terms of the observation interval. However,
as discussed in the numerical section, the proposed algorithm
converges much faster than the reputation-based algorithms
in terms of the number of observation samples T (a brief
description of the reputation-based algorithms is provided in
Appendix B). Another point to observe is that the rate at which
the state of nature changes is much lower than the rate at which
the sensors sample the environment. In other words, during
an observation time of T decisions from the sensors, the state
of nature will not change more than a few times. In such a
case the number of vectors h for which the EM algorithm is
performed is only polynomial in T . For example, in order to
detect a single change in h (from H0 to H1 or vice versa), EM
is performed for only 2T possible vectors h. Furthermore, the
complexity of the proposed algorithm is linear in the number
of nodes L and quadratic in the number of classes K. Given
that sensor networks are expected to consist of hundreds or
thousands of nodes, the linear complexity in the number of
nodes is significant.
IV. COUNTERPART NETWORKS
In this section we will show that any decision matrix D is
equally likely to be generated by one of two different networks
which we refer to as counterpart networks. For any matrix of
class parameters Θ we can define a counterpart matrix, Θ^(c), whose kth row, 1 ≤ k ≤ K, is given by

$$\theta^{(c)}(k) = \left[p_c^{(c)}(k), p_d^{(c)}(k), \pi^{(c)}(k)\right] = \left[1 - p_d(k),\, 1 - p_c(k),\, \pi(k)\right]. \tag{31}$$

Also define the counterpart hypothesis vector h^(c) ≜ 1_T − h, where 1_T is a vector of all ones with length T. It can be verified that

$$P(D \mid Z, \Theta, h) = P(D \mid Z, \Theta^{(c)}, h^{(c)}). \tag{32}$$

The intuition behind (32) is that the probability of transmitting a one (or a zero) for a node with the operating point (pf, pd) under Hη, η ∈ {0, 1}, is the same as for a node with the operating point (pd, pf) under H1−η. Therefore any observed decision matrix D is equally likely to be generated by one of two networks, namely {Z, Θ} under the hypothesis vector h, or {Z, Θ^(c)} under the hypothesis vector h^(c). This implies that, regardless of the method used, there are always two solutions for the estimation of the class parameters and the detected hypothesis vector.
The ambiguity described above can be resolved by assuming
some prior information on the network. In practice, the operating point of the honest nodes (pf (1), pd(1)) will be above
the chance line pd = pf [23]. If it is known that the class of
honest nodes is the largest class, then the ambiguity can be
resolved by choosing the solution for which the largest class
is above the chance line.
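The equivalence (32) is easy to confirm numerically. The sketch below is our own construction (hard class labels z stand in for Z, and the simulation setup is illustrative): the log-likelihood is unchanged when (31) is applied together with h^(c) = 1_T − h.

```python
import numpy as np

def loglik(D, h, z, p_c, p_d):
    """log P(D | Z, Theta, h) for hard labels z[l] in {0, ..., K-1}."""
    N, M = np.sum(h == 0), np.sum(h == 1)
    n = np.sum((D == 0) & (h == 0), axis=1)
    m = np.sum((D == 1) & (h == 1), axis=1)
    return np.sum(n * np.log(p_c[z]) + (N - n) * np.log(1 - p_c[z])
                  + m * np.log(p_d[z]) + (M - m) * np.log(1 - p_d[z]))

rng = np.random.default_rng(1)
L, T = 40, 12
h = rng.integers(0, 2, T)
z = rng.integers(0, 2, L)                         # two classes of nodes
p_c, p_d = np.array([0.9, 0.1]), np.array([0.7, 0.3])
p_one = np.where(h == 1, p_d[z][:, None], 1 - p_c[z][:, None])
D = (rng.random((L, T)) < p_one).astype(int)      # simulated decision matrix

# counterpart network, eq. (31): swap and complement the two probabilities
p_c_cp, p_d_cp = 1 - p_d, 1 - p_c
assert np.isclose(loglik(D, h, z, p_c, p_d),
                  loglik(D, 1 - h, z, p_c_cp, p_d_cp))
```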
V. PERFORMANCE ASSESSMENT METRICS

To assess the performance of classifiers, two metrics, _discriminability_ and _reliability_, are often used [24]. Discriminability shows how well the classifier distinguishes the different classes, whereas reliability indicates how well the posterior probability that a node belongs to a class is estimated by the proposed method. To show the discriminability of the classifier, we define the misclassification rate by [20]

$$\Delta_Z \triangleq \frac{1}{2L} \sum_{l=1}^{L} \sum_{k=1}^{K} \left| z_{l,k} - \hat{z}_{l,k} \right|. \tag{33}$$

Similarly, the performance of our hypothesis detection scheme is evaluated by the hypothesis discriminability given by

$$\Delta_H \triangleq \frac{1}{T} \sum_{t=1}^{T} \left| h_t - \hat{h}_t \right|. \tag{34}$$

To estimate the accuracy of the estimation of the nodes' operating points we define the following measure based on the normalized Euclidean distance between the estimated and actual operating points, i.e.,

$$\Delta_{OP} \triangleq \frac{1}{\sqrt{2}} \sum_{k=1}^{K} \pi_k \sqrt{(p_d(k) - \hat{p}_d(k))^2 + (p_f(k) - \hat{p}_f(k))^2}. \tag{35}$$

Note that the three measures in (33)–(35) are appropriately normalized so as to lie in the interval [0, 1].

_A. The Cramer-Rao Bound_

To evaluate the efficacy of the expectation maximization algorithm in estimating the class parameters we would like to compare our results with the Cramer-Rao lower bound (CRLB). However, computation of the CRLB for our estimation problem is difficult due to the mixture model, which involves the latent variables Z and the hypothesis vector h. However, the CRLB can be computed for the case where the identification matrix Z and the hypothesis vector h are known. This provides a lower bound on the estimation errors of the proposed method, in which Z and h are not assumed to be known a priori. For given Z and h, we define ζk = Σ_{l=1}^{L} z_{l,k}, M = Σ_{t=1}^{T} h_t, and N = T − M. Let Dk be derived from D by removing any row j if z_{j,k} ≠ 1, and let D_{k,η} be obtained from Dk by removing any column t such that h_t ≠ η. It is clear that the dimensions of D_{k,0} and D_{k,1} are ζk × N and ζk × M, respectively. Finally, denote by d_{k,0} (resp. d_{k,1}) the 1 × ζkN (resp. 1 × ζkM) vector formed by stacking the rows of D_{k,0} (resp. D_{k,1}) next to each other. For any unbiased estimate p̂f(k), the conditional variance of p̂f(k) is bounded by [25]

$$\operatorname{var}\{\hat{p}_f(k) \mid p_f(k)\} \geq \left[ E\!\left( \frac{\partial \ln P(d_{k,0}\mathbf{1} \mid p_f(k))}{\partial p_f(k)} \right)^{\!2} \right]^{-1}, \tag{36}$$

where **1** is a column vector of all 1's with length ζkN. Unbiasedness of the proposed algorithm has been shown through extensive simulations, some of which are presented in Section VI. For known Z and h, we have

$$P(d_{k,0}\mathbf{1} = \ell \mid p_f) = [p_f(k)]^{\ell} \left[1 - p_f(k)\right]^{\zeta_k N - \ell}. \tag{37}$$

Therefore, after some manipulations we get

$$E\!\left( \frac{\partial}{\partial p_f(k)} \ln P(d_{k,0} \mid p_f) \right)^{\!2} = \sum_{\ell=0}^{\zeta_k N} \binom{\zeta_k N}{\ell} \left[ \ell^2 + \zeta_k^2 N^2 p_f(k)^2 - 2 \ell \zeta_k N p_f(k) \right] p_f(k)^{\ell - 2} \left[1 - p_f(k)\right]^{\zeta_k N - \ell - 2} = \frac{\zeta_k N}{p_f(k)(1 - p_f(k))}. \tag{38}$$

Following the same approach for p̂d(k), the Cramer-Rao lower bounds are given by

$$\operatorname{var}\{\hat{p}_f(k) \mid p_f(k)\} \geq \frac{p_f(k)(1 - p_f(k))}{\zeta_k N}, \tag{39}$$

$$\operatorname{var}\{\hat{p}_d(k) \mid p_d(k)\} \geq \frac{p_d(k)(1 - p_d(k))}{\zeta_k M}. \tag{40}$$

VI. NUMERICAL RESULTS

In this section, employing the metrics in Section V, we evaluate the performance of the proposed method, referred to as the maximum-likelihood classifier (MLC), and also compare our results with the reputation-based classifier (RBC) algorithm [12], [26]. In RBC, when the network parameters (e.g., the nodes' operating points) are known, the optimal q-out-of-L rule can be computed (see for example [16], [27]). However, when the FC is not aware of all the network parameters, as is the case here, the majority rule has been used in [12] and is also used here for our comparisons. In addition, in (45) the threshold λ can be set following a Neyman-Pearson criterion, for example by setting a threshold on the probability of misclassifying the honest nodes as Byzantines. Moreover, if the fraction of honest nodes is known to the FC as in [12], then λ can be set to minimize the probability of classification error.

TABLE I
CLASS PARAMETERS OF EACH SET OF OPERATING POINTS

| Set | pf   | pd   | π    |
|-----|------|------|------|
| OP1 | 0.1  | 0.9  | 0.6  |
|     | 0.9  | 0.3  | 0.4  |
| OP2 | 0.2  | 0.7  | 0.6  |
|     | 0.9  | 0.15 | 0.4  |
| OP3 | 0.2  | 0.7  | 0.4  |
|     | 0.9  | 0.15 | 0.15 |
|     | 0.9  | 0.9  | 0.2  |
|     | 0.05 | 0.05 | 0.25 |

In our case, however, the FC is not aware of the fraction of honest nodes. Therefore we set the threshold λ = 0.5. For this choice of λ the probability that an honest node is misclassified as Byzantine is the same as the probability that a Byzantine node is misclassified as honest. Other values of the threshold can favor the classification of honest nodes as Byzantines or vice versa.
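For reference, the assessment metrics (33)–(35) and the CRLB floors (39)–(40) translate directly into code. The following minimal sketch is ours (hard class labels stand in for the one-hot rows of Z and Ẑ, and all names are our own):

```python
import numpy as np

def delta_Z(z, z_hat):
    """(33): with one-hot rows of Z, this reduces to the misclassification rate."""
    return np.mean(np.asarray(z) != np.asarray(z_hat))

def delta_H(h, h_hat):
    """(34): hypothesis discriminability."""
    return np.mean(np.asarray(h) != np.asarray(h_hat))

def delta_OP(pi, pf, pf_hat, pd, pd_hat):
    """(35): pi-weighted, normalized distance between operating points."""
    dist = np.sqrt((pd - pd_hat) ** 2 + (pf - pf_hat) ** 2)
    return float(np.sum(pi * dist) / np.sqrt(2))

def crlb(pf, pd, zeta_k, N, M):
    """(39)-(40): variance floors for unbiased estimates of pf(k) and pd(k)."""
    return pf * (1 - pf) / (zeta_k * N), pd * (1 - pd) / (zeta_k * M)
```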
Simulation results are obtained from at least 10^4 independent trials. The EM algorithm is assumed to have converged when ‖Θ^new − Θ^old‖ < ε = 10^{−3}. Moreover, to overcome the ambiguity of the counterpart networks, we assume that the honest nodes are in the majority. This implies that for a network consisting of two classes the breakdown point of the algorithm is at 50% [28]. In Figs. 1, 2 and 9–12, where a performance metric is presented vs. T, the number of possible hypothesis vectors 2^T is too large to evaluate (29) exhaustively. Therefore in these cases it is assumed that during the observation period there is at most one change in the hypothesis vector h, which may occur at random anywhere from time 2 to T − 1. This assumption, which as mentioned in Section III-D is applicable in practice, is made only to reduce the computational complexity of our simulations. However, the efficacy of the proposed method is not affected by this assumption, as the other figures verify.
Three sets of operating points, denoted OP1, OP2 and
OP3, are considered. Table I shows the class parameters
corresponding to each operating point. For OP1 and OP2
there are two classes of honest and Byzantine nodes. The FC
perceives the operating point of the Byzantines, (pf, pd), to
be that listed in Table I. One may view the Byzantines as
having an actual operating point (1 − pf, 1 − pd), but flipping
their decisions before transmission to the FC. Comparing
the operating point of honest nodes and the actual operating
point of Byzantine nodes in OP2 reveals that the Byzantine
nodes are more capable of detecting the event under both
hypotheses (i.e., with smaller probability of false alarm and
higher probability of detection). For OP3, four classes of nodes
are considered. The first class, with the operating point (0.2, 0.7), comprises the honest nodes. The second class are Byzantine nodes with the operating point (0.9, 0.15), while the third and fourth classes are “almost-always-yes” and “almost-always-no” nodes. The almost-always-yes nodes try to convince the FC that the hypothesis is H1 by transmitting a 1 most of the time, and increase the overall false alarm rate of the system.
In contrast, the almost-always-no nodes transmit a 0 most of
the time and decrease the overall probability of detection.
Figs. 1 and 2 show the performance of the classifiers vs. the number of received decisions, T. It is evident that the accuracy
of node classification and the estimation of the operating points
improve with T. Moreover, the proposed algorithm converges much faster than the reputation-based method, requiring fewer observation samples. Note that since RBC can only
discriminate nodes into two classes, in the case of OP3 ∆Z
is not defined. The figures also show that the performance of
classifiers for OP1 is better than for OP2 and OP3. The reason
is that the misbehaving nodes are more capable in the latter
two cases. In particular in the case of OP2, the RBC method
fails completely. This is due to the fact that even though only
40% of the nodes are Byzantine, because of their operating
point (0.9, 0.15) vs. the operating point of the honest nodes
(0.2, 0.7), collectively the Byzantine nodes are more capable
than the honest nodes and can mislead the FC.
Fig. 1. Error in the estimation of the operating points vs. T for L = 100
nodes.
Fig. 2. Misclassification rate vs. T for L = 100 nodes.
Figs. 3 and 4 compare the performance of the classifiers vs. the ratio of the honest nodes to the total number of nodes
(denoted by α) for T = 10. The operating points are OP1
and OP2 shown in Table I. As expected the performance of
the classifiers improves with α. It is seen that while RBC
can effectively classify the nodes in the case of OP1, the
computation of the operating points is not very accurate.
Moreover, for OP2 the performance of RBC is not acceptable and fails completely for α ≤ 0.6.
Fig. 3. Error in the estimation of the operating points vs. α for T = 10 and
L = 100.
Fig. 4. Misclassification rate vs. α for T = 10 and L = 100.
In Figs. 5, 6 and 7 we compare the performance of the
classifiers vs. the number of nodes L for T = 4 samples. For
OP1, as the number of nodes increases, the classifier errors
converge to zero. Again for OP2, the error for RBC does not
converge to zero due to the fact that in this case the Byzantine
nodes are collectively more capable than the honest nodes.
Figs. 8 and 9 show the efficacy of the proposed estimation
method by comparing the variance of the estimated false
alarm probability of the honest nodes and the Cramer-Rao
lower bound of Section V-A. As these figures demonstrate, the
Fig. 5. Hypothesis discriminability vs. L for T = 4.
Fig. 6. Error in the estimation of the operating points vs. L for T = 4.
accuracy of the estimation increases as the number of observations or the number of nodes increases.
To show the robustness of the proposed method to possible time-varying behavior of the Byzantine nodes, we consider a case where the Byzantines change their operating point during the observation period. Two classes of nodes are considered. The honest nodes have an operating point (pf, pd) = (0.1, 0.8). For Byzantine nodes, for each time t, the probabilities of false alarm and detection are chosen at random with uniform distribution on [0.75 − 0.2, 0.75 + 0.2] and [0.3 − 0.2, 0.3 + 0.2], respectively. Moreover, these probabilities are independent for each time t = 1, 2, . . ., T and for each node. Finally, the fraction of the Byzantine nodes is π2 = 0.4. Figs. 10, 11 and 12 show ∆OP, ∆Z and ∆H versus T, respectively. In evaluating ∆OP for the Byzantines we have compared the mean of their operating point, given by (0.75, 0.3), with the estimated operating point. We also show the results for the case where the operating point of the Byzantines is fixed and
Fig. 7. Misclassification rate vs. L for T = 4.
Fig. 8. The variance of ˆpf (1) and the Cramer-Rao lower bound vs. L for
T = 10.
is equal to (0.75, 0.3). It can be seen that, as in the case of fixed
operating points, the proposed method outperforms the RBC
method. Moreover, the performances are very close for the two
cases of fixed and randomly varying operating points. This can
be explained by the fact that the estimation of probabilities of
false alarm and detection in EM are obtained by evaluating
the average number of ones transmitted under H1 and H0 as
shown in (21) and (22).
VII. CONCLUSION
We consider the problem of decentralized detection in the
presence of one or more classes of misbehaving nodes. The
fusion center first estimates the nodes’ operating points (false
alarm and detection probabilities) on the ROC curve and then
uses this estimation to classify the nodes and to detect the
state of nature. We formulate and solve this problem in the
framework of the expectation maximization algorithm. Numerical
Fig. 9. The variance of ˆpf (1) and the Cramer-Rao lower bound vs. T for
L = 10.
Fig. 10. Comparison of the error in the estimation of the operating points
vs. T for fixed and randomly varying Byzantine operating points.
results are presented that show the proposed algorithm significantly outperforms the reputation-based methods in classification of the nodes as well as the detection of the hypotheses.
The estimated operating points are compared to the Cramer-Rao lower bound, which shows the efficacy of the proposed
method.
APPENDIX
_A. Operating Region of Misbehaving Nodes_
Consider a node in class ck with the operating point (p̃f(k), Uk(p̃f(k))) on its ROC curve. We show that by appropriate selection of ρ0(k) and ρ1(k) in (5)–(6), a desired operating point (pf(k), pd(k)) can be achieved in the region bounded by (p̃f(k), Uk(p̃f(k))) and (p̃f(k), Vk(p̃f(k))), where Vk(x) = 1 − Uk(1 − x).

Consider Fig. 13. Denote by A = (p̃f(k), p̃d(k)) the operating point of a node and by B = (1 − p̃f(k), 1 − p̃d(k)) the reflection of A about (0.5, 0.5). We consider two cases.
Fig. 11. Comparison of the misclassification rate vs. T for fixed and randomly
varying Byzantine operating points.
Fig. 12. Comparison of hypothesis discriminability vs. T for fixed and
randomly varying Byzantine operating points.
_1) Fixed ρ0(k):_ From (5) and (6), for fixed ρ0(k) = δ we get

$$p_d(k) = m_\delta\, p_f(k) + \delta(1 - m_\delta), \tag{41}$$

where m_δ ≜ p̃d(k)/p̃f(k) is the slope of the line between the origin and A = (p̃f(k), Uk(p̃f(k))). Therefore, in this case (pf(k), pd(k)) is located on a set of parallel lines with slope mδ and with y-intercept ranging from 0 (corresponding to δ = 0) up to 1 − mδ (corresponding to δ = 1).

_2) Fixed ρ1(k):_ Similar to the previous case, for fixed ρ1(k) = β and using (5) and (6), one can write

$$p_d(k) = m_\beta\, p_f(k) + \beta(1 - m_\beta), \tag{42}$$

where m_β ≜ (1 − p̃d(k))/(1 − p̃f(k)) is the slope of the line OB. As a result, in this case the region of operating points (pf(k), pd(k)) is a set of parallel lines with slope mβ and with y-intercept ranging from 0 (β = 0) up to 1 − mβ (β = 1).
Fig. 13. Region of achievable operating points for the nodes.
Combining the two cases above we see that the loci of
the operating point of the node will be in the parallelogram
OACB where points O and C correspond to ρ0(k) = ρ1(k) =
0 and ρ0(k) = ρ1(k) = 1, respectively.
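Reading the parallelogram OACB as the set {uA + vB : u, v ∈ [0, 1]} (with O the origin and C = A + B = (1, 1)), a candidate operating point can be tested for achievability with a 2 × 2 solve. This parametrization is our own reading of the construction above, and the helper below is a sketch rather than part of the paper:

```python
import numpy as np

def achievable(p, pf_t, pd_t, tol=1e-9):
    """Test whether an FC-perceived operating point p lies in the parallelogram
    OACB for a node with true ROC point A = (pf_t, pd_t); B is A reflected
    about (0.5, 0.5). The solve is degenerate when A lies on the chance line."""
    A = np.array([pf_t, pd_t])
    B = np.array([1.0 - pf_t, 1.0 - pd_t])
    u, v = np.linalg.solve(np.column_stack([A, B]), np.asarray(p, dtype=float))
    return (-tol <= u <= 1 + tol) and (-tol <= v <= 1 + tol)

# The center of the ROC square, (0.5, 0.5) = 0.5*A + 0.5*B, is always achievable:
assert achievable([0.5, 0.5], pf_t=0.2, pd_t=0.7)
```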
Consider a Byzantine node l in class ck. With its transmitted
message dl,t, this node attempts to mislead the FC regarding
the state of ht. For this, however, the Byzantine must first
detect the state of ht as represented by rl,t. There is an ROC
and an operating point (denoted by (˜pf (k), ˜pd(k)) in Section
II) associated with this detection rule. Since the transmitted
message dl,t must be based on this detection (rl,t), the above
results show that the operating point as perceived by the FC
(pf (k), pd(k)) cannot be arbitrary and must lie in the region
described above.
_B. Reputation-Based Node Classifier_
Voting rules or q-out-of-L rules [2] are commonly employed
in the FC to detect the occurrence of an event in decentralized
sensing [10], [11], [26], [29], [30]. Based on this rule, the
detected hypothesis is H1 if at least q out of L nodes vote in
favor of this event. When q = 1, q = L and q = L/2, this
rule is denoted by the “OR-rule”, the “AND-rule”, and the “Majority-rule”, respectively.
The operating point of the lth node, 1 ≤ l ≤ L, can be estimated using the transmitted decisions of the node under the estimated hypothesis, i.e.,

$$\hat{p}_f(l) = \frac{\sum_{t=1}^{T} (1 - \hat{h}_t)\, d_{l,t}}{T - \sum_{t=1}^{T} \hat{h}_t}, \tag{43}$$

$$\hat{p}_d(l) = \frac{\sum_{t=1}^{T} \hat{h}_t\, d_{l,t}}{\sum_{t=1}^{T} \hat{h}_t}, \tag{44}$$

where ĥt, 1 ≤ t ≤ T, is the detected hypothesis from the voting rule at time t, and d_{l,t} is the corresponding transmitted decision of the lth node.
The reputation-based classification [12] is based on the reputation metric Rl, given by

$$R_l \triangleq \frac{T - \sum_{t=1}^{T} \left| d_{l,t} - \hat{h}_t \right|}{T} \;\overset{\text{Honest}}{\underset{\text{Byzantine}}{\gtrless}}\; \lambda. \tag{45}$$

In other words, a node belongs to the class of honest nodes if the fraction of its decisions that do not match the detected hypotheses is less than some threshold.
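A compact sketch of this baseline, in our own vectorized transcription of (43)–(45), with the majority rule supplying ĥ and guards (ours) against empty hypothesis counts:

```python
import numpy as np

def reputation_classifier(D, lam=0.5):
    """Reputation-based classifier: majority-rule h_hat, then eqs. (43)-(45)."""
    L, T = D.shape
    h_hat = (D.sum(axis=0) > L / 2).astype(int)       # majority rule, per slot
    mismatch = np.abs(D - h_hat).sum(axis=1)
    R = (T - mismatch) / T                            # reputation metric (45)
    honest = R >= lam                                 # honest iff R_l >= lambda
    n1 = h_hat.sum()                                  # slots detected as H1
    pf_hat = ((1 - h_hat) * D).sum(axis=1) / max(T - n1, 1)   # (43)
    pd_hat = (h_hat * D).sum(axis=1) / max(n1, 1)             # (44)
    return honest, pf_hat, pd_hat, h_hat
```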
REFERENCES

[1] M. Franceschelli, A. Giua, and C. Seatzu, “Decentralized fault diagnosis for sensor networks,” in IEEE International Conference on Automation Science and Engineering (CASE 2009), Aug. 2009, pp. 334–339.
[2] P. Varshney, Distributed Detection and Data Fusion, 1st ed. New York: Springer-Verlag, 1997.
[3] S. Marano, V. Matta, and L. Tong, “Distributed inference in the presence of Byzantine sensors,” in Fortieth Asilomar Conference on Signals, Systems and Computers (ACSSC ’06), Oct.–Nov. 2006, pp. 281–284.
[4] ——, “Distributed detection in the presence of Byzantine attacks,” IEEE Transactions on Signal Processing, vol. 57, no. 1, pp. 16–29, Jan. 2009.
[5] M. Abdelhakim, L. E. Lightfoot, and T. Li, “Reliable data fusion in wireless sensor networks under Byzantine attacks,” in IEEE Military Communications Conference (MILCOM 2011), Nov. 2011, pp. 810–815.
[6] M. Gagrani, P. Sharma, S. Iyengar, V. Nadendla, A. Vempaty, H. Chen, and P. Varshney, “On noise-enhanced distributed inference in the presence of Byzantines,” in 49th Annual Allerton Conference on Communication, Control, and Computing, Sept. 2011, pp. 1222–1229.
[7] R. Chen, J.-M. Park, and K. Bian, “Robust distributed spectrum sensing in cognitive radio networks,” in IEEE INFOCOM 2008, Apr. 2008, pp. 1876–1884.
[8] A. Rawat, P. Anand, H. Chen, and P. Varshney, “Countering Byzantine attacks in cognitive radio networks,” in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Mar. 2010, pp. 3098–3101.
[9] P. Anand, A. Rawat, H. Chen, and P. Varshney, “Collaborative spectrum sensing in the presence of Byzantine attacks in cognitive radio networks,” in Second International Conference on Communication Systems and Networks (COMSNETS), Jan. 2010, pp. 1–9.
[10] M. Abdelhakim, L. Zhang, J. Ren, and T. Li, “Cooperative sensing in cognitive networks under malicious attack,” in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), May 2011, pp. 3004–3007.
[11] H. Wang, L. Lightfoot, and T. Li, “On PHY-layer security of cognitive radio: Collaborative sensing under malicious attacks,” in 44th Annual Conference on Information Sciences and Systems (CISS), Mar. 2010, pp. 1–6.
[12] A. Rawat, P. Anand, H. Chen, and P. Varshney, “Collaborative spectrum sensing in the presence of Byzantine attacks in cognitive radio networks,” IEEE Transactions on Signal Processing, vol. 59, no. 2, pp. 774–786, Feb. 2011.
[13] F. Penna, Y. Sun, L. Dolecek, and D. Cabric, “Detecting and counteracting statistical attacks in cooperative spectrum sensing,” IEEE Transactions on Signal Processing, vol. 60, no. 4, pp. 1806–1822, Apr. 2012.
[14] A. Vempaty, K. Agrawal, H. Chen, and P. Varshney, “Adaptive learning of Byzantines’ behavior in cooperative spectrum sensing,” in IEEE Wireless Communications and Networking Conference (WCNC), Mar. 2011, pp. 1310–1315.
[15] B. Chen, R. Jiang, T. Kasetkasem, and P. Varshney, “Channel aware decision fusion in wireless sensor networks,” IEEE Transactions on Signal Processing, vol. 52, no. 12, pp. 3454–3458, Dec. 2004.
[16] Q. Zhang, P. Varshney, and R. Wesel, “Optimal bi-level quantization of i.i.d. sensor observations for binary hypothesis testing,” IEEE Transactions on Information Theory, vol. 48, no. 7, pp. 2105–2111, Jul. 2002.
[17] R. Niu, B. Chen, and P. Varshney, “Fusion of decisions transmitted over Rayleigh fading channels in wireless sensor networks,” IEEE Transactions on Signal Processing, vol. 54, no. 3, pp. 1018–1027, Mar. 2006.
[18] S. M. Kay, Fundamentals of Statistical Signal Processing: Detection Theory, 1st ed. Upper Saddle River, NJ, USA: Prentice Hall, 1998.
[19] A. P. Dempster, N. M. Laird, and D. B. Rubin, “Maximum likelihood from incomplete data via the EM algorithm,” Journal of the Royal Statistical Society, Series B, vol. 39, no. 1, pp. 1–38, 1977.
[20] A. R. Webb, Statistical Pattern Recognition, 2nd ed. Chichester, West Sussex, England: John Wiley & Sons, 2001.
[21] C. M. Bishop, Pattern Recognition and Machine Learning (Information Science and Statistics). Secaucus, NJ, USA: Springer-Verlag New York, Inc., 2006.
[22] S. Boyd and L. Vandenberghe, Convex Optimization, 1st ed. New York: Cambridge University Press, 2004.
[23] A. Vempaty, K. Agrawal, H. Chen, and P. Varshney, “Adaptive learning of Byzantines’ behavior in cooperative spectrum sensing,” in IEEE Wireless Communications and Networking Conference (WCNC), Mar. 2011, pp. 1310–1315.
[24] D. J. Hand, Construction and Assessment of Classification Rules, 1st ed. Chichester, West Sussex, England: John Wiley & Sons, 1997.
[25] S. M. Kay, Fundamentals of Statistical Signal Processing, Volume I: Estimation Theory, 1st ed. Upper Saddle River, NJ, USA: Prentice Hall, 1993.
[26] X. Luo, M. Dong, and Y. Huang, “On distributed fault-tolerant detection in wireless sensor networks,” IEEE Transactions on Computers, vol. 55, no. 1, pp. 58–70, Jan. 2006.
[27] W. Zhang, R. Mallik, and K. Ben Letaief, “Cooperative spectrum sensing optimization in cognitive radio networks,” in IEEE International Conference on Communications (ICC ’08), May 2008, pp. 3411–3415.
[28] P. Rousseeuw and A. Leroy, Robust Regression and Outlier Detection. New York: John Wiley & Sons, Inc., 1987.
[29] R. Viswanathan and P. Varshney, “Distributed detection with multiple sensors I. Fundamentals,” Proceedings of the IEEE, vol. 85, no. 1, pp. 54–63, Jan. 1997.
[30] R. Soosahabi and M. Naraghi-Pour, “Scalable PHY-layer security for distributed detection in wireless sensor networks,” IEEE Transactions on Information Forensics and Security, vol. PP, no. 99, p. 1, 2012.

**Erfan Soltanmohammadi (S’12)** was born in Karaj, Iran, in 1984. He received the B.Sc. in electrical engineering from Khaje Nasir University of Technology (KNTU), Tehran, Iran, in 2007, and the M.S. from Amirkabir University of Technology (AUT), Tehran, Iran, in 2010. He is currently working towards the Ph.D. degree in systems (communication & signal processing) in the School of Electrical Engineering and Computer Science, Louisiana State University, Baton Rouge, Louisiana, USA, where he is also a Graduate Research/Teaching Assistant. His current research interests include security in wireless sensor networks, cognitive radio, signal processing for communications, MIMO systems, and blind communication techniques.

**Mahdi Orooji (S’11)** was born in Tehran, Iran, in 1980. He received the B.Sc. degree in electrical engineering from the University of Tehran in 2003. He is currently working towards the Ph.D. degree in the School of Electrical Engineering and Computer Science, Louisiana State University, Baton Rouge, Louisiana, USA. His research interests are wireless communication and statistical signal processing. Mr. Orooji received the Huel D. Perkins Doctoral Fellowship Award from LSU, 2009–2013.

**Mort Naraghi-Pour (S’81–M’87)** was born in Tehran, Iran, on May 15, 1954. He received the B.S.E. degree from Tehran University, Tehran, in 1977 and the M.S. and Ph.D. degrees in electrical engineering from the University of Michigan, Ann Arbor, in 1983 and 1987, respectively. In 1978, he was a student at the Philips International Institute, Eindhoven, The Netherlands, where he also did research with the Telecommunication Switching Group of the Philips Research Laboratories. Since August 1987, he has been with the School of Electrical Engineering and Computer Science, Louisiana State University, Baton Rouge, where he is currently an Associate Professor. From June 2000 to January 2002, he was a Senior Member of Technical Staff at Celox Networks, Inc., a network equipment manufacturer in St. Louis, MO. His research and teaching interests include wireless communications, broadband networks, information theory, and coding.

Dr. Naraghi-Pour has served as a Session Organizer, Session Chair, and member of the Technical Program Committee for many international conferences.
| 15,936
|
{
"disclaimer": "Notice: The following paper fields have been elided by the publisher: {'abstract', 'references'}. Paper or abstract available at https://api.unpaywall.org/v2/10.1109/TIFS.2012.2229274?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1109/TIFS.2012.2229274, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": null,
"status": "GREEN",
"url": "http://www.ece.lsu.edu/mort/publications/Double-Byz_111612.pdf"
}
| 2,013
|
[
"JournalArticle"
] | true
| null |
[] | 15,936
|
en
|
[
{
"category": "Business",
"source": "external"
},
{
"category": "Business",
"source": "s2-fos-model"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Economics",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/001f5374720167b415511af1d1285b29a931b58d
|
[
"Business"
] | 0.902804
|
The Differential Impact of Corporate Blockchain-Development as Conditioned by Sentiment and Financial Desperation
|
001f5374720167b415511af1d1285b29a931b58d
|
Journal of Corporate Finance
|
[
{
"authorId": "3461735",
"name": "I. Cioroianu"
},
{
"authorId": "134706162",
"name": "S. Corbet"
},
{
"authorId": "84184229",
"name": "C. Larkin"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"J Corp Finance"
],
"alternate_urls": [
"http://www.sciencedirect.com/science/journal/09291199",
"https://www.journals.elsevier.com/journal-of-corporate-finance"
],
"id": "62205462-9b0c-4d6b-97ad-6352d9d7a2c7",
"issn": "0929-1199",
"name": "Journal of Corporate Finance",
"type": "journal",
"url": "http://www.elsevier.com/wps/find/journaldescription.cws_home/524467/description#description"
}
|
Abstract This paper investigates how companies can utilise Twitter social media-derived sentiment as a method of generating short-term corporate value from statements based on initiated blockchain-development. Results indicate that investors were subjected to a very sophisticated form of asymmetric information designed to propel sentiment and market euphoria, that translates into increased access to leverage on the part of speculative firms. Technological-development firms are found to financially behave in a profoundly different fashion to reactionary-driven firms which have no background in ICT technological development, and who experience an estimated increased one-year probability of default of 170 bps. Rating agencies are found to have under-estimated the risk on-boarded by these speculative firms, failing to identify that they should be placed under an increased degree of scrutiny. Unfiltered market sentiment information, regulatory unpreparedness and mis-pricing by trusted market observers has resulted in a situation where investors and lenders have been compromised by direct exposure to an asset class becoming known for law-breaking activity, financial losses and frequent reputational damage.
|
# The differential impact of corporate blockchain-development as conditioned by sentiment and financial desperation
Iulia Cioroianu^{a,∗}, Shaen Corbet^{b,c}, Charles Larkin^{a,d,e}
_aInstitute for Policy Research, University of Bath, UK_
_bDCU Business School, Dublin City University, Dublin 9, Ireland_
_cSchool of Accounting, Finance and Economics, University of Waikato, New Zealand_
_dTrinity Business School, Trinity College Dublin, Dublin 2, Ireland_
_eKreiger School of Arts Sciences, Johns Hopkins University, Baltimore, MD, USA_
_∗Corresponding Author: [email protected]_
**Abstract**
This paper investigates how companies can utilise Twitter social media-derived sentiment as a
method of generating short-term corporate value from statements based on initiated blockchain-development. Results indicate that investors were subjected to a very sophisticated form of asymmetric information designed to propel sentiment and market euphoria, which translates into increased access to leverage on the part of speculative firms. Technological-development firms are found to financially behave in a profoundly different fashion to reactionary-driven firms which have no background in ICT technological development, and who experience an estimated increased one-year probability of default of 170 bps. Rating agencies are found to have under-estimated the risk on-boarded by these speculative firms, failing to identify that they should be placed under an increased degree of scrutiny. Unfiltered market sentiment information, regulatory unpreparedness and mis-pricing by trusted market observers have resulted in a situation where investors and lenders have
been compromised by direct exposure to an asset class becoming known for law-breaking activity,
financial losses and frequent reputational damage.
_Keywords:_ Investor Sentiment; Blockchain; Leverage; Idiosyncratic Volatility; Social Media.
_Preprint submitted to Journal of Corporate Finance_ _May 26, 2021_
**Highlights**
_• We test for corporate effects instigated by blockchain-related technological development_
_• Social media response is found to be a significant propellant of financial response_
_• Rating agencies under-priced the risk on-boarded by speculative non-technological firms_
_• Blockchain-based information shrouding significantly increases contagion risk_
_• Speculative projects by non-technological firms are of particular regulatory concern_
**1. Introduction**
This paper investigates whether social media attention, in conjunction with underlying corporate
financial health and prior technological experience have significantly contributed to the development
of short-term profits and abnormal sentiment-driven pricing behaviour associated with rumours and
official announcements of blockchain development projects. Online public attention and sentiment
directly create a euphoric environment through which the corporate entity and shareholders could
realise rapid equity price rises and improved access to leverage. Purposefully generating this unwarranted social media "hype" is ethically and legally questionable. After the consideration of one
hundred and fifty-six individual cases between January 2017 and July 2019, there remains limited evidence of the complete operational delivery of these rumoured or announced blockchain-development
projects. Moreover, some of the studied corporations found themselves under investigation by national regulatory authorities for a range of alleged charges including misleading investors, the release
of false information and price manipulation, with a particular focus on those firms that changed
their names to incorporate terms such as ‘blockchain’ and ‘cryptocurrency’ (Cheng et al. [2019];
Sharma et al. [2020]; Akyildirim et al. [2020]; Cahill et al. [2020]). While not making accusation
of illicit behaviour, we highlight abnormal financial performance and evaluate the extent to which
financial pressures or social media campaigns were responsible for it.
The underlying motivations for these tactics are not singular. Some publicly traded companies
have found their industries in natural decline due to the challenges of international competitiveness,
responsiveness to technological advances and changing consumer demand. This appears to motivate
some companies to venture into new digital technologies, such as blockchain. These motivations,
while explicable, are not necessarily in keeping with existing ethical or regulatory principles, therefore it would not be unwarranted that regulatory bodies placed such announcements of blockchain
and cryptocurrency projects under increased scrutiny, especially when considering corporations with
no previous historical experience of ICT research and development[1]. It is important to note that
this paper focuses on a period of time when this technology was at its most euphoric and novel
level, between 2017 and 2019. Regulatory agencies had yet to establish the initial boundaries and
definitions that tempered this euphoria. Our results show how the impact of sentiment weakened
over time, which aligns with the increase in advisories from the Securities and Exchange Commission and with FBI investigations becoming public knowledge. This paper's conclusion that regulators should place blockchain announcements under more scrutiny is intended to encourage vigilance and to highlight
the magnitude of the manipulation that took place and could take place again with another novel
technological application.
Following Chen et al. [2019], who identify the internet of things (IoT), robo-advising, and
1Within the context of this research, such a company that has recently announced corporate development of
blockchain projects, with no prior, publicly denoted experience, or evidence of delivery of such ICT research and
development projects, is hereafter identified as a ‘reactionary-driven’ entrant. Such a firm is defined to be that which
has identified an opportunity to react in response to the development of blockchain technology and take advantage
without any provided evidence of actual, physical blockchain development project delivery.
blockchain as the most valuable digital innovation types, we focus on the latter in order to capture a
technology which has already generated very high levels of attention and was at the peak of its "hype
curve". Building upon the work of Akyildirim et al. [2020], who present evidence of cryptocurrency
shifting price discovery based on a limited set of cases, we develop significant additional insights by
increasing the number of analysed firms to one hundred and fifty-six and expanding the dimensions
of analysis to include social media attention, sentiment and underlying corporate fragility. We find
significant shifts in price discovery associated with each of these additional factors.
The novelty of this paper is to be found in the synthesis of sentiment analysis, as derived from
Twitter, with the behaviour of firms with respect to blockchain development. While previous papers
have highlighted the role of blockchain hysteria in equity pricing [Jain and Jain, 2019] and the role
of sentiment on pricing [Cioroianu et al., 2020], here we combine both aspects and link that to
corporate behaviour. We are therefore able to derive a specific quantum of error. The excessive
positivity of the ratings agencies is on the order of a one grade improvement over the true credit
rating. Blockchain announcement companies with speculative intent also present an average one-year probability of default of 2.2%, a classification error on the part of ratings agencies brought
about by firms bootstrapping performance based on this market euphoria. In order to highlight
that, we use abnormal returns to identify the corporate effect of Twitter and split our firms into
"strategic" firms with a long history of ICT development, and those who are "speculative" and
lack any history in ICT development, even if they have previously operated in a technological
manufacturing sector. This distinction is important because it enables a precise analysis of how
erroneous ratings agency statements have been.
From a regulatory perspective, we ask whether such project announcements have shrouded, or
cloaked true probability of default estimates, and if such risks have been identified and adequately
reflected by credit rating agencies. Distinctly, we investigate a number of issues that are within the
scope of current regulatory and policy-making concern. Primarily, we analyse as to whether social
media was used as a propellant to both generate and propagate hysteria related to the potential
usage of blockchain within the corporate structure. Secondly, through a variety of methodological
techniques for improved robustness, we attempt to quantify the key financial characteristics of
corporate entities that have announced their intentions to develop significant blockchain projects.
Within this context, we specifically observe the use of leverage and other types of debt by these
companies and how such capital adjustments can influence the corporate credit ratings. Finally, we
compare our additional estimated credit risk to that provided by well-known credit rating agencies
to evaluate whether the true risks of these investments were observed in warnings to investors.
We pay particular attention to companies initiating blockchain-development projects with no prior
technological development experience (reactionary-driven entrants).
Regulators have a mixed relationship with blockchain, as it offers great opportunity for security
and to facilitate transactions, but recent evidence also suggests that it harbours the capacity to
be used for money laundering and criminal activity [Canhoto, 2020, Corbet et al., 2020]. This
is in addition to ongoing concerns about announcement effects related to blockchain and initial
coin offerings that have attracted the interest of the Federal Reserve, Securities and Exchange
Commission, Federal Bureau of Investigation and the US Treasury. This paper highlights the financial
effects of corporate misbehaviour based on blockchain technology, both directly and indirectly
[Byrne, 2011].
While some previous works consider market reactions to specific corporate blockchain behaviour,
to the best of our knowledge, this is the first study to analyse this behaviour in the context of social
media sentiment, internal financial positions, probability of default, and the role of rating agencies.
Specifically, we argue that both companies in natural decline and those of smaller magnitude (such as small-cap and penny stocks) are most likely to benefit from channels incorporating
the use of blockchain and cryptocurrency projects to generate both abnormal returns, profits and
public exposure. Such arguments are developed with the knowledge that social media rumours are
also central to the news dissemination process in the period before the official announcement. We
control for this through a thorough review of the ‘first’ mention on social media of such blockchain
projects, with a comparable analysis of corporate performance in the period both before the rumour,
and that of the official announcement.
Consistent with our hypotheses, the empirical analysis presented in this paper concludes that
investors were subjected to a very sophisticated form of asymmetric information. This asymmetric
information is decidedly modern since it connects to the ability of new forms of media to drive sentiment and market euphoria while also being open to digital manipulation that is nearly impossible
to discern on the part of the untrained market participant that lacks access to sophisticated digital
tools. This manipulation takes the form of ‘bots’, ‘socialbots’ and algorithmic programmed trades
that ‘read’ sentiment, but can also bolster or sway it by generating and promoting social media
content. We find that strategic firms, with a background in technology, behave in a profoundly
different fashion to speculative firms with no background in ICT technology. The result is a desire
to engage in ‘shrouding’ behaviour on the part of strategic firms, where rumours of activity in the
blockchain space are the most important. By availing of digital support that is available at low
cost and the lack of investor knowledge of the complexities of blockchain, speculative firms were
able to use a lax regulatory environment and the returns associated with Bitcoin to build interest
and sentiment that drove abnormal returns. Further, our analysis of the internal financials of these
speculative firms indicated that they used these bandwagon effects to increase their leverage, which
dramatically rose their probability of default by 170bps. Astute market observers, such as rating
agencies, under-priced the risk on-boarded by these speculative firms as they announced their entry
into the blockchain sector. The final conclusion is that our investigations find that firms engaged
in blockchain developments should have been understood to be high risk and placed under a higher
level of scrutiny than they currently are, as sophisticated digital tools, regulatory unpreparedness and mis-pricing by trusted market observers have resulted in a situation where investors and lenders
have been placed in a compromised position with exposure to association with potential illicit
activity, financial losses and reputational damage.
The paper is structured as follows: previous research that guides our selected theoretical and
methodological approaches are summarised in Section 2. Section 3 presents a thorough explanation
of the wide variety of data used in our analyses along with the specific hypotheses tested, while Section 4 presents a concise overview of the methodologies utilised to analyse the presented hypotheses.
Section 5 investigates the role that social media played as a driving force of corporate mispricing
of risk. Section 6 presents a concise overview of the results and their relevance for policy-makers
and regulatory authorities, while Section 7 concludes.
**2. Previous Literature**
Corporate insiders, such as directors and high-level executives, are most likely to possess information about the true estimates of firm value that would be considered superior to that possessed
by those attempting to value the corporation from outside. Such directors and managers are central to the decision-making processes that influences the value of the corporation. This is a classic
representation of asymmetric information and consequent moral hazard which has been the source
of much debate. Lee et al. [2014] examined whether corporate restriction policies on insider trading are effective, finding that they are successful in preventing negative information exploitation but that insiders profit from inside information in a way that minimises their legal risk. Hillier et al.
[2015] found that personal attributes such as an insider’s year of birth, education and gender are a
key driver of insider trading performance, and matter more in companies with greater information
asymmetry and when outsiders are inattentive to public information. Cziraki et al. [2014] identified
that insider transactions are more profitable at firms where shareholder rights are not restricted by
anti-shareholder mechanisms. There has been much evidence to suggest the existence of significant
abnormal returns from trading arising from these conditions of asymmetric information and moral
hazard (Jeng et al. [2003]; Fidrmuc et al. [2006]).
Blockchain technology, and speculative use of such, have created a very simplistic mechanism
through which insiders can very simply generate substantial marketability and public interest. The
unprecedented and sustained price appreciation of Bitcoin afforded a new channel of asymmetric
information, namely that corporate directors could partake in the development of blockchain and
cryptocurrency projects to take advantage of the market exuberance that would follow thereafter.
Our selected methodological approach generalises the literature based on corporate events and
allows us to investigate the specific sentiment-influenced abnormal returns that existed across these
trades, inclusive of derivatives markets where they existed. Further evidence of high-risk strategies
has been sourced in the use of junk bonds by companies seeking substantial rewards in rapid fashion, with
evidence provided of an increasing probability of default over a substantial period of time (Moeller
and Molina [2003]; Basile et al. [2017]), and substantial exposure to time-varying liquidity risk
(Acharya et al. [2013]).
With regards to research on cryptocurrency, White et al. [2020] identified that Bitcoin, somewhat
representative of broad cryptocurrencies, fails as a unit of account despite its transactional value
and diffuses like a technology-based product rather than like a currency. Moreover, one major
concern identified in this new cryptocurrency’s ability was to circumvent US sanctions that had
been implemented on the Venezuelan economy and their ability to access international financing.
While considering such specific issues, it is also important to observe the broader suspicious trading
activities and structural problems within the cryptocurrency markets. Griffin and Shams [2018]
examined whether Tether influenced Bitcoin and other cryptocurrency prices to find that purchases
with Tether were timed following market downturns and resulted in significant increases in the price
of Bitcoin. Further, less than 1% of the hours in which Tether experienced significant transactions
were found to be associated with 50% of the increase of Bitcoin prices and 64% of other top
cryptocurrencies, drawing the damning conclusion that Tether was used to provide price support
and manipulate cryptocurrency prices. Furthermore, Gandal et al. [2018] identified the impact
of suspicious trading activity on the Mt. Gox Bitcoin exchange theft when approximately 600,000
Bitcoins were attained. The authors demonstrated that the suspicious trading likely caused the
spike in price in late 2013 from $150 to $1,000, most likely driven by one single actor. These two
significant pieces of research have fine-tuned the focus of regulators, policy-makers and academics
alike, as the future growth of cryptocurrencies cannot be sustained at pace with such significant
questions of abnormality remaining unanswered. Corbet et al. [2019] provide a concise review of a
broad number of mechanisms through which cryptocurrencies can influence corporate entities and
markets and point to a number of pathways through which the contagion risks of cryptocurrency
markets can flow.
The contagion risks stemming from negative shocks rooted in cryptocurrency and blockchain fraud can manifest in substantial losses to uninformed investors should they lack the ability to adequately quantify the true level of associated risk. Further, the inherent moral hazards contained within this new avenue of product development are quite exceptional due to the widespread evidence of substantial growth in the share prices of selected speculating companies. When analysing innovation within the context of retail financial products, Henderson and Pearson [2011] found that the offering prices of 64 issues of a popular retail structured equity product were, on average, almost 8% greater than estimates of the products’ fair market values obtained using option pricing methods. The
results of this research are found to be consistent with the recent hypothesis that issuing firms
might shroud some aspects of innovative securities or introduce complexity to exploit uninformed
investors. A recent theoretical literature explores the equilibria in which firms shroud some aspects
of the terms on which their products are offered in order to exploit uninformed consumers, and
strategically create complexity to reduce the proportion of investors who are informed (Gabaix and
Laibson [2006]; Carlin [2009]). In these equilibria, prices are found to be higher than they would be
if consumers or investors were fully informed. In the context of structured equity products, these
arguments imply that premiums are higher than they otherwise would be.
When focusing on investor sentiment Danbolt et al. [2015] argued that sentiment - analysed
with Facebook data used as a proxy - subconsciously influences investor perception of potential
merger synergies and risks, which is found to be positively related to bidder announcement returns.
Huson and MacKinnon [2003] analysed the effect of corporate spin-offs on the trading environment,
noting the substantial changes in the information environment of the firm, to find that increased
transparency following spin-offs can obviate informed traders’ information or make it more valuable. Further, transaction costs and the price impact of trades are also higher following spin-offs.
Van Bommel [2002] found that an IPO’s initial return contains new information about the true
value of the firm, therefore providing vital feedback for the investment decision. Information production by market participants is found to increase the precision of the market feedback captured
in the first competitively determined stock price. Easley and O’Hara [2004] investigate the role of information in affecting a firm’s cost of capital, finding that differences in the composition of public and private information affect the cost of capital, with investors demanding a higher return to hold stocks with greater private information. The authors identify that this
higher return arises because informed investors are better able to shift their portfolio to incorporate
new information, and uninformed investors are thus disadvantaged. Bloomfield et al. [2009] found
that a dominated information set is sufficient to account for the contrarian behaviour observed.
When informed traders also observe prices, uninformed traders generate reversals by engaging in
contrarian trading, and uninformed traders may in fact be responsible for long-term price reversals
but play little role in driving short-term momentum. While Albuquerque et al. [2008] identified that
private information obtained from equity market data forecasts industry stock returns as well as
currency returns, Bruguier et al. [2010] hypothesise that Theory of Mind (ToM) has enabled even
fully uninformed traders to infer information from the trading process, where perceived skill in
predicting price changes in markets with insiders correlates with scores on two ToM tests, showing
that investors present increased ability to read markets when there are insiders present. Further,
Aitken et al. [2015] utilised a number of indices designed to test for market manipulation, insider
trading, and broker-agency conflict based on the specific provisions of the trading rules of each
stock exchange, along with surveillance to detect non-compliance with such rules, to find a significant reduction in the number of cases, but also increased profits per suspected case. Marin and
Olivier [2008] identified that at the individual stock level, insiders’ sales peak many months before
a large drop in the stock price, while insiders’ purchases peak only the month before a large jump.
With regards to financial market misconduct, Cumming et al. [2015] reviewed recent research on
the causes and consequences of different forms of financial market misconduct and potential agency
conflicts and the impact of regulation, highlighting the presence of reciprocity in financial market
misconduct regulation and enforcement.
This paper contributes to this wider literature on behaviour of cryptocurrencies and blockchain
by analysing the ways in which sentiment driven by association with this technology and initiated
by social media can have a material impact on corporate performance, especially for firms in decline
or distress, encouraging the misconduct and ratings agency confusion highlighted in the literature
above. The starting point of the paper is the existence of significant abnormal returns from trading
arising from these conditions of asymmetric information and moral hazard induced and exacerbated
by the attention and sentiment of the online and social media environment. It is well understood
how news impacts the prices of equities in the market. The source of that information has changed
over time, with social media playing as important a role as traditional media such as newspapers,
television, radio and new wires. Twitter is a more continuous, non-edited internet version of a
news wire and the information that it circulates is incorporated into the decision making processes
of investors. Twitter does not discern between rumour and fact. This is important, as firms
may seek to impose their own editorial policies by minimising leaks from their organisation and
ensuring that official statements are properly disseminated via social media. Other firms may
seek to encourage rumours, especially as rumours generated in Twitter do not follow the same
conventions of traditional business journalism, seeking a "second source" for verification or adding
nuance, as the communication is limited to 280 characters. Under such conditions it is easy for firms with speculative motivations or a lack of background in blockchain technology to associate themselves with the market euphoria surrounding Bitcoin and blockchain development in the 2017-19 period with minimal scrutiny [Hu et al., 2020]. We therefore investigate how Twitter information
is processed by market actors and how the different motivations of firms will result in varied equity
price responses. The section below describes the multiple sources used in the analysis.
**3. Data Description**
We collect data from multiple sources, primarily developing a concise list of corporate announcements that specifically constitute a news release relating to blockchain or cryptocurrency development. To complete such a task, we develop a number of strict rules in an attempt to standardise the process across major international financial markets. The first implemented rule is that the specified company must be a publicly traded company with an available stock ticker during the period[2] 1 January 2012 to 30 June 2019. We build on a combined search of LexisNexis, Bloomberg and Thomson Reuters Eikon, searching for relevant keywords[3] under traditional corporate announcements. To obtain a viable observation, a single data observation must be present
across the three search engines and the source must have been denoted as an international news
agency, a mainstream domestic news agency or the company making the announcement itself. Forums, social media and bespoke news websites were omitted from the search. Finally, the selected
observation is based solely on the confirmed news announcements being made on the same day
across all of the selected sources. If a confirmed article or news release had a varying date of
release, it was omitted due to this associated ambiguity. All observations found to be made on
either a Saturday or Sunday (nine announcements in total) are denoted as active on the following
Monday morning. The dataset incorporates 156 total announcements made during the selected
time period. The timing and geographic location of each of the announcements are presented in
Figure 1. All times are adjusted to GMT, with the official end of day closing price treated as
the listed observation for each comparable company when analysing associated contagion effects.
The corporate announcements are then sub-categorised by perceived level of risk, denoted to be
speculative in nature or structural-development. Within this context, and building on the work of
Akyildirim et al. [2020], speculative announcements are found to be those relating to the change of
corporate identity to include words such as ‘blockchain’ and ‘cryptocurrency’, and the development
2The corporate announcement period covers from 1 January 2017 to 30 March 2019 to perform adequate pre- and
post-announcement analyses (announcement data for traded companies was not present in a robust manner prior to
January 2017).
3The selected keywords used in this search include that of: "cryptocurrency", "digital currency", "blockchain",
"distributed ledger", "cryptography", "cryptographic ledger", "digital ledger", "altcoin" and "cryptocurrency exchange".
of corporate cryptocurrencies. Alternatively, structural-development includes announcements relating to internal security, and internal process, system and technological development. The following
analysis will be sub-categorised within these sub-groups throughout.
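To make these selection and categorisation rules concrete, the following is a minimal sketch of how the weekend adjustment and the speculative versus structural-development split might be implemented. The keyword lists and function names are our own illustrative assumptions, not the exact filters used to construct the dataset.

```python
from datetime import date, timedelta

# Illustrative keyword sets (assumptions, not the exact filters used):
# re-branding and coin-creation language is flagged as speculative, while
# internal security/process/system work is flagged as structural-development.
SPECULATIVE_TERMS = {"name change", "rebrand", "corporate cryptocurrency", "coin launch"}
STRUCTURAL_TERMS = {"internal security", "process development", "system upgrade",
                    "distributed ledger pilot"}

def classify_announcement(text: str) -> str:
    """Label an announcement as 'speculative' or 'structural-development'."""
    lowered = text.lower()
    if any(term in lowered for term in SPECULATIVE_TERMS):
        return "speculative"
    if any(term in lowered for term in STRUCTURAL_TERMS):
        return "structural-development"
    return "unclassified"

def effective_event_day(announced: date) -> date:
    """Shift Saturday/Sunday announcements to the following Monday,
    mirroring the treatment of the nine weekend announcements."""
    if announced.weekday() >= 5:  # 5 = Saturday, 6 = Sunday
        return announced + timedelta(days=7 - announced.weekday())
    return announced

# Example: a Saturday announcement becomes active the following Monday.
print(effective_event_day(date(2018, 1, 6)))  # -> 2018-01-08
```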
**Insert Figure 1 about here**
The next stage of data collection surrounded the identification of investor sentiment. To complete this task, Twitter data was collected for the period between 1 January 2017 and 31 March 2019 for each of the identified companies. All tweets mentioning the name of the company plus any of the terms ‘crypto’, ‘cryptocurrency’ or ‘blockchain’ were computationally collected through the
Search Twitter function on https://twitter.com/explore using the Python ‘twitterscraper’ package,
observing platform rate limiting policies. A total number of 954,765 unique tweets were collected[4].
The data was then aggregated by company and by day, taking sums of the quantitative variables and
aggregating the text. In a provisional methodology, we determine the very first tweet identified on Twitter that was correctly based on the forthcoming corporate blockchain announcement (identified as the ‘rumour’ hereafter), with the announcement itself identified as the ‘official announcement’ hereafter. The associated
statistics based on this Twitter activity as divided by time, reach and size are presented in Table
1. Both of these dates are used to identify the establishment of dummy variables through which
the following analyses are built. Further to speculative and structural-development sub-divisions
outlined above, results are further separated based on whether they were ‘rumour’ or ‘official’. Such
division of analysis provides a unique observation period in which stock market behaviour, internal financial behaviour and the stock and derivative trading behaviour of directors and senior management can be analysed. Further sub-division of tweets relating to corporate blockchain development is conducted on the basis of quartiles of the natural logarithm of the number of tweets relating to each company, and also on the basis of high and low sentiment. The sentiment variables
were computed using the Python package ‘pysentiment’ and are based on the Harvard General
Inquirer IV-4 dictionary and the Loughran and McDonald Financial Sentiment dictionary[5]. Each
includes the following measures to determine sentiment: 1) counts of positive terms; 2) counts of
negative terms; 3) a measure of polarity calculated as the number of positive terms minus the
number of negative terms divided by the sum of positive and negative terms; and 4) a measure of
subjectivity (affect) calculated as the proportion of negative and positive terms relative to the total
number of terms in the tweet.
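As a minimal sketch of this aggregation and scoring step, the snippet below computes the four measures from small stand-in word sets; in the study itself these word sets come from the Harvard IV-4 and Loughran and McDonald dictionaries, and the column names and example rows here are illustrative assumptions only.

```python
import pandas as pd

# Stand-in word sets; the study uses the Harvard General Inquirer IV-4
# and Loughran and McDonald Financial Sentiment dictionaries.
POSITIVE = {"gain", "growth", "innovative", "opportunity"}
NEGATIVE = {"fraud", "loss", "risk", "decline"}

def sentiment_measures(text: str) -> dict:
    """The four measures described above: positive and negative counts,
    polarity, and subjectivity (affect)."""
    tokens = text.lower().split()
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    denom = pos + neg
    return {
        "positive": pos,
        "negative": neg,
        "polarity": (pos - neg) / denom if denom else 0.0,
        "subjectivity": denom / len(tokens) if tokens else 0.0,
    }

# Aggregate tweets by company and day: sum the quantitative variables
# and concatenate the text before scoring (illustrative column names).
tweets = pd.DataFrame({
    "company": ["KODK", "KODK"],
    "day": ["2018-01-09", "2018-01-09"],
    "retweets": [120, 45],
    "likes": [300, 80],
    "text": ["blockchain growth opportunity", "risk of loss on rebrand"],
})
daily = (tweets.groupby(["company", "day"])
               .agg(retweets=("retweets", "sum"),
                    likes=("likes", "sum"),
                    text=("text", " ".join))
               .reset_index())
daily = daily.join(daily["text"].apply(sentiment_measures).apply(pd.Series))
print(daily)
```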
**Insert Table 1 about here**
4For brevity, additional summary statistics based on these tweets are available from the authors upon request.
5The Harvard General Inquirer IV-4 dictionary is available at the following link and the Loughran and McDonald
Financial Sentiment dictionary is available at the following link
Considering the data presented in Table 1, we observe the key statistics as presented from the
scale of interest and sentiment of the associated Twitter activity[6]. This preliminary analysis of firms
exhibits a very clear linkage between blockchain announcements and firm equity price performance.
It would appear that the smaller the firm, the stronger the effect[7]. There are clear differences
in behaviour of rumour duration over the years between 2017-19, reflecting a changing regulatory
environment. Most importantly, there is a strong bifurcation of the speculative and the strategic
blockchain investment motivations. This split is important to note throughout the rest of the
analyses, as there is consistent evidence that firms experience strong ‘bandwagon effects’ as a result
of being associated with blockchain and that this effect is persistent. There is also evidence to
suggest that ‘rumours’ enter social media almost a week earlier than the official announcement, particularly in comparison to corporate entities that have signalled their intentions to begin strategic blockchain-development projects. Considering that the average size of speculatively-denoted companies is approximately 1/10th that of their strategically-developing counterparts, the reduced corporate size and structure should theoretically produce an increased probability of more stringent planning and information security (Zhou et al. [2015]). However, in preliminary testing, this does not appear to be the case.
Considering previous research surrounding corporate blockchain development, in conjunction with theoretical and methodological support based on the relationship between social media exposure, blockchain development and corporate performance, a number of distinct hypotheses are determined. Due to the interest and attention given to blockchain technologies in
the media and the wider public, we hypothesise that some firms will venture into the development
or adoption of blockchain technology or the language of blockchain in order to improve equity
performance.
_• Hypothesis h1: Blockchain announcements generate observable and significant changes in the_
perception of the firm to which the declaration or news is related: there exist significant
differentials in both timing and market response as measured by social media sentiment to
both the ‘rumour’ and the ‘official announcement’ of corporate blockchain-development.
_• Hypothesis h2: Corporate desperation[8], as evidenced by a weak firm cash reserve and/or high_
leverage position, instigates the decision to incorporate blockchain technology.
6Interest is sub-divided by quintile of the number of identified tweets, which are further separated as per type of
blockchain-announcement, the year in which the announcement was made, and by company size. Further, we have
included a final column that specifically investigates the average time difference, as measured in days, of the time
between the first identified tweet, denoting the establishment of the ‘rumour’ and the ‘official’ announcement.
7The variable representing interest of social media is found to be significantly related with the size of the company,
while the effects of sentiment in relation to market capitalisation do not appear to present a clear relationship.
8Corporate desperation is understood as the default probability using a discrete hazard model in the form of a
multi-period logit relating to blockchain and investigates the cost-benefit trade-off of debt from the viewpoint of
shareholders by estimating the net value that equity holders place on an incremental dollar of debt by using the
Faulkender and Wang [2006] model of a firm’s excess stock return regressed on changes in several investment and
financial policy factors. The coefficient on the independent variables reflects the net cost (negative coefficient) or
benefit (positive coefficient) to equity holders of expansion into blockchain.
11
-----
_• Hypothesis h3: Companies who instigate blockchain development projects present evidence of increased probability of default should they have no prior technological development experience (reactionary-driven entrants).
_• Hypothesis h4: Credit ratings have adapted and segregated their consideration of the additional corporate risk associated with speculative and strategic blockchain development.
Specifically, h1 develops a novel investigation of the influence of social media on financial performance based on blockchain or blockchain-related technology. Firm fundamentals are then evaluated
against the increased probability of introducing or announcing such technological developments to
improve the market position of a firm in distress due to poor cash-flows or excessive leverage.
Hypothesis h2 takes as its prior that distressed firms will pursue "bandwagon effects" in order to
buttress or strengthen their equity performance and appear more attractive to investors. Next, through the use of a probit technique, we investigate the behaviour of the selected companies, again separated by strategic and speculative use, but further considering whether
such companies can be identified as possessing previous experience of technological development
(reactionary-driven entrants). Hypothesis h3 focuses on specific effects within reactionary-driven
corporations with no previous evidence of technological experience but with publicly stated entrance
to blockchain-development projects[9]. Hypothesis h4 considers the risk differential and potential
under-pricing of the true risks inherent in such projects and blockchain-based decisions. While
considering a number of reputable measures of market risk, we specifically estimate the effects of
internal financial factors and then represent the estimated credit rating in comparison to the actual credit rating provided during the period surrounding the announcement of plans to develop
blockchain.
**4. Empirical Methodology**
Our selected methodological form builds upon four separate techniques through which our established hypotheses can be tested. These techniques address the core hypotheses. First, we focus
on the impact of social media, examining the differences in response to ‘rumours’ and ‘official’ firm statements of forthcoming blockchain projects, and then test for any significant influence on market sentiment. To complete such a task, we revisit models similar to those presented by
Akyildirim et al. [2020] and Cahill et al. [2020] that have focused on abnormal returns, however, in
9While technological and corporate development is a welcome and necessary ambition for progress, we have observed a worrying trend in recent times where corporations with no previous experience in any element of technological
development have announced their intentions to develop cryptocurrency, or indeed, change their name to incorporate
a corporate identity that would present a case that blockchain and cryptocurrency development is central to the
corporate raison d’être, which has been proven in a small number of cases to have been misleading to investors.
These companies have been earlier defined to be reactionary-driven entrants to the blockchain development sector.
Here the underlying prior is that internal actors within firms will underpin these decisions in an attempt to profit
from the "bandwagon effects" associated with blockchain news as disseminated via Twitter hype and subsequent
developing investor sentiment.
addition we control for the role of social media response. Once we establish the scale of such effects, we then apply the second technique to the corporate behaviour of such companies within three separate scopes of analysis. We first examine this through the differential effects of leverage as designed by Cathcart et al. [2020], examining default risk relating to structural changes in the leverage and cash holding behaviour of such companies in the period prior to blockchain-related rumours and announcements. We then employ a third technique to assess whether investors valued variations
of long-term debt and changes in their respective leverage ratios in a manner inspired by the work
of D’Mello et al. [2018]. Finally, using the methodology provided by Metz and Cantor [2006], we
estimate a probability of default methodology to add further robustness to the estimated default
risks generated from our analysis of leverage. Within this context, we can then re-estimate and
compare to the time-series of credit rating announcements at the times surrounding both rumours
and official blockchain-development announcements. By completing such a task, we can estimate whether the idiosyncratic risks associated with such decisions are fully comprehended by analysts[10].
We next examine whether there exists evidence of internal structural changes in the use of leverage, the structure through which such leverage is obtained, or indeed changes in the cash holdings of these companies in the periods surrounding both rumours and announcements of blockchain development. One particular perception surrounding such decision-making processes is that some companies that made the decision to announce their intentions to incorporate blockchain had already been in substantial decline. There are a number of particular methodologies through which we can identify such substantial changes in the use and design of such leverage. Our analysis builds
on the work of Cathcart et al. [2020] who specifically investigated the differential impact of such
leverage on the default risk of firms of varying size. We design a structured methodological approach to investigate whether companies who announce their intentions to develop blockchain present evidence of a variation in their usage and sources of leverage based on pre-defined speculative and strategic announcements of corporate blockchain-development. Further specific hypotheses
surrounding differentials based on the timing of rumours and official announcements, social media
outreach and associated sentiment, and corporate size, as measured by market capitalisation, add
explanatory benefits. To investigate the effects of leverage, we estimate a default probability using a
discrete hazard model in the form of a multi-period logit, similar to the previous work of Campbell
et al. [2008], which can be used to analyse unbalanced data using time-varying covariates. The logit
model is given by:
10Since the news feed gives times and dates in local time, we first changed all times of announcements and market data to GMT, thereby accounting directly for differences in time zones for international firms. We further check
the data to account for the broad variation in market opening times as generated through differences in exchange
close times, weekends and public holidays. If the announcement occurs between market close and the following
market opening time, the next available trading day is taken as the announcement day. To mitigate the effects of
simultaneous response to financial announcements, we exclude any company that has an earnings announcement
or release of corporate accounts within five days either side of the blockchain-related announcement. For added
methodological robustness, we extended this filter for a variety of time horizons up to ten days either side of the
announcement and our results remain unchanged.
$$P_t(y_{i,c,j,t+1} = 1) = \Phi\left(-\alpha - X_{i,t}\beta - Z_{i,c,t}\delta - \gamma_c - \gamma_j\right) \qquad (1)$$

$$P_t(y_{i,c,j,t+1} = 1) = \frac{1}{1 + \exp\left[\alpha + X_{i,t}\beta + Z_{i,c,t}\delta + \gamma_c + \gamma_j\right]} \qquad (2)$$
where subscripts i, c, j, and t vary according to firms, countries, industries and years, respectively. The y variable is a dummy that indicates corporate default; it takes a value of 0 if the firm
is active and a value of 1 if the firm is insolvent or bankrupt. Firms that remain in default for
more than 1 year are retained in the sample used to estimate the model as depicted in the above
equation until the year they first migrate to the default state. The parameter α is the constant; γc
and γj are country and industry fixed effects, respectively; X is a vector of time-varying firm-level
variables, and Z is a vector of time-varying control variables. Covariates are lagged and refer to
the previous accounting year relative to the dependent variable.
The firm-level variables include leverage or its components, that is, trade, current, and noncurrent. These are, respectively, the ratios of total leverage, trade payables, and current and
non-current liabilities to total assets (as per Cathcart et al. [2020]). Controls that vary at the
country level include a set of macroeconomic variables. We employ the natural logarithm of GDP
growth (GDP), the yield of 3-month government bonds (Bond) and the logarithm of sovereign credit
default swap (CDS) spreads to capture the business cycle, interest rate effects, and sovereign risk,
respectively. The information on GDP is obtained from the Eurostat Database, interest rates are
collected from the IMF-World Economic Outlook Database and CDS spreads are obtained from
Markit. Firm-level control variables include the ratio of net income to total assets (NITA), the
ratio of current assets to total assets (CATA), the number of years since a firm’s incorporation
(Age). Summary statistics for each of these respective variables are presented in Table 2. A dummy variable (IMP) is introduced to the logit methodology, taking a value of zero if the firm is active and not under regulatory investigation, and a value of one if it is insolvent, bankrupt or under regulatory investigation. Within this structure, we attempt to compare our
sample and sub-sample of corporate institutions to groupings of companies that have been already
proven to have caused significant issues with regards to blockchain development (as being currently
investigated by regulatory authorities), or the institution has simply become insolvent or has gone
bankrupt.
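For concreteness, the sketch below shows one way the discrete hazard logit of Eqs. (1)-(2) could be estimated with statsmodels. The file name and column labels are assumptions for illustration; country and industry fixed effects enter as dummy variables, and covariates are assumed to be lagged one accounting year, as in the text.

```python
import pandas as pd
import statsmodels.api as sm

# Illustrative firm-year panel; 'default' is y (1 = insolvent, bankrupt or
# under regulatory investigation), covariates lagged one accounting year.
panel = pd.read_csv("firm_years.csv")  # assumed file and columns

firm_vars = ["leverage", "NITA", "CATA", "Age"]   # X_{i,t}
macro_vars = ["GDP", "Bond", "CDS"]               # Z_{i,c,t}
fixed_effects = pd.get_dummies(panel[["country", "industry"]],  # gamma_c, gamma_j
                               drop_first=True, dtype=float)

X = sm.add_constant(pd.concat([panel[firm_vars + macro_vars], fixed_effects], axis=1))
fit = sm.Logit(panel["default"], X).fit(disp=0)
print(fit.summary())

# Fitted default probabilities, as in Eq. (2).
panel["pd_hat"] = fit.predict(X)
```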
**Insert Table 2 about here**
To understand how corporate leverage interacts with both speculative and strategic
blockchain-development, we calculate the marginal effects on the probabilities of default across
different levels of the independent variables, particularly as the selected methodology is non-linear
and we cannot directly interpret the sign, magnitude and statistical significance of the coefficients
of the logit covariates when they are interacted with dummy variables. The marginal effects where
the corporate blockchain-development is defined as strategic are presented as:
$$\frac{\partial P_t(y_{i,c,j,t+1} = 1)}{\partial x} = \beta_x\, \Phi'\left(\alpha + X_{i,t}\beta + Z_{i,c,t}\delta + \gamma_c + \gamma_j\right) \qquad (3)$$
Whereas, marginal effects in the same methodological specifications with companies who have
signalled their intention to develop blockchain for purely speculative reasons are modelled as:
$$\frac{\partial P_t(y_{i,c,j,t+1} = 1)}{\partial x} = \left(\beta_x + \beta_{x \cdot Spec}\, Spec\right) \Phi'\left(\alpha + X_{i,t}\beta + Z_{i,c,t}\delta + \gamma_c + \gamma_j\right) \qquad (4)$$
where x is the variable of interest and Φ is the logit function. The marginal effect of the variable
of interest is a function of all the covariates including the value of the speculation dummy which
allows us to have separate marginal effects for companies who incorporate blockchain-development
for strategic purposes (when the dummy equals 0) and for companies who incorporate blockchaindevelopment for speculative purposes (when the dummy equals 1). To compute the marginal effects
we take the mean value of the covariates’ observations that pertain to each set of companies.
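The sketch below makes this computation explicit: the slope on the variable of interest is $(\beta_x + \beta_{x \cdot Spec}\, Spec)$, scaled by the logistic density evaluated at the group-mean linear index, so that the speculation dummy switches the interaction term on or off. All coefficient values shown are assumptions for illustration only.

```python
import numpy as np

def logistic_pdf(u: float) -> float:
    """Phi'(u) for the logit link: exp(u) / (1 + exp(u))**2."""
    eu = np.exp(u)
    return eu / (1.0 + eu) ** 2

def marginal_effect(beta_x: float, beta_x_spec: float, spec: int,
                    mean_index: float) -> float:
    """Eq. (3) when spec == 0 and Eq. (4) when spec == 1, with the
    covariates held at their group means entering through mean_index."""
    return (beta_x + beta_x_spec * spec) * logistic_pdf(mean_index)

# Assumed coefficients and group-mean linear indices, for illustration.
print(marginal_effect(beta_x=0.9, beta_x_spec=1.4, spec=0, mean_index=-4.0))
print(marginal_effect(beta_x=0.9, beta_x_spec=1.4, spec=1, mean_index=-2.5))
```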
In the final stage of our analysis, we set out to establish whether the effects of leverage and other
internal dynamics of corporations who have taken both strategic and speculative decisions to develop
blockchain have been effectively considered by credit rating agencies’ estimates. To complete this
task, we reconstruct estimates similar to those previously described by Metz and Cantor [2006].
The calculated marginal effects of leverage provide a basis point estimate of differential implied
probability which can be then compared to the actual point-in-time international credit ratings
to which inferences can be drawn. The authors parameterised the weighting functions for each
credit metric z, where the financial metrics we consider are coverage (CV), leverage (LV), return
on assets (ROA), volatility adjusted leverage (vLV), revenue stability (RS), and total assets (AT),
while defining wz as the exponential of the linear function of the issuer’s leverage as described by:
$$w_z = \exp\left(a_z + b_z\, lev_t^i\right) \qquad (5)$$
where the final weight $W_z$ is calculated as:
$$W_z = \frac{w_z}{1 + \sum_{k=1}^{6} w_k} \qquad (6)$$
The weights are assumed to be a function of an issuer’s leverage ratio. Through the use of a 20-point linear transformation scale for cross-corporation credit ratings as described in Table A2 (in
the Online Appendices), we are then able to scale the estimated credit rating through adjustments
to this weighted average rating. First, we add a constant notching adjustment n simply to absorb
rounding biases and give us a mean zero error in sample. Secondly, we then adjust for fiscal year
with fixed effects n(t), and finally, we adjust for industry with fixed effects n(I). To consider
the effects of blockchain announcements, we make an adjustment proportional to the volatility of
leverage in the period since the official blockchain-development announcement. Therefore,
$$F_R = W_1 R_{CV} + W_2 R_{LV} + W_3 R_{RoA} + W_4 R_{RS} + W_5 R_{vLV} + W_6 R_{AT} + W_7 R_{CV \times AT} \qquad (7)$$

$$\tilde{R} = F_R + n + n(t) + n(I) + \delta\left(\frac{\sigma(LV)}{\mu(LV)}\right) \qquad (8)$$

$$R = \max\left\{5, \min\left\{20, \tilde{R}\right\}\right\} \qquad (9)$$
$R$ is our estimate of the final issuer credit rating. The free parameters are estimated by minimising the log of the absolute notch error plus one[11]. We utilised an ordered probit methodology to determine the probability that the company under observation possesses the rating allocated as calculated by the above structure. We then compare the credit ratings over the time period analysed, investigating whether the true effects of the use of leverage for blockchain development were appropriately accounted for.
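A minimal sketch of the chain from Eq. (5) to Eq. (9) follows. The parameter values, and the treatment of the seventh weight on the CV×AT interaction as a free parameter, are assumptions for illustration rather than the fitted estimates.

```python
import numpy as np

METRICS = ["CV", "LV", "RoA", "RS", "vLV", "AT"]

def estimate_rating(R: dict, lev: float, a: dict, b: dict, w7: float,
                    n: float, n_t: float, n_i: float, delta: float,
                    sigma_lev: float, mu_lev: float) -> float:
    """Eqs. (5)-(9): leverage-dependent weights on the six credit metrics,
    a weight w7 on the CVxAT interaction, notching adjustments, and the
    clip to the 20-point rating scale."""
    w = {z: np.exp(a[z] + b[z] * lev) for z in METRICS}        # Eq. (5)
    norm = 1.0 + sum(w.values())
    W = {z: w[z] / norm for z in METRICS}                      # Eq. (6)
    fr = sum(W[z] * R[z] for z in METRICS) + w7 * R["CVxAT"]   # Eq. (7)
    r_tilde = fr + n + n_t + n_i + delta * sigma_lev / mu_lev  # Eq. (8)
    return float(np.clip(r_tilde, 5.0, 20.0))                  # Eq. (9)
```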
**5. Results**
_5.1. Understanding the hype surrounding blockchain announcements_
We begin our analysis by testing Hypothesis h1, which investigates whether blockchain announcements generate observable and significant changes in the perception of the firm to which the
declaration or news is related: there exist significant differentials in both timing and market response
as measured by social media sentiment to both the ‘rumour’ and the ‘official announcement’ of corporate blockchain development. In Table 3 we separate the data into four distinct blocks: Twitter and equity activity on the day of the announcement, in the thirty days before either the rumour or the official announcement, and in the three-day period after the rumour or official announcement. This is entirely descriptive data as collected from the social media sources. Reactionary-driven firms experience a stronger lift from rumours as opposed to official announcements, as they are actively seeking to exploit bandwagon effects associated with Bitcoin and blockchain. The statistical
modelling found below provides further significant evidence for the high risk behaviours of these
reactionary-driven firms.
**Insert Table 3 about here**
The number of Tweets issued around both speculatively and strategically orientated blockchain announcements supports the increase in the volume of attention afforded to a firm upon statement. The interesting observation is the decay rate of that interest. While speculative firms exhibit "flash-in-the-pan" interest, strategic firms have a much longer duration of interest, most especially after
they make an official company announcement. The general phenomenon from Figure 2 continues,
11This places much less weight on reducing very large errors and much greater weight on reducing small errors,
which more closely corresponds to how a user would make such trade-offs. In practice, the results are almost the
same as an iterated least squares approach: minimise squared errors, drop the large errors from the dataset, and
re-minimise squared errors.
this time with retweets, with the strategic firms exhibiting a much slower decay rate following an
official announcement. This prolonged interest in news from strategic companies may reflect the
technical background of these companies and the desire on the part of investors to evaluate the
new products and how those investments sustain value creation. In retweets, the decay rate across
speculative and strategic firms is much slower after the official announcement when compared to
the overall number of tweets issued, as indicated in Figure 2. The most interesting artefact of the
data is that for retweets, the initial rumour is the most powerful driver of activity, resulting in an
acute but very brief (two days) period of interest.
**Insert Figure 2 about here**
In Figures 3 and 4, we present the number of ‘Retweets’ and ‘Likes’ respectively. The presented number of ‘Likes’ follows a similar pattern to the retweets, with rumour being the most powerful driver of activity, this time with a very rapid decay rate and a near full return to pre-rumour conditions by day three. Official announcements follow the same pattern as in Figures 2
and 3, with strategic firms having a slower decay rate and maintaining a permanently higher level of
‘Likes’ after the official announcement. Speculative firms have a much more rapid decay rate than
strategic firms, but they also permanently increase their ‘Likes’ after the official announcement.
This further confirms the hypothesis that firms seek to use blockchain as a method of acquiring
interest in their firms, even if that interest is relatively fleeting. Since ‘Likes’ indicate interest in, and approval of, the activities of both the speculative and strategic firms, making an official announcement is a clearly positive action to increase the visibility, interest and approval of the firm.
**Insert Figures 3 and 4 about here**
It is important to note that Twitter is not an entirely transparent medium for registering interest.
The presence of ‘bots’ (automatic programmes) can manipulate the readers of Tweets, as these bots can emulate the behaviour of actual followers and mimic human interaction (so-called ‘socialbots’).
This can result in an artificial increase in the number of tweets, retweets and likes attached to a
particular news announcement. Countermeasures can be taken by firms that have online security
support, most especially those with a deep knowledge of the technology behind bots. These firms
would typically fall into our strategic categorisation[12]. Therefore, we provide further validation
12The degree of misuse of social media data and, in particular, fake data has been estimated to be quite profound. Van Der Walt and Eloff [2018] discussed the many examples that exist of cases where fake accounts
created by bots or computers have been detected successfully using machine learning. Shao et al. [2018] performed
k-core decompositions on a diffusion network obtained from the 2016 US Presidential Elections, providing a first look
at the anatomy of a massive online misinformation diffusion network, where similarly, Grinberg et al. [2019] found
that only 1% of individuals accounted for 80% of fake news source exposures, and 0.1% accounted for nearly 80%
of fake news sources shared. Cresci et al. [2015] specifically investigated fake followers on Twitter, pointing out the
explicit dangers as they may alter concepts like popularity and influence.
of Hypothesis h1, by re-estimating a similar baseline cumulative abnormal return model to that
used by Akyildirim et al. [2020] and Cahill et al. [2020], with significant novelty added through the
addition of sentiment. In Table 4, we observe the sentiment-adapted cumulative abnormal returns for a rumour and an official statement for the period surrounding each announcement. The highlights of this table relate to the response of equities at AR0. Here, we identify that speculative investments have an 11% higher return for both rumour and official announcement. Equities with positive sentiment have 13% and 8% higher returns respectively. Importantly, given regulatory responses in recent years, sentiment-adapted abnormal returns reached 12% and 18% in 2017 but are moderated to less than 1% for rumours and 3% for official statements in 2019.
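As a simplified stand-in for this specification, the sketch below computes market-model abnormal returns over an event window; the window lengths and the closing CAR-on-sentiment regression are illustrative assumptions rather than the exact model of Akyildirim et al. [2020] and Cahill et al. [2020].

```python
import pandas as pd
import statsmodels.api as sm

def abnormal_returns(stock: pd.Series, market: pd.Series,
                     est_win: slice, evt_win: slice) -> pd.Series:
    """Market-model abnormal returns: fit alpha and beta over the
    estimation window, then AR_t = r_t - (alpha + beta * r_mt)."""
    fit = sm.OLS(stock.iloc[est_win],
                 sm.add_constant(market.iloc[est_win])).fit()
    alpha, beta = fit.params
    return stock.iloc[evt_win] - (alpha + beta * market.iloc[evt_win])

# Illustrative windows: a 120-day estimation window ending 30 days before
# the event at index t0, and an event window of [-1, +3] around t0, e.g.
#   ar = abnormal_returns(r_i, r_m, slice(t0 - 150, t0 - 30), slice(t0 - 1, t0 + 4))
# CAR_i = ar.sum() would then be regressed on sentiment and firm type:
#   CAR_i = a + b1 * polarity_i + b2 * speculative_i + e_i
```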
**Insert Table 4 about here**
Separating the analysis into speculative versus strategic firms for rumour and official announcement responses, we find that strategic firms have little equity market price response to rumour, whereas speculative firms have very clear and persistent responses to rumour announcements. In the case of official announcements, as presented in Figure 5, the substantive response of speculative firms is observed again, but strategic firms also exhibit sentiment-adapted abnormal returns, albeit much smaller in magnitude[13].
**Insert Figure 5 about here**
Separating results by "reach" of the social media, as measured by quartiles of tweets, retweets and likes ranked from lowest through to highest, we find that firms with the highest reach exhibited the strongest results with respect to official announcements. We further analyse the impact of sentiment as expressed by Twitter statements that have been indexed to positive, negative and neutral sentiment. Strong and persistent sentiment-adapted cumulative abnormal returns are associated with positive sentiment information from social media. This is consistent for rumour and the official announcement. The impact of negative sentiment is still positive in both circumstances, and interestingly, more powerful than neutral social media sentiment for rumours. In the case of official announcements, the expected order of positive, neutral and negative holds, but even negative sentiments still result in an improvement in returns. The only explanation that can be associated with such a response is that the overall effect of being associated with a blockchain initiative
13Further results are available from the authors on request relating to time varying effects. Though outside the
scope of this research, results indicate an influence from a changed regulatory environment with respect to blockchain
technology and the treatment of the "initial coin offering" (ICO) by the US Securities and Exchange Commission
(SEC) and the Federal Bureau of Investigation (FBI). The SEC began the process of investigating ICOs in the second
half of 2017, making their first investor bulletin in July 2017 and then an enforcement sweep in March 2018 with
the FBI making a public announcement of the sentencing of a virtual currency fraudster to 21 months in prison in
February 2019. Given these regulatory responses, it is not surprising that evidence of abnormal returns reduces in
2018 and is muted in 2019, especially for rumours.
or blockchain technology is understood to be overwhelmingly positive for a firm, even if it receives
a negative welcome from social media commentators.
**Insert Tables 5 through 6 about here**
In both Tables 5 and 6, we observe direct abnormal pricing performance in the periods specifically surrounding the date of the rumour and of the official announcement, focusing respectively on the period thirty days before, the period inclusive of the day before and the day after, and the day of the announcement itself. Firms with speculative motivations to embark on blockchain work during a rumour
will have a large proportion (0.14%) of their price movement explained exclusively by sentiment.
US market effects are dominant in this period, while from the empirical evidence we can
identify that firms with strong responses to rumour do so most actively when they are speculative.
This is consistent with the view that firms that are engaged in blockchain for speculative purposes
are seeking to take advantage of an existing premium in the market associated with cryptocurrencies
and that regulatory responses have reduced that opportunity over time. Importantly, these effects
are most pronounced for rumours as opposed to official statements. When focusing specifically on
the day of the announcement, that is the absolute return at T0, the most important
explanatory factor is clearly Bitcoin prices, and this is most powerful for official statements by
firms. Sentiment is found to play a more important role on the day of the announcement but
it is still less important than the status of a firm being speculative for both rumour and official
announcements. The large explanatory power of speculative firm status continues to confirm our
hypothesis that firms seek to exploit this premium via “bandwagon” effects. The strong bifurcation
between official statements and rumours only acts to reinforce this assessment as official statements
by technologically focused firms engaged in strategic decisions will be taken into account by Federal
authorities and be disseminated by the traditional media as well as social media.
Importantly, and where this paper contributes to the literature, we must also ask whether corporate desperation potentially instigated the decision to incorporate blockchain technology. While
strategic usage of blockchain-development is of particular interest, there is a concerning issue surrounding companies that have decided to proceed with speculative blockchain development. The
first, which we will focus on in the following section, surrounds evidence of an increased use of
leverage, that is, companies have borrowed substantial levels of assets from which they can draw
upon to take the speculative attempt at rapid growth. Should the situation not manifest in a
successful outcome, the company will face even harsher financial conditions. Secondly, to date, and
almost three years after some official announcements, there is no evidence of project initiation in
some scenarios. One particular shared characteristic is quite noticeable when considering particular
cohorts of the sample of speculatively-denoted companies: their company and sector have been in
long-term decline.
In Figure 6, we present evidence of three particular companies from our sample that merit
particular attention due to the unique nature of their decisions to incorporate blockchain technology.
First, we present evidence of Kodak, a company who has struggled to transition in the age of mobile
technology. Secondly, Future Fintech Group, an unprofitable Chinese company formerly known as
‘SkyPeople Fruit Juice’, who have now changed their business focus to utilise ‘technology solutions to operate and grow its businesses’ while ‘building a regional agricultural products commodities market with the goal to become a leader in agricultural finance technology’. Finally, we observe the
performance of Bitcoin Group SE, a holding company focused on innovative and disruptive business
models and technologies in the areas of cryptocurrency and blockchain[14].
**Insert Figure 6 about here**
It would not be considered excessive for more sceptical market participants to ask of these
and similar cases: 1) had these companies just unveiled a novel and genius evolutionary use for
blockchain; or 2) had they just attempted to ride the wave of a potential cryptocurrency bubble?
The nature and rationale underlying these decisions are of particular interest. While we have established interactions with regards to sentiment and sentiment-adapted cumulative abnormal returns, it is central to our research to focus on whether internal corporate structures presented evidence of change in the form of excessive use of leverage in anticipation of such speculative projects, whether such increased use of borrowed capital was reflected in an increased corporate probability of default, and whether corporate ambitions had been identified by credit rating agencies. Further, one very interesting question remains unanswered: had investors, policy-makers and credit rating agencies alike considered it curious that reactionary-driven companies with no previous technological development experience had now signalled their intentions to change their corporate identity and enter a sector in which they had little or no experience? Such dramatic decisions would not only incorporate risks from an exceptionally high-risk sector into the corporate structure, but might not have been fully appreciated and valued by investors and regulatory authorities alike.
_5.2. Did the selected companies increase their leverage and cash reserves in the period before_
_blockchain incorporation?_
To investigate Hypothesis h2 we examine whether the corporate decision
to initiate speculative blockchain-development projects coincided with two specific characteristic
changes: significantly weak cash holdings and elevated levels of corporate leverage in comparison to
14Three distinct scenarios are presented in the performance of these companies: 1) observing Kodak, we identify a
company in long-term sectoral decline, who through the announcement of KODAKOne, described as a revolutionary
new image rights management and protection platform secured in the blockchain created a scenario where at 5.00pm
(GMT) on 9 January, Kodak shares were worth $3.10, while at 2.40pm (GMT) on 10 January, shares were trading at
$12.75; 2) Future Fintech Group who had previously received a written warning from NASDAQ on 1 December 2017
for failing to maintain a market value above $5 million and risked being de-listed if it did not pass the threshold by
May 2018, according to public filings. The rapid boost in market value shortly after this warning mitigated this issue;
and 3) Bitcoin Group SE, a company formerly known as AE Innovative Capital SE, a Germany-based investment
who changed their corporate identity to re-establish itself with one sole raison d’être, to provide speculative venture
capital to companies with a focus on business concepts and technology.
industrial peers. Both are characteristics of companies who are in particularly vulnerable financial
positions (Aktas et al. [2019]; Dermine [2015]; Cai and Zhang [2011]; Choe [2003]; Acharya et al.
[2012]; Arnold [2014]; Aktas et al. [2018]). To test for such effects, we build on the work of Cathcart
et al. [2020] and estimate logit regressions for the four specifications as presented in Table 7. The coefficient representing leverage is positive and strongly significant, indicating that it is
a central force in the methodological structure when considering the baseline estimation compared
to companies that are either in liquidation or have been under SEC investigation for fraudulent
behaviour since announcing their intentions to develop blockchain. Further, for methodological
robustness, the leverage components in specification (2) are also positive and strongly significant.
The relationships between the estimations of trade payables to total assets, and of both current and non-current liabilities to total assets respectively, are presented in specifications (3) and (4). We
identify a significantly positive relationship between all variables and the logit-calculated structure.
However, the influence of the estimated leverage effect is significantly stronger across each estimated
methodology. We can therefore confirm that when controlling our sample for companies who have
defaulted or have become the focus of SEC or other legal and regulatory scrutiny, increased leverage
and reduced cash holdings were both significant characteristics of such companies.
**Insert Tables 7 and 8 about here**
Considering both the sign and significance of the interactions of leverage and leverage components
with blockchain-developing corporations, we next examine the marginal effects of such interactions
as per Cathcart et al. [2020]. We therefore estimate the default probability as separated by type
of corporate blockchain-developing type as denoted to be speculative or strategic. In Table 8,
we find that the marginal effect of leverage for strategic blockchain-developing corporations is
0.003, while that for speculative blockchain-developing corporations is 0.022. These estimates and their
differences are economically significant. It is widely considered that an increase in the average
default rate from 0 to 9 basis points would cause a substantial downgrade from Aaa to A (Ou
et al. [2017]; Cathcart et al. [2020]). When considering this estimate, we can identify that the
estimated coefficient for speculative blockchain-developing firms could generate enough default risk
to downgrade an investment-grade company (approximately A3 as per Moody’s credit ratings),
as denoted to possess strong payment capacity, to fall to junk-grade status (Ba1, Moody’s). For
strategic blockchain announcements, the risks are relatively minimal and would be estimated to
be approximately one grade based on a one standard deviation change. While Cathcart et al.
[2020] state that their results relating to SMEs and large corporations reflect the fact that
large financially constrained firms are able to raise bank finances more easily than are small firms,
especially during crisis periods (Beck [2008]), our results follow the same vein of thought.
After considering the summary statistics presented in Table 2, we identified that companies
that had taken part in speculative blockchain-development were most likely to be substantially
younger (26.4 years old), almost three times more leveraged (total liabilities divided by total assets
equals 0.750) and have substantially less income and current assets as a proportion of total assets.
Such specific characteristics would also support the view that financial constraints had hindered an
ability to obtain leverage as smaller, younger firms were more likely to take the decision to carry
out highly speculative tasks such as creating a cryptocurrency or changing the corporate identity of
the company, similar to the moves made by companies such as Long Island Iced Tea and SkyPeople
Fruit Juice.
_5.3. Have reactionary-driven firms presented differential use of leverage?_
One of the key red flags surrounding the identification of unlawful behaviour within the context of blockchain development has been the question of why reactionary-driven companies with no prior experience of technological development in any form would consider shifting their primary business practice to blockchain development. While this is an exceptionally high-risk and complex change in corporate identity, a large number of companies have attempted to carry out such strategy changes since 2017. Using the division between strategic and speculative blockchain announcements, we investigate Hypothesis h3, adding a further taxonomy to denote whether our sample of companies are identified as technologically proficient. Therefore, we identify companies in their respective domestic indices that operate within the communications, information technology and financial sectors as technologically proficient, as development within this context is considered a core operational function.
Using this structure, we estimate a similar logit regression, again setting the y variable to be a dummy that indicates corporate default or regulatory investigation, taking a value of zero if
the firm is active and a value of one if the firm is insolvent, bankrupt or under investigation.
Table 9 presents the estimates of the methodological structure used to calculate the representative
probability of default. We identify that leverage is once again a significant explanatory variable
with regards to both speculative and strategic methodological structures.
**Insert Tables 9 and 10 about here**
Considering the significant effects of leverage, we next analyse the marginal effects of technological experience, with results provided in Table 10. We separate the estimates not only by the intention underlying the announced blockchain development, but also by whether each company has
been defined to possess previous technological experience. When considering speculatively-driven
blockchain-development, companies with prior experience present a significant marginal effect of
leverage of 0.023, which compared to the benchmark estimates represents a two-grade fall in credit
rating. Reactionary-driven blockchain announcements by companies that are found to possess no
technological experience are found to be capable of generating between a four and five grade fall in
credit rating due to significant leverage effects. When considering strategically-driven blockchain
announcements, companies with previous technological experience generate less than half of a one-grade credit rating decline due to a marginal effect of leverage of 0.004, while those reactionary-driven companies with no technological experience are found to generate a significant marginal effect of 0.015. This would lead to approximately a one-grade decline in credit rating. The results of this
marginal effect analysis therefore support the hypothesis that reactionary-driven companies who
instigate blockchain-development projects with no previous technological experience are found to
present increased probability of default.
_5.4. Have credit ratings reflected the inherent risk of speculative blockchain development?_
While conclusively finding evidence that there exist significant differential effects between strategic and speculative blockchain-development announcements for corporations in the manner in which news is disseminated, the response of investors, and indeed the manner in which underlying fundamental corporate structures behave, we further find conclusive evidence of significant differentials in behaviour depending on whether the corporation had prior experience in the area of technological development. This reflects considerable evidence that there exists an exceptionally risky set of companies for which the nature of their intention does not appear to be fully valued within standard risk metrics, considering their excessive use of leverage to take on exceptionally risky projects that appear to be fundamentally based on ‘bandwagon effects’, such as changing long-standing corporate identity, or creating a cryptocurrency with no explicit structural rationale. It
is important that we investigate whether investors possess a true representation of the risk that
they are adding to their portfolios through investment in these companies. We test this through
an investigation of Hypothesis h4 which analyses whether credit ratings have been adapted and
present evidence of risk segregation when considering the additional corporate risk associated with
speculative and strategic blockchain development.
In Table 11 we observe two distinct measures of risk, as separated by type of blockchain announcement. The first is a combined global ranking measure based on structural and text mining
of credit rating risk into one concise, time-varying estimate for each company. The higher the value
of the measure, the lower the estimated probability that each company will enter bankruptcy or
default on their debt obligations over the forthcoming twelve months. Secondly, we present estimated values per company of the one-year estimated probability of default during the periods under
investigation.
**Insert Table 11 about here**
A number of interesting observations are presented when observing the companies in this manner. Primarily, there is a clear separation between the credit scores and actual presented probability
of default by type of blockchain-announcement. When considering strategically-denoted blockchain
development, companies that announce their intentions to use blockchain for purposes such as technological and security enhancement, or indeed the announcement of partnerships and investment
funds present evidence of superior control of their ability to repay creditors, with further support
of this finding provided through substantially and significantly compressed one-year probability of
default rates. While the average company in the sample presents a one-year PD of 0.8%, strategically positioned companies are found to be 0.5%. When comparing companies that are defined
as instigating speculative blockchain announcements, those that announce their intention to create
a cryptocurrency are not necessarily distinguishable, in terms of their ability to repay creditors,
from those that announced blockchain development for strategic purposes. However,
companies that announce their intentions to change their names face considerable
challenges within the forthcoming twelve months, as evidenced by their significantly
suppressed credit rating scores. Such companies also present an average one-year probability of
default of 2.2%.
**Insert Table 12 about here**
When focusing specifically on credit ratings, a similar pattern emerges. In Table 12 we present
the average credit rating per company, separated by type of blockchain-development announcement and further separated by period, both before and after the official date. A linear transformation scale for S&P, Moody’s and Fitch is presented in Table A2. We use Moody’s rating scale
as the selected metric to present and compare our results. Further, using the earlier described
logit methodology, we re-estimate ratings based on the average marginal effects of leverage. Credit
rating agencies present evidence of only a nominal downgrade, from Baa1 to Baa3, of the average company
that utilised speculative blockchain announcements in the period thereafter. Ratings following strategic blockchain announcements are found to remain unchanged at A2 between the periods both
before and after. When evaluating the significant marginal effects of leverage considered within
the previous section, we reconstruct leverage-adjusted credit ratings (Metz and Cantor [2006]), as
presented in Table 12. A number of significant observations are identified. While credit rating
agencies appear to have somewhat distinguished and identified the risk associated with speculative
behaviour, the evidence suggests that their ratings fail to fully reflect inherent idiosyncratic risks.
An estimated downgrade from Baa1 to Baa3 was identified for the average speculative blockchain
company. When further classifying groups on the basis of ICT experience (identified earlier as
reactionary-driven companies), the results indicate that even those experienced companies should
be considered to be of junk status, at Ba1. Further, reactionary-driven companies without previous
experience are estimated to be positioned at B1. Even under the most optimistic circumstances,
speculative blockchain-developing companies with no previous evidence of technological development do not exceed a junk investment status of B1. This result provides significant evidence that
investors have not been appropriately advised of the true risks inherent in such speculative corporate decisions. When considering strategically-indicative blockchain announcements, the average
company in the sample is found to warrant a one-grade downgrade from A2 to A3 where
evidence suggests previous technological experience, while a further one-grade downgrade to
Baa1 is suggested should no previous technological experience be identified.
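As a stylised illustration of the leverage-adjusted re-rating step, the sketch below maps Moody's grades onto a linear numeric scale (in the spirit of Table A2) and shifts a rating by the number of notches implied by the additional default probability attributable to leverage. The one-notch-per-0.015-of-PD calibration is an assumption chosen only to be consistent with the marginal effects discussed above, not the exact procedure of Metz and Cantor [2006].

```python
# Stylised sketch of a leverage-adjusted downgrade; PD_PER_NOTCH is an
# assumed calibration for illustration, not the paper's procedure.
MOODYS = ["Aaa", "Aa1", "Aa2", "Aa3", "A1", "A2", "A3",
          "Baa1", "Baa2", "Baa3", "Ba1", "Ba2", "Ba3",
          "B1", "B2", "B3", "Caa1", "Caa2", "Caa3", "Ca", "C"]
GRADE = {g: i for i, g in enumerate(MOODYS)}  # linear transformation scale
PD_PER_NOTCH = 0.015  # assumed: one notch per 0.015 of added one-year PD

def leverage_adjusted(rating: str, marginal_effect: float,
                      extra_leverage: float) -> str:
    """Shift a rating down by the notches implied by the added PD."""
    notches = round(marginal_effect * extra_leverage / PD_PER_NOTCH)
    return MOODYS[min(GRADE[rating] + notches, len(MOODYS) - 1)]

# A reactionary-driven firm with no ICT experience (marginal effect 0.015)
# doubling its leverage ratio would fall roughly one notch:
print(leverage_adjusted("Baa1", 0.015, 1.0))  # -> 'Baa2'
```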
**6. Discussion**
We find in our investigations that firms are aware of the price premium placed on blockchain,
reflecting the price premia experienced by some cryptocurrencies, namely Bitcoin. Cryptocurrencies
are an application of blockchain technology, but blockchain can be used for a wide variety of security
and contracting business applications. During the period under observation, January 2017 to July
2019, Bitcoin experienced a price rally that saw prices move from $800 a coin to a peak of $19,783
on 17 December 2017, followed by a fall to $3,300 in late December 2018 and a recovery to $9,503 in July 2019.
This rally attracted many firms to take advantage of the exuberance and associate themselves with
the powerful upward price movement of Bitcoin. The novelty of the technology and the inherent
information asymmetries that it brings afforded an opportunity for firms that exclusively seek
a rapid increase in equity prices or seek to rebuild market capitalisation. An association with
blockchain is a method of bootstrapping bandwagon effects. Some of these firms are distinctively
speculative in behaviour and the empirical analysis highlights that speculative firms performed
differently to strategic firms, which undertake blockchain projects for value creation purposes.
This incentive to exploit market euphoria consistently appears in our findings. At the highest
level, we split firms into those that are speculative and strategic in their actions. An additional
division is between firms with and without technological experience. Firms with technological
experience exhibit less idiosyncratic risk when compared to companies engaged in other sectors.
Using our earlier example firms, Kodak and Long Blockchain are firms with no background in
specific ICT technological development. However, Facebook and Apple are examples of firms with
extensive experience in ICT. Reactionary-driven firms with no prior technological experience are
found to generate significant returns during the ‘rumour phase’ of blockchain announcements, while
further presenting differential behaviour in their use of leverage. This reflects the desire of these
traditionally non-technological firms to act in a speculative manner and evolve
into a ‘risk-on’ asset, with the underlying aim of taking
advantage of blockchain and cryptocurrency bandwagon effects.
While our results illustrate how firms have attempted to take advantage of the market conditions
surrounding Bitcoin to benefit their equity position, the internal corporate financial position
can also be manipulated by an association with blockchain. Firms that are engaged in blockchain
announcements that are speculative in nature tend to dramatically expand their leverage position.
This naturally changes their idiosyncratic risk position. Blockchain activity attracts investors, who
extend credit to the firm to develop the new application or product. This has several interesting
outcomes. First, there is a dramatic increase in the probability of default for firms that undertake this course
of action. Second, the increase in idiosyncratic risk is sufficiently large to warrant a significant
downgrade of that firm’s credit rating, a downgrade that is currently underestimated by informed
market actors. Third, it highlights yet a further difference between strategic and speculative firms,
as the large cash position of strategic firms can be seen as a prerequisite to undertaking high-risk
product development projects such as blockchain.
All blockchain-related activity is understood to increase risk to the firm that is undertaking
it. Reactionary-driven firms with prior experience of the technology sector and large cash reserves
will minimise the increase in their idiosyncratic risk and therefore have a much lower increase in
their probability of default. Given the importance of blockchain technology to operational security
for high tech firms, a common application outside of cryptocurrencies, the financial benefit of
maintaining a store of ready cash to finance product development is apparent and explains in part
the desire for technology sector firms to hold their noted large cash reserves.
Given these observed and estimated conditions, the most obvious investment strategy is to
buy these companies’ equities based on rumours and sell in the days after official announcement.
Such a strategy can only be undertaken where the information is based
on non-artificial sources. The reality of Twitter communication and computer-aided algorithmic
trading is that information, sentiment and interest can all be manipulated quickly and cheaply and
then fed into sentiment-driven, rule-based trading activity, further
compounding the cycle of trades. Setting that cycle of information manipulation aside, there exists
a social media-based strategy through which investors can profit, provided their
source of information is not a bot. The ethical and legal implications of this strategy are substantial.
There is nothing to mitigate the effects of false statements to the market, i.e. ‘fake news’. The
quality of such news is only as good as the source that generated the tweet, which will not
typically abide by the conventions of traditional journalism. Still, whether the information is of high or
low quality, it has the capacity to generate sentiment that can be read and understood by humans
and machine-learning systems alike. The use of automated programmes to generate interest can yield
positive returns should the social media interaction achieve sufficient attention and reach.
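Purely to fix ideas, the sketch below computes the holding-period return of this naive ‘buy on rumour, sell after the official announcement’ rule on a hypothetical daily price series; it is not a recommendation, and the data layout and file name are assumptions.

```python
# Naive rumour-to-announcement holding-period return on hypothetical data.
import pandas as pd

prices = pd.read_csv("prices.csv", index_col="date", parse_dates=True)

def rumour_strategy_return(ticker: str, rumour_date: str,
                           official_date: str, hold_days: int = 3) -> float:
    """Buy at the close on the rumour date; sell `hold_days` trading days
    after the official announcement."""
    p = prices[ticker].dropna()
    buy = p.asof(pd.Timestamp(rumour_date))
    loc = p.index.get_indexer([pd.Timestamp(official_date)],
                              method="nearest")[0]
    sell = p.iloc[min(loc + hold_days, len(p) - 1)]
    return sell / buy - 1.0
```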
Even if the role of sentiment is limited to its importance to rumour statements by firms, it
still has the power to drive equity prices. This is especially true for firms engaged in speculative
objectives. Speculative firms improve their equity returns and access to leverage as a result of
associating with blockchain but also become highly risky firms with a high probability of default
and cease to be investment-grade assets. This matters for those that direct those firms, investor
guides and for investors themselves as it takes a set of bad asymmetric information conditions and
generates the optimal conditions for moral hazard. While some participants argue that those with
better quality information should be rewarded for their
efforts in obtaining it (Ho and Michaely [1988]; Rashes [2001]), the truly difficult task for policy-makers and regulators
is the identification of ‘questionable’ cases. Regulators have been slow to address the space of
cryptocurrencies as the legislative frameworks they rely upon are based on older technologies and
practices, which at the most fundamental level generate problems of definition and jurisdiction. The
regulatory environment with respect to blockchain was underdeveloped with lax enforcement prior
to the second half of 2017. Regulators, most importantly the Securities and Exchange Commission
and the Federal Bureau of Investigation began the process of investigating potentially fraudulent
cryptocurrency companies and subsequently released investor guidelines. At the same time, regulation cannot be so tough that it creates a fear of entry that stifles technological development [Corbet
et al., 2020]. This is perhaps where a direction of future research in this emerging area should focus.
In the meantime, timely and unobstructed investigations of such announcements should be carried
out by regulators so as to minimise the probability of illicit activity. The argument supporting this
should centre upon the need to protect uninformed investors from such channels of manipulation.
This is even more necessary considering the identified mis-pricing of risk in our research.
There appears to be a substantial risk of contagion associated with this questionable behaviour,
and it remains unclear whether investors have truly quantified the relationship between these companies and their
exceptional risk-taking behaviour. This is evidenced by the exceptional levels of leverage used in the
high-risk categories of firms. Revising recent credit ratings, and continuing to assume that investors
observe and obtain information from these metrics (Alsakka et al. [2014]; Becker and Milbourn
[2011]; Iannotta et al. [2013]), our logit-calculated revised credit ratings, which consider the sentiment
and speculative nature of blockchain-development ambitions, present evidence of both substantial
and significant mis-pricing of risk. Those companies that partake in speculative blockchain development are found to possess an average actual credit rating of Baa2, which is of investment
grade. Considering companies with both experience and no experience of technological development, leverage-adjusted re-estimated credit ratings find that the average grade should be no higher
than junk status (Ba1 with technological experience and B1 without). Re-evaluating those companies that use blockchain development for strategic purposes, we find their risk correctly
identified where previous technological experience exists, while only a one-sub-grade
downgrade is warranted where no previous technological experience exists. This finding presents evidence that the
underlying behavioural aspects of these companies have the potential to mislead investors and
generate substantial repercussions throughout unsuspecting portfolios.
The analysis from our sentiment and default probability methodologies indicates that firms that
desire to move into blockchain fall into two categories: a high-risk, high-default-probability speculative firm, or a firm in decline seeking to regain market capitalisation and investor attention;
and a cash-rich technology firm seeking to develop a new product or service. Given such conditions, there are clear policy-maker implications, as more stringent oversight and enforcement have
reduced the attraction for the latter, but market actors continue to under-price the risk associated
with an expansion into blockchain.
**7. Conclusions**
This research specifically investigates whether social media attention, when controlling for underlying corporate financial health and previous technological development experience, has significantly contributed to abnormal financial performance, elevated use of leverage, and the shrouding
of both actual and perceived risk of default associated with rumours and official announcements
relating to blockchain-development projects. First, the level of social media activity is found to be
significantly dependent on the type of blockchain announcement. We identify that speculatively-driven announcements, those of reactionary-driven companies with no prior technological development experience, generate abnormal pricing performance of approximately 35% when compared
to strategically-denoted projects. These effects have been found to diminish over time. When
considering the ability of some companies to use social media sources to generate product-based
interest with substantial positive sentiment, companies that generate the largest amount of interest
are found to experience the largest abnormal price returns. This specific result generates an added
layer of regulatory complexity given the difficulty in discerning if that digital interest is artificially
manufactured. Theoretically, significant abnormal profits exist through the generation of added
social media activity.
Secondly, we find that firms with technological experience exhibit less idiosyncratic risk when
compared to companies engaged in other sectors. Those reactionary-driven companies that lack
experience in technological development are found to be substantially leveraged in comparison
to those with substantial development experience. Such a result indicates that not only are such
companies making high-risk decisions, but they are using borrowed funds to take such risks. Thirdly,
we identify clear separation between the credit scores and actual presented probability of default
by type of blockchain-announcement. Speculative companies are found to present an added 1.7%
one-year probability of default when compared to strategically-denoted companies.
Finally, reactionary-driven companies with no previous technological experience that take on
additional leverage, when considered in the light of the estimated one-grade downgrade using a
leverage-adjusted credit rating methodology, should be considered to be no better than junk investment status. This latter result provides significant evidence that investors have not been appropriately advised of the true risks inherent in such speculative corporate decisions. Companies
that signal their intentions to instigate strategic blockchain-development do not appear to present
evidence of the same elevated short-term probability of default or discrepancy in leverage-adjusted
credit ratings. While some informed investors will observe the internal structural discrepancies,
algorithmic and sentiment-driven computer-aided trading can specifically seek and benefit from
short-term momentum driven by hysteria relating to blockchain and cryptocurrencies, irrespective
of the ethical or moral issues inherently attached.
In a developing sector increasingly plagued by issues surrounding fraud and cybercriminality,
policy-makers must tread carefully between over-regulation, potentially stifling credible technological development, and counter-balancing such activity through ensuring the presence of market
integrity and corporate credibility. Given the exogenous conditions and speed of technological evolution, protecting unsuspecting and uninformed investors should be considered a priority. To do
so, regulators must ensure that those aspiring to profit from misinforming investors are
adequately disincentivised. At the same time, many of the companies that have indicated this
product development course of action are in long-term sectoral decline, or have been established
simply to take advantage of a short-term profit opportunity. To date, almost no viable corporate
cryptocurrency has been developed, although in each scenario examined, a substantial long-term
share premium persisted along with significant underestimation of leverage risks.
The ability of companies to advertise the creation of instruments with almost any self-determined
parameters implies that there are few limits on the complexity of design of these technological solutions. The substantive, repeated price appreciation without project delivery should generate
regulatory concern. Investors have therefore been forced to base their decisions on improper information and social media hysteria, both of which, as evidence from ongoing investigations has shown, are influenced
by artificial sources. This information also possesses the ability to trigger automated trading systems that act as a potential accelerant of abnormal performance. Such shrouding of information
relating to blockchain-development by corporate entities will substantially influence an investment
system with myopic investors who are being driven by social media hysteria and other sources
of noise. Corporate institutions operating this strategy should only expect to attract the same
risk-loving investors that have been the source of the price-increases in cryptocurrency markets.
Therefore, optimising companies will continue to exploit myopic consumers through such speculative announcements that shroud blockchain-development as a source of future corporate revenues.
In turn, sophisticated social media advertisements further exploit these marketing schemes, adding
to the hysteria and acting as a propellant of abnormal price performance.
For those companies in desperate economic situations, it might be their only route to profits,
hence the need to be particularly wary of reactionary-driven corporations making announcements
with no prior technological-development experience. Further investor education and increased regulatory enforcement, particularly of corporate entities with no previous technological development
experience announcing speculative blockchain-development projects, might be a particularly successful solution. Ultimately, investors and regulators will be required to become more vigilant and
sophisticated as digital tools take a traditional market story of irrational exuberance in the face of
a new technology and layer it with the complexity of social media communication.
**References**
Acharya, V., Y. Amihud, and S. Bharath (2013). Liquidity risk of corporate bond returns: Conditional approach.
_Journal of Financial Economics 110_ (2), 358–386.
Acharya, V., S. A. Davydenko, and I. A. Strebulaev (2012). Cash holdings and credit risk. The Review of Financial
_Studies 25_ (12), 3572–3609.
Aitken, M., D. Cumming, and F. Zhan (2015). Exchange trading rules, surveillance and suspected insider trading.
_Journal of Corporate Finance 34, 311–330._
Aktas, N., C. Louca, and D. Petmezas (2019). CEO overconfidence and the value of corporate cash holdings. Journal
_of Corporate Finance 54, 85–106._
Aktas, N., G. Xu, and B. Yurtoglu (2018). She is mine: Determinants and value effects of early announcements in
takeovers. Journal of Corporate Finance 50, 180–202.
Akyildirim, E., S. Corbet, D. Cumming, B. Lucey, and A. Sensoy (2020). Riding the wave of crypto-exuberance: The
potential misusage of corporate blockchain announcements. Technological Forecasting and Social Change 159,
120191.
Akyildirim, E., S. Corbet, A. Sensoy, and L. Yarovaya (2020). The impact of blockchain related name changes on
corporate performance. Journal of Corporate Finance, 101759.
Albuquerque, R., E. De Francisco, and L. Marques (2008). Marketwide private information in stocks: Forecasting
currency returns. Journal of Finance 63 (5), 2297–2343.
Alsakka, R., O. ap Gwilym, and T. N. Vu (2014). The sovereign-bank rating channel and rating agencies’ downgrades
during the European debt crisis. Journal of International Money and Finance 49, 235–257.
Arnold, M. (2014). Managerial cash use, default, and corporate financial policies. Journal of Corporate Finance 27,
305–325.
Basile, P., S. Kang, J. Landon-Lane, and H. Rockoff (2017). An index of the yields of junk bonds, 1910-1955. Journal
_of Economic History 77_ (4), 1203–1219.
Beck, T. (2008). Bank competition and financial stability: friends or foes? The World Bank.
Becker, B. and T. Milbourn (2011). How did increased competition affect credit ratings? _Journal of Financial_
_Economics 101_ (3), 493–514.
Bloomfield, R., W. Tayler, and F. Zhou (2009). Momentum, reversal, and uninformed traders in laboratory markets.
_Journal of Finance 64_ (6), 2535–2558.
Bruguier, A., S. Quartz, and P. Bossaerts (2010). Exploring the nature of "trader intuition". _Journal of Fi-_
_nance 65_ (5), 1703–1723.
Byrne, E. F. (2011). Business ethics should study illicit businesses: To advance respect for human rights. Journal of
_Business Ethics 103_ (4), 497.
Cahill, D., D. G. Baur, Z. F. Liu, and J. W. Yang (2020). I am a blockchain too: How does the market respond to
companies’ interest in blockchain? Journal of Banking & Finance 113, 105740.
Cai, J. and Z. Zhang (2011). Leverage change, debt overhang, and stock prices. Journal of Corporate Finance 17 (3),
391–402.
Campbell, J. Y., J. Hilscher, and J. Szilagyi (2008). In search of distress risk. _The Journal of Finance 63_ (6),
2899–2939.
Canhoto, A. I. (2020). Leveraging machine learning in the global fight against money laundering and terrorism
financing: An affordances perspective. Journal of Business Research.
Carlin, B. I. (2009). Strategic price complexity in retail financial markets. Journal of Financial Economics 91 (3),
278–287.
Cathcart, L., A. Dufour, L. Rossi, and S. Varotto (2020). The differential impact of leverage on the default risk of
small and large firms. Journal of Corporate Finance 60, 101541.
Chen, M. A., Q. Wu, and B. Yang (2019). How valuable is fintech innovation? The Review of Financial Studies 32 (5),
2062–2106.
Cheng, S. F., G. De Franco, H. Jiang, and P. Lin (2019). Riding the blockchain mania: public firms’ speculative 8-K
disclosures. Management Science 65 (12), 5901–5913.
Choe, C. (2003). Leverage, volatility and executive stock options. Journal of Corporate Finance 9 (5), 591–609.
Cioroianu, I., S. Corbet, and C. Larkin (2020). Guilt through association: Reputational contagion and the Boeing
737-MAX disasters. Economics Letters, 109657.
Corbet, S., D. J. Cumming, B. M. Lucey, M. Peat, and S. A. Vigne (2020). The destabilising effects of cryptocurrency
cybercriminality. Economics Letters 191, 108741.
Corbet, S., C. Larkin, B. Lucey, A. Meegan, and L. Yarovaya (2020). Cryptocurrency reaction to FOMC announcements:
Evidence of heterogeneity based on blockchain stack position. Journal of Financial Stability 46, 100706.
Corbet, S., B. Lucey, A. Urquhart, and L. Yarovaya (2019). Cryptocurrencies as a financial asset: A systematic
analysis. International Review of Financial Analysis 62, 182–199.
Cresci, S., R. Di Pietro, M. Petrocchi, A. Spognardi, and M. Tesconi (2015). Fame for sale: Efficient detection of
fake Twitter followers. Decision Support Systems 80, 56–71.
Cumming, D., R. Dannhauser, and S. Johan (2015). Financial market misconduct and agency conflicts: A synthesis
and future directions. Journal of Corporate Finance 34, 150–168.
Cziraki, P., P. De Goeij, and L. Renneboog (2014). Corporate governance rules and insider trading profits. Review
_of Finance 18_ (1), 67–108.
Danbolt, J., A. Siganos, and E. Vagenas-Nanos (2015). Investor sentiment and bidder announcement abnormal
returns. Journal of Corporate Finance 33, 164–179.
Dermine, J. (2015). Basel III leverage ratio requirement and the probability of bank runs. Journal of Banking &
_Finance 53, 266–277._
D’Mello, R., M. Gruskin, and M. Kulchania (2018). Shareholders valuation of long-term debt and decline in firms’
leverage ratio. Journal of Corporate Finance 48, 352–374.
Easley, D. and M. O’Hara (2004). Information and the cost of capital. Journal of Finance 59 (4), 1553–1583.
Faulkender, M. and R. Wang (2006). Corporate financial policy and the value of cash. The Journal of Finance 61 (4),
1957–1990.
Fidrmuc, J. P., M. Goergen, and L. Renneboog (2006). Insider trading, news releases, and ownership concentration.
_The Journal of Finance 61_ (6), 2931–2973.
Gabaix, X. and D. Laibson (2006). Shrouded attributes, consumer myopia, and information suppression in competitive markets. The Quarterly Journal of Economics 121 (2), 505–540.
Gandal, N., J. Hamrick, T. Moore, and T. Oberman (2018). Price manipulation in the Bitcoin ecosystem. Journal
_of Monetary Economics 95, 86–96._
Griffin, J. and A. Shams (2018). Is Bitcoin really un-Tethered? _Available at SSRN_,
http://dx.doi.org/10.2139/ssrn.3195066.
Grinberg, N., K. Joseph, L. Friedland, B. Swire-Thompson, and D. Lazer (2019). Political science: Fake news on
Twitter during the 2016 U.S. presidential election. Science 363 (6425), 374–378.
Henderson, B. and N. Pearson (2011). The dark side of financial innovation: A case study of the pricing of a retail
financial product. Journal of Financial Economics 100 (2), 227–247.
Hillier, D., A. Korczak, and P. Korczak (2015). The impact of personal attributes on corporate insider trading.
_Journal of Corporate Finance 30, 150–167._
Ho, T. S. and R. Michaely (1988). Information quality and market efficiency. Journal of Financial and Quantitative
_Analysis 23_ (1), 53–70.
Hu, Y., Y. G. Hou, L. Oxley, and S. Corbet (2020). Does blockchain patent-development influence bitcoin risk?
_Journal of International Financial Markets, Institutions and Money, 101263._
Huson, M. and G. MacKinnon (2003). Corporate spinoffs and information asymmetry between investors. Journal of
_Corporate Finance 9_ (4), 481–503.
Iannotta, G., G. Nocera, and A. Resti (2013). Do investors care about credit ratings? an analysis through the cycle.
_Journal of Financial Stability 9_ (4), 545–555.
Jain, A. and C. Jain (2019). Blockchain hysteria: Adding “blockchain” to company’s name. Economics Letters 181,
178–181.
Jeng, L. A., A. Metrick, and R. Zeckhauser (2003). Estimating the returns to insider trading: A performance-evaluation perspective. Review of Economics and Statistics 85 (2), 453–471.
Lee, I., M. Lemmon, Y. Li, and J. M. Sequeira (2014). Do voluntary corporate restrictions on insider trading eliminate
informed insider trading? Journal of Corporate Finance 29, 158–178.
Marin, J. and J. Olivier (2008). The dog that did not bark: Insider trading and crashes. Journal of Finance 63 (5),
2429–2476.
Metz, A. and R. Cantor (2006). Moody’s credit rating prediction model. Special Comments, Credit Ratings, Moody’s
_Investors Service, Global Credit Research._
Moeller, T. and C. Molina (2003). Survival and default of original issue high-yield bonds. _Financial Manage-_
_ment 32_ (1), 83–107.
Ou, S., S. Irfan, Y. Liu, and K. Kanthan (2017). Annual default study: Corporate default and recovery rates,
1920-2016. Data Report-Moody’s Investor Services.
Rashes, M. S. (2001). Massively confused investors making conspicuously ignorant choices (MCI–MCIC). The Journal
_of Finance 56_ (5), 1911–1927.
Shao, C., P.-M. Hui, L. Wang, X. Jiang, A. Flammini, F. Menczer, and G. Ciampaglia (2018). Anatomy of an online
misinformation network. PLoS ONE 13 (4).
Sharma, P., S. Paul, and S. Sharma (2020). What’s in a name? a lot if it has “blockchain”. Economics Letters 186,
108818.
Van Bommel, J. (2002). Messages from market to management: The case of IPOs. Journal of Corporate Finance 8 (2),
123–138.
Van Der Walt, E. and J. Eloff (2018). Using machine learning to detect fake identities: Bots vs humans. IEEE
_Access 6, 6540–6549._
White, R., Y. Marinakis, N. Islam, and S. Walsh (2020). Is bitcoin a currency, a technology-based product, or
something else? Technological Forecasting and Social Change 151, 119877.
Zhou, M., L. Lei, J. Wang, W. Fan, and A. Wang (2015). Social media adoption and corporate disclosure. Journal
_of Information Systems 29_ (2), 23–50.
Figure 1: Frequency and geographical location of identified blockchain-development projects
a) Time-varying representation of corporate announcement of blockchain development
b) Geographical representation of corporate announcement of blockchain development
Note: The corporate announcement period covers from 1 January 2017 to 30 March 2019 (announcement data for traded
companies was not present in a robust manner prior to January 2017). We draw on a combined search of LexisNexis,
Bloomberg and Thomson Reuters Eikon, searching for keywords including: "cryptocurrency", "digital currency",
"blockchain", "distributed ledger", "cryptography", "cryptographic ledger", "digital ledger", "altcoin" and
"cryptocurrency exchange". To obtain a viable observation, a single data observation must be present across the three
search engines and the source was denoted as an international news agency, a mainstream domestic news agency or the
company making the announcement itself. Forums, social media and bespoke news websites were omitted from the search.
Finally, the selected observation is based solely on the confirmed news announcements being made on the same day across
all of the selected sources. If a confirmed article or news release had a varying date of release, it was omitted due to this
associated ambiguity. All observations found to be made on either a Saturday or Sunday (nine announcements in total)
are denoted as active on the following Monday morning. The dataset incorporates 156 total announcements made during
the selected time period. All times are adjusted to GMT, with the official end of day closing price treated as the listed
observation for each comparable company when analysing associated contagion effects.
Figure 2: Tweets relating to corporate blockchain announcements
a) Speculatively-defined corporate blockchain announcements
i) Rumour ii) Official
b) Strategically-defined corporate blockchain announcements
i) Rumour ii) Official
c) Total Twitter activity surrounding corporate blockchain announcements
i) Rumour ii) Official
Note: Twitter data was collected for a period between 1 January 2017 and 31 March 2019 for a list of 156 companies. All
tweets mentioning the name of the company plus either of the terms ‘crypto’, ‘cryptocurrency’ or ‘blockchain’ were
computationally collected through the Search Twitter function on https://twitter.com/explore using the Python
‘twitterscraper’ package. A total number of 954,765 unique tweets were collected. The data was then aggregated by
company and by day, taking the sums of the variables. In a provisional methodology, we identify the very first tweet on
Twitter that was correctly based on the forthcoming corporate blockchain announcement (the ‘official announcement’);
this first tweet is identified as the ‘rumour’ hereafter. In the above figure, we present the average total number of
tweets in the 30 days both before and after the identification of both the date of the ‘rumour’ and the date of the
‘official announcement’. The vertical axis uses a logarithmic scale so as to best represent the scale of the number of
tweets in the days surrounding each event, which is indicated with a line.
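For transparency, a minimal sketch of the aggregation step described above is given below (collection itself was performed with the ‘twitterscraper’ package, as noted; the raw_tweets.csv schema here is an assumed illustration rather than the study's exact data layout).

```python
# Sketch of the per-company, per-day aggregation described in the note.
# The input schema (company, timestamp, retweets, likes, replies) is assumed.
import pandas as pd

tweets = pd.read_csv("raw_tweets.csv", parse_dates=["timestamp"])
tweets["day"] = tweets["timestamp"].dt.floor("D")

daily = (tweets.groupby(["company", "day"])
               .agg(tweets=("timestamp", "size"),   # tweet count per day
                    retweets=("retweets", "sum"),
                    likes=("likes", "sum"),
                    replies=("replies", "sum"))
               .reset_index())
```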
Figure 3: Twitter-based ‘Retweets’ relating to corporate blockchain announcements
a) Speculatively-defined corporate blockchain announcements
i) Rumour ii) Official
b) Strategically-defined corporate blockchain announcements
i) Rumour ii) Official
c) Total Twitter activity surrounding corporate blockchain announcements
i) Rumour ii) Official
Note: Twitter data was collected for a period between 1 January 2017 and 31 March 2019 for a list of 156 companies. All
tweets mentioning the name of the company plus either of the terms ‘crypto’, ‘cryptocurrency’ or ‘blockchain’ were
computationally collected through the Search Twitter function on https://twitter.com/explore using the Python
‘twitterscraper’ package. A total number of 954,765 unique tweets were collected. The data was then aggregated by
company and by day, taking the sums of the variables. In a provisional methodology, we identify the very first tweet on
Twitter that was correctly based on the forthcoming corporate blockchain announcement (the ‘official announcement’);
this first tweet is identified as the ‘rumour’ hereafter. In the above figure, we present the average total number of
retweets in the 30 days both before and after the identification of both the date of the ‘rumour’ and the date of the
‘official announcement’. The vertical axis uses a logarithmic scale so as to best represent the scale of the number of
retweets in the days surrounding each event, which is indicated with a line.
Figure 4: Twitter-based ‘Likes’ relating to corporate blockchain announcements
a) Speculatively-defined corporate blockchain announcements
i) Rumour ii) Official
b) Strategically-defined corporate blockchain announcements
i) Rumour ii) Official
c) Total Twitter activity surrounding corporate blockchain announcements
i) Rumour ii) Official
Note: Twitter data was collected for a period between 1 January 2017 and 31 March 2019 for a list of 156 companies. All
tweets mentioning the name of the company plus either of the terms ‘crypto’, ‘cryptocurrency’ or ‘blockchain’ were
computationally collected through the Search Twitter function on https://twitter.com/explore using the Python
‘twitterscraper’ package. A total number of 954,765 unique tweets were collected. The data was then aggregated by
company and by day, taking the sums of the variables. In a provisional methodology, we identify the very first tweet on
Twitter that was correctly based on the forthcoming corporate blockchain announcement (the ‘official announcement’);
this first tweet is identified as the ‘rumour’ hereafter. In the above figure, we present the average total number of
‘Likes’ in the 30 days both before and after the identification of both the date of the ‘rumour’ and the date of the
‘official announcement’. The vertical axis uses a logarithmic scale so as to best represent the scale of the number of
likes in the days surrounding each event, which is indicated with a line.
Figure 5: Sentiment adapted cumulative abnormal returns
i) Separated by type of blockchain announcement
a) Defined ‘rumour’ b) Defined ‘official announcement’
ii) Separated by the defined reach of social media
a) Defined ‘rumour’ b) Defined ‘official announcement’
iii) Separated by defined sentiment
a) Defined ‘rumour’ b) Defined ‘official announcement’
Note: This figure shows the average sentiment adapted cumulative abnormal returns by type of announcement for a 61-day
window [-30,+30]. Within this context, and building on the work of Akyildirim et al. [2020], speculative announcements are
found to be those relating to the change of corporate identity to include words such as ‘blockchain’ and ‘cryptocurrency’,
and the development of corporate cryptocurrencies. Alternatively, structural development includes announcements relating
to internal security, and internal process, system and technological development. The following analysis is
sub-categorised within these sub-groups throughout. The analyses are repeated for the two defined windows of analysis,
the first surrounding the 30-day period before the first social media ‘rumour’, the second based on the same time frame
surrounding the ‘official announcement’. Reach is defined by the natural log of the number of tweets, retweets and likes.
‘Very Low’ defines the group of companies in the lowest 25th percentile as ranked by tweets in the period 30 days prior to
the announcement in our sample. ‘Low’ represents the 26th through 50th percentile, while ‘Medium’ reach is defined as the
51st through 75th percentile. ‘High’ social media reach companies represent the top 25th percentile as ranked by tweets
30 days prior to the announcement.
Figure 6: Selected corporate performance after blockchain-development announcements
a) Kodak
b) Future Fintech Group
c) Bitcoin Group SE
Note: The above figure presents evidence of the respective share price performance of Kodak, Future Fintech Group and
Bitcoin Group SE, for all daily closing prices on dates since the incorporation of each respective company. The horizontal
line in each individual graph represents the date of a significant speculative-blockchain announcement. For Kodak, this
represents the date of the first official announcement of KODAKOne (9 January 2018). For Future Fintech Group, this
represents the date on which the corporate identity changed from that of SkyPeople Fruit Juice (19 December 2017).
While for Bitcoin Group SE, this date represents the beginning of a period of sharp growth in the price of Bitcoin where
the company held 100% of the shares in Bitcoin Deutschland AG, which operated Germany’s only authorised trading place
for the digital currency Bitcoin under Bitcoin.de (9 October 2017).
Table 1: Summary statistics of Twitter activity and corporate size
Interest Sentiment Company Size Rumour Duration
_By announcement type_
Blockchain Partnership 1.985 2.768 41.590 12.750
Coin Creation 2.899 2.017 12.229 12.564
Investment Fund 2.282 1.672 65.831 8.417
Name Change 2.942 2.894 15.452 15.482
Security Improvements 2.143 2.044 239.239 5.800
Technological Improvement 2.403 2.249 118.994 5.315
Speculative 2.785 2.717 12.229 13.564
Strategic 2.137 1.955 122.486 6.233
_By year_
2017 2.240 2.031 65.363 13.188
2018 2.238 2.164 98.140 11.719
2019 2.412 2.158 101.548 10.548
_By Twitter Activity (Ranked by quintile)_
Some Interest - 1.720 35.442 15.412
Low Interest - 1.990 64.761 11.791
Average Interest - 2.679 69.238 7.667
High Interest - 2.568 155.167 10.529
Very High Interest - 2.683 370.029 8.000
_By Company Size (Ranked by quintile)_
Very Small 1.752 1.800 - 15.909
Small 2.061 2.350 - 19.150
Medium 2.178 2.060 - 6.522
Large 2.514 2.055 - 10.231
Very Large 2.643 2.313 - 11.143
Note: In the table above, we present the key statistics for the scale of interest and the sentiment of the associated
Twitter activity. Interest is sub-divided by quintile of the number of identified tweets, and further separated by type of
blockchain announcement, by the year in which the announcement was made, and by company size. Further, we include a final
column that reports the average time difference, measured in days, between the first identified tweet (denoting the
establishment of the ‘rumour’) and the ‘official’ announcement.
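A minimal sketch of the quintile ranking underlying the ‘By Twitter Activity’ panel is given below, using pandas' qcut; the input file and the one-to-one mapping of quintiles to the interest labels are assumptions for illustration.

```python
# Assumed sketch: rank companies into interest quintiles by tweet volume.
import pandas as pd

per_company = pd.read_csv("company_tweets.csv")  # columns: company, tweets
per_company["interest"] = pd.qcut(
    per_company["tweets"], q=5,
    labels=["Some Interest", "Low Interest", "Average Interest",
            "High Interest", "Very High Interest"])
```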
Table 2: Summary statistics for the logit methodology and marginal effects regression variables
_Total_
Mean Median Std. Dev. Min Max
NITA 0.017 0.005 1.831 -0.908 1.147
CATA 0.258 0.595 0.299 -0.045 1.000
Age 35.912 23.603 32.731 16.658 120.047
Leverage 0.463 0.136 0.196 0.005 5.703
Trade 0.116 0.100 0.094 0.003 0.996
Current 0.201 0.181 0.150 0.009 4.507
Noncurrent 0.115 0.085 0.645 0.000 2.632
_Speculative_
Mean Median Std. Dev. Min Max
NITA -0.012 0.014 0.049 -0.050 0.000
CATA -0.476 0.616 0.012 -0.001 0.991
Age 29.437 21.523 26.969 16.658 119.532
Leverage 0.750 0.139 0.304 0.074 5.703
Trade 0.125 0.100 0.120 0.025 0.996
Current 0.429 0.194 0.236 0.129 4.507
Noncurrent 0.235 0.100 1.019 0.000 2.632
_Strategic_
Mean Median Std. Dev. Min Max
NITA 0.059 0.002 2.894 -0.908 1.147
CATA 1.356 0.528 0.471 -0.045 1.000
Age 40.237 23.651 35.431 22.329 120.047
Leverage 0.271 0.134 0.045 0.005 0.670
Trade 0.110 0.100 0.070 0.003 0.426
Current 0.049 0.175 0.005 0.009 0.147
Noncurrent 0.036 0.079 0.018 0.000 0.051
Note: The above table reports the summary statistics of the regression variables for the companies identified within our
sample and subsequently used in the following logit regressions. The dependent variable takes a value of zero if
the firm is active and not under regulatory investigation, while it receives a value of one if it is insolvent, bankrupt or
under regulatory investigation. Similar to the methodology used by Cathcart et al. [2020], GDP is the 1-year GDP growth
rate; bond is the 3-month government bond interest rate; CDS is the logarithm of the CDS price of government bonds;
NITA is the ratio of net income to total assets; CATA is the ratio of current assets to total assets; AGE is the number of
days since incorporation divided by 365; IMP is a dummy variable that takes a value of one if the identified company is
impaired, defined as being ‘insolvent, bankrupt or under regulatory investigation’. Lev is the ratio of total liabilities to
total assets; Trade is the ratio of trade payables to total assets; Curr is the ratio of current liabilities (minus trade
payables) to total assets; and Noncurr is the ratio of non-current liabilities to total assets.
Table 3: Social media statistics for selected periods as denoted by type of denoted blockchain development announcement
**[-30,-1]** Rumour Official
Speculative Strategic Total Speculative Strategic
Total Average Total Average Total Average Total Average Total Average Total
Tweets 130,790 4,087 677,103 21,159 807,893 25,247 19,385 606 68,989 2,156 88,374
Retweets 192,817 6,026 823,857 25,746 1,016,674 31,771 186,715 5,835 216,718 6,772 403,433
Likes 351,655 10,989 1,614,424 50,451 1,966,079 61,440 340,219 10,632 358,076 11,190 698,295
Replies 29,936 936 133,147 4,161 163,083 5,096 30,834 964 23,889 747 54,723
Interest 2.369 2.669 2.596 2.159 2.772
Positive/Negative 1.847 2.288 2.180 1.802 2.306
Max Polarity 4.042 5.249 4.930 4.972 9.102
Min Polarity -0.333 0.013 -0.069 0.042 2.295
Max Subjectivity 1.546 1.734 1.673 1.937 3.838
Min Subjectivity 0.267 0.338 0.319 0.323 0.687
‘Blockchain’ Mentions 65,716 2,054 513,210 16,038 578,926 18,091 8,682 271 53,321 1,666 62,003
‘Cryptocurrency’ Mentions 82,239 2,570 226,014 7,063 308,253 9,633 13,660 427 22,479 702 36,139
**[0,3]** Rumour Official
Speculative Strategic Total Speculative Strategic
Total Average Total Average Total Average Total Average Total Average Total
Tweets 126,600 31,650 646,736 161,684 773,336 193,334 18,546 4,637 20,410 5,103 38,956
Retweets 175,772 43,943 765,026 191,257 940,798 235,200 214,040 53,510 200,770 50,193 414,810
Likes 326,274 81,569 1,488,686 372,172 1,814,960 453,740 394,880 98,720 328,940 82,235 723,820
Replies 27,037 6,759 121,544 30,386 148,581 37,145 38,330 9,583 21,080 5,270 59,410
Interest 3.545 3.886 3.805 2.919 3.402
Positive/Negative 3.721 4.195 4.084 3.509 3.081
Max Polarity 24.453 23.502 23.543 32.086 24.647
Min Polarity -0.548 3.122 2.287 0.652 7.364
Max Subjectivity 9.766 7.272 7.749 14.630 7.545
Min Subjectivity 1.391 1.291 1.302 1.972 1.256
‘Blockchain’ Mentions 62,696 15,674 498,753 124,688 561,449 140,362 7,768 1,942 16,540 4,135 24,308
‘Cryptocurrency’ Mentions 80,773 20,193 208,065 52,016 288,838 72,210 13,882 3,471 6,479 1,620 20,361
Note: The above table presents the estimated Twitter data in the identified periods as separated by the date of the ‘rumour’ and the date of the ‘official
announcement’.
Table 4: Sentiment adapted cumulative abnormal returns as at the point of both ‘rumour’ and ‘official’ announcement
relating to corporate blockchain announcements
Rumour Official Announcement
[-30,-1] [AR0] [0,3] [-30,-1] [AR0] [0,3]
_Motivation_
Speculative 0.1397 0.1132 0.0465 0.1444 0.1086 0.0527
Structural 0.0171 0.0238 0.0040 0.0757 0.0674 -0.0034
_Reach_
High 0.1785 0.1601 0.0516 0.0438 0.0798 0.0028
Medium 0.1775 0.1296 0.0303 0.0519 0.0702 0.0881
Low 0.0624 0.0714 0.0013 0.0300 0.0547 0.0146
Very Low 0.0426 0.0423 0.0048 0.0918 0.2098 0.0214
_Sentiment_
Negative 0.0747 0.0599 0.0275 -0.0169 0.0822 0.0155
Neutral 0.0251 0.0314 -0.0130 0.0682 0.0821 -0.0344
Positive 0.1568 0.1276 0.0856 0.1695 0.0963 0.1155
Note: The table shows sentiment adapted cumulative abnormal returns for each of the denoted blockchain-developing listed
firms in the time period surrounding both the ‘rumour’ and the ‘official announcement’. Motivation denotes whether each
corporate blockchain decision is defined to be either speculative or strategic. Reach and Sentiment refer, respectively,
to the volume of social media interactions and the estimated sentiment, defined as positive, neutral or negative.
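To clarify the mechanics of the event windows reported here, the sketch below computes a plain market-model cumulative abnormal return over windows such as [-30,-1], [AR0] and [0,3]; the estimation-window length is an assumption, and the sentiment adaptation applied in the paper is omitted.

```python
# Plain market-model CAR over an event window; sentiment adaptation omitted.
import numpy as np
import pandas as pd

def car(stock: pd.Series, market: pd.Series, event_loc: int,
        est_len: int = 120, window: tuple = (-30, -1)) -> float:
    """CAR over `window` (trading days relative to the event at integer
    position `event_loc`); beta/alpha fit on a pre-event window."""
    est = slice(event_loc - est_len - 31, event_loc - 31)
    beta, alpha = np.polyfit(market.iloc[est].values,
                             stock.iloc[est].values, 1)
    lo, hi = window
    s = stock.iloc[event_loc + lo:event_loc + hi + 1].values
    m = market.iloc[event_loc + lo:event_loc + hi + 1].values
    return float(np.sum(s - (alpha + beta * m)))  # e.g. window=(0, 0) for AR0
```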
Table 5: OLS Regressions for the period inclusive of the day before to the day after each event
‘Rumour’ ‘Official Announcement’
Spec1 Spec2 Spec3 Spec4 Spec5 Spec1 Spec2 Spec3 Spec4 Spec5
US 0.221*** 0.238*** 0.270*** 0.285*** 0.318*** 0.116*** 0.107*** 0.126*** 0.124*** 0.149***
(0.071) (0.076) (0.087) (0.091) (0.102) (0.042) (0.039) (0.046) (0.045) (0.054)
Bitcoin 0.152*** 0.147*** 0.105*** 0.111*** 0.124*** 0.080*** 0.066*** 0.049*** 0.048*** 0.058***
(0.049) (0.047) (0.034) (0.036) (0.040) (0.029) (0.024) (0.018) (0.017) (0.021)
Duration -0.003*** -0.002* 0.001*** 0.001***
(0.001) (0.002) (0.000) (0.000)
Reach -0.015*** -0.009 0.034*** 0.044***
(0.009) (0.035) (0.004) (0.005)
Sentiment 0.085*** 0.090 0.034*** 0.053***
(0.052) (0.056) (0.005) (0.006)
Speculative 0.127** 0.137*** 0.030*** 0.037***
(0.084) (0.086) (0.008) (0.009)
Constant 0.050 0.043 0.007 0.079 0.054 0.081 0.018 0.085 0.071 0.061***
(0.088) (0.126) (0.081) (0.099) (0.151) (0.088) (0.124) (0.081) (0.099) (0.015)
Adj R2 0.240 0.230 0.251 0.249 0.283 0.251 0.256 0.254 0.251 0.266
Note: The table shows regression estimates of Sentiment adapted cumulative abnormal returns for the period [-1,+1] for each of the denoted
blockchain-developing listed firms in the time period surrounding both the ‘rumour’ and ‘official announcement’. Duration refers to the time difference as
measured in days between the estimated ‘rumour’ and the ‘official announcement’. Reach and Sentiment refer, respectively, to the volume of social media interactions
and the estimated sentiment, defined as positive, neutral or negative. Speculative is a dummy that takes the value of one if the announcement is
defined to be of a speculative nature and zero otherwise. ***, ** and * indicate level of significance at 1%, 5%, and 10% respectively.
Table 6: OLS Regressions for the day of each type of announcement
‘Rumour’ ‘Official Announcement’
Spec1 Spec2 Spec3 Spec4 Spec5 Spec1 Spec2 Spec3 Spec4 Spec5
US 0.050*** 0.050*** 0.052*** 0.048*** 0.042*** -0.020*** -0.020*** -0.002*** 0.021*** 0.048***
(0.016) (0.016) (0.017) (0.015) (0.013) (0.007) (0.007) (0.001) (0.008) (0.017)
Bitcoin 0.032*** 0.035*** 0.033*** 0.033*** 0.027*** 0.127*** 0.129*** 0.144*** 0.145*** 0.305***
(0.010) (0.011) (0.011) (0.011) (0.009) (0.046) (0.047) (0.052) (0.052) (0.110)
Duration -0.001*** 0.000*** 0.000 0.000
(0.000) (0.000) (0.000) (0.001)
Reach -0.010* -0.008*** 0.008*** 0.012***
(0.005) (0.001) (0.002) (0.023)
Sentiment 0.021*** 0.020 0.032*** 0.043***
(0.011) (0.018) (0.013) (0.028)
Speculative 0.024* 0.027* 0.080* 0.088***
(0.015) (0.018) (0.042) (0.043)
Constant 0.017 0.031 0.005 0.007 0.010 0.051* 0.031 0.042 -0.007 0.053
(0.028) (0.040) (0.026) (0.032) (0.048) (0.044) (0.063) (0.041) (0.049) (0.076)
Adj R2 0.225 0.225 0.234 0.228 0.249 0.214 0.215 0.227 0.247 0.268
Note: The table shows regression estimates of abnormal returns for the period [AR0], for each of the denoted blockchain-developing listed firms in the time
period surrounding both the ‘rumour’ and ‘official announcement’. Duration refers to the time difference as measured in days between the estimated ‘rumour’
and the ‘official announcement’. Reach and Sentiment refer, respectively, to the volume of social media interactions and the estimated sentiment, defined as
positive, neutral or negative. Speculative is a dummy that takes the value of one if the announcement is defined to be of a speculative nature and zero
otherwise. ***, ** and * indicate level of significance at 1%, 5%, and 10% respectively.
Table 7: Default probability: regression results
Specification (1) (2) (3) (4)
Lev 0.834*** 0.943***
(0.011) (0.017)
Lev*IMP 1.368***
(0.037)
Trade 0.227*** 0.304***
(0.066) (0.068)
Trade*IMP 0.289***
(0.019)
Curr 0.766*** 0.321***
(0.021) (0.035)
Curr*IMP 0.426***
(0.031)
Noncurrent 0.327*** 0.231***
(0.024) (0.027)
Noncurrent*IMP 0.296*
(0.175)
DEF 1.548*** 1.592*** 1.590*** 1.008***
(0.152) (0.166) (0.152) (0.223)
GDP -0.041*** -0.041*** -0.040*** -0.044***
(0.001) (0.001) (0.001) (0.001)
Bond 0.052*** 0.053*** 0.051*** 0.054***
(0.001) (0.001) (0.001) (0.001)
CDS 0.094*** 0.094*** 0.102*** 0.102***
(0.002) (0.002) (0.004) (0.004)
NITA -0.113*** -0.113*** -0.080*** -0.129***
(0.031) (0.031) (0.030) (0.027)
CATA 0.183*** 0.182*** 0.633*** 0.540***
(0.047) (0.047) (0.215) (0.221)
Age -0.025*** -0.025*** -0.024*** -0.024***
(0.004) (0.004) (0.004) (0.004)
Constant -1.798*** -1.831*** -2.330*** -1.990***
(0.157) (0.164) (0.241) (0.265)
Observations 11,562 11,562 11,559 11,559
Pseudo-R2 0.0901 0.0904 0.0939 0.0944
Note: This table reports the estimated coefficients for the logit regressions and their robust standard errors clustered at
the firm level (in parentheses). The dependent variable takes a value of zero if the firm is active and not under regulatory
investigation, while it receives a value of one if it is insolvent, bankrupt or under regulatory investigation. Similar to the
methodology used by Cathcart et al. [2020], GDP is the 1-year GDP growth rate; bond is the 3-month government bond
interest rate; CDS is the logarithm of the CDS price of government bonds; NITA is the ratio of net income to total assets;
CATA is the ratio of current assets to total assets; AGE is the number of days since incorporation divided by 365; IMP is
a dummy variable that takes a value of one if the identified company is impaired, defined as being ‘insolvent, bankrupt
or under regulatory investigation’. Lev is the ratio of total liabilities to total assets; Trade is the ratio of trade payables to
total assets; Curr is the ratio of current liabilities (minus trade payables) to total assets; and Noncurr is the ratio of
non-current liabilities to total assets. Independent variables are lagged. ***, ** and * indicate level of significance at 1%,
5%, and 10% respectively.
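A sketch of this specification in statsmodels, with lagged regressors and standard errors clustered at the firm level, is given below; the file and column names are placeholders, and the variable set follows specification (1) only loosely.

```python
# Sketch of a logit with firm-clustered robust standard errors and lagged
# regressors; file and column names are hypothetical placeholders.
import pandas as pd
import statsmodels.api as sm

panel = pd.read_csv("firm_panel.csv").sort_values(["firm_id", "year"])
cols = ["lev", "gdp", "bond", "cds", "nita", "cata", "age"]
panel[cols] = panel.groupby("firm_id")[cols].shift(1)  # lag within firm
panel = panel.dropna(subset=cols)

fit = sm.Logit(panel["impaired"], sm.add_constant(panel[cols])).fit(
    cov_type="cluster", cov_kwds={"groups": panel["firm_id"]}, disp=False)
print(fit.summary())
```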
Table 8: Default probability: average marginal effects
Leverage Trade Current Noncurrent Observations
_Speculative_ 0.022*** 0.024*** 0.031*** 0.015*** 4,642
(0.001) (0.002) (0.002) (0.001)
_Strategic_ 0.003*** 0.004*** 0.005*** 0.004*** 6,507
(0.000) (0.000) (0.000) (0.000)
Note: The table shows average marginal effects of total leverage, trade payables, and current and non-current liabilities to
total assets, and the associated marginal effects when companies are denoted as either having, or not having, previous
technological development experience prior to decisions to partake in speculative or strategic corporate blockchain
development. Standard errors are reported in parentheses. Standard errors of marginal effects are calculated using the
delta method. Lev is the ratio of total liabilities to total assets; Trade is the ratio of trade payables to total assets; Curr is
the ratio of current liabilities (minus trade payables) to total assets; and Noncurr is the ratio of non-current liabilities to
total assets. Average marginal effects of leverage are computed using specification (2) as presented in Table 7. Average
marginal effects of trade payables, and current and non-current liabilities to total assets are computed using specification
(4) of Table 7. Statistical significance is calculated using the Wald test. ***, ** and * indicate level of significance at 1%,
5%, and 10% respectively.
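A minimal sketch of the average-marginal-effect step, fitted separately on the two subsamples as in Table 8, is given below; the `announcement_type` column and the other names are assumptions for illustration.

```python
# Sketch: average marginal effects with delta-method standard errors,
# estimated separately on the speculative and strategic subsamples.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("firm_year_panel.csv")  # hypothetical firm-year panel
cols = ["default", "lev", "gdp", "bond", "cds", "nita", "cata", "age", "firm_id"]

for label, grp in df.dropna(subset=cols).groupby("announcement_type"):
    fit = smf.logit(
        "default ~ lev + gdp + bond + cds + nita + cata + age", data=grp
    ).fit(cov_type="cluster", cov_kwds={"groups": grp["firm_id"]}, disp=0)
    ame = fit.get_margeff(at="overall", method="dydx")  # delta-method SEs
    print(label)
    print(ame.summary())
```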
-----
Table 9: Default probability based on previous technological experience: regression results
| Specification | Speculative (1) | Speculative (2) | Speculative (3) | Speculative (4) | Strategic (1) | Strategic (2) | Strategic (3) | Strategic (4) |
|---|---|---|---|---|---|---|---|---|
| Lev | 0.638*** (0.010) | 0.842*** (0.009) | | | 0.297*** (0.022) | 0.268*** (0.023) | | |
| Lev*IMP | | 0.775*** (0.121) | | | | 0.300*** (0.092) | | |
| Trade | | | 0.126* (0.073) | 0.136*** (0.073) | | | 0.575*** (0.253) | 0.499*** (0.237) |
| Trade*IMP | | | | 0.379* (0.237) | | | | 0.929*** (0.173) |
| Curr | | | 0.079*** (0.035) | 0.102*** (0.031) | | | 0.473*** (0.101) | 0.316*** (0.102) |
| Curr*IMP | | | | 0.142* (0.080) | | | | 0.358* (0.234) |
| Noncurr | | | 0.293*** (0.094) | 0.160*** (0.049) | | | 0.253 (0.113) | 0.146** (0.078) |
| Noncurr*IMP | | | | 0.397*** (0.132) | | | | 0.334* (0.258) |
| GDP | 0.051*** (0.001) | 0.053*** (0.001) | 0.057*** (0.001) | 0.058*** (0.001) | -0.009*** (0.001) | -0.010*** (0.001) | -0.010*** (0.001) | -0.010*** (0.001) |
| Bond | 0.031*** (0.001) | 0.031*** (0.001) | 0.031*** (0.001) | 0.031*** (0.001) | 0.043*** (0.001) | 0.042*** (0.001) | 0.043*** (0.001) | 0.043*** (0.001) |
| CDS | 0.142*** (0.003) | 0.142*** (0.003) | 0.144*** (0.003) | 0.144*** (0.003) | 0.062*** (0.002) | 0.062*** (0.002) | 0.069*** (0.002) | 0.071*** (0.002) |
| NITA | -0.052*** (0.000) | -0.066*** (0.000) | -0.036*** (0.000) | -0.092*** (0.001) | -0.068*** (0.001) | -0.036*** (0.001) | -0.125*** (0.003) | -0.082*** (0.003) |
| CATA | 0.241*** (0.006) | 0.305*** (0.007) | 0.096*** (0.030) | 0.108*** (0.030) | 0.090*** (0.034) | 0.089*** (0.032) | 0.107*** (0.044) | 0.071*** (0.044) |
| Age | 0.000*** (0.000) | 0.000*** (0.000) | 0.000*** (0.000) | 0.000*** (0.000) | 0.000*** (0.000) | -0.001*** (0.000) | -0.001*** (0.000) | -0.001*** (0.000) |
| Constant | -0.656*** (0.150) | -0.671*** (0.153) | -0.929*** (0.295) | -0.130*** (0.320) | -0.619*** (0.349) | -0.797*** (0.385) | -2.206*** (0.573) | -2.478*** (0.583) |
| Pseudo R2 | 0.084 | 0.129 | 0.121 | 0.149 | 0.099 | 0.108 | 0.099 | 0.166 |
Note: This table reports the estimated coefficients for the logit regressions and their robust standard errors, clustered at the firm level (in parentheses). The
dependent variable takes a value of zero if the firm is active and not under regulatory investigation, and a value of one if it is insolvent, bankrupt or under
regulatory investigation. Following the methodology of Cathcart et al. [2020], GDP is the 1-year GDP growth rate; Bond is the 3-month government bond
interest rate; CDS is the logarithm of the CDS price of government bonds; NITA is the ratio of net income to total assets; CATA is the ratio of current assets
to total assets; Age is the number of days since incorporation divided by 365; IMP is a dummy variable that takes a value of one if the identified company is
impaired, defined as 'insolvent, bankrupt or under regulatory investigation'. Lev is the ratio of total liabilities to total assets; Trade is the ratio of trade
payables to total assets; Curr is the ratio of current liabilities (minus trade payables) to total assets; and Noncurr is the ratio of non-current liabilities to total
assets. Independent variables are lagged. ***, ** and * indicate significance at the 1%, 5%, and 10% levels respectively.
-----
Table 10: Default probability: average marginal effects of previous technological experience
| | Speculative: Lev | Trade | Curr | Noncurr | Strategic: Lev | Trade | Curr | Noncurr |
|---|---|---|---|---|---|---|---|---|
| _Experience_ | 0.023*** (0.007) | 0.019*** (0.003) | 0.017*** (0.004) | 0.015*** (0.003) | 0.004*** (0.001) | 0.006*** (0.001) | 0.006*** (0.001) | 0.005*** (0.001) |
| _No Experience_ | 0.042*** (0.011) | 0.032*** (0.006) | 0.030*** (0.005) | 0.034*** (0.004) | 0.015*** (0.006) | 0.019*** (0.006) | 0.017*** (0.006) | 0.015*** (0.004) |
| _Technological differential, no experience_ | 0.019*** | 0.013*** | 0.013*** | 0.019*** | 0.009*** | 0.013*** | 0.011*** | 0.010*** |
Note: The table shows average marginal effects of total leverage, trade payables, and current and non-current liabilities to
total assets, split by whether companies had previous technological development experience prior to undertaking either
speculative or strategic corporate blockchain development. Standard errors, reported in parentheses, are calculated using
the delta method. Lev is the ratio of total liabilities to total assets; Trade is the ratio of trade payables to total assets; Curr
is the ratio of current liabilities (minus trade payables) to total assets; and Noncurr is the ratio of non-current liabilities to
total assets. Average marginal effects of leverage are computed using specification (2) of Table 9; those of trade payables
and of current and non-current liabilities to total assets are computed using specification (4) of Table 9. Statistical
significance is assessed using the Wald test. ***, ** and * indicate significance at the 1%, 5%, and 10% levels respectively.
Table 11: Credit repayment ability and probability of default and credit ratings due to leverage used on corporate
blockchain-development projects by type
| Announcement type | Measure | Ave | Max | Min |
|---|---|---|---|---|
| Blockchain Partnership | CRGR | 23.3 | 37.0 | 3.0 |
| | 1-yr PD (%) | 0.8 | 1.5 | 0.4 |
| Coin Creation | CRGR | 31.6 | 97.0 | 1.0 |
| | 1-yr PD (%) | 1.4 | 14.8 | 0.0 |
| Investment Fund | CRGR | 49.3 | 93.0 | 7.0 |
| | 1-yr PD (%) | 0.3 | 0.9 | 0.0 |
| Name Change | CRGR | 9.5 | 21.0 | 1.0 |
| | 1-yr PD (%) | 4.2 | 24.3 | 0.5 |
| Security Improvements | CRGR | 27.7 | 90.0 | 1.0 |
| | 1-yr PD (%) | 0.7 | 4.0 | 0.1 |
| Technological Improvements | CRGR | 36.7 | 91.0 | 2.0 |
| | 1-yr PD (%) | 0.5 | 2.4 | 0.01 |
| _Speculative_ | CRGR | 23.8 | 97.0 | 1.0 |
| | 1-yr PD (%) | 2.2 | 24.3 | 0.0 |
| _Strategic_ | CRGR | 38.4 | 91.0 | 1.0 |
| | 1-yr PD (%) | 0.5 | 4.0 | 0.1 |
| **Total** | CRGR | 34.0 | 97.0 | 1.0 |
| | 1-yr PD (%) | 0.8 | 24.3 | 0.0 |
Note: In the above table, PD is the estimated 1-year probability of default, separated by the type of corporate blockchain
announcement. CRGR is the Credit Combined Global Rank provided by Thomson Reuters Eikon; this measure is used to
validate and provide robustness to our estimated probability of default. The CRGR is a 1-100 percentile rank of a
company's 1-year probability of default based on the StarMine Combined Credit Risk model, which blends the Structural,
SmartRatios and Text Mining Credit Risk models into one final estimate of credit risk at the company level. Higher scores
indicate that companies are less likely to go bankrupt or to default on their debt obligations within the next twelve
months.
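One simple way to operationalise this validation, sketched below under assumed column names, is a rank correlation between the estimated PDs and the CRGR percentiles; since a higher CRGR means lower default risk, a negative Spearman coefficient would indicate agreement between the two measures.

```python
# Sketch: checking that estimated 1-year PDs line up with the Eikon CRGR
# percentile rank (higher CRGR = safer firm, so expect a negative rank
# correlation). File and column names are illustrative assumptions.
import pandas as pd
from scipy.stats import spearmanr

df = pd.read_csv("firm_pd_crgr.csv")  # hypothetical: one row per firm
rho, pval = spearmanr(df["pd_1yr"], df["crgr"], nan_policy="omit")
print(f"Spearman rho = {rho:.2f} (p-value = {pval:.4f})")
```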
-----
Table 12: Re-estimated credit ratings due to leverage use on corporate blockchain-development projects as defined by previous technological experience
| | | Ave | Max | Min |
|---|---|---|---|---|
| _Speculative_, Pre- | Actual credit rating | Baa1 (8.4) | Aa2 (3.0) | Caa1 (17.0) |
| _Speculative_, Post- | Actual credit rating | Baa3 (9.7) | A1 (5.0) | Caa2 (18.0) |
| | Re-estimated, previous technological experience | Ba1 (11.4) | A3 (7.3) | Ca/C (20.0) |
| | Re-estimated, no previous technological experience | B1 (14.2) | Ba1 (10.7) | Ca/C (20.0) |
| _Strategic_, Pre- | Actual credit rating | A2 (6.0) | Aaa (1.0) | Ba2 (12.0) |
| _Strategic_, Post- | Actual credit rating | A2 (6.4) | Aa1 (2.0) | Ba3 (13.0) |
| | Re-estimated, previous technological experience | A3 (7.2) | Aa2 (2.5) | B1 (13.5) |
| | Re-estimated, no previous technological experience | Baa1 (8.4) | Aa3 (3.7) | B2 (14.7) |
Note: The above table presents the linear transformation methodology used to compare the credit ratings of the companies analysed. Where possible, the
boundary between investment-grade and junk-grade status is used as the separating point between rank 10 and rank 11, and at rank 20 companies are
treated in the same manner whether they are considered near default or in default. We selected Moody's credit ratings as the representative values and used
the linear transformation scale in Table A2 to map S&P and Fitch ratings onto comparable Moody's ratings. The reported ratings are the actual transformed
ratings during the observation period and the re-estimated credit ratings conditional on whether the company has previous technological development
experience.
-----
**Appendices**
Table A1: List of variables and variable description defined in Twitter Sentiment Search
Variable Description
company Company name
company_id Company ID
date Date
number_tweets Number of tweets
retweets Number of retweets
likes Number of likes
replies Number of replies
blockchain Number of mentions of the term ‘blockchain’
crypto Number of mentions of the terms ‘crypto’ or ‘cryptocurrency’
hi_pos Number of positive terms based on Harvard General Inquirer dictionary
hi_neg Number of negative terms based on Harvard General Inquirer dictionary
hi_polarity Polarity (Pos-Neg)/(Pos+Neg) based on Harvard General Inquirer
hi_subjectivity Subjectivity (Pos+Neg)/All_words based on Harvard General Inquirer
lm_pos Number of positive terms based on Loughran-McDonald dictionary
lm_neg Number of negative terms based on Loughran-McDonald dictionary
lm_polarity Polarity (Pos-Neg)/(Pos+Neg) based on Loughran-McDonald dictionary
lm_subjectivity Subjectivity (Pos+Neg)/All_words based on Loughran-McDonald dictionary
Note: Twitter data was collected for a period between 1 January 2017 and 31 March 2019 for a list of 156 companies. All
tweets mentioning the name of the company plus either of the terms ‘crypto’, ‘cryptocurrency’ or ‘blockchain’ were
computationally collected through the Search Twitter function on https://twitter.com/explore using the Python
‘twitterscraper’ package. A total number of 954,765 unique tweets were collected. The above list of variables describes the
format in which the data was obtained.
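The polarity and subjectivity measures defined above are simple ratios; the sketch below computes them for a single tweet, using tiny stand-in word lists in place of the Harvard General Inquirer and Loughran-McDonald dictionaries (the word lists and example text are illustrative assumptions).

```python
# Sketch: polarity and subjectivity as defined in Table A1, computed for
# one tweet against a sentiment dictionary. POSITIVE/NEGATIVE are tiny
# illustrative stand-ins for the actual dictionaries.
POSITIVE = {"gain", "innovative", "growth"}
NEGATIVE = {"fraud", "loss", "risk"}

def polarity_subjectivity(text: str) -> tuple[float, float]:
    words = text.lower().split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    polarity = (pos - neg) / (pos + neg) if pos + neg else 0.0       # (Pos-Neg)/(Pos+Neg)
    subjectivity = (pos + neg) / len(words) if words else 0.0        # (Pos+Neg)/All_words
    return polarity, subjectivity

print(polarity_subjectivity("Blockchain growth fuels innovative gain despite risk"))
# -> (0.5, 0.571...): 3 positive and 1 negative term out of 7 words
```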
Table A2: Linear Transformation Scale for Credit Ratings
| Quality band | Grade | Rank | S&P | Moody's | Fitch |
|---|---|---|---|---|---|
| Highest Quality | Inv. Grade | 1 | AAA | Aaa | AAA |
| | | 2 | AA+ | Aa1 | AA+ |
| High Quality | | 3 | AA | Aa2 | AA |
| | | 4 | AA- | Aa3 | AA- |
| Strong Payment Capacity | | 5 | A+ | A1 | A+ |
| | | 6 | A | A2 | A |
| | | 7 | A- | A3 | A- |
| Adequate payment capacity | | 8 | BBB+ | Baa1 | BBB+ |
| | | 9 | BBB | Baa2 | BBB |
| | | 10 | BBB- | Baa3 | BBB- |
| Likely to survive despite uncertainty | Junk Grade | 11 | BB+ | Ba1 | BB+ |
| | | 12 | BB | Ba2 | BB |
| | | 13 | BB- | Ba3 | BB- |
| High Credit Risk | | 14 | B+ | B1 | B+ |
| | | 15 | B | B2 | B |
| | | 16 | B- | B3 | B- |
| Very High Credit Risk | | 17 | CCC+ | Caa1 | CCC+ |
| | | 18 | CCC | Caa2 | CCC |
| | | 19 | CCC- | Caa3 | CCC- |
| Near Default or In Default | | 20 | CC/SD/D | Ca/C | CC/C/DDD/DD/D |
Note: The above table presents the linear transformation scale used to compare the credit ratings of the companies
analysed. Where possible, the boundary between investment-grade and junk-grade status is used as the separating point
between rank 10 and rank 11. At rank 20, companies are treated in the same manner whether they are considered near
default or in default.
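A minimal sketch of the 20-point mapping for Moody's ratings follows; the dictionary is transcribed from Table A2, while the helper function is our illustrative addition.

```python
# Sketch: mapping Moody's ratings onto the 20-point scale of Table A2 so
# ratings can be compared numerically across agencies.
MOODYS_RANK = {
    "Aaa": 1, "Aa1": 2, "Aa2": 3, "Aa3": 4, "A1": 5, "A2": 6, "A3": 7,
    "Baa1": 8, "Baa2": 9, "Baa3": 10, "Ba1": 11, "Ba2": 12, "Ba3": 13,
    "B1": 14, "B2": 15, "B3": 16, "Caa1": 17, "Caa2": 18, "Caa3": 19,
    "Ca": 20, "C": 20,
}

def is_investment_grade(rating: str) -> bool:
    # Ranks 1-10 are investment grade; ranks 11-20 are junk grade.
    return MOODYS_RANK[rating] <= 10

print(MOODYS_RANK["Baa3"], is_investment_grade("Baa3"))  # -> 10 True
```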
-----
| 35,960
|
{
"disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.2139/ssrn.3758485?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.2139/ssrn.3758485, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "CCBYNCND",
"status": "GREEN",
"url": "https://doras.dcu.ie/25906/1/R4.pdf"
}
| 2021
|
[] | true
| 2021-01-01T00:00:00
|
[
{
"paperId": "7be6075c0222921fb49beac69dc1ce3c096451a1",
"title": "Guilt through Association: Reputational Contagion and the Boeing 737-MAX Disasters"
},
{
"paperId": "3aa3e79ce360208ee69a4837cc9f629e6297cd81",
"title": "The Impact of Blockchain Related Name Changes on Corporate Performance"
},
{
"paperId": "48a5f369794ee1f7c69dc2b75d6b36b75b89f900",
"title": "Leveraging machine learning in the global fight against money laundering and terrorism financing: An affordances perspective"
},
{
"paperId": "b89cc9bbb885072d395300e32f226c1e8df71a23",
"title": "Does Blockchain Patent-Development Influence Bitcoin Risk?"
},
{
"paperId": "f6bcf81c0fc81e421d1fafcf0a11fdb2861b2162",
"title": "Riding the Wave of Crypto-Exuberance: The Potential Misusage of Corporate Blockchain Announcements"
},
{
"paperId": "9618434d5ae8e305620065bf66065be4e298fbed",
"title": "The destabilising effects of cryptocurrency cybercriminality"
},
{
"paperId": "d6faa1112ae766f50510fee65b5a8f7ff29515f7",
"title": "Is Bitcoin a currency, a technology-based product, or something else?"
},
{
"paperId": "d242f0a5e53a64905d288e0c14923f093c6686cd",
"title": "The Differential Impact of Leverage on the Default Risk of Small and Large Firms"
},
{
"paperId": "91b307dab6b5da230190ae6f7e3b126daac100c0",
"title": "Is Bitcoin Really Un-Tethered?"
},
{
"paperId": "87920bfd867361a3460c2c58b547cb4303c83e4c",
"title": "Blockchain hysteria: Adding “blockchain” to company’s name"
},
{
"paperId": "03454a5e070413c3974a2bffdab16a8a085dffac",
"title": "Riding the Blockchain Mania: Public Firms’ Speculative 8-K Disclosures"
},
{
"paperId": "1481e1dcf67729086cfe1cb446cc4b58a48a4c2d",
"title": "Fake news on Twitter during the 2016 U.S. presidential election"
},
{
"paperId": "ef40ccfda8cf89ef85102a6dc81699851be68d0c",
"title": "CEO Overconfidence and the Value of Corporate Cash Holdings"
},
{
"paperId": "c6b979c8ea5008a98e54eb3dd2f7cc69f0ce8e10",
"title": "Cryptocurrency Reaction to FOMC Announcements: Evidence of Heterogeneity Based on Blockchain Stack Position"
},
{
"paperId": "135d16b93ee0b407446740d7651c36393182a0c2",
"title": "I Am a Blockchain Too..."
},
{
"paperId": "845729b538e34beb4a68f6c6c204eb844083e3a5",
"title": "How Valuable Is FinTech Innovation?"
},
{
"paperId": "ef10f7315a0dd3f2a0f7983f3cdfc5fe62ee81e4",
"title": "She is mine: Determinants and value effects of early announcements in takeovers"
},
{
"paperId": "da9da59b5c3753b267df5c2d9702b92fd57a5baa",
"title": "Cryptocurrencies as a Financial Asset: A Systematic Analysis"
},
{
"paperId": "67fe308eebf4e4e47f43a0bd586d909a6081f3f8",
"title": "KODAKCoin: a blockchain revolution or exploiting a potential cryptocurrency bubble?"
},
{
"paperId": "b00a1172b26cff165d94f576bd0b3b99e7204767",
"title": "Using Machine Learning to Detect Fake Identities: Bots vs Humans"
},
{
"paperId": "fe044bf700e7a4c3166f47af6dce9a1a94d73f9c",
"title": "Anatomy of an online misinformation network"
},
{
"paperId": "822fdc3f67b8964639cf98fb36ff3a6d72d65682",
"title": "An Index of the Yields of Junk Bonds, 1910-1955"
},
{
"paperId": "ee4ca761d98981c12c7bf3a83287319da401160f",
"title": "Shareholders Valuation of Long-Term Debt and Decline in Firms’ Leverage Ratio"
},
{
"paperId": "9236488f7e5f9a27fa991232020983968f735e79",
"title": "Price Manipulation in the Bitcoin Ecosystem"
},
{
"paperId": "c8b71ccf612feb28e4c6d4d0115b4ce16569354c",
"title": "Fame for sale: Efficient detection of fake Twitter followers"
},
{
"paperId": "a481d057a5185e3dd3106a4d15d72c129e63f8c0",
"title": "Social Media Adoption and Corporate Disclosure"
},
{
"paperId": "048ae78dcc137a14a0a3c74ed4a45dfb8aa3e473",
"title": "Investor sentiment and bidder announcement abnormal returns"
},
{
"paperId": "aa9100a29e5d0f67844f0f7c5adb0049b2e6b72d",
"title": "Financial Market Misconduct and Agency Conflicts: A Synthesis and Future Directions"
},
{
"paperId": "23911221c1dc47bae967e2244b3a48bac1b5a1a7",
"title": "Exchange Trading Rules, Surveillance, and Suspected Insider Trading"
},
{
"paperId": "313a5e7c7d6c9702ba7b0c92f6ed21b46dad4cff",
"title": "Basel III Leverage Ratio Requirement and the Probability of Bank Runs"
},
{
"paperId": "ee53f91f2fc64199a8cf5204984ca5fa1eb86474",
"title": "Do Voluntary Corporate Restrictions on Insider Trading Eliminate Informed Insider Trading"
},
{
"paperId": "81df2f0dbd32a4644b915106b1c4d8f332f9c278",
"title": "The sovereign-bank rating channel and rating agencies' downgrades during the European debt crisis"
},
{
"paperId": "7d0953ef0b0412fca13545409bdfd2cfc77bf4af",
"title": "The Impact of Personal Attributes on Corporate Insider Trading"
},
{
"paperId": "dfa241161fd4fbbe8152c1052a8e23710389c500",
"title": "Managerial Cash Use, Default, and Corporate Financial Policies"
},
{
"paperId": "8f345253be5225e3d970f7e2b07e5eb6424fbf38",
"title": "Liquidity Risk of Corporate Bond Returns: A Conditional Approach"
},
{
"paperId": "6ca943e13570f32e6e25100a61291fd1f46cb8f3",
"title": "Do Investors Care About Credit Ratings? An Analysis Through the Cycle"
},
{
"paperId": "72c44f0feffe99f69eb9cbf6b300b3ce089e5686",
"title": "Leverage change, debt overhang, and stock prices"
},
{
"paperId": "cc857d68d0ce5f77e19628174fd669acd250158b",
"title": "Corporate Governance Rules and Insider Trading Profits"
},
{
"paperId": "c19a7c2a27b3291a69d69d54f25e3ed43e3eb7f8",
"title": "Business Ethics Should Study Illicit Businesses: To Advance Respect\nfor Human Rights"
},
{
"paperId": "b8debf6a96b6bafa45b65a1acab93b02f41be26a",
"title": "The dark side of financial innovation: A case study of the pricing of a retail financial product"
},
{
"paperId": "b4bc18b49fcf4575a688b36c642fc1c8be318d2a",
"title": "Liquidity Risk of Corporate Bond Returns: A Conditional Approach"
},
{
"paperId": "22fe3b2f1c7861bdd8a7e06eb26377a8548fed59",
"title": "How Did Increased Competition Affect Credit Ratings?"
},
{
"paperId": "8a3ec1cdb6d6868efb40943775dc96401b0715df",
"title": "Exploring the Nature of 'Trader Intuition'"
},
{
"paperId": "07c07549fa50460ca54dbc9c005932f9d9fcb422",
"title": "The Dog That Did Not Bark: Insider Trading and Crashes"
},
{
"paperId": "51057a48d3164f0dd74bcea7a93aecc3afc443a4",
"title": "Cash Holdings and Credit Risk"
},
{
"paperId": "ba4639035b91cfdada43b0b746360bb9babde3ed",
"title": "Information Salience, Investor Sentiment, and Stock Returns: The Case of British Soccer Betting"
},
{
"paperId": "af17cf0a4067a2aaa84618fed68fd0ae73c67d5d",
"title": "Bank Competition and Financial Stability: Friends or Foes?"
},
{
"paperId": "d95ee724379fe39bdc2e196cb8b85f675575dbdc",
"title": "Momentum, Reversal, and Uninformed Traders in Laboratory Markets"
},
{
"paperId": "dc951be6be2da5175c28e1c1e4cbfe6dbf823555",
"title": "Strategic Price Complexity in Retail Financial Markets"
},
{
"paperId": "f23b0ab3ab5ffc91af96d35b48a0153b0fc3ab05",
"title": "Insider Trading, News Releases and Ownership Concentration"
},
{
"paperId": "3e4c63d1ab2ed14bf47dd3f834e6d021eb094191",
"title": "In Search of Distress Risk"
},
{
"paperId": "36d2763cec795f30b67852f193b07261515f8508",
"title": "Marketwide Private Information in Stocks: Forecasting Currency Returns"
},
{
"paperId": "c0e5df2f74076cca0b78a9e5009282901035073c",
"title": "Corporate Financial Policy and the Value of Cash"
},
{
"paperId": "4792223b7edcbfd5b01b391c93bf65a8e872f157",
"title": "Survival and Default of Original Issue High-Yield Bonds"
},
{
"paperId": "75d6f7b67f54624ba8a1e9b97081b9de36624d14",
"title": "Messages from Market to Management: The Case of IPOs"
},
{
"paperId": "720124bf60271642f956d156ded1c8c58aea15e9",
"title": "Information and the Cost of Capital"
},
{
"paperId": "6c675ba349fc1c0f165c61d1b8df08a804720739",
"title": "Massively Confused Investors Making Conspicuously Ignorant Choices (MCI-MCIC)"
},
{
"paperId": "5af85f2fac4ac8e9e03dc3bbf099fbbaa46cf8c1",
"title": "Leverage, Volatility and Executive Stock Options"
},
{
"paperId": "2ad051ea98362da6b85db896798d9b1707e459fd",
"title": "Stock Price Decreases Prior to Executive Stock-Option Grants"
},
{
"paperId": "19ca9c0a8ff3daf8ca1b9ee47309bd55a7ae85c1",
"title": "Corporate Spinoffs and Information Asymmetry between Investors"
},
{
"paperId": "830a8647a667fb0817d8d1a1b562c9fce5dad274",
"title": "Speculation Spreads and the Market Pricing of Proposed Acquisitions"
},
{
"paperId": "197ebd6dd1b03cdc2a61e58aa0bb79f268b627e6",
"title": "Estimating the Returns to Insider Trading: A Performance-Evaluation Perspective"
},
{
"paperId": "a08509120db7d60ec9bc72fcd642543fc86a9018",
"title": "Information Quality and Market Efficiency"
},
{
"paperId": "4951e25fc2d7fd64bfabbb6400b2a648ae2aa899",
"title": "What’s in a name? A lot if it has “blockchain”"
},
{
"paperId": null,
"title": "Annual default study: Corporate default and recovery rates, 1920-2016"
},
{
"paperId": "f7cb415d20f662d706d39e7dd4dd187d1af2f507",
"title": "The dark side of financial innovation: A case study of the pricing of a retail financial product $ Journal of Financial Economics"
},
{
"paperId": null,
"title": "Shrouded attributes, consumer myopia, and information suppression in competitive markets"
},
{
"paperId": "c02e4b4eb1fa1bf7968ce598ee3a3e5067219b49",
"title": "Moody ' s Credit Rating Prediction Model"
},
{
"paperId": "b796d278d39f0c5e25c7422eecb02b492d93c701",
"title": "Messages from market to management : the case of IPOs"
},
{
"paperId": null,
"title": "Blockchain-based information shrouding significantly increases contagion risk • Speculative projects by non-technological firms"
}
] | 35,960
|
en
|
[
{
"category": "Medicine",
"source": "external"
},
{
"category": "Sociology",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/001fe29d66b837d5230f22d8a9c8617895f13a06
|
[
"Medicine"
] | 0.894642
|
Time trends in social contacts before and during the COVID-19 pandemic: the CONNECT study
|
001fe29d66b837d5230f22d8a9c8617895f13a06
|
BMC Public Health
|
[
{
"authorId": "2967909",
"name": "M. Drolet"
},
{
"authorId": "2132650922",
"name": "Aurélie Godbout"
},
{
"authorId": "6240594",
"name": "M. Mondor"
},
{
"authorId": "3659454",
"name": "G. Béraud"
},
{
"authorId": "2132655146",
"name": "Léa Drolet-Roy"
},
{
"authorId": "1398402256",
"name": "Philippe Lemieux-Mellouki"
},
{
"authorId": "2963051",
"name": "E. Demers"
},
{
"authorId": "2234804",
"name": "M. Boily"
},
{
"authorId": "3572567",
"name": "C. Sauvageau"
},
{
"authorId": "5015836",
"name": "G. De Serres"
},
{
"authorId": "2554551",
"name": "N. Hens"
},
{
"authorId": "2556787",
"name": "P. Beutels"
},
{
"authorId": "4527855",
"name": "B. Dervaux"
},
{
"authorId": "31640192",
"name": "M. Brisson"
}
] |
{
"alternate_issns": null,
"alternate_names": null,
"alternate_urls": [
"http://www.pubmedcentral.nih.gov/tocrender.fcgi?journal=63",
"https://bmcpublichealth.biomedcentral.com/"
],
"id": "c81aae02-04bc-4c4d-811c-5043ae130a31",
"issn": "1471-2458",
"name": "BMC Public Health",
"type": "journal",
"url": "http://www.biomedcentral.com/bmcpublichealth/"
}
|
Background Since the beginning of the COVID-19 pandemic, many countries, including Canada, have adopted unprecedented physical distancing measures such as closure of schools and non-essential businesses, and restrictions on gatherings and household visits. We described time trends in social contacts for the pre-pandemic and pandemic periods in Quebec, Canada. Methods CONNECT is a population-based study of social contacts conducted shortly before (2018/2019) and during the COVID-19 pandemic (April 2020 – February 2021), using the same methodology for both periods. We recruited participants by random digit dialing and collected data by self-administered web-based questionnaires. Questionnaires documented socio-demographic characteristics and social contacts for two assigned days. A contact was defined as a two-way conversation at a distance ≤ 2 m or as a physical contact, irrespective of masking. We used weighted generalized linear models with a Poisson distribution and robust variance (taking possible overdispersion into account) to compare the mean number of social contacts over time and by socio-demographic characteristics. Results A total of 1291 and 5516 Quebecers completed the study before and during the pandemic, respectively. Contacts significantly decreased from a mean of 8 contacts/day prior to the pandemic to 3 contacts/day during the spring 2020 lockdown. Contacts remained lower than the pre-COVID period thereafter (lowest = 3 contacts/day during the Christmas 2020/2021 holidays, highest = 5 in September 2020). Contacts at work, during leisure activities/in other locations, and at home with visitors showed the greatest decreases since the beginning of the pandemic. All sociodemographic subgroups showed significant decreases of contacts since the beginning of the pandemic. The mixing matrices illustrated the impact of public health measures (e.g. school closure, gathering restrictions) with fewer contacts between children/teenagers and fewer contacts outside of the three main diagonals of contacts between same-age partners/siblings and between children and their parents. Conclusion Physical distancing measures in Quebec significantly decreased social contacts, which most likely mitigated the spread of COVID-19.
|
## RESEARCH
## Open Access
# Time trends in social contacts before and during the COVID‑19 pandemic: the CONNECT study
### Mélanie Drolet[1], Aurélie Godbout[1,2], Myrto Mondor[1], Guillaume Béraud[3], Léa Drolet‑Roy[1], Philippe Lemieux‑Mellouki[1,2], Alexandre Bureau[2,4], Éric Demers[1], Marie‑Claude Boily[5], Chantal Sauvageau[1,2,6], Gaston De Serres[1,2,6], Niel Hens[7,8], Philippe Beutels[8,9], Benoit Dervaux[10] and Marc Brisson[1,2,5*]
**Abstract**
**Background: Since the beginning of the COVID-19 pandemic, many countries, including Canada, have adopted**
unprecedented physical distancing measures such as closure of schools and non-essential businesses, and restrictions
on gatherings and household visits. We described time trends in social contacts for the pre-pandemic and pandemic
periods in Quebec, Canada.
**Methods: CONNECT is a population-based study of social contacts conducted shortly before (2018/2019) and**
during the COVID-19 pandemic (April 2020 – February 2021), using the same methodology for both periods. We
recruited participants by random digit dialing and collected data by self-administered web-based questionnaires.
Questionnaires documented socio-demographic characteristics and social contacts for two assigned days. A contact
was defined as a two-way conversation at a distance 2 m or as a physical contact, irrespective of masking. We used
≤
weighted generalized linear models with a Poisson distribution and robust variance (taking possible overdispersion
into account) to compare the mean number of social contacts over time and by socio-demographic characteristics.
**Results: A total of 1291 and 5516 Quebecers completed the study before and during the pandemic, respectively.**
Contacts significantly decreased from a mean of 8 contacts/day prior to the pandemic to 3 contacts/day during the
spring 2020 lockdown. Contacts remained lower than the pre-COVID period thereafter (lowest = 3 contacts/day
during the Christmas 2020/2021 holidays, highest = 5 in September 2020). Contacts at work, during leisure activities/
in other locations, and at home with visitors showed the greatest decreases since the beginning of the pandemic.
All sociodemographic subgroups showed significant decreases of contacts since the beginning of the pandemic.
The mixing matrices illustrated the impact of public health measures (e.g. school closure, gathering restrictions) with
fewer contacts between children/teenagers and fewer contacts outside of the three main diagonals of contacts
between same-age partners/siblings and between children and their parents.
**Conclusion: Physical distancing measures in Quebec significantly decreased social contacts, which most likely mitigated the spread of COVID-19.**
*Correspondence: [email protected]
1 Centre de Recherche du CHU de Québec - Université Laval, Québec,
Québec, Canada
Full list of author information is available at the end of the article
-----
**Keywords: COVID-19, Social contacts, Public health, Social distancing measures, Mathematical modeling, Infectious disease**
**Background**
On September 1st 2021, Canada surpassed 1.5 million
confirmed cases of COVID-19, and > 25% of these cases
were from Quebec [1]. While the province of Quebec was
the epicenter of the first wave, most Canadian provinces
experienced stronger second and third waves in terms
of cases and hospitalisations. Since the beginning of the
pandemic, Canada has adopted unprecedented physical distancing measures from complete lockdowns to a
combination of school and non-essential businesses closures and restrictions on gatherings and household visits,
depending on epidemiological indicators and regions [2].
Given that physical distancing measures are a cornerstone of public health COVID-19 mitigation efforts, it is
important to examine how social contacts changed over
time: 1) to better understand the dynamics of the pandemic, 2) to inform future measures, and 3) to provide
crucial data for mathematical modeling. To our knowledge, this is one of the few population-based studies that
has compared social contacts documented shortly before
and during the COVID-19 pandemic using the same
methodology [3].
The main objective of this study is to describe the
time trends in social contacts for the COVID-19 pre-pandemic (2018–2019) and pandemic periods (April
2020-February 2021) in Quebec, Canada using a social
contact survey and a representative sample of the population. Specific objectives are to describe the time trends
in the number of social contacts, overall and by location
(home, work, school, public transport, leisure, other) and
by key socio-demographic characteristics.
**Methods**
**Study design**
CONNECT (CONtact and Network Estimation to Control Transmission) is a population-based survey of epidemiologically relevant social contacts and mixing patterns
conducted in the province of Quebec, Canada. The first
phase of CONNECT was conducted in 2018–2019 (February 2018 to March 2019), one year before the COVID-19 pandemic. Four additional phases of CONNECT
were undertaken to document changes in social contacts
during the COVID-19 pandemic period (CONNECT2:
April 21st - May 25th 2020 and CONNECT3,4,5: July 3rd 2020 - February 26th 2021) (Additional file 1: Table S1).
All CONNECT phases were conducted with the same
methodology.
**Recruitment of participants**
The target population of CONNECT consisted of
all non-institutionalized Quebecers without any age
restriction (e.g., elderly living in retirement homes who
generally have personal phone lines were eligible but
those living in long-term care homes (nursing homes,
Quebec CHSLD) were not eligible). We used random
digit dialling to recruit participants. The randomly
generated landline and mobile phone number sample
was provided by ASDE, a Canadian firm specialized in
survey sampling [4]. After having explained the study,
verified eligibility of the household and documented
the age and sex of all household members, we randomly
selected one person per household to participate in the
study, using a probability sample stratified by age. This
recruitment procedure was sequentially repeated for
every new phase of CONNECT (i.e., new participants
were recruited for every CONNECT phase).
**Data collection**
We collected data using a self-administered web-based
questionnaire. A secured individualized web link to the
questionnaire and information about the study were
sent by email to each selected participant who consented to participate in the study. Parents of children
aged less than 12 years were asked to complete the
questionnaire on behalf of their child, whereas teenagers aged 12–17 years completed their own questionnaire, after parental consent.
The same questionnaire was used for all CONNECT
phases. The first section of the questionnaire documented key socio-demographic characteristics. The
second section was a social contact diary, based on
instruments previously used in Polymod and other
similar studies [5–7] (an example of the diary is provided in the Additional file 1: Figure S1). Briefly, participants were assigned two random days of the week
(one weekday and one weekend day) to record every
different person they had contact with between 5 am
and 5 am the following morning. A contact was defined
as either physical (handshake, hug, kiss) or nonphysical
(two-way conversation in the physical presence of the
person, at a distance equal or less than 2 m, irrespective
of masking). Participants provided the characteristics
of the contact persons (age, sex, ethnicity, and relationship to themselves (e.g., household member, friend, colleague)) as well as characteristics of the contacts with
-----
this person: location where the contact(s) occurred
(home, work, daycare/school, public transport, leisure,
other location), duration, usual frequency of contact
with that person, and whether the contact was physical or not. Participants reporting more than 20 professional contacts per day were asked not to report all
their professional contacts in the diary. Instead they
were asked general questions about these professional
contacts: age groups of the majority of contact persons, average durations of contacts and whether physical contacts were generally involved or not. Additional
questions about teleworking were included from CONNECT2 onwards.
All CONNECT phases were approved by the ethics
committee of the CHU de Québec research center (project 2016–2172) and we commissioned the market company Advanis for recruitment and data collection. All
participants gave their consent to participate in the study
during the recruitment phone call. Informed consent was
taken from a parent and/or legal guardian for study participation in the case of minors.
**Analyses**
We weighted the participants of the CONNECT 1–5
surveys by age, sex, region (Greater Montreal and
other Quebec regions), and household composition
(households without 0–17-year-olds, households with
0–5-year-olds, if not with 6–14-year-olds, if not with
15–17-year-olds), using the Quebec data of the 2016
Canadian census (Additional file 1: Table S2) and we verified that they were representative of the Quebec population for key socio-demographic characteristics. To obtain
daily number of social contacts on a weekly basis, we
weighted the number of daily contacts reported during
the week (5/7) and the weekend (2/7). We classified the
type of employment of workers using the 2016 National
occupation classification (NOC) [8].
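As an illustration of the two weighting steps just described, the sketch below post-stratifies the sample to census proportions and then combines the two diary days with 5/7 and 2/7 weights; the file and column names are assumptions for illustration, not the study's actual (SAS) data or code.

```python
# Sketch of the two weighting steps: (i) post-stratification of participants
# to 2016 census proportions, (ii) 5/7 weekday + 2/7 weekend weighting of
# the two diary days.
import pandas as pd

participants = pd.read_csv("connect_participants.csv")  # hypothetical
census = pd.read_csv("census_2016_strata.csv")          # stratum shares

# (i) weight = census share of the stratum / sample share of the stratum
strata = ["age_group", "sex", "region", "household_comp"]
sample_share = participants.groupby(strata).size() / len(participants)
weights = (census.set_index(strata)["share"] / sample_share).rename("weight")
participants = participants.merge(weights.reset_index(), on=strata, how="left")

# (ii) daily number of contacts on a weekly basis
participants["contacts_weekly"] = (
    5 / 7 * participants["contacts_weekday"]
    + 2 / 7 * participants["contacts_weekend"]
)
```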
We estimated the number of social contacts per person
and per day, for all locations combined and for 6 different locations: home, work, school, public transportation,
leisure, and other locations. To do so, several steps were
necessary. First, for a contact person met in more than
a single location during a single day, the location of the
contact was assigned in the following hierarchical order,
according to risk of transmission: home, work, school,
public transport, leisure and other locations [9]. For
example, if a parent reported contacts with his child at
home, in public transportation and in a leisure activity,
we only considered the home contact to avoid counting
contacts with the same person multiple times. Second,
for workers reporting more than 20 professional contacts per working day, we added their reported number
of professional contacts to the work location for their
working day(s). Similar to other studies which allowed a
maximal number of contacts per day [5, 6, 10], we truncated professional contacts at a maximum of 40 per day
to eliminate extreme values and contacts at low risk of
transmission of infectious diseases. Third, we identified
all workers in schools through their NOC code and job
descriptions and attributed their professional contacts to
the school location. We did so to describe social contacts
in schools, not only between students, but also between
students and their teachers, educators, and other school’s
workers. Unless specified, we estimated the mean
number of contacts in the different locations using a
population-based denominator. With this method, all
individuals were considered in the denominator of each
location, whether or not they reported contacts for that location. The sum of contacts in the different locations gives
the total number of contacts.
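The hierarchical location assignment and the truncation rule lend themselves to a few lines of code; the sketch below is our illustration of the rules as described, with hypothetical function and column names.

```python
# Sketch: a contact person met in several locations on the same day is
# counted once, in the highest-risk location; professional contacts are
# truncated at 40 per day, as described above.
HIERARCHY = ["home", "work", "school", "transport", "leisure", "other"]

def assign_location(locations_met: set[str]) -> str:
    """Return the single location credited for this contact person."""
    for loc in HIERARCHY:  # ordered by assumed risk of transmission
        if loc in locations_met:
            return loc
    raise ValueError("no known location reported")

def total_work_contacts(diary_work: int, professional: int) -> int:
    """Add reported professional contacts, truncated at 40 per day."""
    return diary_work + min(professional, 40)

print(assign_location({"leisure", "transport", "home"}))  # -> "home"
```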
Using data available from CONNECT1-5, we determined different periods to reflect the Quebec COVID-19
epidemiology, their related physical distancing measures,
and expected seasonality in social contacts (Additional
file 1: Figure S2). We used data collected from February
1st 2018 to March 17th 2019 as our pre-COVID period. We used data collected from April 21st to May 25th 2020 to represent the first wave, data collected from July 3rd to August 31st 2020 to represent the summer, and data collected from September 1st 2020 to February 26th 2021 to
represent the second wave. We further stratified the second wave to represent periods of expected seasonality in
social contacts: September with the return to school and
at work, fall with gathering restrictions, the Christmas
holidays with school and work vacations and closure of
non-essential business, January and February 2021 with
the gradual return to work and school after Christmas
vacations and school/non-essential business closures,
and the introduction of a curfew. We used a Canadian
stringency index, adapted from the Oxford COVID-19
Government Response Tracker (OxCGRT) [11], to quantify the intensity of public health measures in Quebec
over time [12]. This index is obtained by averaging the
intensity score of 12 policy indicators (e.g., school closures, workplace closures, gathering restrictions, stayat-home requirements, etc.) and higher values indicate
stricter measures. We estimated the mean stringency
index for each of the 8 periods described previously by
averaging the daily values of the index.
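A minimal sketch of this averaging step is shown below; the indicator column names and CSV file are assumptions, while the period boundaries are taken from the text above.

```python
# Sketch: mean stringency index per analysis period, obtained by averaging
# daily values of an index that itself averages 12 policy-indicator scores.
import pandas as pd

daily = pd.read_csv("stringency_quebec.csv", parse_dates=["date"])  # hypothetical
daily["index"] = daily[[f"indicator_{i}" for i in range(1, 13)]].mean(axis=1)

periods = {
    "pre-COVID": ("2018-02-01", "2019-03-17"),
    "first wave": ("2020-04-21", "2020-05-25"),
    "summer 2020": ("2020-07-03", "2020-08-31"),
}
for name, (start, end) in periods.items():
    mask = daily["date"].between(start, end)
    print(name, round(daily.loc[mask, "index"].mean(), 1))
```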
We compared the mean number of social contacts
over time (total or by location) using weighted generalized linear models with a Poisson distribution and an
identity link. Generalized estimating equations with
robust variance [13] were used to account for the correlation between the two days of diary data collection
and overdispersion. A categorical period effect was
-----
included in the model and is presented as the absolute
difference in the mean number of contacts compared to
the previous period. We also compared the mean number of social contacts according to different key sociodemographic characteristics using the same model with
a period-by-covariate interaction and adjusting for age
(in 8 categories). In this model, period and characteristic effects were tested using contrasts: each period was
compared to the previous period within each level of
the covariate, and the global effect of the characteristic
was tested within each period. We also examined the
association between the mean number of social contacts and the stringency index (in 5 categories), irrespective of periods, using a model similar to the one
comparing periods.
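A minimal sketch of this comparison model in Python/statsmodels is given below (the study itself used SAS 9.4); the long-format data frame and its column names are assumptions for illustration.

```python
# Sketch: weighted Poisson GEE with an identity link and robust (sandwich)
# variance, clustering the two diary days within each participant.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

diaries = pd.read_csv("connect_diaries_long.csv")  # one row per participant-day

model = smf.gee(
    "n_contacts ~ C(period)",            # categorical period effect
    groups="participant_id",             # two diary days per participant
    data=diaries,
    family=sm.families.Poisson(link=sm.families.links.Identity()),
    weights=diaries["weight"],           # post-stratification weights
)
result = model.fit()  # GEE reports robust (sandwich) variance by default
print(result.summary())
```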
Finally, we estimated mixing matrices. The entries of
the mixing matrix represent the mean number of social
contacts per person per day according to the age of the
respondent (column) and the age of his contacts (row).
Mixing matrices were estimated separately for the 8 periods described previously and for 3 categories of contact
locations: all locations, home (contacts with household
members and visitors), any location outside home. The
matrices were obtained by maximizing a constrained log
likelihood of the number of reported contacts per day
among CONNECT participants weighted by age, sex,
household composition and region. The number of contacts was assumed to follow a negative binomial distribution. The likelihood constraint ensured that the total
number of contacts between individuals of age i and age
j is the same whether it is estimated from entry (i,j) or
entry (j,i) of the total mixing matrix including contacts in
all locations (i.e., reciprocity of the mixing matrix).
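Written out, the reciprocity constraint amounts to the following (our notational paraphrase of the text: $m_{ij}$ is the mean daily number of contacts with persons of age group $i$ reported by a respondent of age group $j$, and $N_j$ is the population size of age group $j$):

```latex
% Reciprocity of the total mixing matrix: the total number of daily
% contacts between age groups i and j must match in both directions.
m_{ij}\,N_j \;=\; m_{ji}\,N_i \qquad \text{for all age groups } i, j.
```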
All statistical analyses were performed with SAS version 9.4. Maximization of the log likelihood for the
mixing matrices was performed using a nonlinear programming algorithm (nlminb2 function from the ROI
package in R).
**Results**
**Participants**
A total of 1291, 546, and 4970 Quebecers completed the
social contact questionnaires during the pre-COVID
period (CONNECT1), the first wave (CONNECT2), and
summer 2020 and second wave (CONNECT3-5), respectively. Participation rates (number of questionnaires
completed among consenting participants) were 30%,
38%, and 34% for CONNECT 1, 2, and 3–5 respectively
(Additional file 1: Figure S3). These participants were
generally representative of the Quebec general population, and they were comparable across the different
phases of CONNECT (Table 1).
**Time trends in the number of social contacts**
During the pre-pandemic period, the mean number of
social contacts per person per day was 7.8 (95% confidence interval (CI):7.2–8.5) (Fig. 1 and Additional file 1:
Table S3). This number decreased significantly by 60%
during the spring 2020 lockdown to 3.1 (95% CI:2.6–3.5).
It then increased gradually during summer 2020 and
peaked at 5.0 (95% CI:4.3–5.8) contacts/day in September 2020; this peak coincided with the return to school
and work. The mean number of contacts decreased significantly again during fall 2020 to 4.1 (95% CI:3.7–4.5)
when physical distancing measures were intensified in
Quebec to control the second wave. The mean number
of social contacts also decreased significantly during the
Christmas holidays at 2.9 (95% CI:2.7–3.1) because of
school and work vacations and closure of non-essential
businesses. There was a trend towards increasing numbers of contacts in January (3.5, 95% CI (3.0–3.9)) with
the gradual return to school and in February 2021 (4.0,
95% CI (3.3–4.6)) with the re-opening of non-essential
businesses. These time trends in social contacts closely
followed the intensity of public health measures as quantified by the stringency index (Fig. 1). The mean number of contacts was also significantly associated with
the stringency index, irrespective of periods (Additional
file 1: Table S4 and Figure S4).
During the pre-pandemic period, the great majority of contacts occurred at home (2.3 contacts: 1.2 with
household members and 1.1 with visitors), at work (2.7
contacts) and at school (1.6 contacts) (Fig. 1 and Additional file 1: Table S3). The mean number of contacts at
home with household members remained relatively constant over time (1.2 to 1.4 contacts), whereas the number of contacts at home with visitors varied significantly
through the study period with lower numbers observed
during the spring 2020 lockdown, in January and February 2021 (0.2–0.3 contacts). Compared to the pre-pandemic period, contacts at work and school decreased
significantly during the spring lockdown (1.2 and 0.0,
respectively), summer (1.2 and 0.2) and the holidays (0.8
and 0.0) and peaked in September 2020 (1.5 and 0.9) with
the return to work and school. Contacts in the other locations (transport, leisure and other locations) represented
a small proportion of overall contacts during the prepandemic period (1.3 contacts). They also decreased significantly since the beginning of the pandemic and stayed
low through the study period.
**Time trends in the number of social contacts by age**
The location of social contacts varied substantially by
age (Fig. 2, Additional file 1: Table S5). Contacts in
households represented an important part of contacts
-----
**Table 1 Key socio-demographic characteristics of CONNECT participants and the Quebec general population**
**2016 Census** **Pre-COVID** **1st wave** **Summer 2020 and 2nd wave**
**%** **N** **%weighted** **N** **%weighted** **N** **% weighted**
**Total** **1291** **546** **4970**
**Age**
0–5 yrs old 6.2 222 6.6 40 8.4 298 7.0
6–11 yrs old 6.6 163 8.1 31 5.7 225 7.5
12–17 yrs old 5.9 91 6.8 60 8.0 506 7.3
18–25 yrs old 9.3 98 8.5 53 9.1 506 9.3
26–45 yrs old 26.4 204 27.9 181 26.3 1304 26.8
46–65 yrs old 27.6 303 25.8 131 25.8 1539 26.0
66–75 yrs old 10.6 162 9.7 45 9.8 514 9.7
> 75 yrs old 7.4 48 6.4 5 6.8 78 6.5
**Sex**
Male 49.6 609 49.9 238 49.2 2515 50.0
Female 50.4 682 50.1 308 50.8 2455 50.0
**Region**
Rural 18.8 239 15.5 66 12.1 836 16.2
Urban 81.2 1049 84.5 480 87.9 4134 83.8
**Region**
Greater Montreal[€] 61.0 635 61.0 371 61.0 2815 60.9
Other Quebec regions 39.0 642 39.0 175 39.0 2152 39.1
**Household size**
1 33.3 239 23.2 125 23.3 968 19.6
2 34.8 408 34.1 188 36.2 2049 39.6
3 13.9 198 13.4 78 13.1 738 15.5
4 12.1 268 17.1 108 18.9 824 17.8
5+ 6.0 178 12.3 47 8.5 391 7.6
**Household composition**
Without 0–17-year-olds 61.0 734 65.2 346 65.2 3345 64.8
With 0–5-year-olds 17.1 256 16.2 84 16.4 743 16.6
If not, with 6–14-year-olds 14.0 255 16.1 90 15.8 734 16.0
If not, with 15–17-year-olds 2.6 46 2.5 26 2.5 148 2.6
0–17 without information 5.3 – – – – – –
**Level of education (among 25–64 yrs)**
No diploma, no degree 13.3 38 7.7 18 6.8 112 3.7
Secondary (high) school 18.5 63 12.1 34 12.5 293 10.1
College, cegep, other non-university certificate/diploma 38.8 184 37.6 99 32.3 916 31.9
University 29.3 210 42.7 164 48.4 1483 54.2
**Employment rate**
among 15–19 yrs old 47.3 10 22.9 8 22.2 98 31.7
among 20–24 yrs old 72.1 37 42.5 20 55.4 160 51.1
among 25–44 yrs old 85.2 179 81.8 147 83.9 1183 89.6
among 45–64 yrs old 71.4 167 59.3 99 71.0 1074 72.8
among ≥ 65 yrs old 10.3 22 8.6 10 13.7 159 19.6
**Participation rate in education**
among 18–24 yrs old 55.0 51 67.9 33 68.9 319 70.3
among 25–29 yrs old 14.0 12 19.5 7 16.5 66 18.2
among 30–34 yrs old 8.0 7 15.0 5 8.5 40 9.8
-----
**Table 1** (continued)
**2016 Census** **Pre-COVID** **1st wave** **Summer 2020 and 2nd wave**
**%** **N** **%weighted** **N** **%weighted** **N** **% weighted**
**Race/Ethnicity**
Caucasian 87.0 1124 88.5 460 87.6 4449 90.3
Other 13.0 156 11.5 77 12.4 455 9.7
Missing 11 – 9 – 66 –
**Country of origin**
Canadian-born 85.0 1208 91.5 472 88.9 4498 89.8
Foreign-born 15.0 83 8.5 72 11.1 466 10.2
Missing 0 – 2 – 6 –
**Mother tongue**
English 8.0 77 7.0 43 6.8 351 7.4
French 79.0 1122 88.1 440 84.1 4288 86.3
Other 13.0 54 4.9 56 9.2 291 6.3
Missing 38 – 7 – 40 –
**Type of occupation (workers)[*]**
0. Management 9.8 52 13.3 35 13.5 346 12.5
1. Business, finance, administration 15.9 58 13.2 56 17.4 510 19.0
2. Natural & applied sciences 6.7 31 8.2 33 11.6 350 13.6
3. Health 7.0 48 10.8 32 9.6 224 8.3
4. Education, law & social, community & gov. service 11.8 89 20.3 48 15.4 446 16.2
5. Art, culture, recreation & sport 3.2 15 3.4 12 4.1 115 4.8
6. Sales & services 23.2 67 15.1 41 12.2 382 13.7
7. Trades, transport & equipment operators 13.5 43 12.6 26 11.0 240 8.6
8. Natural resources, agriculture & related production 1.6 3 1.0 3 1.4 21 0.9
9. Manufacturing & utilities 4.9 6 2.0 11 3.8 64 2.4
Unknown 2.4 3 – 2 – 48 –
Pre-COVID: February 1st 2018 to March 17th 2019; 1st wave: April 21st to May 25th 2020; Summer 2020 and 2nd wave: July 3rd 2020 to February 26th 2021
€ Greater Montreal: Regions of Montréal, Laval, Montérégie, Lanaudière, Laurentides
* 2016 National occupation classification
for all age groups. Contacts in other locations were
highly dependent on age. The main locations of contacts away from home for individuals aged 0–17, 18–65,
and > 65 years were, respectively, school, work, and other
locations.
During the pre-pandemic period, the mean number of social contacts at school/daycare for youth aged
0–17 years was 3.3 contacts (Fig. 2A). These contacts significantly decreased to nearly 0 during the spring 2020
lockdown and the Christmas holidays. They reached the
pre-pandemic level with the return to school in September (4.0 contacts), during fall 2020 (3.0) and in February
2021 (3.7). Except for post-secondary, similar time trends
in contacts at school/daycare were observed by education
level (daycare, elementary, high school) (Additional file 1:
Table S6).
During the pre-pandemic period, the mean number
of contacts at work for adults aged 18–65 years was 5.6
(Fig. 2B). These contacts significantly decreased to 1.7
during the spring 2020 lockdown and thereafter remained
significantly lower than the pre-pandemic period (from
1.3 during the Christmas holidays to 2.7 in September).
The number of contacts at work varied by the type of
occupation and the proportion of workers reporting teleworking, and therefore having no contact at work (Additional file 1: Tables S7,S8). During the pre-pandemic
period, the greatest number of contacts at work were
reported by workers in the domains of Sales & services
(10.8), Management (10.2), and Health (10.1). Contacts at
work decreased during the spring 2020 lockdown for the
majority of domains and remained lower than the pre-pandemic period thereafter. Except for workers in the
domains of Health and Sales & Services, the majority of
workers in the other domains (> 50%) reported teleworking since the beginning of the pandemic.
During the pre-pandemic period, the mean number of
social contacts in other locations for adults older than
65 years was 1.6 (Fig. 2C). These contacts decreased
-----
significantly at the beginning of the pandemic and
remained low through the study period (between 0.2 and
0.8). Therefore, adults older than 65 years had virtually
no contact outside their house during this period.
**Time trends in the number of social contacts by key**
**socio‑demographic characteristics**
During the pre-pandemic period, the mean number
of social contacts was significantly higher among individuals living in households with ≥ 3 individuals (vs
households with 1–2 individuals), in households with
0–17-year-olds (vs households without 0–17-year-olds), among native French or English speakers (vs other
mother tongues), and among individuals with a university degree (vs no degree) (Fig. 3, Additional file 1: Table
S5). During the first wave, social contacts significantly
decreased for most socio-demographic characteristics.
The mean number of social contacts slightly increased
after the first wave for all socio-demographic characteristics, although it remained lower than the pre-pandemic
period through the study period. During the second
wave, the only significant differences between sociodemographic characteristics were a higher number of
contacts in households with more individuals and/or
households with 0–17-year-olds, mainly explained by the
greater number of contacts with household members. Of
note, individuals with a university degree had the greatest decrease of their social contacts during the first wave
(from 10.7 to 2.5, p < 0.0001) and their contacts remained
relatively low through the study period (2.5 to 4.4).
**Time trends in social contact matrices**
During the pre-pandemic period, the mixing matrices indicated a high assortativity of contacts by age (as
illustrated by the central diagonal), and mixing between
children and adults, mainly at home (as illustrated by
the 2 secondary diagonals) (Fig. 4). These general mixing patterns with 3 diagonals remained apparent during
the different pandemic periods, even though the number of contacts was substantially reduced. Interestingly,
the matrices of contacts in any location outside home
clearly illustrate the impact of school closures or holidays
(Spring 2020, Summer 2020, Holidays 2020–21) with
fewer contacts between children/teenagers. The matrices
of contacts at home (with household members and visitors) also illustrate the impact of restrictions on private
gatherings (Spring 2020, Fall 2020 to February 2021) with
fewer contacts outside the 3 main diagonals and contacts
limited to household members (i.e., same-age partners/
siblings and children/parents).
**Discussion**
Public health measures to control the COVID-19
spread in Quebec had a significant impact on the number of social contacts. Contacts decreased from a mean
of 8 contacts per day prior to the pandemic to 3 contacts per day during the spring 2020 lockdown (60%
decline vs pre-COVID). Contacts then increased gradually during the 2020 summer to peak at 5 contacts
per day in September with the return to school and at
work (36% decline vs pre-COVID). Contacts decreased
thereafter during the fall 2020 and winter 2021 to about
4 contacts per day as the physical distancing measures
were intensified in Quebec to control the second wave
of COVID-19 (47% decline vs pre-COVID). Contacts at
work, at school, in leisure activities, and at home with
visitors showed the greatest changes through the study
period. Before the pandemic, adults aged 18–65 years,
individuals with a university degree, those living in
households with 3 or more individuals and/or in households with 0–17-year-olds, and native French or English speakers reported the greatest number of social
contacts. Contacts decreased significantly among all
socio-demographic subgroups during the spring 2020
lockdown and remained lower than the pre-pandemic
period through the study period.
Our results indicating a 60% reduction of social contacts during the spring 2020 lockdown in Quebec (from
7.8 to 3.1) are generally consistent with the results from
similar studies. The CoMix survey, an ongoing empirical
study of social contacts conducted in several European
countries [14–16], estimated a 70–80% reduction in the
number of social contacts during the spring 2020 lockdown compared to similar studies conducted in 2006
(POLYMOD) and 2010 [5, 17]. For example, contacts
decreased from 10.8 in 2006 [5] to 2.8 contacts per day during the lockdown in the United Kingdom [14]. A Canadian study also estimated a 56–80% reduction in social contacts in May, July, September and December 2020 [18] compared to POLYMOD data collected in the United Kingdom in 2006 [5]. However, it is difficult to determine,
from these studies, which part of the decrease is related
to socio-demographic changes between 2006/2010 and
2020 and to the COVID-19 pandemic. Furthermore, the
authors of the Canadian study recognized that social
contacts collected in the United Kingdom in 2006 may
not be representative of Canadian contacts before the
pandemic [18]. Other studies from different countries
(e.g. Belgium, France, Germany, Italy, Netherlands, Spain,
United Kingdom, United States, Luxembourg, China)
have also estimated a mean of around 3 contacts per day
during the spring 2020 lockdown period [3, 15, 19–22]
and similar increasing trends in social contacts after the
first lockdown when physical distancing measures were
relaxed [3, 15, 22, 23]. Our results are also consistent with
Google phone mobility data for Quebec showing substantial decreases in visits of about 80% in retail & recreation, work, and transit transportation stations during the
spring 2020 lockdown compared to January 2020. Mobility increased thereafter but remained lower than the
pre-COVID levels for these locations (mean decreases of
20%, 25% and 45% for visits in retail & recreation, work,
and transit transportation stations, respectively, from
September to Mid-December 2020) [24].
To our knowledge, this is one of the few population-based studies of social contacts worldwide to compare social contacts during the pandemic to those documented shortly before the pandemic using the same methodology. Only one other study, conducted in the Netherlands, included social contacts documented shortly before the pandemic (in 2016–2017) and during the pandemic using the same methodology [3]. However, CONNECT has
some limitations. Firstly, previous data suggested that
social contacts measured with survey methodology could
underestimate the number of social contacts compared
with a sensor-based methodology, particularly for contacts of
short duration [25, 26]. More specifically, parents participating in CONNECT reported difficulties in reporting contacts at school on behalf of their child. Secondly,
although CONNECT is population-based with a random
recruitment of the general population, volunteer participants may differ from those refusing to participate in the
study and may be those adhering the most to the public
health measures. However, we have collected a wealth of
information regarding the participant’s characteristics
and we are confident that the recruitment process was
successful in providing a sample of participants generally representative of the Quebec general population
(in terms of region, participation rate to education and
employment, race, country of origin and mother tongue),
and samples are comparable across the different phases
of the study. Thirdly, given that public health measures
undertaken aimed at limiting social contacts, social
desirability may have contributed to an underestimation
of contacts. Some participants may not have reported all
their contacts, particularly contacts forbidden by public health measures. These three main limitations would
likely bias our results towards an underestimation of
social contacts. Nonetheless, changes in social contacts
measured in our study closely followed the epidemiology and physical distancing measures in Quebec (Fig. 1,
Additional file 1: Figures S2 and S4). For example, the
beginning of the second wave coincided with an increasing number of social contacts related to school and work
return in September. The stabilisation/decrease in the number of cases during the second wave coincided with a decreasing number of contacts related to the intensification of public
health measures in January and February 2021 (Fig. 1).
Our results have important implications for COVID-19
control and policy decisions in Quebec and elsewhere.
First, continuous monitoring of social contacts represents
a measure of the effectiveness of public health measures
aiming at reducing social contacts to contain and prevent COVID-19 transmission. Our results suggest that
Quebecers have been generally adherent to public health
measures since the beginning of the pandemic. For example, restriction of household contacts with visitors was an
important public health measure during the spring lockdown and since October 2020 in Quebec. This is clearly
reflected by the small number of household contacts with
visitors during the spring and fall 2020 and by changes in
household mixing matrices with fewer contacts outside
of the 3 main diagonals of contacts between same-age
partners/siblings and between children and their parents. Second, data on age- and location-specific changes
in social contacts and mixing matrices are proxies for
contact events that can lead to transmission when made
between susceptible and infectious individuals and are
an essential input for transmission-dynamic mathematical models considering different types of contacts. Our
social contacts data and mixing matrices have been integrated and were regularly updated into our COVID-19
mathematical model for projections of the potential evolution of the pandemic in Quebec to help inform policy
decisions [27]. Finally, social contact data can generate hypotheses to improve our understanding of COVID-19 transmission dynamics. For example, an important increase in the number of cases while the number of contacts remains relatively stable could suggest that the virus became more transmissible per contact. Hypotheses such as the introduction of a new, more transmissible variant in a region, new transmission modes, or a higher transmissibility of the virus under specific meteorological conditions could then be explored.
In conclusion, physical distancing measures in Quebec were effective at significantly decreasing social
contacts, which most likely helped prevent COVID-19
spread and generalized overflow of hospital capacity. It
is important to continue monitoring contacts as vaccines are rolled out.
**Supplementary Information**
The online version contains supplementary material available at https://doi.org/10.1186/s12889-022-13402-7.
**Additional file 1:** Table S1. Overview of the different phases of CONNECT. Table S2. Weighting procedure. Table S3. Time trends in the number of social contacts, by location of contacts (A: mean number of contacts, B: median number of contacts). Table S4. Association between
the mean number of social contacts and the stringency index. Table S5.
Time trends in the total number of social contacts by key socio-demographic characteristics. Table S6. Time trends in the number of social
contacts at school/daycare among children-students according to school
level. Table S7. Time trends in social contacts at work among workers
and according to the type of employment (2016 National occupation
classification). Table S8. Time trends in the proportion of workers who
reported working remotely, according to the type of employment (2016
National occupation classification). Fig. S1. Example of the social contact
diary. Fig. S2. Quebec COVID-19 epidemiology, related physical distancing
measures, and CONNECT data periods. Fig. S3. Flowchart of participant
identification for CONNECT 1, CONNECT 2, and CONNECT 3,4,5. Fig. S4.
Mean number of social contacts according to the intensity of public
health measures in Quebec as summarized by the stringency index.
**Acknowledgements**
MCB acknowledges funding from the MRC Centre for Global Infectious
Disease Analysis (reference MR/R015600/1), jointly funded by the UK Medical
Research Council (MRC) and the UK Foreign, Commonwealth & Development
Office (FCDO), under the MRC/FCDO Concordat agreement and is also part of
the EDCTP2 programme supported by the European Union.
**Authors' contributions**
MB, MD, and MCB designed the study. All authors (except AB) participated
in the development and validation of the study questionnaires. MB and MD
drafted the article and supervised the data collection and analysis. AG, GB,
and LDR participated in data collection. AG, MM, GB, LDR, PLM, AB and ED
participated in the analysis. All authors interpreted the results and critically
revised the manuscript for scientific content. All authors approved the final
version of the article.
**Funding**
This study was funded by the Canadian Immunization Research Network,
the Canadian Institutes of Health Research (foundation scheme grant FDN-143283), the Institut National de Santé Publique du Québec, and the Fonds de recherche du Québec – Santé (research scholar award to MB).
**Availability of data and materials**
The datasets generated and/or analysed during the current study are not
publicly available due to the dataset containing sensitive personal data.
Aggregated data and mixing matrices data are available from the corresponding author on reasonable request.
**Declarations**
**Ethics approval and consent to participate**
All methods were carried out in accordance with relevant guidelines and regulations. The CONNECT study was approved by the Ethics Committee of the
Centre de recherche du CHU de Québec-Université Laval (project 2016–2172).
All participants provided informed consent during the recruitment phone call.
Informed consent was taken from a parent and/or legal guardian for study
participation in the case of minors.
**Consent for publication**
Not applicable.
**Competing interests**
The authors declare that they have no competing interests.
**Author details**
1 Centre de Recherche du CHU de Québec - Université Laval, Québec, Québec, Canada. [2] Laval University, Québec, Québec, Canada. [3] Department of Infectious Diseases, Centre Hospitalier Universitaire de Poitiers, 86021 Poitiers, France. [4] CERVO Brain Research Center, Centre Intégré Universitaire de Santé et de Services Sociaux de La Capitale-Nationale, Québec, QC, Canada. [5] MRC Centre for Global Infectious Disease Analysis, School of Public Health, Imperial College London, London, UK. [6] Institut National de Santé Publique du Québec, Québec, Québec, Canada. [7] I-BioStat, Data Science Institute, Hasselt University, Hasselt, Belgium. [8] Centre for Health Economic Research and Modelling Infectious Diseases, Vaccine and Infectious Disease Institute, University of Antwerp, Antwerp, Belgium. [9] School of Public Health, University of New South Wales, Sydney, Australia. [10] Institut Pasteur U1167 – RID-AGE – Facteurs de risque et déterminants moléculaires des maladies liées au vieillissement, Univ Lille, Inserm, CHU Lille, 59000 Lille, France.
Received: 13 October 2021 Accepted: 4 May 2022
**References**
1. Government of Canada: Coronavirus disease (COVID-19): Outbreak update. Available at https://www.canada.ca/en/public-health/services/diseases/2019-novel-coronavirus-infection.html. Accessed September 3, 2021. 2021.
2. Institut national de santé publique du Québec: Ligne du temps COVID-19 au Québec. Available at https://www.inspq.qc.ca/covid-19/donnees/ligne-du-temps. Accessed September 20, 2021. 2021.
3. Backer JA, Mollema L, Vos ER, Klinkenberg D, van der Klis FR, de Melker HE,
van den Hof S, Wallinga J. Impact of physical distancing measures against
COVID-19 on contacts and mixing patterns: repeated cross-sectional surveys, the Netherlands, 2016–17, April 2020 and June 2020. Euro Surveill.
2021;26(8):2000994.
4. Available at http://surveysampler.com/samples/rdd-samples/. Accessed September 2021.
5. Mossong J, Hens N, Jit M, Beutels P, Auranen K, Mikolajczyk R, Massari M,
Salmaso S, Tomba GS, Wallinga J, et al. Social contacts and mixing patterns
relevant to the spread of infectious diseases. PLoS Med. 2008;5(3):e74.
6. Beraud G, Kazmercziak S, Beutels P, Levy-Bruhl D, Lenne X, Mielcarek N,
Yazdanpanah Y, Boelle PY, Hens N, Dervaux B. The French Connection: The
First Large Population-Based Contact Survey in France Relevant for the
Spread of Infectious Diseases. PLoS ONE. 2015;10(7):e0133203.
7. Beutels P, Shkedy Z, Aerts M, Van Damme P. Social mixing patterns for
transmission models of close contact infections: exploring self-evaluation
and diary-based data collection through a web-based interface. Epidemiol Infect. 2006;134(6):1158–66.
8. Government of Canada: National Occupational Classification. Available at https://noc.esdc.gc.ca/Home/Welcome/7ab7c22e13254181a057e5bbd5d0f33b?GoCTemplateCulture=en-CA. Accessed February 4, 2021. 2016.
9. Willem L, Van Hoang T, Funk S, Coletti P, Beutels P, Hens N. SOCRATES: an
online tool leveraging a social contact data sharing initiative to assess
mitigation strategies for COVID-19. BMC Res Notes. 2020;13(1):293.
10. Hens N, Ayele GM, Goeyvaerts N, Aerts M, Mossong J, Edmunds JW, Beutels P. Estimating the impact of school closure on social mixing behaviour
and the transmission of close contact infections in eight European
countries. BMC Infect Dis. 2009;9:187.
11. Hale T, Angrist N, Boby T, Cameron-Blake E, Hallas L, Kira B, Majumdar S, Petherick A, Phillips T, Tatlow H, et al. Variation in government responses to COVID-19. BSG-WP-2020/032, Version 10.0. Available at https://www.bsg.ox.ac.uk/sites/default/files/2020-12/BSG-WP-2020-032-v10.pdf. Accessed January 10, 2022. 2020.
12. Cheung C, Lyons J, Madsen B, Miller S, Sheik S. The Bank of Canada COVID-19 stringency index: measuring policy response across provinces. Available at https://www.bankofcanada.ca/2021/02/staff-analytical-note-2021-1/. Accessed January 10, 2022. 2021.
13. Zeger SL, Liang KY, Albert PS. Models for longitudinal data: a generalized
estimating equation approach. Biometrics. 1988;44(4):1049–60.
14. Jarvis CI, Van Zandvoort K, Gimma A, Prem K, CMMID COVID-19 working group, Klepac P, Rubin GJ, Edmunds WJ. Quantifying the impact of physical distance measures on the transmission of COVID-19 in the UK. BMC Med. 2020;18(1):124.
15. Coletti P, Wambua J, Gimma A, Willem L, Vercruysse S, Vanhoutte B, Jarvis
CI, Van Zandvoort K, Edmunds J, Beutels P, et al. CoMix: comparing mixing
patterns in the Belgian population during and after lockdown. Sci Rep.
2020;10(1):21885.
16. The CoMix study. Available at https://www.uhasselt.be/UH/71795-start/The-CoMix-study. Accessed September 20, 2021.
17. Hoang TV, Coletti P, Kifle YW, Kerckhove KV, Vercruysse S, Willem L,
Beutels P, Hens N. Close contact infection dynamics over time: insights
from a second large-scale social contact survey in Flanders, Belgium, in
2010–2011. BMC Infect Dis. 2021;21(1):274.
18. Brankston G, Merkley E, Fisman DN, Tuite AR, Poljak Z, Loewen PJ, Greer
AL: Quantifying Contact Patterns in Response to COVID-19 Public Health
Measures in Canada. medRxiv. 2021:2021.03.11.21253301.
19. Latsuzbaia A, Herold M, Bertemes JP, Mossong J. Evolving social
contact patterns during the COVID-19 crisis in Luxembourg. PLoS ONE.
2020;15(8):e0237128.
20. Del Fava E, Cimentada J, Perrotta D, Grow A, Rampazzo F, Gil-Claver S, et al. Differential impact of physical distancing strategies on social contacts relevant for the spread of SARS-CoV-2: evidence from a cross-national online survey, March–April 2020. BMJ Open. 2021;11:e050651. https://doi.org/10.1136/bmjopen-2021-050651.
21. Zhang J, Litvinova M, Liang Y, Wang Y, Wang W, Zhao S, Wu Q, Merler S, Viboud C, Vespignani A, et al. Changes in contact patterns
shape the dynamics of the COVID-19 outbreak in China. Science.
2020;368(6498):1481–6.
22. Liu CY, Berlin J, Kiti MC, Del Fava E, Grow A, Zagheni E, Melegaro A, Jenness SM, Omer SB, Lopman B, et al. Rapid Review of Social Contact Patterns During the COVID-19 Pandemic. Epidemiology. 2021;32(6):781–91.
23. Jarvis CI, Gimma A, van Zandvoort K, Wong KLM, Munday JD, Klepac P, Funk S, Edmunds WJ & CMMID COVID-19 working group. CoMix study – Social contact survey in the UK. Available at https://cmmid.github.io/topics/covid19/comix-reports.html. Accessed February 4, 2021. 2020.
24. Google: Community mobility reports. Available at https://www.google.com/covid19/mobility/. Accessed February 4, 2021. 2020.
25. Hoang T, Coletti P, Melegaro A, Wallinga J, Grijalva CG, Edmunds JW,
Beutels P, Hens N. A Systematic Review of Social Contact Surveys to
Inform Transmission Models of Close-contact Infections. Epidemiology.
2019;30(5):723–36.
26. Smieszek T, Barclay VC, Seeni I, Rainey JJ, Gao H, Uzicanin A, Salathe M.
How should social mixing be measured: comparing web-based survey
and sensor-based methods. BMC Infect Dis. 2014;14:136.
27. Brisson M, Gingras G, Drolet M, Laprise JF, and the Groupe de recherche en modélisation mathématique et en économie de la santé liée aux maladies infectieuses: Modélisations de l'évolution de la COVID-19 au Québec. Available at https://www.inspq.qc.ca/covid-19/donnees/projections. Accessed February 05, 2021. 2020.
**Publisher’s Note**
Springer Nature remains neutral with regard to jurisdictional claims in pub‑
lished maps and institutional affiliations.
# Distributed Paged Hash Tables
José Rufino[1][⋆], António Pina[2], Albano Alves[1], and José Exposto[1]
1 Polytechnic Institute of Bragança, 5301-854 Bragança, Portugal
{rufino,albano,exp}@ipb.pt
2 University of Minho, 4710-057 Braga, Portugal
[email protected]
**Abstract.** In this paper we present the design and implementation of
DPH, a storage layer for cluster environments. DPH is a Distributed
Data Structure (DDS) based on the distribution of a paged hash table.
It combines main memory with file system resources across the cluster
in order to implement a distributed dictionary that can be used for the
storage of very large data sets with key based addressing techniques. The
DPH storage layer is supported by a collection of cluster–aware utilities
and services. Access to the DPH interface is provided by a user–level
API. A preliminary performance evaluation shows promising results.
## 1 Introduction
Today commodity hardware and message passing standards such as PVM [1]
and MPI [2] are making it possible to assemble clusters that exploit distributed
storage and computing power, allowing for the deployment of data-intensive
computer applications at an affordable cost. These applications may deal with
massive amounts of data both at the main and secondary memory levels. As
such, traditional data structures and algorithms may no longer be able to cope
with the new challenges specific to cluster computing.
Several techniques have thus been devised to distribute data among a set
of nodes. Traditional data structures have evolved towards Distributed Data
Structures (DDSs) [3, 4, 5, 6, 7, 8, 9]. At the file system level, cluster aware
file systems [10, 11] already provide resilience to distributed applications. More
recently a new research trend has emerged: online data structures for external
memory that bypass the virtual memory system and explicitly manage their own
I/O [12].
Distributed Paged Hashing (DPH[1]) is a cluster aware storage layer that
implements a hash based Distributed Data Structure (DDS). DPH has been
designed to support a Scalable Information Retrieval environment (SIRe), an
ongoing research project with a primary focus on information retrieval and cataloging techniques suited to the World Wide Web.
_⋆_ Supported by PRODEP III, through the grant 5.3/N/199.006/00, and SAPIENS,
through the grant 41739/CHS/2001.
1 A preliminary presentation of our work took place at the PADDA2001 workshop [13];
here we present a more in-depth and updated description of DPH.
The main idea behind DPH is the distribution of a paged hash table over
a set of networked page servers. Pages are contiguous bucket sets[2], all with the same number of buckets. Because the number of pages is fixed in advance, our strategy appears to be static. However, pages are created on–demand, so the hash table grows dynamically.
A page broker is responsible for the mapping of pages to page servers. The
mapping takes place just once for the lifetime of a page (page migration is not
yet supported) and so the use of local caches at the service clients alleviates the
page broker. In a typical scenario, the page broker is mainly active during the first requests to the DPH structure, when pages are mapped to the available page servers. Because the local caches are incrementally updated, the page broker will be relieved from further mapping requests.
The system doesn’t rely only on the available main memory at each node.
When performance is not the primary concern, a threshold based swap mechanism may also be used to take advantage of the file system. It is even possible
to operate the system solely based on the file system, achieving the maximum
level of resilience. The selection of the swap-out bucket victims is based on a
Least–Recently–Used (LRU) policy.
The paper is organized as follows: section 2 covers related work, section 3
presents the system architecture, section 4 shows preliminary performance results
and section 5 concludes and points out directions for future work.
## 2 Related Work
Hash tables are well known data structures [14] mainly used as a fast key based
addressing technique. Hashing has been intensively exploited because retrieval
times are O(1) when compared with O(log n) for tree-structured schemes or
O(n) for sequential schemes. Hashing is classically static, meaning that, once set,
the bit–length of the hash index never changes and so the complete hash table
must be initially allocated.
In dynamic environments, with no regular patterns of utilization, the use of
static hash tables results in storage space waste if only a small bucket subset is used. Static hashing may also fail to guarantee O(1) retrieval times when buckets overflow. To counter these limitations, several dynamic hashing [15]
techniques have been proposed, such as Linear Hashing (LH) [16] and Extendible
Hashing (EH) [17], along with some variants.
Meanwhile, with the advent of cluster computing, traditional data structures have evolved towards distributed versions. The issues involved aren’t trivial because, in a distributed environment, scalability is a primary concern and
new problems arise (consistency, timing, order, security, fault tolerance, hot–
spots, etc.). In the hashing domain, LH* [3] extended LH [16] techniques for
file and table addressing and coined the term Scalable Distributed Data Structure (SDDS). Distributed Dynamic Hashing (DDH) [4] offered an alternative
2 In the DPH context, a bucket is a hash table entry where collisions are allowed and
self–contained, that is, collisions don’t overflow into other buckets.
approach to LH*, while EH* [5] provided a distributed version of EH [17]. Although in a very specific application context, [18] exploited a concept very similar to DPH, named two–level hashing. Distributed versions of several other
classical data structures, such as trees [7, 8] and even hybrid structures, such as
hash–trees [19], have also been designed. More recently, work has been done on
hash based distributed data structures to support Internet services [9].
## 3 Distributed Paged Hashing
Our proposal shows that, for certain classes of problems, a hybrid approach that mixes static and dynamic techniques may achieve good performance and
scalability without the complexity of purely dynamic schemes.
When the dimension of the key space is unknown a priori, a pure dynamic
hashing approach would incrementally use more bits from the hash index when
buckets overflow and split. Only then storage consumption would expand to
make room for the new buckets. Typically, the expansion takes place at another
server, as distributed dynamic hashing schemes tend to move one of the splits
to another server.
Although providing maximum flexibility, a dynamic approach increases the
load on the network, not only during bucket splitting, but also when a server
forwards requests from clients with an outdated view of the <bucket, server>
mapping. Since we know in advance that the application domain (SIRe) will include a distributed web crawler, designed to extract and manage millions of URLs, it makes sense to use the maximum bit–length of the hash index from the beginning. As such, DPH is a hybrid approach that includes both static and dynamic features: it uses a fixed bit–length
hash table, but pages (and buckets) are created on–demand and distributed
across the cluster.
[Figure 1 depicts the DPH software stack, from top to bottom: DPH user applications; DPH API; DPH services (page broker + page servers); pCoR; PThreads, TCP/IP and GM (for Myrinet).]

**Fig. 1. The DPH architecture**
**3.1** **Architecture**
Figure 1 presents the architecture of DPH. User applications interface with the
DPH core (the page broker and the page servers) through a proper API. The runtime system is provided by pCoR [20], a prototype of CoR [21]. The CoR paradigm
extends the process abstraction to achieve structured fine grained computation
using a combination of message passing, shared memory and POSIX Threads.
pCoR is both multithreaded and thread safe and already provides some very
useful features, namely message passing (by using GM [22] over Myrinet) between threads across the cluster. This is fully exploited by the DPH API and
services, which are also multithreaded and thread safe.
**3.2** **Addressing**
The DPH addressing scheme is based on one–level paging of the hash table:
1. a static hash function H is used to compute an index i for a key k: H(k) = i;
2. the index i may be split into a page field p and an offset field o: i = <p, o>;
3. the hash table may be viewed as a set of 2^#p pages, with 2^#o buckets per page, where #p and #o are the (fixed) bit–lengths of the page and offset fields, respectively;
4. the page table pt will have 2^#p entries, such that pt[j] = ps_j, where ps_j is a reference to the page server for page j.
H is a 32-bit hash function[3], but smaller bit subsets from the hash index may be used, with the remaining bits being simply discarded. Defining the page and offset bit–lengths is the main decision to take prior to the usage
of the DPH data structure. The more bits the page field uses, the more pages
will be created, leading to a very sparse hash table (if enough page servers are
provided), with a small number of buckets per page. Of course, the reverse will
happen when the offset field consumes more bits: fewer, larger pages, handled by
a small number of page servers. The latter scenario is less likely to take advantage
of the distribution. Thus, defining the index bit–length is a decision dependent
on the key domain. We want to minimize collisions and so large indexes may
seem reasonable but that should be an option only if we presume that the key
space will be uniformly used. Otherwise storage space will be mostly wasted on
control data structures.
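As a concrete illustration of this addressing scheme, the C sketch below splits a 32-bit hash index into the <page, offset> pair. It is our own minimal example, not DPH source code; PAGE_BITS and OFFSET_BITS are illustrative values for #p and #o, and any 32-bit hash function can play the role of H.

```c
#include <stdint.h>

#define PAGE_BITS   8    /* #p: bit-length of the page field (illustrative)   */
#define OFFSET_BITS 8    /* #o: bit-length of the offset field (illustrative) */

typedef struct {
    uint32_t page;       /* p: index into the page table, 0 .. 2^#p - 1       */
    uint32_t offset;     /* o: bucket index inside the page, 0 .. 2^#o - 1    */
} dph_addr;

/* Split a 32-bit hash index i = H(k) into <page, offset>;
   the remaining high-order bits are simply discarded.      */
static dph_addr dph_address(uint32_t hash)
{
    dph_addr a;
    a.offset = hash & ((1u << OFFSET_BITS) - 1);
    a.page   = (hash >> OFFSET_BITS) & ((1u << PAGE_BITS) - 1);
    return a;
}
```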
**3.3** **Page Broker**
The page broker is responsible for the mapping of pages into page servers. As
such, the page broker maintains a page table, pt, with 2^#p entries, one for each
page. When it receives a mapping request for page p, the page broker returns
3 H has been chosen from [23]. A comparison was made with other general hash
functions from [24], [14] and [25], but no significant differences have been found,
both in terms of performance and collision avoidance.
[Figure 2 sketches the main data structures of a page server: a page table with 2^#p entries pointing to locally held pages of 2^#o buckets each; buckets chain <key, data> nodes, an LRU list links the buckets currently in memory, and swapped-out buckets reside in the file system.]

**Fig. 2. Main data structures for a page server**
pt[p], which is a reference to the page server responsible for the page p. It may
happen, however, that this is the first mapping request for the page p. If so,
the page broker will have to choose a page server to handle that page. A Round
Robin (RR) policy is currently used over the available page servers, ensuring that each handles an equal share of the hash table, but we plan to add the choice of
more adaptive policies, such as weighted RR (proportional to the available node
memory and/or current load, for instance) or others.
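A sketch of the broker's lazy, on-demand mapping under the Round Robin policy follows; the types and names (page_broker, broker_map_page) are our own illustration of the behaviour described above, assuming pt is pre-initialized to UNMAPPED:

```c
#define UNMAPPED (-1)

typedef struct {
    int *pt;         /* page table: pt[p] holds a page-server id, or UNMAPPED */
    int  n_pages;    /* 2^#p entries                                          */
    int  n_servers;  /* page servers currently available                      */
    int  next;       /* Round Robin cursor                                    */
} page_broker;

/* Return the page server responsible for page p; the mapping is
   created on the first request and never changes afterwards.    */
static int broker_map_page(page_broker *b, int p)
{
    if (b->pt[p] == UNMAPPED) {
        b->pt[p] = b->next;                       /* assign next server in RR order */
        b->next  = (b->next + 1) % b->n_servers;  /* advance the cursor             */
    }
    return b->pt[p];
}
```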
**3.4** **Page Servers**
A page server hosts a page subset of the distributed hash table (as requested
by the page broker, during the mapping process), and answers most of the DPH
user level API requests (insertions, searches, deletions, etc.).
Figure 2 presents the main data structures for a page server. A page table
with 2^#p entries is used to keep track of the locally managed pages. A page is a bucket set with 2^#o entries. A bucket is an entry point to a set of data nodes
which are <key, data> pairs. Collisions are self contained in a bucket (chaining).
Other techniques, like using available empty buckets on other pages (probing),
wouldn’t be compatible with the swapping mechanism[4].
Presently, buckets are doubly–linked lists. These rather inefficient data structures, with O(n) access times, were used just to rapidly develop the prototype.
In the future we plan to use more efficient structures, such as trees, skip–lists [26]
or even dynamic hashing.
4 This mechanism uses the bucket as the swap unit and depends on information kept
therein to optimize the process.
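Purely as an illustration of the layout in Figure 2 (these declarations are ours, not taken from the DPH sources), the page server structures could be declared along these lines:

```c
#include <stddef.h>

/* A <key, data> pair; collisions are chained inside the owning bucket. */
typedef struct data_node {
    char             *key;
    void             *data;
    struct data_node *next, *prev;      /* doubly-linked collision chain   */
} data_node;

typedef struct bucket {
    data_node     *nodes;               /* head of the collision chain     */
    int            dirty;               /* modified since last swap-out?   */
    struct bucket *lru_prev, *lru_next; /* global LRU list, across pages   */
} bucket;

typedef struct {
    bucket **buckets;                   /* 2^#o entries, created on demand */
} page;

typedef struct {
    page   **pt;                        /* 2^#p entries; NULL if the page
                                           is not handled by this server   */
    bucket  *lru_head, *lru_tail;       /* least/most recently used ends   */
    size_t   mem_used, mem_threshold;   /* drive the swapping mechanism    */
} page_server;
```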
One of the most valuable features of a page server is the ability to use the
file system as a complementary online storage resource. Whenever the current
user data memory usage surpasses a certain configurable threshold, a swapping
mechanism is activated. A bucket victim is chosen from the buckets currently held in memory. The victim, chosen from a Least–Recently–Used (LRU) list, is the oldest referenced bucket that still frees enough memory to lower the current usage below the threshold.
The LRU list links every bucket currently in main memory, crossing local
page boundaries, and so a bucket may be elected as a victim in order to release
storage to a bucket from another local page. The LRU list may also be exploited
in other ways. For instance, besides being a natural queue, quickly browsing
every bucket in a page server is possible, without the need to hash any key.
Buckets that have been swapped-out to the file system are still viewed as
online and will be swapped-in as they are needed. The swapping granularity
is currently at the bucket level and not at the data node level. This may be
unfair to some data nodes in the bucket but prevents too many small files (often
thousands), one for each data node, at the file system level, which would degrade
performance. The swapping mechanism is further optimized through the use of a dirty bit per bucket, preventing unmodified buckets from being unnecessarily saved.
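Reusing the hypothetical types sketched above, the threshold-driven swap-out could take roughly the following shape; this is a simplified variant that evicts buckets from the LRU end until usage drops below the threshold, with the accounting and file-system helpers left as assumed prototypes:

```c
/* Assumed helpers (hypothetical, implemented elsewhere). */
size_t bucket_mem_size(const bucket *b);
void   save_bucket_to_fs(page_server *ps, bucket *b);  /* one file per bucket */
void   lru_unlink(page_server *ps, bucket *b);         /* drop in-memory copy */

/* Evict least-recently-used buckets until memory usage falls below
   the configured threshold; clean buckets skip the save (dirty bit). */
void maybe_swap_out(page_server *ps)
{
    bucket *victim = ps->lru_head;                     /* oldest bucket */
    while (ps->mem_used > ps->mem_threshold && victim != NULL) {
        bucket *next = victim->lru_next;
        if (victim->dirty)
            save_bucket_to_fs(ps, victim);             /* swap-out to a file */
        ps->mem_used -= bucket_mem_size(victim);
        lru_unlink(ps, victim);
        victim = next;
    }
}
```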
A page server may work with a zero threshold, thus using the main memory
just to keep control data structures and as an intermediate pool to perform the
user request, after which the bucket is immediately saved to the file system and
the temporary instance removed from main memory.
If a DPH instance has been terminated gracefully (thus synchronizing its
state with the file system), then it may be loaded again, on–demand: whenever
a page server is asked to perform an operation on a bucket that is empty, it first
tries to load a possible instance from the file system because an instance may
be there from a previous shutdown of the DPH hash table. In fact, even after
an unclean shutdown, partial recovery may be possible, because previously synchronized bucket instances can still be loaded.
**3.5** **User Applications**
User applications may be built on top of the DPH API and runtime environment.
From a user application perspective, insertions, retrievals and removals are the
main interactions with the DPH storage layer. These operations must have a key
hashed and then mapped into the correct page server. This mapping is primarily
done through a local cache of the page broker page table. A user application starts
with an empty page table cache and so many cache misses will take place, forcing
mapping requests to the page broker. This is done automatically, in a transparent
way to the user application. Further mappings of the same page will benefit from
a cache hit, and so the user application will readily contact the relevant page server.
Presently, mapping never changes for the lifetime of a DPH instance[5] and so
the cache will be valid during the execution of the user application. This way, a
page broker will be a hot–spot (if ever) for a very limited amount of time. Our
preliminary tests show no significant impact on performance during cache fills.
**3.6** **Client–Server Interactions**
Our system operates with a relatively small number of exchanged messages[6]:
1. mapping a page into a page server may use zero, two, four or (very seldom)
more messages: if the local cache gives a hit, zero messages are needed;
otherwise the page broker must be contacted; if the page table gives a hit,
only the reply to the user application is needed, summing up two messages;
otherwise a page server must be contacted and so two more messages are
needed (request and reply); of course, if the page server replied with a negative acknowledgement, the Round Robin search for another page server will
add two more messages per page server;
2. insertions, retrievals and removals typically use two messages (provided
a cache hit); however, insertions and retrievals may be asynchronous, using
only one message (provided, once again, a cache hit); the latter means that no acknowledgement is requested from the page server, which translates into better performance, though the operation may not have been successfully performed
and the user application won’t be aware of it.
Once local caches become updated, and assuming the vast majority of the
requests to be synchronous insertions, retrievals and deletions, we may set two
messages as the upper bound for each interaction of a client with a DPH instance.
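To make this message accounting concrete, the sketch below (reusing dph_address, PAGE_BITS and UNMAPPED from the earlier sketches) shows the synchronous client-side path; request_broker and request_server are hypothetical stand-ins for the pCoR messaging primitives:

```c
/* Hypothetical transport stubs (each call is a request/reply pair). */
int request_broker(uint32_t page);               /* 2 messages            */
int request_server(int server, int op, uint32_t page, uint32_t offset,
                   const char *key, void *data); /* 2 messages            */

static int cache[1 << PAGE_BITS];  /* local page-table cache; entries are
                                      assumed pre-initialized to UNMAPPED */

/* Synchronous operation on a key: 2 messages on a cache hit,
   4 on a cache miss (broker round trip + server round trip). */
int dph_op(int op, uint32_t hash, const char *key, void *data)
{
    dph_addr a = dph_address(hash);
    if (cache[a.page] == UNMAPPED)               /* miss: ask the broker  */
        cache[a.page] = request_broker(a.page);
    return request_server(cache[a.page], op, a.page, a.offset, key, data);
}
```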
## 4 Performance Evaluation
**4.1** **Test–Bed**
The performance evaluation took place in a cluster of five nodes, all running
Linux Red Hat 7.2 with the stock kernel (2.4.7-10smp) and GM 1.5.1 [22]. The
nodes were interconnected using a 1.28+1.28 Gbits/s Myrinet switch. Four of
the nodes (A, B, C, D) have the following hardware specifics: two Pentium III processors at 733 MHz, 512 MB SDRAM/100 MHz, i840 chipset, 9 GB UDMA 66 hard disks, Myrinet SAN LANai 9 network adapter. The fifth node (E) has four Pentium III Xeon processors running at 700 MHz, 1 GB ECC SDRAM/100 MHz, ServerWorks HE chipset, 36 GB Ultra SCSI 160 hard disk and a Myrinet SAN LANai 9 network adapter.
5 We are referring to a live instance, on top of a DPH runtime system.
6 We have restricted the following analysis to the most relevant interactions.
**4.2** **Hash Bit–Length**
Because DPH uses static hashing, the hash bit–length must be preset. This
should be done in such a way that overhead from control data structures and
collisions are both minimized. However, these are conflicting requirements. For instance, to minimize collisions we should increase the bit–length, thus increasing
the hash table height; in turn, a larger hash table will have more empty buckets
and will consume more control data structures. We thus need a metric for the
choice of the right hash bit–length.
**Metric Definition** Let B_j be the number of buckets with j data nodes, after the hash table has been built. If k keys have been inserted, then P_j = (B_j × j)/k is the probability of any given key having been inserted in a B_j bucket. Also, let N_j be the average number of nodes visited to find a key in a B_j bucket. Since we have used linked lists to handle collisions, N_j = (j + 1)/2. Then, given an arbitrary key, the average number of nodes to be searched for the key is N = Σ_j (P_j × N_j). The overhead from control data structures is O = C/(U + C), where C is the storage consumed by control data structures and U is the storage consumed by user data (keys and other possibly attached data). Finally, our metric is defined by the ranking R = nN × oO, where n and o are the percentual weights given to N and O, respectively. For a specific scenario, the hash bit–length to choose will be the one that minimizes R.
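The metric is easy to compute once the bucket occupancy histogram is known; the helper below (our own code, with caller-supplied occupancy counts B, storage figures C and U, and weights n and o) returns the ranking R:

```c
/* Compute R = nN x oO for a built hash table.
   B[j]: number of buckets holding exactly j data nodes (0 <= j <= jmax)
   k   : total number of keys inserted
   C, U: storage consumed by control structures and user data
   n, o: percentual weights for N and O (e.g. 0.5 and 0.5)              */
double rank_R(const unsigned long *B, int jmax, unsigned long k,
              double C, double U, double n, double o)
{
    double N = 0.0;
    for (int j = 1; j <= jmax; j++) {
        double Pj = ((double)B[j] * j) / (double)k; /* P_j = (B_j x j)/k   */
        double Nj = (j + 1) / 2.0;                  /* linked-list chains  */
        N += Pj * Nj;
    }
    double O = C / (U + C);                         /* overhead fraction   */
    return (n * N) * (o * O);                       /* ranking to minimize */
}
```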
**Application Scenario** The tests were performed in a single cluster node (A),
for a varying number of keys, using hash bit–lengths from 15 to 20. The page
field of our addressing scheme used half of the hash; the other half was used as
an offset in the page (for odd bit–lengths, the page field was favored). Keys were
random unique sequences, 128 bytes wide; user data measured 256 bytes[7].
Figure 3 presents the rankings obtained. If an ideal general hash function (one that uniformly spreads the hashes across the hash space, regardless of the randomness and nature of the keys) were used, we would expect the optimum hash bit–length to be approximately log₂(k), for each number of keys k. However, not only is our general hash function [23] not ideal, but the overhead factor must also be taken into account. We thus observe that our metric is minimized when the bit–length is log₂(k) − 1, regardless of k[8].
In order to determine if the variation of the key size would interfere with
the optimum hash bit–length we ran another test, this time by varying the key
size across {4, 128, 256} bytes. Figure 4 shows the results for 125000 keys. It may be observed that log₂(k) − 1 is still the recommended hash bit–length, independently of the key size[9]. The ranking is preserved because, regardless of the key
size, the hash function provides similar distributions of the keys; therefore, N is
approximately the same, while the overhead O is the varying factor.
7 Typical sizes used in the web crawler being developed under the SIRe project.
8 For instance, 17 bits for the hash bit–length seems reasonable when dealing with
a maximum of 125000 keys, but our metric gives 16 bits as the recommended value.
9 This was also observed with 250000, 500000 and 1000000 keys.
[Figure 3 plots the ranking R against the hash bit–length (15–20), for 125 000, 250 000, 500 000 and 1 000 000 keys.]

**Fig. 3. R for n = 50% and o = 50%**
[Figure 4 plots the ranking R against the hash bit–length (15–20), for key sizes of 4, 128 and 256 bytes.]

**Fig. 4. Effect of the key size on R**
**4.3** **Scalability**
To evaluate the scalability of our system we produced another type of experiment, using k = 1500000 as the maximum number of keys. In accordance with the metric R defined in the previous experiment, the hash bit–length was set to log₂(k) − 1 = 19 bits. Also, as previously, keys were random unique sequences,
128 bytes wide, with 256 bytes of attached user data. Each client thread was
responsible for the handling of 125000 keys.
We measured insertions and retrievals. Insertions were done in newly created
DPH instances and thus the measured times (“build times”) accounted for cache
misses and page broker mappings. The retrieval times and retrieval key rates are
not presented, because they were observed to be only marginally better. The
memory threshold was set high enough to prevent any DPH swapping.
**One Page Server, Multiple Clients** The first test was made to investigate
how far the system would scale by having a single page server serving simultaneous requests from several multithreaded clients. Our cluster is relatively small and so, to minimize the influence of hardware differences between nodes, we used the following configuration: nodes A, B and C hosted clients, node D hosted the page server and node E hosted the page broker.
Figure 5 shows the throughput obtained when 1, 2 or 3 clients make simultaneous key insertions by using, successively, 1, 2 or 3 threads: 1 active client,
with 1 thread, will insert 125000 keys; . . . ; 3 active clients, with 3 threads each,
will insert 3 × 3 × 125000 = 1125000 keys.
It may be observed that, as expected, we need to add more working nodes to
increment the throughput, when using 1 thread per client. Of course, this trend
will stop as soon as the communication medium or the page server gets saturated. With 2 threads per client, the key rate still increases; in fact, with just 1 client
and 2 threads the throughput achieved is the same as with 2 clients with 1 thread
each but, when 3 simultaneous clients are active (in a total of 6 client threads),
the speedup from 2 clients is minimal, thus indicating that the saturation point
may be near.
When using 3 threads per client and just 1 active client, the speedup from
2 threads is still positive but, when increasing the number of active clients, no
advantage is taken from the use of 3 threads. With 2 active clients, 6 threads
are used, which equals the number of working threads when 3 clients are active,
with 2 threads each; as we have already seen, this latter scenario produces very
poor speedup; nevertheless it still produces better results than 2 clients with 3
threads (the more threads per client, the more time will be consumed in thread
scheduling and I/O contention).
The values presented allow us to conclude that 6 working threads are pushing
the system to the limit, but they are unclear about the origin of that behavior:
the communication medium or the page server?
| insert keyrate | 1 client node | 2 client nodes | 3 client nodes |
|---|---|---|---|
| 1 thread | 8797 | 15824 | 20475 |
| 2 threads | 15194 | 22934 | 24364 |
| 3 threads | 17974 | 22892 | 23464 |

**Fig. 5. Insert keyrate with one page server and multiple clients**
**Two Page Servers, Multiple Clients** To answer the last question we added
one more page server to the crew and repeated the tests. But, with just four
nodes (the fifth hosted the page broker solely), we couldn’t perform tests with
more than 2 clients. Still, with a maximum of 3 threads per client, we were able
to obtain results using a total of 6 threads.
Figure 6 sums up the test results by showing the improvement in the insert rate when using one more page server. For 1 active client the gains are relatively modest. For 2 active clients the speedup is much more evident, especially when 3 threads per client are used, summing up to 6 threads overall.
The results presented allow us to conclude that by adding page servers to our
system, important performance gains may be obtained. However, a quantitative study of the performance scaling in a cluster environment with many more nodes to assign both to clients and page servers remains to be done.
**Multiple <Page Server, Client> Pairs** So far, we have decoupled clients
and page servers on every scenario we have tested. It may happen, however,
that both must share the same cluster node (as is the case for our small cluster).
Thus, it is convenient to evaluate how the system scales in such circumstances.
As previously, the page broker was always kept at node E and measurements were made with a different number of working threads in the client (1, 2
and 3). We started with a single node, hosting a client and a page server. We
then increased the number of nodes, always pairing a client and a page server.
The last scenario had four of these pairs, one per node, summing up to 12 active
threads and accounting for a maximum of 12 × 125000 = 1500000 keys inserted.
Figure 7 shows the insert key rate. The 1–node scenario shows very low key
rates with 2 and 3 threads. This is due to high I/O contention between the client
threads and the page server threads. When the number of nodes is increased, the key space, although larger, is also more scattered across the nodes, which alleviates the contention on each node and makes the use of more threads much more profitable.

[Figure 6 plots the speedup obtained with two page servers over a single one, for 1–2 client nodes and 1–3 threads per client.]

**Fig. 6. Speedup with two page servers**

[Figure 7 plots the insert keyrate for 1–4 <page server, client> nodes and 1–3 threads per client.]

**Fig. 7. Insert keyrate with multiple <page server, client> pairs**

[Figure 8 plots the corresponding insert speedup over linear extrapolation from the 1–node scenario, for 2–4 <page server, client> nodes.]

**Fig. 8. Insert speedup with multiple <page server, client> pairs**
Figure 8 shows the speedup with multiple nodes. The speedup refers to the increase of the measured rates over the rates that could be predicted by linear extrapolation from the 1–node scenario.
## 5 Conclusions
DPH is a Distributed Data Structure (DDS) based on a simple yet very effective
principle: the paging of a hash table and the mapping of the pages among a set
of networked page servers.
Conceptually, DPH uses static hashing, because the hash index bit–length
is set in advance. Also, the usage of a page table to preserve mappings between
sections (pages) of the table and their locations (page servers) makes DPH a directory–based [15] approach.
However, the hash table is not created at once, because it is virtually paged
and pages are dynamically created, on–demand, being scattered across the cluster, thus achieving data balancing. Local caches at user applications prevent the page broker from becoming a hot–spot and provide some immunity to page broker
failures (once established, mappings do not change and so the page broker can
almost be dismissed).
Another important feature available in the DPH DDS is the capability to
exploit the file system as a complementary on–line storage area, which is made
possible through the use of an LRU/threshold based swapping mechanism. In this
regard, DPH is very flexible in the way it consumes available storage resources,
whether they are memory or disk based.
Finally, the performance evaluation we have presented shows that it is possible to define practical metrics to set the hash bit–length and that our selected
hash function [23] preserves the (relative) rankings regardless of the key size. We
have also investigated the scalability of our system and although we have observed promising results, further investigation is needed with many more nodes.
Much of the research work on hash based DDSs has been focused on dynamic
hashing schemes. With this work we wanted to show that the increasing performance and storage capacity of modern clusters may also be exploited with great
benefits using a hybrid approach.
In the future we plan to pursue our work in several directions: elimination
of the page broker by using directoryless schemes inspired by hash routing techniques, such as consistent hashing [27]; usage of efficient data structures to handle
collisions and near zero-memory-copy techniques to improve performance; and exploitation of cluster-aware file systems (delayed due to the lack of quality
open-source implementations) and external memory techniques [12].
## References
[1] Al Geist, A. Beguelin, J. Dongarra, W. Jiang, R. Manchek, and V. Sunderam.
_PVM: Parallel Virtual Machine. A User’s Guide and Tutorial for Networked Par-_
_allel Computing. Scientific and Engineering Computation. MIT Press, 1994. 679_
[2] M. Snir, S. Otto, S. Huss-Lederman, David Walker, and J. Dongarra. MPI - The
_Complete Reference. Scientific and Engineering Computation. MIT Press, 1998._
679
[3] W. Litwin, M.-A. Neimat, and D. A. Schneider. LH*: Linear Hashing for Distributed Files. In Proceedings of the ACM SIGMOD - International Conference
_on Management of Data, pages 327–336, 1993._ 679, 680
[4] R. Devine. Design and implementation of DDH: a distributed dynamic hashing
algorithm. In Proceedings of the 4th Int. Conf. on Foundations of Data Organization and Algorithms, pages 101–114, 1993. 679, 680
[5] V. Hilford, F. B. Bastani, and B. Cukic. EH* – Extendible Hashing in a Distributed Environment. In Proceedings of the COMPSAC ’97 - 21st International
_Computer Software and Applications Conference, 1997._ 679, 681
-----
69 Jose Ru o et a
[6] R. Vingralek, Y. Breitbart, and G. Weikum. Distributed File Organization with
Scalable Cost/Performance. In Proceedings of the ACM SIGMOD - International
_Conference on Management of Data, 1994._ 679
[7] B. Kroll and P. Widmayer. Distributing a Search Tree Among a Growing Number
of Processors. In Proceedings of the ACM SIGMOD – International Conference
_on Management of Data, pages 265–276, 1994._ 679, 681
[8] T. Johnson and A. Colbrook. A Distributed, Replicated, Data–Balanced Search
Structure. Technical Report TR03-028, Dept. of CISE, University of Florida,
1995. 679, 681
[9] S. D. Gribble, E. A. Brewer, J. M. Hellerstein, and D. Culler. Scalable, Distributed
Data Structures for Internet Service Construction. In Proceedings of the Fourth
_Symposium on Operating Systems Design and Implementation, 2000._ 679, 681
[10] W. K. Preslan et al. A 64-bit, Shared Disk File System for Linux. In Proceedings of the 7th NASA Goddard Conference on Mass Storage Systems and Tech. in
cooperation with the Sixteenth IEEE Symposium on Mass Storage Systems, 1999.
679
[11] P. H. Carns, W. B. Ligon, R. B. Ross, and R. Thakur. PVFS: A Parallel File
System for Linux Clusters. In Proceedings of the 4th Annual Linux Showcase and
_Conference, pages 317–327. USENIX Association, 2000._ 679
[12] J. S. Vitter. Online Data Structures in External Memory. In Proceedings of
_the 26th Annual Intern. Colloquium on Automata, Languages, and Programming,_
1999. 679, 691
[13] J. Rufino, A. Pina, A. Alves, and J. Exposto. Distributed Hash Tables. International Workshop on Performance-oriented Application Development for Distributed Architectures (PADDA 2001), 2001. 679
[14] D. E. Knuth. The Art of Computer Programming – Volume 3: Sorting and Search_ing. Addison-Wesley, 2nd edition, 1998._ 680, 682
[15] R. J. Enbody and H. C. Du. Dynamic Hashing Schemes. ACM Computing Surveys,
(20):85–113, 1988. 680, 691
[16] W. Litwin. Linear hashing: A new tool for file and table addressing. In Proceedings
_of the 6th Conference on Very Large Databases, pages 212–223, 1980._ 680
[17] R. Fagin, J. Nievergelt, N. Pippenger, and H. R. Strong. Extendible hashing:
a fast access method for dynamic files. ACM Transactions on Database Systems,
(315-344), 1979. 680, 681
[18] T. Stornetta and F. Brewer. Implementation of an Efficient Parallel BDD Package.
In Proceedings of the 33rd ACM/IEEE Design Automation Conference, 1996. 681
[19] P. Bagwell. Ideal Hash Trees. Technical report, Computer Science Department,
Ecole Polytechnique Federale de Lausanne, 2000. 681
[20] A. Pina, V. Oliveira, C. Moreira, and A. Alves. pCoR - a Prototype for Resource
Oriented Computing. (to appear in HPC 2002), 2002. 682
[21] A. Pina. MC² - Modelo de Computação Celular. Origem e Evolução. PhD thesis,
Dep. de Informática, Univ. do Minho, Braga, Portugal, 1997. 682
[22] Myricom. The GM Message Passing System, 2000. 682, 685
[23] B. Jenkins. A Hash Function for Hash Table Lookup. Dr. Dobb's Journal, 1997. 682,
686, 691
[24] A. V. Aho, R. Sethi, and J. D. Ullman. Compilers: Principles, Techniques and
_Tools. Addison–Wesley, 1985._ 682
[25] R. C. Uzgalis. General Hash Functions. Technical Report TR 91-01, University
of Hong Kong, 1991. 682
[26] W. Pugh. Skip Lists: A Probabilistic Alternative to Balanced Trees. Communications of the ACM, 33(6):668–676, 1990. 683
[27] D. Karger, A. Sherman, A. Berkheimer, B. Bogstad, R. Dhanidina, K. Iwamoto,
B. Kim, L. Matkins, and Y. Yerushalmi. Web Caching with Consistent Hashing.
In Proceedings of the 8th International WWW Conference, 1999. 691
| 9,376
|
{
"disclaimer": "Notice: The following paper fields have been elided by the publisher: {'references', 'abstract'}. Paper or abstract available at https://api.unpaywall.org/v2/10.1007/3-540-36569-9_46?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1007/3-540-36569-9_46, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": null,
"status": "CLOSED",
"url": ""
}
| 2002
|
[
"JournalArticle"
] | false
| 2002-06-26T00:00:00
|
[] | 9,376
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/00239b5c8b8458f15aabd9da3336dc99a3d81632
|
[
"Computer Science"
] | 0.816505
|
Software Speed Records for Lattice-Based Signatures
|
00239b5c8b8458f15aabd9da3336dc99a3d81632
|
Post-Quantum Cryptography
|
[
{
"authorId": "2955750",
"name": "Tim Güneysu"
},
{
"authorId": "1902820",
"name": "Tobias Oder"
},
{
"authorId": "2672355",
"name": "T. Pöppelmann"
},
{
"authorId": "1722449",
"name": "P. Schwabe"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"PQCrypto",
"Post-quantum Cryptogr"
],
"alternate_urls": null,
"id": "5dfe770f-9a03-43e4-aade-935a79ae6428",
"issn": null,
"name": "Post-Quantum Cryptography",
"type": "conference",
"url": null
}
| null |
# Software Speed Records for Lattice-Based Signatures
Tim Güneysu[1], Tobias Oder[1], Thomas Pöppelmann[1], and Peter Schwabe[2][ ⋆]
1 Horst Görtz Institute for IT-Security, Ruhr-University Bochum, Germany
2 Digital Security Group, Radboud University Nijmegen, The Netherlands
**Abstract. Novel public-key cryptosystems beyond RSA and ECC are**
urgently required to ensure long-term security in the era of quantum
computing. The most critical issue on the construction of such cryptosystems is to achieve security and practicability at the same time. Recently,
lattice-based constructions were proposed that combine both properties,
such as the lattice-based digital signature scheme presented at CHES
2012. In this work, we present a first highly-optimized SIMD-based software implementation of that signature scheme targeting Intel’s Sandy
Bridge and Ivy Bridge microarchitectures. This software computes a signature in only 634988 cycles on average on an Intel Core i5-3210M (Ivy
Bridge) processor. Signature verification takes only 45036 cycles. This
performance is achieved with full protection against timing attacks.
**Keywords: Post-quantum cryptography, lattice-based cryptography, cryp-**
tographic signatures, software implementation, AVX, SIMD
## 1 Introduction
Besides breakthroughs in classical cryptanalysis the potential advent of quantum computers is a serious threat to the established discrete-logarithm problem
(DLP) and factoring-based public-key encryption and signature schemes, such
as RSA, DSA and elliptic-curve cryptography. Especially when long-term security is required, all DLP or factoring-based schemes are somewhat risky to use.
The natural consequence is the need for more diversification and investigation
of potential alternative cryptographic systems that resist attacks by quantum
computers. Unfortunately, it is challenging to design secure post-quantum signature schemes that are efficient in terms of speed and key sizes. Those which
are known to be very efficient, such as the lattice-based NTRUSign [15], have
been shown to be easily broken [19]. Multivariate quadratic (MQ) signatures,
e.g., Unbalanced Oil and Vinegar (UOV), are fast and compact, but their public
keys are huge at around 80 kB and thus less suitable on embedded systems;
even with optimizations the keys are still too large (around 8 kB) [20].
_⋆_ This work was supported by the National Institute of Standards and
Technology under Grant 60NANB10D004. Permanent ID of this document:
`ead67aa537a6de60813845a45505c313`. Date: March 28, 2013
The introduction of special ring-based (ideal) lattices and their theoretical
analysis (see, e.g., [18]) provides a new class of signature and encryption schemes
with a good balance between key size, signature size, and speed. The speed advantage of ideal lattices over standard lattice constructions usually stems from
the applicability of the Number Theoretic Transform (NTT), which allows operations in quasi-linear runtime of O(n log n) instead of quadratic complexity.
In particular, two implementations of promising lattice-based constructions for
encryption [12] and digital signatures [14] were recently presented and demonstrate that such constructions can be efficient in reconfigurable hardware. However, as the proof-of-concept implementation in [12] is based on the generic NTL
library [22], it remains still somewhat unclear how these promising schemes perform on high-performance processors that include modern SIMD multimedia
extensions such as SSE and AVX.
**Contribution. The main contribution of this work is the first optimized soft-**
ware implementation of the lattice-based signature scheme proposed in [14].
It is an aggressively optimized variant of the scheme originally proposed by
Lyubashevsky [17] without Gaussian sampling. We use security parameters p =
8383489, n = 512, k = 2^14 that are assumed to provide an equivalent of about
80 bits of security against attacks by quantum computers and 100 bits of security against classical computers. With these parameters, public keys need only
1536 bytes, private keys need 256 bytes and signatures need 1184 bytes. On one
core of an Intel Core i5-3210M processor (Ivy Bridge microarchitecture) running
at 2.5 GHz, our software can compute more than 3900 signatures per second or
verify more than 55000 signatures per second. To maximize reusability of our
results we put the software into the public domain[3]. We will additionally submit
our software to the eBACS benchmarking project [4] for public benchmarking.
**Outline. In Section 2 we first provide background information on the imple-**
mented signature scheme. Our implementation and optimization techniques are
described in Section 3 and evaluated and compared to previous work in Section 4.
We conclude with future work in Section 5.
## 2 Signature Scheme Background
In this section we briefly revisit the lattice-based signature scheme implemented
in this work. For more detailed information as well as security proofs, please
refer to [14, 17].
**2.1** **Notation**
In this section we briefly recall the notation from [14]. We use a similar notation
and denote by R^{p^n} the polynomial ring Z_p[x]/⟨x^n + 1⟩ with integer coefficients in
the range [−(p−1)/2, (p−1)/2], where n is a power of two. The prime p must satisfy the
congruence relation p ≡ 1 (mod 2n) to allow us to use the quasi-linear-runtime
NTT-based multiplication. For any positive integer k, we denote by R_k^{p^n} the
set of polynomials in R^{p^n} with coefficients in the range [−k, k]. The expression
a ←$ D denotes the uniformly random sampling of a polynomial a from the set D.

3 The software is available at http://cryptojedi.org/crypto/#lattisigns
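For reference, a deliberately naive Python sketch of arithmetic in R^{p^n} is shown below (schoolbook multiplication in O(n^2); the actual software uses the NTT of Section 3):

```python
P, N = 8383489, 512  # the implemented parameters; note P = 1 (mod 2N)

def center(x, m):
    """Reduce x mod m into the centered range [-(m-1)/2, (m-1)/2] (m odd)."""
    x %= m
    return x - m if x > (m - 1) // 2 else x

def ring_add(a, b, p=P):
    return [center(x + y, p) for x, y in zip(a, b)]

def ring_mul(a, b, p=P, n=N):
    """Schoolbook multiplication in Z_p[x]/<x^n + 1> (O(n^2), illustration only)."""
    d = [0] * n
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < n:
                d[i + j] += ai * bj
            else:
                d[i + j - n] -= ai * bj  # reduce with x^n = -1
    return [center(c, p) for c in d]
```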
**2.2** **Definition**
According to the description in [14] we have chosen a to be a randomly generated
global constant. For the key generation described in Algorithm 1 we therefore
basically perform sampling of random values from the domain R_1^{p^n}, followed by a
polynomial multiplication with the global constant and an addition. The private
key sk consists of the values s1, s2, while t is the public key pk.
**Algorithm 1: Key generation algorithm GEN(p, n)**
**Input:** Parameters p, n
**Output:** (t)pk, (s1, s2)sk
**1** s1, s2 ←$ R_1^{p^n}
**2** t ← a·s1 + s2
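Algorithm 1 then amounts to two samplings and one multiply-accumulate. A toy version on top of the ring sketch from Section 2.1 (Python's random module is used purely for illustration; the real software derives randomness from Salsa20, cf. Section 3.1):

```python
import random  # NOT cryptographically secure; for illustration only

def keygen(a, p=P, n=N):
    """Toy GEN(p, n): sample s1, s2 from R_1 and compute t = a*s1 + s2."""
    s1 = [random.randint(-1, 1) for _ in range(n)]
    s2 = [random.randint(-1, 1) for _ in range(n)]
    t = ring_add(ring_mul(a, s1, p, n), s2, p)
    return t, (s1, s2)  # public key t, private key (s1, s2)
```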
Algorithm 2 signs a message m specified by the user. In step 1, two polynomials y1, y2 are chosen
uniformly at random with coefficients in the range [−k, k]. In step 2, a hash
function is applied to the higher-order bits of a·y1 + y2; it outputs a polynomial
c by interpreting the first 160 bits of the hash output as a sparse polynomial. In
steps 3 and 4, y1 and y2 are used to mask the private key by computing z1 and
z2. The algorithm only continues if z1 and z2 are in the range [−(k − 32), k − 32]
and restarts otherwise. The polynomial z2 is then compressed into z2′ in step 7
by Compress. This compression is part of the aggressive size reduction of the
signature σ = (z1, z2′, c), since only some portions of z2 are necessary to maintain
the security of the scheme. For the implemented parameter set, Compress has a
chance of failure of less than two percent, which results in a restart of the whole
signing process.
The verification algorithm VER, as described in Algorithm 3, first ensures that
all coefficients of z1, z2′ are in the range [−(k − 32), k − 32] and rejects the input
otherwise by returning b = 0 to indicate an invalid signature. In the next step,
a·z1 + z2′ − t·c is computed, transformed into the higher-order bits and then hashed.
If the polynomial c from the signature and the output of the hash match, the
signature is valid and the algorithm outputs b = 1 to indicate its success.
In Algorithm 4 the transformation of a polynomial into a higher-order representation is described. This algorithm exploits the fact that every polynomial
Y ∈ R^{p^n} can be written as

Y = Y^{(1)} · (2(k − 32) + 1) + Y^{(0)}
**Algorithm 2: Signing algorithm SIGN(s1, s2, m)**
**Input:** s1, s2 ∈ R_1^{p^n}, message m ∈ {0, 1}^*
**Output:** z1, z2′ ∈ R_{k−32}^{p^n}, c ∈ {0, 1}^160
**1** y1, y2 ←$ R_k^{p^n}
**2** c ← H(Transform(a·y1 + y2), m)
**3** z1 ← s1·c + y1
**4** z2 ← s2·c + y2
**5** if z1 or z2 ∉ R_{k−32}^{p^n} then
**6** go to step 1
**7** z2′ ← Compress(a·y1 + y2 − z2, z2, p, k − 32)
**8** if z2′ = ⊥ then
**9** go to step 1
**Algorithm 3: Verification algorithm VER(z1, z2′, c, t, m)**
**Input:** z1, z2′ ∈ R_{k−32}^{p^n}, t ∈ R^{p^n}, c ∈ {0, 1}^160, message m ∈ {0, 1}^*
**Output:** b
**1** if z1 or z2′ ∉ R_{k−32}^{p^n} then
**2** b ← 0
**3** else
**4** if c = H(Transform(a·z1 + z2′ − t·c), m) then
**5** b ← 1
**6** else
**7** b ← 0
where Y^{(0)} ∈ R_{k−32}^{p^n}, and thus every coefficient of Y^{(0)} is in the range
[−(k − 32), k − 32]. Due to this bijective relationship, every polynomial Y can also be
written as the tuple (Y^{(1)}, Y^{(0)}).
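In code, this decomposition, and hence the higher-order transformation of Algorithm 4 below, can be sketched as follows (a toy version; the production code performs it with vectorized floating-point instructions, cf. Section 3.2):

```python
K = 2**14  # the implemented parameter k

def transform(y, k=K):
    """Toy Transform(y, k): return the higher-order parts Y^(1).

    Each coefficient is written as y[i] = y1*(2(k-32)+1) + y0 with
    y0 in [-(k-32), k-32]; only y1 is kept.
    """
    m = 2 * (k - 32) + 1
    y1 = []
    for c in y:
        y0 = c % m
        if y0 > (m - 1) // 2:  # centre the low part into [-(k-32), k-32]
            y0 -= m
        y1.append((c - y0) // m)
    return y1
```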
Algorithm 5 describes the compression algorithm Compress, which takes a
polynomial y, a polynomial z with small coefficients, and the security parameter
k as well as p as input. It is designed to return a polynomial z′ that is compacted
but still maintains the equality between the higher-order bits of y + z and y + z′,
so that (y + z)^{(1)} = (y + z′)^{(1)}. In particular, the parameters of the scheme are
chosen in a way that the if-condition specified in step 3 is true only in rare
cases. This is important, since only values assigned to z′[i] in steps 6 to 12
can be efficiently encoded.
The hash function H maps an arbitrary-length input {0, 1}^* to a 512-coefficient
polynomial with 32 coefficients in {−1, 1} and all other coefficients zero. The
whole process of generating this string and its transformation into a polynomial
with the above-described character is shown in Algorithm 6. In step 1 the message is concatenated with a binary representation of the polynomial x, generated
by the algorithm BinRep.
**Algorithm 4: Higher-order transformation algorithm Transform(y, k)**
**Input:** y ∈ R^{p^n}, k
**Output:** y^{(1)}
**1** for i = 0 to n − 1 do
**2** y^{(0)}[i] ← y[i] mod (2(k − 32) + 1)
**3** y^{(1)}[i] ← (y[i] − y^{(0)}[i]) / (2(k − 32) + 1)
**4** return y^{(1)}
BinRep takes a polynomial x ∈ R^{p^n} as input and outputs a
(somehow standardized) binary representation of this polynomial. The 160-bit
hash value is processed by partitioning it into 32 blocks of 5 side-by-side bits (beginning with the lowest ones) that each correspond to a particular region in the
polynomial c. These bits are r4r3r2r1r0, where (r3r2r1r0)_2 represents the position
in the region, interpreted as a 4-bit unsigned integer, and the bit r4 determines
whether the value of the coefficient is 1 or −1.
**2.3** **Parameters and Security**
Parameters that offer a reasonable security margin of approximately 100 bits
of comparable classical symmetric security are n = 512, p = 8383489, and k =
2^14. This parameter set is the primary target of this work. For some intuition on
how these parameters were selected, how the security level has been computed,
for a second parameter set, and for a security proof in the random-oracle model we
refer again to [14].
In general, the security of the signature scheme is based on the Decisional
Compact Knapsack (DCK_{p,n}) problem and the hardness of finding a preimage in
the hash function. For solving the DCK problem one has to distinguish between
uniform samples from R^{p^n} × R^{p^n} and samples from the distribution (a, a·s1 + s2)
with a being chosen uniformly at random from R^{p^n} and s1, s2 being chosen
uniformly at random from R_1^{p^n}. In comparison to the Ring-LWE problem [18],
where s1, s2 are chosen from a Gaussian distribution of a certain range, this
just leads to s1, s2 with coefficients being either ±1 or zero. Therefore, the DCK
problem is an "aggressive" variant of the LWE problem, but it is not affected by the
Arora–Ge algorithm, as only one sample is given for the DCK problem and not the
required polynomially many [1]. Note also that extraction of the private key from
the public key requires solving the search variant of the DCK problem. In [14]
the hardness of breaking the signature scheme for the implemented parameter
set is computed based on the root Hermite factor of 1.0066 and is stated to provide
roughly 100 bits of security. Finding a preimage in the hash function has classical
time complexity 2^l, which is lowered to 2^{l/2} by Grover's quantum algorithm [13].
**Algorithm 5: Compression Algorithm Compress(y, z, p, k)**
**Input:** y ∈ R^{p^n}, z ∈ R_{k−32}^{p^n}, p, k
**Output:** z′ ∈ R_k^{p^n}
**1** uncompressed ← 0
**2** for i = 0 to n − 1 do
**3** if |y[i]| > (p − 1)/2 − k then
**4** z′[i] ← z[i]
**5** uncompressed ← uncompressed + 1
**6** else
**7** write y[i] = y[i]^{(1)}(2k + 1) + y[i]^{(0)} where −k ≤ y[i]^{(0)} ≤ k; if y[i]^{(0)} + z[i] > k then
**8** z′[i] ← k
**9** else if y[i]^{(0)} + z[i] < −k then
**10** z′[i] ← −k
**11** else
**12** z′[i] ← 0
**13** if uncompressed ≤ 6kn/p then
**14** return z′
**15** else
**16** return ⊥
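A direct toy transcription of Algorithm 5 in Python, returning None in place of ⊥, might look as follows:

```python
def compress(y, z, p, k):
    """Toy Compress(y, z, p, k): compact z into z' or fail with None (i.e. ⊥)."""
    n = len(y)
    zprime = [0] * n
    uncompressed = 0
    for i in range(n):
        if abs(y[i]) > (p - 1) // 2 - k:
            zprime[i] = z[i]          # steps 3-5: keep z[i] verbatim
            uncompressed += 1
        else:
            y0 = y[i] % (2 * k + 1)   # step 7: y[i] = y1*(2k+1) + y0, -k <= y0 <= k
            if y0 > k:
                y0 -= 2 * k + 1
            if y0 + z[i] > k:         # steps 7-12: clamp or zero the entry
                zprime[i] = k
            elif y0 + z[i] < -k:
                zprime[i] = -k
            else:
                zprime[i] = 0
    # steps 13-16: fail if too many coefficients were left uncompressed
    return zprime if uncompressed <= 6 * k * n // p else None
```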
As we use an output bit length of l = 160 for the hash function, the implemented
scheme achieves a security level of roughly 80 bits against attacks by
a quantum computer.
## 3 Software Optimization
In this section we show our approach to high-level optimization of algorithms
and low-level optimization to make best use of the target micro-architecture.
**3.1** **High-Level Optimization**
In the following we present high-level ideas to speed up the polynomial multiplication, improve the runtime behavior, and accelerate randomness generation.
**Polynomial multiplication.** In order to achieve quasi-linear speed in O(n log n)
when performing the essential polynomial-multiplication operation, we use the
Fast Fourier Transform (FFT) or, more specifically, the Number Theoretic Transform (NTT) [21]. The advantages offered by the NTT have recently been shown
by a hard- and software implementation of an ideal lattice-based public-key
cryptosystem [12]. The NTT is defined in a finite field or ring for a given
primitive n-th root of unity ω.
**Algorithm 6: Hash Function Invocation H(x, m)**
**Input:** Polynomial x ∈ R^{p^n}, message m ∈ {0, 1}^*, hash function H̃: {0, 1}^* → {0, 1}^160
**Output:** c ∈ R_1^{p^n} with at most 32 coefficients being −1 or 1
**1** r ← H̃(m ∥ BinRep(x))
**2** for i = 0 to n − 1 do
**3** c[i] ← 0
**4** for i = 0 to 31 do
**5** pos ← 8·r_{5i+3} + 4·r_{5i+2} + 2·r_{5i+1} + r_{5i}
**6** if r_{5i+4} = 0 then
**7** c[i·16 + pos] ← −1
**8** else
**9** c[i·16 + pos] ← 1
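The expansion of the 160-bit hash value into the sparse polynomial c (steps 2–9 of Algorithm 6) can be sketched as below; SHA-1 serves only as a stand-in 160-bit H̃, and the bit ordering inside the digest is an assumption of this sketch:

```python
import hashlib

def hash_H(x_binrep, m, n=512):
    """Toy H(x, m): 160-bit digest -> c with 32 coefficients in {-1, +1}."""
    r = hashlib.sha1(m + x_binrep).digest()        # stand-in for H~, 160 bits
    bits = [(byte >> j) & 1 for byte in r for j in range(8)]
    c = [0] * n
    for i in range(32):                            # 32 blocks of 5 bits each
        b = bits[5 * i : 5 * i + 5]                # r_{5i} ... r_{5i+4}
        pos = 8 * b[3] + 4 * b[2] + 2 * b[1] + b[0]  # position inside region i
        c[i * 16 + pos] = -1 if b[4] == 0 else 1
    return c
```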
The generic forward NTT_ω(a) of a sequence
{a_0, …, a_{n−1}} to {A_0, …, A_{n−1}} with elements in Z_p and length n is defined as
A_i = Σ_{j=0}^{n−1} a_j ω^{ij} mod p, for i = 0, 1, …, n − 1, with the inverse NTT_ω^{−1}(A) just
using ω^{−1} instead of ω.

For lattice-based cryptography it is also convenient that most schemes are
defined in Z_p[x]/⟨x^n + 1⟩ and require reduction modulo x^n + 1. As a consequence, let ω be a primitive n-th root of unity in Z_p and ψ^2 = ω. Then, when
a = (a_0, …, a_{n−1}) and b = (b_0, …, b_{n−1}) are vectors of length n with elements in
Z_p, let d = (d_0, …, d_{n−1}) be the negative wrapped convolution of a and b (thus
d = a·b mod x^n + 1). Let ā, b̄ and d̄ be defined as (a_0, ψa_1, …, ψ^{n−1}a_{n−1}),
(b_0, ψb_1, …, ψ^{n−1}b_{n−1}), and (d_0, ψd_1, …, ψ^{n−1}d_{n−1}). It then holds that d̄ =
NTT_ω^{−1}(NTT_ω(ā) ∘ NTT_ω(b̄)) [24], where ∘ denotes componentwise multiplication. This avoids the doubling of the input length of the NTT and also gives us
the modular reduction by x^n + 1 for free. If parameters are chosen such that n is a
power of two and p ≡ 1 (mod 2n), the NTT exists and the negative wrapped
convolution can be implemented efficiently.
In order to achieve high NTT performance, we precompute all constants
ω^i, ω^{−i}, ψ^i as well as n^{−1}·ψ^{−i} for i ∈ {0, …, n − 1}. The multiplication by n^{−1},
which is necessary in the NTT^{−1} step, is performed directly, as we just multiply
by n^{−1}·ψ^{−i}.
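Putting the pieces together, the negative wrapped convolution can be sketched as follows (with a deliberately naive O(n^2) transform for clarity; the n_inv_psi table corresponds to the precomputed constants n^{−1}·ψ^{−i} mentioned above):

```python
def ntt(a, omega, p):
    """Naive O(n^2) NTT: A_i = sum_j a_j * omega^(i*j) mod p."""
    n = len(a)
    return [sum(aj * pow(omega, i * j, p) for j, aj in enumerate(a)) % p
            for i in range(n)]

def negacyclic_mul(a, b, p, psi):
    """d = a*b mod (x^n + 1, p) via the negative wrapped convolution (p prime)."""
    n = len(a)
    omega = pow(psi, 2, p)
    abar = [(x * pow(psi, i, p)) % p for i, x in enumerate(a)]
    bbar = [(x * pow(psi, i, p)) % p for i, x in enumerate(b)]
    dbar = ntt([(x * y) % p for x, y in zip(ntt(abar, omega, p),
                                            ntt(bbar, omega, p))],
               pow(omega, p - 2, p), p)   # inverse transform, 1/n applied below
    # n^{-1} * psi^{-i}: inverses via Fermat's little theorem (p prime)
    n_inv_psi = [pow(n, p - 2, p) * pow(psi, (p - 2) * i, p) % p
                 for i in range(n)]
    return [(d * s) % p for d, s in zip(dbar, n_inv_psi)]

# Tiny self-check with p = 17, n = 4, psi = 2 (psi^4 = 16 = -1 mod 17):
# x * x^3 = x^4 = -1, i.e. [16, 0, 0, 0].
assert negacyclic_mul([0, 1, 0, 0], [0, 0, 0, 1], 17, 2) == [16, 0, 0, 0]
```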
**Storing parameters in NTT representation.** The polynomial a is used as
input to the key-generation algorithm and can be chosen as a global constant. By
setting ã = NTT(a) and storing ã, we just need to perform NTT^{−1}(ã ∘ NTT(y1)),
which consists of one forward transform, one pointwise multiplication and one backward transform. This is implemented in the poly mul a function and is superior
to the general-purpose NTT multiplication, which requires three transforms.
**Random polynomials.** During signature generation we need to generate two
polynomials with random coefficients uniformly distributed in [−k, k]. To obtain
these polynomials, we first generate 4·(n + 16) = 2112 random bytes using
the Salsa20 stream cipher [2] and a seed from the Linux kernel random-number
generator /dev/urandom. We interpret these bytes as an array of n + 16 unsigned
32-bit integers. To convert one such 32-bit integer r to a polynomial coefficient
c in [−k, k], we first check whether r ≥ (2k + 1)·⌊2^32/(2k + 1)⌋. If it is, we discard
this integer and move to the next integer in the array. Otherwise we compute
c = (r mod (2k + 1)) − k.

The probability that an integer is discarded is (2^32 mod (2k + 1))/2^32. For
our parameters we have 2^32 mod (2k + 1) = 4. The probability to discard a
randomly chosen 32-bit integer is thus 4/2^32 = 2^{−30}. The 16 additional elements
in our array (corresponding to one block of Salsa20) make it extremely unlikely
that we do not sample enough random elements to set all coefficients of the
polynomial. In this highly unlikely case we simply sample another 2112 bytes of
randomness.
During key generation we use the same approach to generate polynomials
with coefficients in {−1, 0, 1}. The difference is that we sample bytes instead of
32-bit integers. We again sample one additional block of Salsa20 output, now
corresponding to 64 additional elements. A byte is discarded only if its value is
255; the chance to discard a random byte is thus 2^{−8}.
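The conversion just described is a standard rejection-sampling step; a direct sketch:

```python
def coeffs_from_words(words, k=2**14):
    """Yield coefficients uniform in [-k, k] from uniform 32-bit words."""
    bound = (2 * k + 1) * (2**32 // (2 * k + 1))  # largest multiple of 2k+1 <= 2^32
    for r in words:
        if r >= bound:        # rejected: probability 4/2^32 = 2^-30 for k = 2^14
            continue
        yield (r % (2 * k + 1)) - k
```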
**3.2** **Low-Level Optimization**
The performance of the signature scheme is largely determined by a small set
of operations on polynomials with n = 512 coefficients over Z_p, where p is a 23-bit prime. This section first describes how we represent polynomials and what
implementation techniques we use to accelerate operations on these polynomials.
**Representation of polynomials. We represent each 512-coefficient polyno-**
mial as an array of 512 double-precision floating-point values. Each such array
is aligned on a 32-byte boundary, meaning that the address in memory is divisible by 32. This representation has the advantage that we can use the single-instruction multiple-data (SIMD) instructions of the AVX instruction-set extension in modern Intel and AMD CPUs. These instructions operate on vectors of
4 double-precision floats in 256-bit-wide, so-called ymm vector registers. These
registers and the corresponding AVX instructions can be found, for example, in
the Intel Sandy Bridge, Intel Ivy Bridge, and AMD Bulldozer processors. The
following performance analysis focuses on Ivy Bridge processors; Section 4 also
reports benchmarks from a Sandy Bridge processor.
Both Sandy Bridge and Ivy Bridge processors can perform one AVX double-precision vector multiplication and one addition every cycle. This corresponds
to 4 multiplications (vmulpd instruction) and 4 additions (vaddpd instruction)
of polynomial coefficients each cycle. However, arithmetic cost is not the main
bottleneck in our software as loads and stores are often necessary because only
64 polynomial coefficients fit into the 16 available ymm registers. The performance
of loads and stores is more complex to determine than arithmetic throughput.
In principle, the processor can perform two loads and one store every two cycles.
However, this maximal throughput can be reduced by bank conflicts. For details
see [10, Section 8.13].
-----
**Modular reduction of coefficients.** To perform a modular reduction of a
coefficient x, we first compute c = x · p^{−1}, then round c, then multiply c by p,
and then subtract the result from x. The first step uses a precomputed double-precision
approximation p^{−1} of the inverse of p. When reducing all coefficients of a polynomial, the multiplications and the subtraction are performed on four coefficients in
parallel with the vmulpd and vsubpd AVX instructions, respectively. The rounding is also done on four coefficients in parallel using the vroundpd instruction.
Note that depending on the rounding mode we can obtain the reduced value of
x in different intervals. If we perform a truncation we obtain x in [0, p − 1]; if we
round to the nearest integer we obtain x in [−(p − 1)/2, (p − 1)/2]. We only
need rounding to the nearest integer (vroundpd with rounding-mode constant
0x08). Both representations are required at different stages of the computation;
vroundpd supports choosing the rounding mode.
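In scalar Python, one lane of this vmulpd/vroundpd/vsubpd sequence reads as follows (a sketch; Python's round, like vroundpd mode 0x08, rounds to nearest):

```python
P = 8383489
P_INV = 1.0 / P          # precomputed double-precision approximation of 1/p

def fp_reduce(x):
    """Reduce x modulo P into [-(P-1)/2, (P-1)/2] with multiply/round/subtract."""
    c = round(x * P_INV)  # vroundpd, round to nearest (mode 0x08)
    return x - c * P      # vmulpd + vsubpd
```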
**Lazy reduction. The prime p has 23 bits. A double-precision floating-point**
value has a 53-bit mantissa and one sign bit. Even the product of two coefficients
does not use the whole available precision, so we do not have to perform modular
reduction after each addition, subtraction or even multiplication. We can thus
make use of the technique known as lazy reduction, i.e., of performing reduction
modulo p only when necessary.
**Optimizing the NTT. The most speed-critical operation for signing is poly-**
nomial multiplication and we can thus use the NTT transformation as described
above. We start from a standard fast iterative algorithm (see, e.g., [9]) for computing the FFT/NTT and adapt it to the target architecture. The transformation of a polynomial f with coefficients f0, …, f511 to or from NTT representation consists of an initial permutation of the coefficients followed by log2 n = 9
levels of operations on coefficients. On level 0, pick up f0 and f1, multiply f1
with a constant (a power of ω), add the result to f0 to obtain the new value of
_f0 and subtract the result from f0 to obtain the new value of f1. Then pick up_
_f2 and f3 and perform the same operations to find the new values for f2 and f3_
and so on. The following levels work in a similar way except that the distance
of pairs of elements that are processed together is different: on level i process
elements that are 2[i] positions apart. For example, on level 2 pick up and transform f0 and f4, then f1 and f5 etc. On level 0 we can omit the multiplication
by a constant, because the constant is 1.
The obvious bottleneck in this computation are the additions (and subtractions):
each level performs 256 additions and 256 subtractions, accounting for a total of
9·512 = 4608 additions and requiring at least 1152 cycles. In fact the lower bound on
cycles is much higher, because after each multiplication by a constant we need to
reduce the coefficients modulo p. This takes one vroundpd instruction and one
subtraction. The vroundpd instruction is processed in the same port as additions
and subtractions; we thus get a lower bound of (9·512 + 8·512)/4 = 2176 cycles.
To get close to this lower bound, we need to make sure that all the additions
can be efficiently processed in AVX instructions by minimizing overhead from
memory access, multiplications or vector-shuffle instructions.
Starting from level 2, the structure of the algorithm is very friendly for 4-way
vector processing: for example, we can load (f0, f1, f2, f3) into one vector register, load (f4, f5, f6, f7) into another vector register, load the required constants
(c0, c1, c2, c3) into a third vector register and then use one vector multiplication,
one vector addition and one vector subtraction to obtain (f0 + c0f4, f1 + c1f5, f2 +
c2f6, f3 + c3f7) and (f0 − c0f4, f1 − c1f5, f2 − c2f6, f3 − c3f7). However, on levels 0 and 1 the transformations are not that straightforwardly done in vector
registers. On level 0 we do the following: load (f0, f1, f2, f3) into one register;
perform vector multiplication of this register with (1, −1, 1, −1) and store the
result in another register; perform a vhaddpd instruction on these two registers,
which results exactly in (f0 + f1, f0 − f1, f2 + f3, f2 − f3). On level 1 we do
the following: load (f0, f1, f2, f3); multiply with a vector of constants and reduce the
result modulo p; use the vperm2f128 instruction with constant argument 0x01
to obtain (c2f2, c3f3, c0f0, c1f1) in another register and perform vector
multiplication of this register by (1, 1, −1, −1); add the result to (f0, f1, f2, f3)
to obtain the desired (f0 + c2f2, f1 + c3f3, f0 − c2f2, f1 − c3f3).
A remaining bottleneck is memory access. To minimize loads and stores, we
merge levels 0,1,2, levels 3,4,5 and levels 6,7,8. The idea is that on one level two
pairs of coefficients are interacting; through two levels it is 4-tuples of coefficients
that interact and through 3 levels it is 8-tuples of coefficients that interact. On
levels 0,1 and 2 we load these 8 coefficients; perform all transformations through
the 3 levels and store them again, then proceed to the next 8 coefficients. On
higher levels we load 32 coefficients, perform all transformations through 3 levels
on them, store them and then proceed to the next 32 coefficients.
In total, one NTT transformation takes 4484 cycles on the Ivy Bridge processor. This includes about 500 cycles for the initial coefficient permutation. We
are continuing to investigate the difference between the lower bound on cycles
dictated by vector additions and the cycles actually taken by our software.
**Addition and subtraction. Addition and subtraction of polynomials simply**
means loading coefficients, performing double-precision floating-point addition
or subtraction, and storing the result coefficient. This is completely parallel, so
we do this in 256 vector loads, 128 vector additions or subtractions, and 128
vector stores.
**Higher-order transformation.** The higher-order transformation described in
Algorithm 4 is a nice example of the power of representing polynomial coefficients
as double-precision floats: the only operation required is the multiplication by
the precomputed value (2(k − 32) + 1)^{−1} (a double-precision approximation of
this inverse) and a subsequent rounding towards the nearest integer. As for
the coefficient reduction, we perform these computations using the vmulpd and
vroundpd instructions.

## 4 Performance Analysis and Benchmarks
In this section we analyze the performance of our software and report benchmarks for key generation (crypto keypair), as well as the signing (crypto sign)
and verification (crypto sign open) algorithms. Our software implements the
eBATS API [4] for signature software, but we did not use SUPERCOP for benchmarking. The reason is that SUPERCOP reports the median of multiple runs
to filter out benchmarks that are polluted by, for example, an interrupt that
occurred during some of the computations. Considering the median of timings
when signing would be overly optimistic and cut off legitimate benchmarks of
signature generations that took very long because they required many attempts.
Therefore, for signing we report the average of 100000 signature generations; for
key-pair generation, verification and lower-level functions we report the median
of 1000 benchmarks. However, we will submit our software to eBACS for public
benchmarking and discuss the issue with the editors of eBACS. Note that our
software for signing is obviously not running in constant time but the timing
variation is independent of secret data; our software is fully protected against
timing attacks.
We performed benchmarks on two different machines:
**– a machine called h9ivy at the University of Illinois at Chicago with an Intel**
Core i5-3210M CPU (Ivy Bridge) at 2500 MHz and 4 GB of RAM; and
**– a machine called h6sandy at the University of Illinois at Chicago with an**
Intel Core i3-2310M CPU (Sandy Bridge) at 2100 MHz and 4 GB of RAM.
All software was compiled with gcc-4.7.2 and compiler flags `-O3 -msse2avx
-march=corei7-avx -fomit-frame-pointer`. During the benchmarks Turbo
Boost and hyperthreading were switched off. The performance results for the
most important operations are given in Table 1. The message length was 59
bytes for the benchmarking of crypto sign and crypto sign open.
**Table 1. Cycle counts of our software; n = 512 and p = 8383489.**
|Operation|Sandy Bridge cycles|Ivy Bridge cycles|
|---|---|---|
|crypto sign keypair|33894|31140|
|crypto sign|681500|634988|
|crypto sign open|47636|45036|
|ntt|4480|4484|
|poly mul|16052|16096|
|poly mul a|11100|11044|
|poly setrandom maxk|12788|10824|
|poly setrandom max1|6072|5464|
**Polynomial-multiplication performance.** The multiplication of two polynomials (poly mul) takes 16096 cycles on the Ivy Bridge. Out of those, 3·4484 =
13452 cycles are for 3 NTT transformations (ntt).

**Key-generation performance.** Generating a key pair takes 31140 cycles on
the Ivy Bridge. Out of those, 2·5464 = 10928 cycles are required to generate
two random polynomials (poly setrandom max1); 11044 cycles are required for a
multiplication by the constant system parameter a (poly mul a); the remaining
9168 cycles are required for one polynomial addition, compression of the two
private-key polynomials and packing of the public-key polynomial into a byte
array.
**Signing performance. Signing takes 634988 cycles on average on the Ivy**
Bridge. Each signing attempt takes 85384 cycles. We need 7 attempts on average,
so those attempts account for about 7·85384 = 597688 cycles; the remaining
cycles are required for constant overhead for extracting the private key from the
byte array, copying the message to the signed message etc. Some of the remaining cycles may also be due to some measurements being polluted as explained
above.
Out of the 85384 cycles for each signing attempt, 2·10824 = 21648 cycles are required to generate two random polynomials (poly setrandom maxk);
2·16096 = 32192 cycles are required for two polynomial multiplications; 11084
cycles are required for a multiplication with the system parameter a; the remaining 20460 cycles are required for hashing, the higher order transformation,
four polynomial additions, one polynomial subtraction and testing whether the
polynomial can be compressed.
**Verification performance. Verifying a signature takes 45036 cycles on the Ivy**
Bridge. Out of those, 16096 cycles are required for a polynomial multiplication;
11084 cycles are required for a multiplication with a; the remaining 17856 cycles
are required for hashing, the high-order transformation, a polynomial addition
and a polynomial subtraction, decompression of the signature, and unpacking of
the public key from a byte array.
**Comparison. As we provide the first software implementation of the signa-**
ture scheme we cannot compare our result to other software implementations.
In [14] only a hardware implementation is given which is naturally hard to compare to. For different types of FPGAs and parallelism, an implementation of
sign/verify of 931/998 (Spartan-6 LX16) up to 12627/14580 (Virtex-6 LX130)
messages/signatures per second is reported. However, the architecture is quite
different; in particular it uses a configurable number of high-clock-frequency
schoolbook multipliers instead of an NTT multiplier. The explanation for the
low verification performance on the FPGA, compared with the software implementation, is that only one such multiplier is used in the verification engine.
Another target for comparison is a recently reported implementation of an
ideal lattice-based encryption system in soft- and hardware [12]. In software,
the necessary polynomial arithmetic relies on Shoup’s NTL library [22]. Measurements confirmed that our basic arithmetic is faster than their prototype
implementation (although their parameters are smaller) as we can rely on AVX,
a hand-crafted NTT implementation and optimized modular reduction.
Various other implementations of post-quantum signature schemes have been
described in the literature and many of them have been submitted to eBACS [4].
In Table 2 we compare our software in terms of security, speed, key sizes and
signature size to the Rainbow, TTS, and C* (pFLASH) software presented in [8],
and the MQQ-Sig software presented in [11]. The cycle counts of these implementations are obtained from the eBACS website and have been measured on
the same Intel Ivy Bridge machine that we used for benchmarking (h9ivy). We
reference these implementations by their names in eBACS (in typewriter font)
and their corresponding paper. For most of these multivariate schemes, the signing performance is much better, verification performance is somewhat better,
but they suffer from excessive public-key sizes.
We furthermore compare to software described in the literature that has not
been submitted to eBACS, specifically the implementation of the parallel-CFS
code-based signature scheme presented in [16], the implementation of the treeless signature scheme TSS12 presented in [23], and the implementation of the
hash-based signature scheme XMSS [6]. For those implementations we give the
performance numbers from the respective paper and indicate the CPU used for
benchmarking. Parallel-CFS not only has much larger keys, signing is also several orders of magnitude slower than with the lattice-based signature software
presented in this paper. However, we expect that verification with parallel-CFS
is very fast, but [16] does not give performance numbers for verification. The TSS
software is using the scheme originally proposed in [17]. It makes an interesting
target for comparison as it is similar to our scheme but relies on weaker assumptions. However, the software is much slower for both signing and verification.
Hash-based signature schemes are also an interesting post-quantum signature
alternative due to their well understood security properties and relatively small
keys. However, the XMSS software presented in [6] is still an order of magnitude
slower than our implementation and produces considerably larger signatures.
Finally we include two non-post-quantum signature schemes in the comparison in Table 2. First, the Ed25519 elliptic-curve signature scheme [3] and second, RSA-2048 signatures based on the OpenSSL implementation (ronald2048).
Comparing to those schemes shows that our implementation and also most of
the multivariate-signature software can even be faster or at least quite comparable to established schemes in terms of performance. However, the key and
signature sizes of those two non-post-quantum signature schemes are not beaten by any
post-quantum proposal, yet.
Other lattice-based signature schemes that have a security reduction in the
standard model are given in [7] and [5]. However, those papers do not give
concrete parameters, security estimates or describe an implementation.
## 5 Future Work
As the initial implementation work has been carried out it is now necessary in
future work to evaluate the security claims of the scheme by careful cryptanalysis
and development of potential attacks. Especially, as the implemented scheme
relaxes some assumptions that are required for connection to worst-case lattice
problems, more confidence is needed for real-world usage.
**Table 2. Comparison of different post-quantum signature software; pk stands for**
public key; sk stands for private key. The sizes are given in bytes. All software was
benchmarked on h9ivy if not indicated otherwise.
|Software|Security|Cycles|Sizes|
|---|---|---|---|
|This work|100 bits|sign: 634988, verify: 45036|pk: 1536, sk: 256, sig: 1184|
|`mqqsig160` [11]|80 bits|sign: 1996, verify: 33220|pk: 206112, sk: 401, sig: 20|
|`mqqsig192` [11]|96 bits|sign: 3596, verify: 63488|pk: 333540, sk: 465, sig: 24|
|`mqqsig224` [11]|112 bits|sign: 3836, verify: 65988|pk: 529242, sk: 529, sig: 28|
|`mqqsig256` [11]|128 bits|sign: 4560, verify: 87904|pk: 789552, sk: 593, sig: 32|
|`rainbow5640` [8]|80 bits|sign: 53872, verify: 34808|pk: 44160, sk: 86240, sig: 37|
|`rainbowbinary16242020` [8]|80 bits|sign: 29364, verify: 17900|pk: 102912, sk: 94384, sig: 40|
|`rainbowbinary256181212` [8]|80 bits|sign: 33396, verify: 27456|pk: 30240, sk: 23408, sig: 42|
|`pflash1` [8]|80 bits|sign: 1473364, verify: 286168|pk: 72124, sk: 5550, sig: 37|
|`tts6440` [8]|80 bits|sign: 33728, verify: 49248|pk: 57600, sk: 16608, sig: 43|
|Parallel-CFS [16] (20, 8, 10, 3)|80 bits|sign: 4200000000^a, verify: -|pk: 20968300, sk: 4194300, sig: 75|
|TSS12 [23] (n = 512)|80 bits|sign: 93633000^b, verify: 13064000^b|pk: 13087, sk: 13240, sig: 8294|
|XMSS [6] (H = 20, w = 4, AES-128)|82 bits|sign: 7261100^c, verify: 556600^c|pk: 912, sk: 19, sig: 2451|
|`ed25519` [3]|128 bits|sign: 67564, verify: 209328|pk: 32, sk: 64, sig: 64|
|`ronald2048` (RSA-2048 based on OpenSSL)|112 bits|sign: 5768360, verify: 77032|pk: 256, sk: 2048, sig: 256|

_^a Benchmarked on an Intel Xeon W3670 (3.20 GHz)_
_^b Benchmarked on an AMD Opteron 8356 (2.3 GHz)_
Other future work is the investigation of efficiency on more constrained devices like ARM (which, in
some versions, also features a SIMD unit) or even low-cost 8-bit processors.
## Acknowledgments
We would like to thank Michael Schneider, Vadim Lyubashevsky, and the anonymous reviewers for their helpful comments.
## References
1. Sanjeev Arora and Rong Ge. New algorithms for learning in presence of errors. In
Luca Aceto, Monika Henzinger, and Jiri Sgall, editors, Automata, Languages and
_Programming, volume 6755 of Lecture Notes in Computer Science, pages 403–415._
Springer, 2011.
2. Daniel J. Bernstein. The Salsa20 family of stream ciphers. In Matthew J. B. Robshaw and Olivier Billet, editors, New Stream Cipher Designs – The eSTREAM Finalists, volume 4986 of Lecture Notes in Computer Science, pages 84–97. Springer,
2008.
3. Daniel J. Bernstein, Niels Duif, Tanja Lange, Peter Schwabe, and Bo-Yin Yang.
High-speed high-security signatures. _J. Cryptographic Engineering, 2(2):77–89,_
2012.
4. Daniel J. Bernstein and Tanja Lange. eBACS: ECRYPT benchmarking of cryptographic systems. http://bench.cr.yp.to (accessed 2013-01-25).
5. Xavier Boyen. Lattice mixing and vanishing trapdoors: A framework for fully
secure short signatures and more. In Phong Q. Nguyen and David Pointcheval, editors, Public Key Cryptography, volume 6056 of Lecture Notes in Computer Science,
pages 499–517. Springer, 2010.
6. Johannes Buchmann, Erik Dahmen, and Andreas Hülsing. XMSS - a practical
forward secure signature scheme based on minimal security assumptions. In Bo-Yin Yang, editor, Post-Quantum Cryptography, volume 7071 of Lecture Notes in
_Computer Science, pages 117–129. Springer, 2011._
7. David Cash, Dennis Hofheinz, Eike Kiltz, and Chris Peikert. Bonsai trees, or how
to delegate a lattice basis. In Henri Gilbert, editor, Advances in Cryptology –
_EUROCRYPT 2010, volume 6110 of Lecture Notes in Computer Science, pages_
523–552. Springer, 2010.
8. Anna Inn-Tung Chen, Ming-Shing Chen, Tien-Ren Chen, Chen-Mou Cheng, Jintai
Ding, Eric Li-Hsiang Kuo, Frost Yu-Shuang Lee, and Bo-Yin Yang. SSE implementation of multivariate PKCs on modern x86 CPUs. In Christophe Clavier and
Kris Gaj, editors, Cryptographic Hardware and Embedded Systems – CHES 2009,
volume 5747 of Lecture Notes in Computer Science, pages 33–48. Springer, 2009.
9. Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein.
_Introduction to Algorithms (3. ed.). MIT Press, 2009._
10. Agner Fog. The microarchitecture of Intel, AMD and VIA CPUs: An optimization
guide for assembly programmers and compiler makers, 2010. http://www.agner.org/optimize/microarchitecture.pdf (version 2012-02-29).
11. Danilo Gligoroski, Rune Steinsmo Ødegård, Rune Erlend Jensen, Ludovic Perret,
Jean-Charles Faugère, Svein Johan Knapskog, and Smile Markovski. MQQ-SIG –
an ultra-fast and provably CMA resistant digital signature scheme. In Liqun Chen,
Moti Yung, and Liehuang Zhu, editors, Trusted Systems, volume 7222 of Lecture
_Notes in Computer Science, pages 184–203. Springer, 2011._
12. Norman Göttert, Thomas Feller, Michael Schneider, Johannes Buchmann, and
Sorin A. Huss. On the design of hardware building blocks for modern lattice-based encryption schemes. In Emmanuel Prouff and Patrick Schaumont, editors,
_Cryptographic Hardware and Embedded Systems – CHES 2012, volume 7428 of_
_Lecture Notes in Computer Science, pages 512–529. Springer, 2012._
13. Lov K. Grover. A fast quantum mechanical algorithm for database search. In
Gary L. Miller, editor, Proceedings of the Twenty-Eighth Annual ACM Symposium
_on the Theory of Computing, Philadelphia, Pennsylvania, USA, May 22-24, 1996,_
pages 212–219. ACM, 1996.
14. Tim Güneysu, Vadim Lyubashevsky, and Thomas Pöppelmann. Practical lattice-based cryptography: A signature scheme for embedded systems. In Emmanuel
Prouff and Patrick Schaumont, editors, Cryptographic Hardware and Embedded
_Systems – CHES 2012, volume 7428 of Lecture Notes in Computer Science, pages_
530–547. Springer, 2012.
15. Jeffrey Hoffstein, Nick Howgrave-Graham, Jill Pipher, Joseph H. Silverman, and
William Whyte. NTRUSIGN: Digital signatures using the NTRU lattice. In Marc
Joye, editor, Topics in Cryptology – CT-RSA 2003, volume 2612 of Lecture Notes
_in Computer Science, pages 122–140. Springer, 2003._
16. Grégory Landais and Nicolas Sendrier. Implementing CFS. In Steven Galbraith
and Mridul Nandi, editors, Progress in Cryptology – INDOCRYPT 2012, volume
7668 of Lecture Notes in Computer Science, pages 474–488. Springer, 2012.
17. Vadim Lyubashevsky. Lattice signatures without trapdoors. In David Pointcheval
and Thomas Johansson, editors, Advances in Cryptology – EUROCRYPT 2012,
volume 7237 of Lecture Notes in Computer Science, pages 738–755. Springer, 2012.
18. Vadim Lyubashevsky, Chris Peikert, and Oded Regev. On ideal lattices and learning with errors over rings. In Henri Gilbert, editor, Advances in Cryptology –
_EUROCRYPT 2010, volume 6110 of Lecture Notes in Computer Science, pages_
1–23. Springer, 2010.
19. Phong Q. Nguyen and Oded Regev. Learning a parallelepiped: Cryptanalysis of
GGH and NTRU signatures. In Serge Vaudenay, editor, Advances in Cryptology
_– EUROCRYPT 2006, volume 4004 of Lecture Notes in Computer Science, pages_
271–288. Springer, 2006.
20. Albrecht Petzoldt, Enrico Thomae, Stanislav Bulygin, and Christopher Wolf. Small
public keys and fast verification for multivariate quadratic public key systems. In
Bart Preneel and Tsuyoshi Takagi, editors, Cryptographic Hardware and Embedded
_Systems – CHES 2011, volume 6917 of Lecture Notes in Computer Science, pages_
475–490. Springer, 2011.
21. John M. Pollard. The Fast Fourier Transform in a finite field. Mathematics of
_Computation, 25(114):365–374, 1971._
22. Victor Shoup. NTL: A library for doing number theory. http://www.shoup.net/ntl/ (accessed 2013-03-18).
23. Patrick Weiden, Andreas Hülsing, Daniel Cabarcas, and Johannes Buchmann.
Instantiating treeless signature schemes. IACR Cryptology ePrint Archive report
2013/065, 2013. http://eprint.iacr.org/2013/065.
24. Franz Winkler. Polynomial Algorithms in Computer Algebra (Texts and Monographs in Symbolic Computation). Springer, 1st edition, 1996.
| 13,152
|
{
"disclaimer": "Notice: The following paper fields have been elided by the publisher: {'abstract', 'references'}. Paper or abstract available at https://api.unpaywall.org/v2/10.1007/978-3-642-38616-9_5?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1007/978-3-642-38616-9_5, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": null,
"status": "CLOSED",
"url": ""
}
| 2013
|
[
"JournalArticle"
] | false
| 2013-06-04T00:00:00
|
[] | 13,152
|
en
|
[
{
"category": "Business",
"source": "s2-fos-model"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/00263200e98a945d5312e7bad59c774b640cbbe5
|
[] | 0.877005
|
A Private and Efficient Triple-Entry Accounting Protocol on Bitcoin
|
00263200e98a945d5312e7bad59c774b640cbbe5
|
Journal of Risk and Financial Management
|
[
{
"authorId": "2225541206",
"name": "Liuxuan Pan"
},
{
"authorId": "2238776151",
"name": "Owen Vaughan"
},
{
"authorId": "2238591015",
"name": "C. S. Wright"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"J Risk Financial Manag"
],
"alternate_urls": [
"http://nbn-resolving.de/urn/resolver.pl?urn=urn:nbn:ch:bel-318032"
],
"id": "5bccd387-3836-42f2-9b60-9c190333ae01",
"issn": "1911-8066",
"name": "Journal of Risk and Financial Management",
"type": "journal",
"url": "https://www.mdpi.com/journal/jrfm"
}
|
The ‘Big Four’ accountancy firms dominate the auditing market, auditing almost all the Financial Times Stock Exchange (FTSE) 100 companies. This leads to people having to accept auditing results even if they may be poor quality and/or for inadequate purposes. In addition, accountants may provide different auditing results with the same financial data. These issues are hard for regulators such as the Financial Reporting Council to identify because of insufficient resources or inconsistent compliance. In this paper, we proposed a triple-entry accounting protocol to allow users to report Bitcoin transactions to a third-party auditor to comply with regulations such as the travel rule. It allows the auditor to easily detect anomalies and identify the non-compliant parties, whilst the blockchain itself provides a transparent and immutable record of these anomalies. Despite building on a public ledger, our solution preserves privacy and offers an interoperability layer for information exchange. Merkle proofs were used to record non-compliant transactions whilst allowing compliant transactions to be pruned from an auditor’s active database.
|
*Journal of Risk and Financial Management*
*Article*
# **A Private and Efficient Triple-Entry Accounting Protocol** **on Bitcoin**
**Liuxuan Pan *, Owen Vaughan and Craig Steven Wright**
nChain Ltd., 30 Market Place, London W1W 8AP, UK; [email protected] (O.V.);
[email protected] (C.S.W.)
***** Correspondence: [email protected]
**Abstract:** The ‘Big Four’ accountancy firms dominate the auditing market, auditing almost all the
Financial Times Stock Exchange (FTSE) 100 companies. This leads to people having to accept auditing
results even if they may be poor quality and/or for inadequate purposes. In addition, accountants
may provide different auditing results with the same financial data. These issues are hard for
regulators such as the Financial Reporting Council to identify because of insufficient resources or
inconsistent compliance. In this paper, we proposed a triple-entry accounting protocol to allow users
to report Bitcoin transactions to a third-party auditor to comply with regulations such as the travel
rule. It allows the auditor to easily detect anomalies and identify the non-compliant parties, whilst the
blockchain itself provides a transparent and immutable record of these anomalies. Despite building
on a public ledger, our solution preserves privacy and offers an interoperability layer for information
exchange. Merkle proofs were used to record non-compliant transactions whilst allowing compliant
transactions to be pruned from an auditor’s active database.
**Keywords:** triple entry accounting; bitcoin; blockchain; privacy; auditing
**1. Introduction**
Triple Entry Accounting (TEA) is an innovative discovery in the field of accounting and
is considered an extension of double-entry accounting (Grigg 2005). Between 1995 and
1997, Grigg introduced the concept of triple-entry accounting, which combined financial
information from two companies into a single transaction receipt. This transaction receipt
includes cryptographic signatures and constitutes the origin of triple entry (Ibañez et al.
2023). Independently, in 1997, Boyle proposed the idea of a shared ledger, which allowed two
parties to communicate transactions in a single shared transaction repository. The two
streams converged into TEA in 2005. In traditional double-entry accounting, a receipt
for a financial transaction is issued by a central party, such as a bank, to commit the
transaction between a payer and a payee. Grigg questioned this traditional accounting
model, arguing that the central party has excessive power and that this could result in the
central party committing fraud using receipts (Simoyama et al. 2017). To mitigate this risk,
the TEA model was proposed to ensure that all involved parties receive the same receipt for
that financial transaction. Such a receipt includes all related parties' signatures to ensure
data integrity of the receipt.

The concept of TEA is sometimes confused with triple-entry bookkeeping (TEB).
Ibañez et al. classified the distinction as follows: bookkeeping is simply recording transactions
in sequence (Ibañez et al. 2021a), while accounting is the process of summarizing and
analysing company information based on bookkeeping to help the company make decisions
(Ibañez et al. 2021b). Thus, the definitions of bookkeeping and accounting are inherited by
TEA and TEB. TEB systems simply use the triple-entry method to record transactions, and
TEA systems add an accounting layer on top of TEB (Ibañez et al. 2021a). Additionally,
in Grigg's TEA, 'entry' represents a signature record, which is a signed message by a party,
or simply a signature, and TEA is a 'signature gathering process' (Ibañez et al. 2023).
Grigg’s TEA concept relies on a trusted third party with the shared ledger, and this
makes it challenging to implement in the real accounting world (Singh et al. 2021). With
the advent of Bitcoin (Nakamoto 2008), it becomes practicable as the Bitcoin blockchain
can replace the role of the trusted third party, making TEA increasingly viable. In other
words, Bitcoin simply uses the triple-entry method to record transactions. It is worth
noting that Bitcoin is a TEB system, but it can become a TEA by adding an accounting
layer on top (Ibañez et al. 2023). The accounting layer will record transactions in a systematic and controlled method to facilitate business events such as tax reporting and
invoicing. In general, unspent transaction output (UTXO)-based blockchains can be regarded as TEA examples (Grigg 2011). Blockchain-integrated TEA solutions improve the efficiency of processing data, reduce the risk of human error, enable fully automated auditing, and save time and costs in reporting, tax filing, payment, and compliance (Ibañez et al. 2021b; Faccia and Mosteanu 2019; Baba et al. 2021).
Not all blockchains can immediately enable TEA. Some of them need to run smart
contracts to enable a TEA system, for instance, the account-based blockchains like Ethereum
and managed ledgers such as Ripple (XRP ledger) (Grigg 2017). In addition, some existing
blockchain-based TEA systems are facing a scalability issue. To resolve this issue, these
systems propose a second layer or off-chain solution, e.g., the Request network (Request
2018), or to use a permissioned ledger, e.g., Hyperledger (Ibañez et al. 2021b). The Request
TEA system (Request 2018), built on top of the Ethereum blockchain, adopts the InterPlanetary File System to store data and partially uses the blockchain for time stamping. These solutions
can partially address the problem, but they still inherit the disadvantages of the adopted
ledgers, such as the poor stability of Ethereum and the low transparency of Ripple and
Hyperledger (Joseph et al. 2022).
Recent collapses of cryptocurrencies, such as the FTX collapse (Vidal-Tomás et al. 2023) and the Terra Luna crash (Liu et al. 2023), may affect people’s perception of blockchain technology’s capabilities and potential, especially when solutions are deployed on cryptocurrency-powered blockchains. While cryptocurrencies are the most well-known application of
blockchain technology, cryptocurrency collapses will not end blockchain itself. Blockchain
has proven valuable beyond cryptocurrencies and can continue to evolve in many industries
even if specific cryptocurrencies face challenges and different types of attacks appear
(Gountia 2019).
There are significant costs associated with auditing financial data. The UK publicly
listed companies paid more than £1bn to audit firms in 2021 (Financial Reporting Council
2022). It is anticipated by the Financial Reporting Council (FRC) that new auditing solutions
can reduce audit fees and improve the audit quality (Financial Reporting Council 2022).
Although blockchains are decentralised, they are by no means exempt from auditing and
the high associated costs. In fact, long-established regulations such as the *travel rule*, which
stipulates that transactions over a certain value must be reported to a financial authority,
are immediately applicable to Bitcoin and other decentralised cash systems, and, therefore,
an auditing system is required.
It was the goal of this paper to allow users of Bitcoin to be audited in a manner that
leverages the transparency and immutability of the blockchain whilst promoting on-chain
privacy. We carried this out by developing a TEA protocol on Bitcoin that is efficient
and practical. The starting point is to allow users to establish an off-chain link between
invoices and identity information with on-chain transactions used for payments. Users
then individually submit transactions to a third-party auditor and anomalies are detected
if one user submits a transaction that their counterparty does not submit. In this case, the
auditor can request the identity information of the non-compliant counterparty which is
provably linked to the transaction.
The advantages of our scheme are as follows.
*•* All transactions are automatically audited in real-time. The blockchain provides
transparency, immutability, and availability of transaction data.
*•* Our protocol is private in the sense that an adversary monitoring the blockchain will
learn nothing about users’ identities or the details of the invoices. This is because
identity and invoice information are linked to on-chain public keys in a manner that
cannot be inferred by inspecting public keys alone.
*•* Before a payment is made, the two transacting parties exchange identity information
that will be linked to a single on-chain transaction. Once the transaction is published on the blockchain, a user can use the identity information and a Simplified
Payment Verification (SPV) (Nakamoto 2008) proof to independently prove that their
counterparty has taken part in the transaction.
*•* After a predefined time period, e.g., one day, each user makes a commitment to the
third-party auditor of the transactions they have made. This commitment is stored on
the blockchain and, so, cannot be changed retrospectively.
*•* A Merkle root is used for the commitment of a user’s transactions to an auditor. This
makes it efficient for a user to prove that they have included a specific transaction in
the commitment when challenged. It is also private in the sense that the user does not
need to give information about any other transactions during such a challenge.
*•* If all users are compliant, then they are never asked to provide identity information
to the auditor. If one party is non-compliant, the compliant party can provide independent proof to the auditor of their own compliance and their counterparty’s
involvement in the transaction.
The paper is organized as follows: Section 2 provides an overview of Bitcoin as a TEB
system, the travel rule, and how identity can be linked to a public key but still preserve
privacy on the blockchain. In Section 3, we outline our invoice auditing protocol including
the method of embedding the invoice into the blockchain and verifying it. The protocol
also describes how the auditor can efficiently and automatically check the data integrity of
all related invoices. We end with a conclusion in Section 4.
**2. Preliminaries**
In this section, we discuss how Bitcoin can be interpreted as a TEB system, the travel
rule, and how identity can be linked to a public key (United States Department of the
Treasury Financial Crimes Enforcement Network 1997).
*2.1. Bitcoin and Triple Entry Bookkeeping*
Bitcoin is the first and most well-known distributed ledger. The auditing solution
in this paper is presented in terms of the original design of Bitcoin, which is currently
embodied by the Bitcoin Satoshi Vision (BSV) protocol. This design offers scalability, data
integrity, transparency, low cost, and high transaction throughput (Joseph et al. 2022). Grigg
stated that a signed receipt or invoice can be considered as a transaction recorded on a
shared transaction repository (Grigg 2005). Such an invoice transaction involves three
entities’ signatures and is used to refer to the payment event.
Figure 1 shows an example of Bitcoin performing as a TEB system. Suppose Alice and
Bob are two parties, and their payments are recorded in the Bitcoin TEB system. Invoices
are recorded in the form of Bitcoin transactions, and the associated debit and credit can be
traced with the related transaction. For instance, Alice pays Bob for the invoice IV_1 and records this payment on the blockchain using the transaction TXID_1. The transaction TXID_1 includes Alice’s signature associated with her credit I_A1, Alice’s public key linked to the invoice IV_1 and her debit C_A1, and Bob’s public key associated with his debit O_B1 and IV_1. All information from TXID_1 can be stored in Alice’s or Bob’s off-chain ledger in a consistent manner. Notably, the auditor does not need to access their off-chain ledgers but can track all records from the Bitcoin blockchain.
**Figure 1.** Bitcoin as a TEB system (processed by the authors with the help of PowerPoint, Tx# stands
for the transaction number).
*2.2. Travel Rule*
The Travel Rule (United States Department of the Treasury Financial Crimes Enforcement Network 1997) requires financial institutions to send the originator and beneficiary
information for each transaction over USD 3000 within the US, and over EUR 1000 in the
EU (European Union 2023; United States Department of the Treasury Financial Crimes
Enforcement Network 1997). It was extended by the Financial Action Task Force (FATF) in
2019 to include virtual assets (VA) and virtual asset service providers (VASP). FATF defines
VA as ‘the digital representation of value that can be digitally traded or transferred and can
be used for payment or investment purposes’, and VASP as ‘a business conducting one or
more of the following activities or operations for or on behalf of another natural or legal
person’ including ‘exchange between virtual assets and fiat currencies’, ‘exchange between
one or more forms of virtual assets’, and ‘transfer of virtual assets’ (Financial Action Task
Force 2019). Thus, we assume that Bitcoin transaction service providers such as wallets are
virtual asset service providers and must comply with the Travel Rule and FATF obligations.
*2.3. Identity-Linked Public Key*
Suppose Alice owns a wallet with a master public key PK_MA associated with her identity. This can be achieved by obtaining a digital certificate on PK_MA from a Certificate Authority (CA). However, the public key that Alice uses in a transaction, e.g., TXID_1, is PK_A, which is different from her master key. PK_A is typically derived from PK_MA in a deterministic way. For example, suppose we have Alice’s master public key PK_MA, Bob’s master public key PK_MB, and additional data m, such as an invoice or other metadata, known to both Alice and Bob. Then, Alice’s public key PK_A can be derived such that

PK_A = PK_MA + HMAC-256((V_MA × PK_MB), m) × G,  (1)

where HMAC refers to a Hash-based Message Authentication Code, used to verify the integrity and authenticity of messages, V_MA is the master private key corresponding to PK_MA, and G is the elliptic curve generator point. Note that V_MA × PK_MB = V_MB × PK_MA is a shared secret between Alice and Bob. A similar key PK_B can also be derived for Bob.
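To make the derivation concrete, the following is a minimal sketch of Equation (1) using the third-party Python `ecdsa` package. The variable names mirror the paper; the 32-byte x-coordinate encoding of the shared point and the helper name `derive_pk` are our own illustrative choices, not something the authors specify.

```python
# Sketch of Equation (1) over secp256k1 (pip install ecdsa).
import hashlib
import hmac
import secrets

from ecdsa import SECP256k1

G = SECP256k1.generator   # elliptic curve generator point
n = SECP256k1.order

def derive_pk(base_point, shared_point, m: bytes):
    """PK = base + HMAC-SHA256(shared_secret, m) * G, as in Equation (1)."""
    key = int(shared_point.x()).to_bytes(32, "big")   # encode shared point
    tweak = int.from_bytes(hmac.new(key, m, hashlib.sha256).digest(), "big") % n
    return base_point + G * tweak

# Master key pairs: V_M is the private scalar, PK_M = G * V_M.
V_MA = secrets.randbelow(n - 1) + 1
V_MB = secrets.randbelow(n - 1) + 1
PK_MA, PK_MB = G * V_MA, G * V_MB

m = b"invoice or other metadata known to both Alice and Bob"

# Both sides reach the same shared point: V_MA * PK_MB == V_MB * PK_MA.
pk_a_alice = derive_pk(PK_MA, PK_MB * V_MA, m)   # Alice's view
pk_a_bob = derive_pk(PK_MA, PK_MA * V_MB, m)     # Bob's view
assert pk_a_alice == pk_a_bob
# Only Alice can spend from PK_A: its private key is (V_MA + tweak) mod n.
```

An outside observer of the blockchain sees only PK_A, which is unlinkable to PK_MA without the shared secret.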
This enables both Alice and Bob to provide a provable link between PK_A and PK_MA, PK_MB, and m. However, without knowledge of how the key is derived, someone looking at transaction TXID_1 could not link the key to Alice. According to the FATF, Alice’s wallet
needs to provide the provable link between PK_A and PK_MA to Bob, and the same applies to Bob’s wallet. There are alternative approaches to linking identity and invoice data to a public key, which are explored in Section V of Benford’s Wallet (Tartan et al. 2022).
**3. Invoice Auditing Protocol**
In the auditing process, it is necessary to verify the accuracy and completeness of
invoices. However, invoice verification can be time-consuming, and it is not easy for
auditors to detect all invoices and mistakes related to these invoices. The traditional way
for the auditor is to randomly select a valid sample of invoices and detect the possible
mistakes from this sample. One blockchain solution has been provided to solve this issue
through publishing a blockchain transaction, which includes the hash values associated
with invoices (Vincent et al. 2020). However, this solution has a problem: invoices whose hash values are placed directly on the blockchain can be easily traced if the invoices are compromised. Our solution avoids this problem by not including any hash values on the blockchain, while still allowing stakeholders to verify the data integrity of the invoices.
We propose an invoice-auditing protocol on top of Bitcoin, which allows entities to
independently verify the invoices and auditors to efficiently match transactions associated
with those invoices. This protocol can improve the auditing process and save time for
auditors. Furthermore, this makes auditing automatic and checking all invoices possible
(instead of a random selection). An invoice auditing overview is given in Figure 2.
**Figure 2.** Bitcoin-based invoice-auditing protocol overview (processed by the authors with the help
of the program PlantText UML).
It is implemented in two stages: invoice verification and invoice audit.
*• Invoice verification—this makes the invoice verifiable by the transacting entities without disclosing information about the invoice on the blockchain;
*• Invoice audit—this allows the auditor to audit all invoices and the related payments in an efficient way.
*3.1. Invoice Verification*
Invoice verification refers to the process of reviewing and verifying invoices for
accuracy, completeness, and valid authorization from each party. The auditor needs
to check that invoices have been approved by appropriate parties and have not been
tampered with.
In our auditing model, we assumed that the invoice is recorded in a Bitcoin transaction
and is independently verifiable by entities. This section will introduce how an invoice is
embedded in the Bitcoin transaction and can be mutually authenticated, and then describe
how the auditor verifies the data integrity of the invoice based on the transaction.
Record and Sign Invoice
We suppose that Alice and Bob are the transaction-related parties. To comply with
Travel Rule and FATF regulations (United States Department of the Treasury Financial
Crimes Enforcement Network 1997; Financial Action Task Force 2019), they need to exchange information off-chain which provably links their identity to the transaction. For
example, we assume that they each have a well-known public key, denoted PK_AC and PK_BC respectively, used to identify each other and to establish an authenticated and confidential communication channel for exchanging invoices. However, it is worth noting that these two public keys, PK_AC and PK_BC, are never used to send or receive any Bitcoin payments. In other words, they will not appear on the blockchain.
Bob generates an invoice (IV) and signs it with the private key related to PK_BC. Here, we assume the Elliptic Curve Digital Signature Algorithm (ECDSA) over secp256k1 is used to sign the invoice, and the signature is denoted SIG_IV. The signed invoice indicates that Bob will provide the goods or services if Alice completes the payment for the invoice IV. Alice can verify SIG_IV with the given invoice and PK_BC. If SIG_IV is not valid, Alice will not make the payment. If SIG_IV is invalid because the given invoice is not the one signed by Bob, Alice can require Bob to resend a SIG_IV generated over the correct invoice.
If Alice requires amendments to the invoice, Bob updates the contents of the invoice and regenerates a digital signature for each new iteration of the invoice until both parties reach a final agreement. Having arrived at an agreement, Alice verifies the signature SIG_IV to ensure that Bob has signed the agreed invoice.
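As a sketch of this handshake (again with the `ecdsa` package; the invoice bytes and variable names are illustrative choices of ours):

```python
# Bob signs the invoice with his identity key; Alice verifies before paying.
import hashlib
from ecdsa import SigningKey, SECP256k1

sk_bc = SigningKey.generate(curve=SECP256k1)  # Bob's identity key pair (V_BC)
pk_bc = sk_bc.verifying_key                   # PK_BC, known to Alice

IV = b"Invoice IV_1: Bob supplies goods to Alice for amount x"
SIG_IV = sk_bc.sign(IV, hashfunc=hashlib.sha256)  # ECDSA over secp256k1

# verify() raises BadSignatureError if the invoice or signature was altered.
assert pk_bc.verify(SIG_IV, IV, hashfunc=hashlib.sha256)

# IV_Signed, the value fed into Equations (2) and (3) below:
IV_Signed = hashlib.sha256(IV + SIG_IV).digest()
```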
When Alice and Bob reach an agreement about the invoice, they create new public keys to be used in the transaction. These public keys are related to their identities and the invoice in the manner given in Equation (1). Concretely, Bob creates a public key PK_B to receive funds and Alice creates a public key PK_change to be used as a change address. These keys are calculated as follows:

PK_B = PK_BC + HMAC-256((V_BC × PK_AC), IV_Signed) × G,  (2)

PK_change = PK_AC + HMAC-256((V_AC × PK_BC), IV_Signed) × G,  (3)

where IV_Signed = SHA-256(IV || SIG_IV), and SHA-256 is a cryptographic hash function that outputs a fixed-length 256-bit hash value.
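Continuing the earlier sketches, Equations (2) and (3) amount to two more calls to `derive_pk` with IV_Signed as the message. The identity keys are regenerated here as raw scalars and points so the arithmetic is explicit; the names are ours.

```python
# Reuses derive_pk(), G, n, secrets and IV_Signed from the sketches above.
V_AC = secrets.randbelow(n - 1) + 1      # Alice's identity private scalar
V_BC = secrets.randbelow(n - 1) + 1      # Bob's identity private scalar
PK_AC, PK_BC_pt = G * V_AC, G * V_BC     # well-known identity points

shared = PK_BC_pt * V_AC                           # == PK_AC * V_BC
PK_B = derive_pk(PK_BC_pt, shared, IV_Signed)      # Equation (2), payment key
PK_change = derive_pk(PK_AC, shared, IV_Signed)    # Equation (3), change key
```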
Bob sends a payment transaction template containing PK_B to Alice. To complete the transaction, Alice adds her change address PK_change to the outputs, and a funding UTXO in the input along with a valid signature. (Note that the public key used in the input UTXO may be linked to Alice’s identity as well.)

The finalised transaction is displayed in Table 1. PK_A, described in Section 2.3, is used by Alice to make the payment, and SIG_A is the associated signature. The value x is the payment amount that Alice agrees to pay to Bob, and y is the change that Alice will receive after completing the payment.
Note that the invoice is embedded within the public keys used in the outputs of the above transaction, but it is not disclosed directly on the blockchain, either in its raw form or as a hash. Therefore, even if the invoice is leaked, it will be difficult to track the related transaction
without the invoice signature SIG_IV and the identity-related public keys. To ensure the relationship remains untraceable, signatures and public keys are not stored along with the invoice.
**Table 1.** A payment transaction sent from Alice to Bob (processed by the authors with the help
of Word).
|   | TXID_1 |   |   |
|---|---|---|---|
| **Inputs** |   | **Outputs** |   |
| Outpoint | Unlocking Script | Value | Locking Script |
| UTXO_A | <SIG_A> <PK_A> | x | OP_DUP OP_HASH160 <H(PK_B)> OP_EQUALVERIFY OP_CHECKSIG |
|   |   | y | OP_DUP OP_HASH160 <H(PK_change)> OP_EQUALVERIFY OP_CHECKSIG |
*3.2. Invoice Audit*
The auditor requests Bob to provide the following information to check the accuracy and completeness of the invoices: the shared secret V_BC × PK_AC, the invoice IV, SIG_IV, and TXID_1. The auditor can then verify SIG_IV against PK_BC. If SIG_IV is valid, the auditor generates the change public key using Equation (3) and compares it with PK_change in the locking script. If they match, the auditor can confirm that the invoice embedded into PK_change is the same as the invoice Bob provided to Alice. All invoices can be audited in this way and, more importantly, this can be carried out automatically. However, if checked only from Bob’s side, the auditor cannot be sure that Bob has provided all transactions and invoices related to Alice. Therefore, the auditor also requires Alice to report all related transaction IDs.
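A sketch of this automated per-invoice check, reusing `derive_pk` from the Section 2.3 sketch (extracting PK_change from the locking script of TXID_1 is assumed to happen elsewhere, and the function name is ours):

```python
import hashlib

def audit_invoice(pk_ac, shared_point, iv, sig_iv, pk_bc, pk_change_onchain):
    """Return True iff SIG_IV is valid and Equation (3) reproduces the
    on-chain change key; raises BadSignatureError on an invalid SIG_IV."""
    pk_bc.verify(sig_iv, iv, hashfunc=hashlib.sha256)   # step 1: check SIG_IV
    iv_signed = hashlib.sha256(iv + sig_iv).digest()
    # Step 2: recompute Equation (3) and compare with the on-chain change key.
    return derive_pk(pk_ac, shared_point, iv_signed) == pk_change_onchain
```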
Transaction Compliance
The first step is for the auditor to ask for a commitment from Alice and Bob as to the transactions they have reported. To make the process more efficient, the auditor can require, e.g., Bob to gather all his transactions over a regular period, e.g., one month, and construct a Merkle tree with Merkle root MR_B. That is, the auditor checks the equality of transactions in batches. The auditor also requires Bob to report MR_B using a Bitcoin transaction. As shown in Table 2, the report transaction specifies an output to the auditor’s public key PK_auditor and embeds MR_B as an `OP_RETURN` data payload. The value z is the dust value, which is the minimum amount accepted by Bitcoin nodes. We assume that PK_auditor is certified and given to Alice and Bob beforehand.
**Table 2.** A report transaction sent from Bob to Auditor (processed by the authors with the help
of Word).
|   | TXID_reportB |   |   |
|---|---|---|---|
| **Inputs** |   | **Outputs** |   |
| Outpoint | Unlocking Script | Value | Locking Script |
| UTXO′_B | <SIG′_B> <PK′_B> | z | OP_DUP OP_HASH160 <H(PK_auditor)> OP_EQUALVERIFY OP_CHECKSIG |
|   |   | 0 | OP_FALSE OP_RETURN <MR_B> |
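As a string-level illustration of the two outputs in Table 2 (a sketch under our own assumptions: the placeholder key, the 1-satoshi dust value, and the ASM string representation are illustrative, and `hashlib.new("ripemd160")` requires an OpenSSL build that provides RIPEMD-160; a real wallet would serialise raw script bytes):

```python
import hashlib

def hash160(pubkey: bytes) -> str:
    # RIPEMD-160 of SHA-256, the hash used in P2PKH locking scripts.
    return hashlib.new("ripemd160", hashlib.sha256(pubkey).digest()).hexdigest()

def p2pkh_script(pubkey: bytes) -> str:
    return f"OP_DUP OP_HASH160 {hash160(pubkey)} OP_EQUALVERIFY OP_CHECKSIG"

def op_return_script(payload: bytes) -> str:
    return f"OP_FALSE OP_RETURN {payload.hex()}"

pk_auditor = bytes.fromhex("02" + "11" * 32)           # placeholder compressed key
MR_B = hashlib.sha256(b"placeholder Merkle root").digest()

outputs = [
    {"value": 1, "script": p2pkh_script(pk_auditor)},   # dust amount z to auditor
    {"value": 0, "script": op_return_script(MR_B)},     # unspendable data carrier
]
```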
After receiving MR_B from TXID_reportB, the auditor calculates the Merkle root MR′_B of all TXIDs that Bob has sent that month, and checks that MR_B = MR′_B. If they are not equal, the auditor requires Bob to resubmit a new MR_B. The auditor also requires Alice to submit a similar report transaction including MR_A. The same process is applied to check that MR_A = MR′_A.
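The following is a minimal sketch of how MR_B could be built and later opened one TXID at a time. The single-SHA-256 tree and odd-node duplication are our own simplifications (Bitcoin's consensus Merkle tree uses double SHA-256 over little-endian TXIDs):

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def merkle_root(leaves):
    level = [h(l) for l in leaves]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Sibling hashes from leaf to root for leaves[index]."""
    level, proof = [h(l) for l in leaves], []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sib = index ^ 1
        proof.append((level[sib], sib < index))        # (hash, sibling-is-left)
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(leaf, proof, root):
    node = h(leaf)
    for sib, sib_is_left in proof:
        node = h(sib + node) if sib_is_left else h(node + sib)
    return node == root

txids = [f"txid-{i}".encode() for i in range(5)]
root = merkle_root(txids)                  # MR_B, embedded via OP_RETURN
proof = merkle_proof(txids, 3)
assert verify(txids[3], proof, root)       # auditor checks one TXID only
```

The proof for one TXID reveals only logarithmically many sibling hashes, which is what keeps a challenge private with respect to Bob's other transactions.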
The above step is intended to check that the auditor has accurately received all transactions that were reported individually by Alice and Bob. If this is the case,
the auditor can then check if the transactions match. Namely, the auditor should receive
the same transaction ID twice, one from Alice and the other from Bob. If a transaction ID
only appears once, then the auditor knows that someone has not reported their transaction.
If this is the case, the auditor asks the party who reported the transaction for the identity
and invoice information about the party who did not report the transaction. Recall that
this identity and invoice information is provably linked to the transaction, and available to
both parties.
For example, if there is a transaction reported by Alice and not by Bob, the auditor
asks Alice for the transaction, Bob’s identity information, and the invoice. The auditor can
then contact Bob with evidence of non-compliance and ask him for an explanation. In our
simple example, there are just two parties, Alice and Bob, and so, it is obvious who has
not reported their transaction. But it easily extends to multiple parties where it becomes
necessary for the auditor to specifically ask the compliant party who their non-compliant
counterparty was in the transaction.
**4. Conclusions**
This paper introduced a Bitcoin-based TEA protocol that allows transaction-related
parties to verify invoices and manage their off-chain ledger in a consistent manner. This
can reduce the risk of running fraudulent invoices. It also provides transparency and data
integrity of invoices to the auditor or tax regulator by embedding them into transactions
but not disclosing any information on the blockchain. The protocol adopted the Merkle
tree structure to consolidate related transactions from both parties. This enables auditors to
efficiently identify the non-compliant party.
Our TEA protocol only considered the example of one transaction per invoice, with the audit coupled to the payment method: parties wishing to use this protocol need to pay in satoshis. To improve the protocol, future work includes allowing parties to make payments in other ways and to use the blockchain only for auditability, and batching multiple invoices in a single transaction when payments are decoupled from the audit process.
**Author Contributions:** Conceptualization, L.P., O.V. and C.S.W.; methodology, L.P. and O.V.; software, L.P.; resources, L.P.; writing—original draft preparation, L.P.; writing—review and editing, O.V.;
visualization, L.P.; supervision, O.V.; project administration, L.P. All authors have read and agreed to
the published version of the manuscript.
**Funding:** This research received no external funding.
**Data Availability Statement:** No new data were created or analyzed in this study. Data sharing is
not applicable to this article.
**Conflicts of Interest:** The authors declare no conflict of interest.
**References**
Baba, Asif Iqbal, Subash Neupane, Fan Wu, and Fanta F. Yaroh. 2021. Blockchain in Accounting: Challenges and Future Prospects.
*International Journal of Blockchains and Cryptocurrencies* [2: 44–67. [CrossRef]](https://doi.org/10.1504/IJBC.2021.117810)
[European Union. 2023. Official Journal L 150/2023. Available online: https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/](https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=OJ:L:2023:150:FULL)
[?uri=OJ:L:2023:150:FULL (accessed on 26 June 2023).](https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=OJ:L:2023:150:FULL)
Faccia, Alessio, and Narcisa Roxana Mosteanu. 2019. Accounting and Blockchain Technology: From Double-Entry to Triple-Entry. *The*
*Business and Management Review* 10: 108–16.
Financial Action Task Force. 2019. Virtual Assets and Virtual Asset Service Providers. Available online: www.fatf-gafi.org (accessed on
27 June 2023).
[Financial Reporting Council. 2022. Competition in the Audit Market—A Policy Paper. Available online: https://www.frc.org.uk/](https://www.frc.org.uk/getattachment/83bb5ce5-891f-46b1-af84-799fd3d5ee39/Competition-in-the-audit-market-_-2022.pdf)
[getattachment/83bb5ce5-891f-46b1-af84-799fd3d5ee39/Competition-in-the-audit-market-_-2022.pdf (accessed on 27 June 2023).](https://www.frc.org.uk/getattachment/83bb5ce5-891f-46b1-af84-799fd3d5ee39/Competition-in-the-audit-market-_-2022.pdf)
Gountia, Debasis. 2019. Towards Scalability Trade-off and Security Issues in State-of-the-Art Blockchain. *ICST Transactions on Security*
*and Safety* [5: 157416. [CrossRef]](https://doi.org/10.4108/eai.8-4-2019.157416)
Grigg, Ian. 2005. *Triple Entry Accounting* [. Itasca: Systemics Inc., pp. 1–10. [CrossRef]](https://doi.org/10.13140/RG.2.2.12032.43524)
[Grigg, Ian. 2011. Is BitCoin a Triple Entry System? Available online: https://financialcryptography.com/mt/archives/001325.html](https://financialcryptography.com/mt/archives/001325.html)
(accessed on 27 August 2023).
[Grigg, Ian. 2017. EOS: An Introduction. Available online: http://iang.org/ (accessed on 26 June 2023).](http://iang.org/)
Ibañez, Juan Ignacio, Chris N. Bayer, Paolo Tasca, and Jiahua Xu. 2021a. Triple-Entry Accounting, Blockchain and next of Kin: Towards
[a Standardization of Ledger Terminology. Available online: https://ssrn.com/abstract=3760220 (accessed on 26 June 2023).](https://ssrn.com/abstract=3760220)
Ibañez, Juan Ignacio, Chris N. Bayer, Paolo Tasca, and Jiahua Xu. 2021b. The Efficiency of Single Truth: Triple-Entry Accounting.
[Available online: https://ssrn.com/abstract=3770034 (accessed on 26 June 2023).](https://ssrn.com/abstract=3770034)
Ibañez, Juan Ignacio, Chris N. Bayer, Paolo Tasca, and Jiahua Xu. 2023. REA, Triple-Entry Accounting and Blockchain: Converging
Paths to Shared Ledger Systems. *Journal of Risk and Financial Management* [16: 382. [CrossRef]](https://doi.org/10.3390/jrfm16090382)
Joseph, Daniel, Yuen Lo, Alessio Pagani, Liuxuan Pan, and Vlad Skovorodov. 2022. Chapter 1. Ledger Comparative Analysis. In
*Blockchain Technology: Advances in Research and Applications* [. Edited by Eva R. Porras. New York: Nova. [CrossRef]](https://doi.org/10.52305/RTZT8988)
Liu, Jiageng, Igor Makarov, and Antoinette Schoar. 2023. Anatomy of a Run: The Terra Luna Crash. Available online: http://www.nber.org/papers/w31160 (accessed on 27 August 2023).
Nakamoto, Satoshi. 2008. *Bitcoin: A Peer-to-Peer Electronic Cash System* . San Jose: Bitcoin.org.
Request. 2018. Whitepaper Request Network the Future of Commerce a Decentralized Network for Payment Requests. Available
[online: http://gavwood.com/paper.pdf (accessed on 27 June 2023).](http://gavwood.com/paper.pdf)
Simoyama, Felipe de Oliveira, Ian Grigg, Ricardo Luiz Pereira Bueno, and Ludmila Cavarzere De Oliveira. 2017. Triple Entry Ledgers with Blockchain for Auditing. *International Journal of Auditing Technology* [3: 163–83. [CrossRef]](https://doi.org/10.1504/IJAUDIT.2017.086741)
Singh, Kishore, Amlan Haque, Sabi Kaphle, and Janice Joowon Ban. 2021. Distributed Ledger Technology—Addressing the Challenges
of Assurance in Accounting Systems: A Research Note. *Journal of Accounting and Management Information Systems* 20: 646–69.
[[CrossRef]](https://doi.org/10.24818/jamis.2021.04004)
Tartan, Chloe Ceren, Wei Zhang, Owen Vaughan, and Craig Steven Wright. 2022. Benford’s Wallet. Paper presented at 1st Global
Emerging Technology Blockchain Forum: Blockchain & Beyond (iGETblockchain), Irvine, CA, USA, November 7–11.
[United States Department of the Treasury Financial Crimes Enforcement Network. 1997. Travel Rule. Available online: http:](http://www.fincen.gov)
[//www.fincen.gov (accessed on 27 June 2023).](http://www.fincen.gov)
Vidal-Tomás, David, Antonio Briola, and Tomaso Aste. 2023. FTX’s Downfall and Binance’s Consolidation: The Fragility of Centralized
Digital Finance, February. *arXiv* [arXiv:2302.11371. [CrossRef]](https://doi.org/10.1016/j.physa.2023.129044)
Vincent, Nishani Edirisinghe, Anthony Skjellum, and Sai Medury. 2020. Blockchain Architecture: A Design That Helps CPA Firms
Leverage the Technology. *International Journal of Accounting Information Systems* [38: 100466. [CrossRef]](https://doi.org/10.1016/j.accinf.2020.100466)
**Disclaimer/Publisher’s Note:** The statements, opinions and data contained in all publications are solely those of the individual
author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to
people or property resulting from any ideas, methods, instructions or products referred to in the content.
-----
| 8,847
|
{
"disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.3390/jrfm16090400?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.3390/jrfm16090400, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "CCBY",
"status": "GOLD",
"url": "https://www.mdpi.com/1911-8074/16/9/400/pdf?version=1694076850"
}
| 2,023
|
[
"JournalArticle"
] | true
| 2023-09-07T00:00:00
|
[
{
"paperId": "063bfa384acfe07ccce734e3b8b6a2bc92951baf",
"title": "Triple Entry Accounting"
},
{
"paperId": "34f07b9a356cdf04743c3be3436fd6a8cefaf943",
"title": "Anatomy of a Run: The Terra Luna Crash"
},
{
"paperId": "fef0f2f4f69c7d5f23034f08a43c6f98c9cd8561",
"title": "Ftx's Downfall and Binance's Consolidation: The Fragility of Centralized Digital Finance"
},
{
"paperId": "befbf45ae6fd4fbcfe629c8847c31e80d20cbdf2",
"title": "Distributed ledger technology - Addressing the challenges of assurance in accounting systems: A research note"
},
{
"paperId": "273280075c9677f1cd75c051d117fc52729c25ee",
"title": "The Efficiency of Single Truth: Triple-entry Accounting"
},
{
"paperId": "afae849082a359cea0f401587a25960cca1a251c",
"title": "Triple-entry Accounting, Blockchain and Next of Kin: Towards a Standardization of Ledger Terminology"
},
{
"paperId": "e61dbcf746794054e50852e3a79925110bc41de4",
"title": "Blockchain architecture: A design that helps CPA firms leverage the technology"
},
{
"paperId": "36b64785a56f053d5937ed7c5882c39ccdfa401a",
"title": "REA, Triple-Entry Accounting and Blockchain: Converging Paths to Shared Ledger Systems"
},
{
"paperId": "78fb1d0c962cbd1fde0b0abe0a3fab7155884ad5",
"title": "Towards Scalability Trade-off and Security Issues in State-of-the-art Blockchain"
},
{
"paperId": "3f8a81013647411b781690239f2fe9b4e60ffa79",
"title": "Triple entry ledgers with blockchain for auditing"
},
{
"paperId": "524b89e25d9058ee047de89f4c750217dc9ac853",
"title": "The Financial Action Task Force"
},
{
"paperId": null,
"title": "Travel Rule"
},
{
"paperId": null,
"title": "Financial Reporting Council. 2022."
},
{
"paperId": null,
"title": "Owen Vaughan"
},
{
"paperId": null,
"title": "Competition in the Audit Market — A Policy Paper"
},
{
"paperId": null,
"title": "Blockchain for Auditing. International"
},
{
"paperId": "44e43880fc15755da08ca2169165667b515278d4",
"title": "Blockchain in Accounting: Challenges and Future Prospects"
},
{
"paperId": "b09bbbe12d891b4945b77f3a59cb491447a8bea3",
"title": "Accounting and blockchain technology: from double-entry to triple-entry"
},
{
"paperId": "c7a922dffb06ee1a75e1a5527c557da21b3c1d90",
"title": "EOS-An Introduction"
},
{
"paperId": null,
"title": "Benford's Wallet. Paper presented at 1st Global Emerging Technology Blockchain Forum: Blockchain & Beyond (iGETblockchain)"
},
{
"paperId": "4e9ec92a90c5d571d2f1d496f8df01f0a8f38596",
"title": "Bitcoin: A Peer-to-Peer Electronic Cash System"
},
{
"paperId": "4b500b0ed89a2a503cdd67e9cec90424ee7ace7a",
"title": "in the European Union"
},
{
"paperId": null,
"title": "United States Department of the Treasury Financial Crimes Enforcement Network"
},
{
"paperId": null,
"title": "2022. Chapter 1. Ledger Comparative Analysis"
},
{
"paperId": null,
"title": "2022. Benford’s Wallet"
},
{
"paperId": null,
"title": "Is BitCoin a Triple Entry System"
}
] | 8,847
|
en
|
[
{
"category": "Economics",
"source": "s2-fos-model"
},
{
"category": "Business",
"source": "s2-fos-model"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/002691e54d58a6c55f5c3882f6c19760ca2e030e
|
[] | 0.918203
|
Investment in Cryptocurrencies
|
002691e54d58a6c55f5c3882f6c19760ca2e030e
|
International Journal of Health Sciences
|
[
{
"authorId": "3015349",
"name": "Archana Singh"
},
{
"authorId": "47706727",
"name": "A. Shukla"
}
] |
{
"alternate_issns": [
"0924-2287",
"2764-0159",
"1791-4299",
"1658-3639",
"2827-9603",
"2550-696X",
"2550-6978"
],
"alternate_names": [
"Int j health sci",
"International Journal of Health Science",
"International journal of health sciences",
"Int J Health Sci",
"International Journal Of Health Science"
],
"alternate_urls": [
"http://www.ijhs.org.sa/",
"https://www.ijhs.org.sa/index.php/journal",
"http://sciencescholar.us/journal/index.php/ijhs"
],
"id": "064cbda6-2af7-40cb-9a07-22d7899d8f57",
"issn": "2710-2564",
"name": "International Journal of Health Sciences",
"type": "journal",
"url": "https://interhealthjournal.page.tl/Home-IJHS.htm"
}
|
Technology has created a significant difference in people's lives due to the paradigm shift from offline to online activities. Cryptocurrency is digital money based on cryptographic encryption and electronic connectivity. It is one of the best inventions in the financial sector. Being a decentralised currency, it also opposes the intervention of central banks and the digital currencies they issue. It transforms the virtual trade market by introducing a free-rein trading mechanism that operates without the involvement and regulation of a third party. Digital currencies have become the need of the hour; thus, this paper compares the most prevalent cryptocurrencies in India on the basis of market capitalization. The paper also aims to study the key characteristics of these digital currencies.
|
**How to Cite:**
Singh, A., & Shukla, A. (2022). Investment in Cryptocurrencies: A comparative study. International
_[Journal of Health Sciences, 6(S1), 9950–9960. https://doi.org/10.53730/ijhs.v6nS1.7359](https://doi.org/10.53730/ijhs.v6nS1.7359)_
## Investment in Cryptocurrencies: A comparative study
**Dr. Archana Singh**
Assistant Professor, Department of Commerce and Business Administration,
University of Allahabad, Uttar Pradesh, India
**Ms. Aparna Shukla**
Research Scholar, Department of Commerce and Business Administration,
University of Allahabad, Uttar Pradesh, India
**_Abstract---Technology has created a significant difference in people's lives due to the paradigm shift from offline to online activities. Cryptocurrency is digital money based on cryptographic encryption and electronic connectivity. It is one of the best inventions in the financial sector. Being a decentralised currency, it also opposes the intervention of central banks and the digital currencies they issue. It transforms the virtual trade market by introducing a free-rein trading mechanism that operates without the involvement and regulation of a third party. Digital currencies have become the need of the hour; thus, this paper compares the most prevalent cryptocurrencies in India on the basis of market capitalization. The paper also aims to study the key characteristics of these digital currencies._**
**_Keywords---Cryptocurrency, Bitcoin, Ethereum, Tether._**
**Introduction**
_“The more you dig deeper into crypto the more you will discover you know little_
_about so many things in life.”_
**Olawale Daniel**
Cryptocurrency is one of the best inventions in the financial sector. It is digital money that was developed with the aim of controlling and protecting its transactions while keeping the user's identity hidden (Jani, 2018). The word cryptocurrency combines 'crypto' and 'currency', where 'crypto' refers to cryptography. Cryptography is an electronic technology used for privacy, information obfuscation, and authentication. Currency means money that is in circulation and legally acceptable.
Cryptocurrency evolved with the motive of being less expensive, more trustworthy, and more efficient than other prevailing currencies. The essence of cryptocurrency is that its locus of control lies in no single pair of hands; it is a freely moving currency, though issued in a definite quantity. It enables the transmission of digital, cost-free cryptocurrency units, often known as coins, between client programmes over a peer-to-peer network (Vejacka, 2014). Cryptocurrency does not need approval from a central bank for its issue, unlike a national currency, and, most interestingly, there are no intermediaries in its transactions: the whole control of the virtual currency rests with the holder. People can check its status and regulate its quantity on their own. There is no need for an intermediary in this system, and transactions are usually very cheap, simple, and quick (Li & Wang, 2016).
**Background of Cryptocurrency**
Satoshi Nakamoto, a pseudonym used by the creator, revolutionised the world of online payments in 2009 by launching the very first decentralised peer-to-peer payment system on the internet, under the name Bitcoin. The underlying technology is referred to as blockchain technology, and this digital currency works in a decentralised way compared to normal currencies, which implies that no competent authority is able to regulate and control its volume and the frequency with which it is transacted. Many attempts to create digital money were made in the 1990s, but they all failed (Mukhopadhyay et al. 2016). Having witnessed these failures, Satoshi attempted to create a decentralised digital cash system that shares data through a network, similar to peer-to-peer file sharing.
**Blockchain technology**
Cryptocurrencies are founded on blockchain technology. Blockchain is a type of shared database that differs from traditional databases in the manner data is stored: data is stored in blocks, which are then connected together via cryptography. Blockchains are distributed, usually decentralised digital ledgers without a central controlling body, and are fraud-proof and resistant to tampering. They allow a group of users to record transactions in a shared ledger within that group, with the result that no transaction can be modified once it has been published, as long as the blockchain network is operational. The information and history of cryptocurrency transactions are irreversible. In addition to these transactions, a blockchain can store a range of information, including legal contracts, state identifications, and a company's goods inventory.

A blockchain is the foundation of the Bitcoin network. It is important to note that Bitcoin only uses blockchain to create a transparent ledger of payments; however, blockchain can in principle be used to immutably record any number of data items. This can be used for various transactions, election votes, goods inventories, state identifications, home deeds, and much more.
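As a toy illustration of the block-linking just described (a sketch in Python; the field names and data are ours, not any real blockchain format):

```python
# Each block stores the hash of its predecessor, so altering any past block
# invalidates every later link in the chain.
import hashlib
import json

def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

chain = [{"index": 0, "prev_hash": "0" * 64, "data": "genesis"}]
for i, data in enumerate(["tx batch 1", "tx batch 2"], start=1):
    chain.append({"index": i, "prev_hash": block_hash(chain[-1]), "data": data})

# Verification: every block must point at the hash of the block before it.
assert all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
           for i in range(1, len(chain)))
```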
[Figure: structure of a blockchain, with blocks linked by cryptographic hashes. Source: Investopedia]
**Introduction of cryptocurrency in India**
India, a country striving to achieve global economic targets, is capable enough to adopt digital currencies. Cryptocurrency started flourishing in India around 2012, and attention to it increased with the passing years. After the Indian government's unexpected move of demonetisation, people were very insecure, and investing in cryptocurrency became a smart move for them amidst the chaos. A crash occurred in 2017 after the government raised concerns against the use of the technology and warned of the possibility of 'Ponzi scheme' fraud (Swetha & Meghashilpa, 2019). In 2018, there was a drastic change. In the budget speech for 2018-19, the then Finance Minister of India, Arun Jaitley, announced that the government does not consider cryptocurrencies legal tender. The government also stated that it would take all necessary measures to eliminate the use of cryptocurrencies from all activities. A ban on their use was then imposed by the RBI, considering the unregulated setup and its risks. In April 2018, the RBI issued a circular directing commercial and co-operative banks, payments banks, small finance banks, NBFCs, and payment system providers to refrain from virtual currency transactions and from giving services to institutions dealing with them. In this way, the cryptocurrency crash took place in India (Shakya et al., 2021).
In March 2020, the honourable Supreme Court declared the government ban on cryptocurrencies unlawful, holding the April 2018 circular unconstitutional. The Supreme Court cited the fact that
cryptocurrencies are unregulated but not illegal in India, which is one of the most important reasons for reversing the ban. In this way, the stagnant cryptocurrency market was revived.
At present, the central government is likely to propose "The Cryptocurrency and Regulation of Official Digital Currency Bill, 2021". The bill aims to outlaw all private cryptocurrencies in India, but it makes some exclusions in order to promote cryptocurrency's underlying technology platform and its users. The bill attempts to create a mechanism that will make it easier for the Reserve Bank of India to develop an official digital currency. It is one of 26 new bills set to be introduced in the upcoming Parliament session.
According to the Indian government, persons dealing with cryptocurrencies
should be cautious and vigilant because there is no legal protection for this type
of currency, and the government cannot assist people if they are victims of fraud
(Singh & Singh, 2018).
**Literature Review**
Swetha and Meghashilpa (2019) study the future of cryptocurrencies from the client perspective and look into clients' confidence in managing digital money when it is not fully regulated. The study reveals that digital money is quite likely to be the next stage of currency and that the absence of legality is regarded as the most serious worry for trading in digital currency. The study concludes that clients should be cautious using cryptocurrencies until they are more tightly regulated and monitored.
Shakya et al. (2021) perform a comparative study between China and India and evaluate the present position as well as the scope of cryptocurrency in India. The study finds that cryptocurrency users express more trust in digital payment systems than in traditional payment methods. The study concludes that people investing in cryptocurrencies should be careful because they are prone to negative effects until suitable regulations and legal protections are provided to users.
Ahamed and Hussain (2020) study the features of the top five cryptocurrencies, selected on the basis of market capitalization, and perform a comparative analysis over six months. The study shows how different cryptocurrencies fluctuated and were influenced by COVID-19. The absence of knowledge regarding trading parties is a fundamental issue that all cryptocurrencies confront, exposing investors to unforeseen hazards such as anti-money laundering violations and terrorism funding.
DeVries (2016) studies the concept of a cryptocurrency. The research work presents a SWOT analysis of Bitcoin and discusses some of the recent events and movements that impact the status of cryptocurrency. The study concludes that cryptocurrency appears to have progressed beyond the early adoption phase that new technologies go through. Bitcoin has begun to carve out a specialised market for itself, which may either help cryptocurrencies grow further into the mainstream or be the primary cause of their decline. Cryptocurrencies are still in their infancy, and it is hard to say whether they will ever become a true mainstream presence in the global market.
**Objectives of the study**

1) To study the features of the top cryptocurrencies selected on the basis of market capitalization.
2) To carry out a comparative analysis of the top three cryptocurrencies over six months.

**Research Methodology**

The study is descriptive research based on secondary data. The samples were selected on the basis of the availability of information on different websites; hence, convenience sampling was used.
Table 1
Ranking of the Cryptocurrencies Based on Market Capitalization
| Sr. No. | Cryptocurrency | Supply | Price | Market Capitalization (USD) | Market Capitalization (INR) |
|---|---|---|---|---|---|
| 1 | Bitcoin | 21 million | $40,431.13 | $768,848,803,274 | Rs. 58,687,459,311,989.66 |
| 2 | Ethereum | 120 million | $3,040.54 | $366,100,198,509 | Rs. 27,945,013,912,509.59 |
| 3 | Tether | 82.7 billion | $1.00 | $82,725,912,913 | Rs. 6,314,601,294,109.95 |

Source: www.coingecko.com (US $1 = Rs. 76.33)
**Comparative Analysis of Cryptocurrencies**
The top three cryptocurrencies were selected for the study on the basis of their market capitalization: Bitcoin first, Ethereum second, and Tether third. The data were collected for six months, from October 2021 to March 2022. The following analyses of the individual currencies are based on their price fluctuations. The data were taken from the CoinGecko website, retrieved on 16 April 2022.
Table 2
Price of Bitcoin from October 2021 to March 2022 (USD)

| Months | October | November | December | January | February | March |
|---|---|---|---|---|---|---|
| Price | 61,837.26 | 57,848.77 | 47,191.87 | 37,983.15 | 37,803.59 | 47,063.37 |
Figure 1: Price fluctuation in Bitcoin
**Bitcoin**
Bitcoin is digital money that functions as a global payment system. It is a decentralised digital currency that does not use the central bank system and has no single administrator. Through peer-to-peer networking, all digital currency transfers are completed without the use of a middleman. The transactions are confirmed by network protocols that use a particular type of cryptography, and a blockchain record is created in the publicly distributed ledger. In 2009, an unknown person or group of people released Bitcoin and produced the open-source software. New bitcoins are issued through the mining process, which is also a method of remunerating users. Bitcoin was one of the first digital currencies to make use of peer-to-peer technology to allow for instant transactions. Individuals and businesses who contribute computational power and participate in the Bitcoin network are known as miners. Bitcoin miners are responsible for processing transactions on the blockchain and are rewarded with fresh bitcoins and transaction fees paid in bitcoin.
Table 3
Price of Ethereum from October 2021 to March 2022 (USD)
|Months|October|November|December|January|February|March|
|---|---|---|---|---|---|---|
|Price|4324.61|4444.53|3714.95|2610.18|2629.48|3383.79|
Figure 2: Price Fluctuation in Ethereum
**Ethereum**
Ethereum is a platform that supports ether as well as a network of decentralised
apps, or dApps. Smart contracts, which emerged on the Ethereum platform, are
an integral part of the network's functionality. Smart contracts and blockchain
technology are used in many decentralised finance (DeFi) and other applications.
In 2013, Vitalik Buterin, who is credited with inventing the Ethereum concept,
released a white paper introducing Ethereum. Buterin and Joe Lubin, the
founder of the blockchain software start-up ConsenSys, established the
Ethereum platform in 2015. Beyond enabling safe virtual currency trade,
Ethereum's founders were among the first to contemplate the entire potential of
blockchain technology. Blockchain technology is being utilised to develop
applications that go beyond just facilitating the use of a digital currency.
Ethereum is the largest and most well-known open-ended decentralised software
platform, having been launched in July 2015. Ethereum proposed that
blockchain technology be used not only to maintain a decentralised payment
network, but also to store computer code that could be utilised to power tamperproof decentralised financial contracts and apps. The Ethereum network's
currency, ether, powers Ethereum apps and contracts. Ethereum is a
programmable blockchain that may be used for a variety of things, such as DeFi,
smart contracts, and NFTs.
Table 4
Price of Tether from October 2021 to March 2022 (USD)
|Months|October|November|December|January|February|March|
|---|---|---|---|---|---|---|
|Price|1.00|0.999671|1.00|0.998582|0.999390|0.999742|
Figure 3: Price fluctuation in Tether
**Tether**
Tether (USDT) is a stable coin with a price tied to $1.00. It is a blockchain-based
cryptocurrency whose tokens in circulation are backed by an equivalent quantity
of US dollars. Tether was created with the goal of providing consumers with
stability, transparency, and low transaction fees by bridging the gap between fiat
currencies and cryptocurrencies. It is tied to the US dollar and maintains a
value-to-value ratio of one-to-one with the US dollar. Tether began as RealCoin
in July 2014 and was rebranded as Tether in November by Tether Ltd., the firm
in charge of maintaining fiat currency reserve quantities. In February of 2015, it
began trading. Tether is beneficial to cryptocurrency investors since it allows
them to escape the high volatility of other cryptocurrencies. Additionally, using
USDT (rather than the US dollar) eliminates transaction charges and delays that
stymie trade execution in the crypto market.
Table 5
Comparative prices of the cryptocurrencies (USD) and their fluctuations from October 2021 to March 2022
|Months|October|November|December|January|February|March|
|---|---|---|---|---|---|---|
|Bitcoin|$61837.26|57848.77|47191.87|37983.15|37803.59|47063.37|
|Ethereum|4324.61|4444.53|3714.95|2610.18|2629.48|3383.79|
|Tether|1.00|0.999671|1.00|0.998582|0.999390|0.999742|
Figure 4: Comparative 6-Month Price Analysis
Figure 4 shows the fluctuations in the prices of the top three cryptocurrencies during the six months considered. In comparison to Bitcoin and Ethereum, Tether shows far less fluctuation, as can be seen from the graphs. Two major events impacted cryptocurrencies during this period, which is reflected in the price fluctuations. The first was the announcement by Tesla CEO Elon Musk that Tesla would no longer accept Bitcoin as payment for its products, citing environmental worries over Bitcoin's mining process. This news had a profound impact on the whole cryptocurrency market, and Bitcoin and Ethereum were the most affected cryptocurrencies. The second was a blow from China: all of China's banks and financial institutions were prohibited from providing clients with any cryptocurrency-related services, including coin offerings and transactions. This had a significant influence on people's investment decisions in cryptocurrencies, not only in India but all over the world. New information in any market brings changes in prices and transaction volumes, and the crypto market is no exception.
The slow rise in the prices of the top three cryptocurrencies can be seen in a positive way. Jump Crypto partner and DeFi Alliance founding partner Peter Johnson said that there would be favourable drivers for cryptocurrency in 2022. He believes that the macro inflationary backdrop is beneficial and that the billions of dollars in cash poised to be deployed into crypto hedge funds will also help the crypto ecosystem move forward.
Table 6
Feature Comparison of Bitcoin vs Ethereum vs Tether
| Basis | Bitcoin | Ethereum | Tether |
|---|---|---|---|
| Origin and originator | 2009, Satoshi Nakamoto | 2013, Vitalik Buterin | 2014, Brock Pierce, Reeve Collins and Craig Sellars |
| Symbol | ₿, BTC | Ξ, ETH | ₮, USDT |
| Transaction speed | About 10 minutes per transaction | About 5 minutes per transaction | About 1 minute per transaction |
| Scalability | Up to about 4.6 transactions per second | Roughly double Bitcoin, about 15 transactions per second | Up to about 10 transactions per minute |
| Circulating supply | ₿18,925,000 | 119 million | 83 billion |
| Maximum supply | 21 million | No upper limit | 82.73 billion |

Source: (Ahamed & Hussain, 2020)
**Conclusion**
As quoted by the founder of the Swedish Pirate Party, Rick Falkvinge, "Bitcoin will do to banks what email did to the postal industry"; this captures the new view of the crypto market. The inclination towards investment in cryptocurrencies has increased, and people are more drawn to the blockchain mechanism with no regulating authority. The market experienced a setback due to COVID-19, as expected, and the 2021 news crises also impacted it, with vivid fluctuations as shown in the study. Moreover, the absence of a regulating authority is a concern for investors, because their money is at stake with no single entity or person guaranteeing to pay it back if things do not work out. The Tesla decision also had a tacit aspect to not accepting bitcoin: there is no surety that it can be converted into cash. The rising trend of investment in cryptocurrencies should not overlook the limitations embedded in it. It can be concluded that investment decisions in cryptocurrencies should take into account the risks involved and the security of investors.
**References**
1. Ahamed, A., & Hussain, A. (2020). A Comparative Study of Top Five Digital Currencies in India: Cryptocurrencies. _International Journal of Science and Research, 9_(5), 1754–1760. https://doi.org/10.17492/mudra.v5i2.14328
2. Jani, S. (2018). The Growth of Cryptocurrency in India: Its Challenges & Potential Impacts on Legislation. ResearchGate publication.
3. Li, X., & Wang, C. (2016). The Technology and Economic Determinants of Cryptocurrency Exchange Rates: The Case of Bitcoin. _Decision Support Systems_.
4. Mukhopadhyay, U., Skjellum, A., Hambolu, O., Oakley, J., Yu, L., & Brooks, R. (2016, December). A brief survey of cryptocurrency systems. In _2016 14th Annual Conference on Privacy, Security and Trust (PST)_ (pp. 745–752). IEEE.
5. Shakya, V., Kumar, P. V. G. N. P., Tewari, L., & Pronika. (2021). Blockchain based Cryptocurrency Scope in India. _Proceedings - 5th International Conference on Intelligent Computing and Control Systems, ICICCS 2021_, 361–368. https://doi.org/10.1109/ICICCS51141.2021.9432143
6. Singh, A. K., & Singh, K. V. (2018). Cryptocurrency in India: Its Effect and Future on Economy with Special Reference to Bitcoin. _International Journal of Research in Economics and Social Sciences (IJRESS), 8_(3).
7. Swetha, I. K., & Meghashilpa, R. (2019). A Conceptual Study on Cryptocurrency: An Indian Perspective. _International Journal of Advance and Innovative Research, 6_(1), 1–6.
8. Thackeray, J. 5 Inherent Risks of Cryptocurrency. _Financial Executives International_.
9. Vejačka, M. (2014). Basic aspects of cryptocurrencies. _Journal of Economy, Business and Financing, 2_(2), 75–83.
10. https://finance.yahoo.com/news/crypto-crash-2022-top-10-203519771.html
-----
| 6,118
|
{
"disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.53730/ijhs.v6ns1.7359?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.53730/ijhs.v6ns1.7359, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": null,
"status": "GOLD",
"url": "https://sciencescholar.us/journal/index.php/ijhs/article/download/7359/3713"
}
| 2,022
|
[] | true
| 2022-05-14T00:00:00
|
[
{
"paperId": "565a25351068079499935ee4e0100e5626b3598d",
"title": "Blockchain based Cryptocurrency Scope in India"
},
{
"paperId": "f1906a7118c0a0851c1ad9f2e19fce1a2b851d26",
"title": "A brief survey of Cryptocurrency systems"
},
{
"paperId": "2928a6d520c1df1c62e95e63ded7005f93f78e61",
"title": "A Comparative Study on Top Five Digital Currencies in India: Cryptocurrencies"
},
{
"paperId": null,
"title": "A Conceptual Study on Cryptocurrency: An Indian Perspective"
},
{
"paperId": null,
"title": "The Growth of Cryptocurrency in India: Its Challenges & Potential Impacts on Legislation. Research gate publication"
},
{
"paperId": null,
"title": "Cryptocurrency in India-Its Effect and Future on Economy with Special Reference to Bitcoin"
},
{
"paperId": null,
"title": "The Technology and Economic Determinants of 82 IITM Journal of Management and IT Cryptocurrency Exchange Rates: The Case of Bitcoin"
},
{
"paperId": "1fd353d566e7229e9961326065a68e5e13aa9556",
"title": "Finance"
},
{
"paperId": null,
"title": "(5) Inherent Risks of Cryptocurrency"
},
{
"paperId": null,
"title": ") Inherent Risks of Cryptocurrency. Financial Executives International"
}
] | 6,118
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Engineering",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/0028396decb837338e69ed1149e115194e0748be
|
[
"Computer Science"
] | 0.892667
|
Enabling persistent queries for cross-aggregate performance monitoring
|
0028396decb837338e69ed1149e115194e0748be
|
IEEE Communications Magazine
|
[
{
"authorId": "49941940",
"name": "A. Mandal"
},
{
"authorId": "143859148",
"name": "I. Baldin"
},
{
"authorId": "1795167",
"name": "Yufeng Xin"
},
{
"authorId": "39332980",
"name": "Paul Ruth"
},
{
"authorId": "2754922",
"name": "Chris Heermann"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"IEEE Commun Mag"
],
"alternate_urls": [
"https://ieeexplore.ieee.org/servlet/opac?punumber=35"
],
"id": "a1b15bc8-157e-45a9-b4c8-8211f938775d",
"issn": "0163-6804",
"name": "IEEE Communications Magazine",
"type": "journal",
"url": "http://www.comsoc.org/commag"
}
| null |
# Enabling Persistent Queries for Cross-aggregate Performance Monitoring
### TR-13-01 Anirban Mandal, Ilia Baldine, Yufeng Xin, Paul Ruth, Chris Heerman April 2013
RENCI Technical Report Series
#### http://www.renci.org/techreports
-----
##### Enabling Persistent Queries for Cross-aggregate Performance Monitoring
###### Anirban Mandal, Ilia Baldine, Yufeng Xin, Paul Ruth, Chris Heerman Renaissance Computing Institute, UNC - Chapel Hill
###### Abstract
It is essential for distributed data-intensive applications
to monitor the performance of the underlying network,
storage and computational resources. Increasingly, distributed applications need performance information from
multiple aggregates, and tools need to take real-time
steering decisions based on the performance feedback.
With increasing scale and complexity, the volume and
velocity of monitoring data are increasing, posing scalability challenges. In this work, we have developed
a Persistent Query Agent (PQA) that provides real-time application and network performance feedback to
clients/applications, thereby enabling dynamic adaptations. PQA enables federated performance monitoring by interacting with multiple aggregates and performance monitoring sources. Using a publish-subscribe
framework, it sends triggers asynchronously to applications/clients when relevant performance events occur.
The applications/clients register their events of interest
using declarative queries and get notified by the PQA.
PQA leverages a complex event processing (CEP) framework for managing and executing the queries expressed
in a standard SQL-like query language. Instead of saving all monitoring data for future analysis, PQA observes
performance event streams in real-time, and runs continuous queries over streams of monitoring events. In this
work, we present the design and architecture of the persistent query agent, and describe some relevant use cases.
###### 1 Introduction
Advanced multi-layered networks make it possible to connect
widely distributed computational and storage resources
to scientific instruments to pursue the goals of data-driven computational science. The increasingly dynamic
behavior of networks and the new connectivity options
at different layers enabled by new technologies has revolutionized the way computational activities are struc
tured. They permit a move from static arrangements of
resources that persist over long periods of time to highly
dynamic arrangements that respond to the needs of the
scientific applications by dynamically provisioning necessary network and edge resources with some notion of
optimality. Critically, no resource provisioning and allocation mechanism can operate on behalf of the application unless it is capable of providing feedback to the application. The feedback describes the performance and
state of the allocated resources, and the performance of
the application on the allocated resources. Large ensembles of network, compute and storage resources inevitably experience performance degradations and failures, and applications must be informed about them. Providing feedback about resource performance to the application, enabling closed-loop feedback control and dynamic adjustments to resource allocations, is of utmost
importance. Many monitoring solutions exist today that
can provide such feedback including perfSONAR, Ganglia, MonALISA etc. However, presenting this information to an application in a sufficiently abstract and useful
fashion still remains a challenge.
The challenge is even greater when one has to monitor distributed infrastructure and distributed application
executions spanning multiple domains, and there is no
central point of control. In order to effectively analyze end-to-end bottlenecks with respect to several aspects of application execution (network congestion, high
latency, compute load, storage system bottlenecks), we
need a mechanism to federate performance information
from these diverse aggregates and derive useful insights
in an application specific manner. The focus should be on
gaining high-level insights important to application performance. This entails taking a cross-aggregate view of
computational, network and storage performance, gathering performance metrics (from several measurement
sources like perfSONAR services, network infrastructure
monitors, XMPP based monitoring entities, on-node performance information - OS, system, application counter
-----
data etc.) and reasoning about them in the context of a
particular application execution.
The volume and velocity of monitoring data are increasing rapidly with increased scale and complexity of
the substrate and increased availability of monitoring
data from various sources, each capable of generating
lots of monitoring data at a rapid rate. Often, monitoring
data is stored so that past performance can be analyzed after the fact. With high-volume performance monitoring data,
we can no longer afford to store all performance data for
post-processing and analysis. Since steady-state performance is seldom interesting, not all performance data
tends to be useful. Also, current applications and tools
managing application executions need dynamic real-time
feedback on application performance so as to enable real-time steering based on observed performance. We therefore
face scalability challenges in dealing with high-volume
performance data and an increasing need to provide real-time feedback to tools.
In this work, we address some of the above challenges.
We have developed a persistent query agent (PQA) that
enables persistent queries on application and system performance. Applications or clients managing application execution are able to express important performance
metrics, threshold conditions, or event condition combinations using declarative queries. PQA enables federated
performance monitoring by interacting with multiple aggregates and performance monitoring sources. By leveraging a publish-subscribe framework, it asynchronously
sends triggers to applications/clients when relevant performance events occur. The applications/clients register
their events of interest using queries and get notified by
the PQA when those events occur. Our work presents a
novel use of an open source complex event processing
(CEP) framework to manage and execute these queries
expressed in a standard SQL-like query language. Instead of saving all monitoring data for future analysis,
PQA observes performance event streams in real-time,
and runs continuous queries over streams of events generated from the various performance monitoring sources.
The remainder of the paper is structured as follows.
Section 2 describes related work. Sections 3 and 4
present the motivation, design and architecture of PQA.
Section 5 describes some relevant use cases and section 6
concludes the paper.
###### 2 Related Work
perfSONAR [13, 15, 4] offers a web-services based infrastructure for collecting and publishing network performance monitoring data. It consists of a protocol, architecture and set of tools developed specifically to work in a
multi-domain environment with the goal of solving end-to-end performance problems on network paths crossing
multiple domains. perfSONAR provides hooks for delivering performance measurements in federated environments. However, it is the responsibility of higher-level
tools to make use of perfSONAR data in a way relevant
to a particular distributed application.
There are several other multi-domain monitoring
tools. MonALISA [11] is a framework for distributed
monitoring. It consists of distributed agents that handle metric monitoring for each configured host at its site
and all the wide area links to other MonALISA sites.
MonALISA provides distributed registration and discovery, and is designed to easily integrate existing monitoring tools and procedures to provide metric information in a dynamic, customized way to other services or
clients. The underlying conceptual framework is similar
to that of perfSONAR. INTERMON [5] is another multi-domain network monitoring framework, which focuses
on inter-domain QoS monitoring and large scale network
traffic analysis. They model abstractions based on traffic and QoS parameter patterns and run simulations for
planning network configurations. Their approach is centralized, where flow, topology and test information are
collected and stored in a central location for running the
analysis. Other notable multi-domain network monitoring frameworks are ENTHRONE and EuQoS. In [3],
Belghith et al. present a case for a configurable multi-domain networking architecture, and discuss collaboration schemes used to select measurement points that participate in multi-domain monitoring, and to configure the
parameters of the measurement points selected.
OMF [8] provides a set of software services to run
repeatable experiments on network testbeds, and to
gather measurements from those experiments that are
potentially running across several domains. OMF enabled experiments can use the OMF measurement library
(OML) [14] to collect and store any type of measurements from applications. OML provides an API to add
user defined measurement points and to inject the measurement streams into the library. These streams are processed by the library as defined by the user, including
filtering etc. and results are pushed to local files, or to
OMF control servers that store the results in a database.
There has been some work on automated ways of using and analyzing perfSONAR data. OnTimeDetect [6]
does network anomaly detection and notification for
perfSONAR deployments. It enables consumers of perfSONAR measurements to detect network anomalies using sophisticated, dynamic plateau detection algorithms.
Pythia [9] is a data analysis tool that makes use of perfSONAR data to detect, localize and diagnose wide-area
network performance problems. Kissel et al. [10] have
developed a measurement and analysis framework to
automate troubleshooting of end-to-end network bottlenecks. They integrate measurements from network, hosts
-----
and application sources using a perfSONAR compatible
common representation, and an extensible session protocol for measurement data transport, which enables tuning of monitoring frequency and metric selection. They
leverage measurement data from perfSONAR, NetLogger traces and BLiPP for collecting host metrics.
###### 3 Persistent Query Agent (PQA)
Although there exist tools that analyze monitoring
data from multi-domain measurement sources, they are
mostly targeted toward solving one particular problem.
It is difficult to configure or customize these tools to diagnose cross-aggregate performance problems. Clients
can’t programmatically ask questions about metrics, nor
can they be automatically notified. Also, most of the
tools do an after-the-fact analysis to determine what went
wrong post-mortem, which might not always be possible given the proliferation of available monitoring data. The
requirements of applications and clients to obtain dynamic, real-time, cross-aggregate performance feedback
pose challenges not addressed by existing tools. So, we
have developed a persistent query agent (PQA) for providing real-time performance feedback to applications
or clients so as to enable steering. PQA interacts with
multiple aggregates and performance monitoring sources
and asynchronously sends triggers to applications/clients
when relevant performance events occur. The applications/ clients register their events of interest using queries
and get notified by the PQA when those events occur.
PQA doesn’t store monitoring data. It processes performance event streams in real-time using persistent client
queries.
PQA uses an off-the-shelf complex event processing (CEP) [12] engine for managing and executing the
queries expressed in a standard SQL-like query language. The queries enable expressing complex matching
conditions that include temporal windows, joining of different event streams, as well as filtering, aggregation, and
sorting. The CEP engine behaves like a database turned
upside-down. Queries “persist” in the CEP system. Data
or events are not stored, rather “watched” and analyzed
as they pass by. In the following sections, we present the
design, architecture and current implementation status of
the persistent query agent.
###### 4 PQA Architecture
There are various components of PQA as in Figure 1,
which are described in more detail in the following sections.
- Esper engine: This is the complex event processing engine that processes the performance measurement events injected into it, and triggers actions when queries get satisfied. The various PQA monitoring clients inject events into the Esper engine.
- Trigger listeners: They are responsible for publishing events of interest when a query is satisfied. Applications/clients that are interested in those events can subscribe to events of interest. Typically, events of interest would correspond to queries submitted by the applications. Applications would automatically be notified when such events occur.
- Query manager: It is responsible for managing application queries through an XML-RPC interface. Applications can register and delete queries with it. It injects new queries and associated triggers into the Esper engine.
- PQA monitoring clients: A perfSONAR web services (pS-WS) client obtains measurement data by querying available perfSONAR measurement archive (MA) services. This client injects event streams into the Esper engine. XMPP pubsub subscriber clients obtain measurement data by subscribing to pubsub nodes where measurements are published periodically. Whenever new items are published on the pubsub node, this client injects a corresponding event stream into the Esper engine.
###### 4.1 Esper Engine
Esper is a framework for performing complex event processing, available open source from EsperTech [7]. Esper enables rapid development of applications that process large volumes of incoming messages or events, regardless of whether incoming messages are historical or
real-time in nature. Esper filters and analyzes events
in various ways, and responds to conditions of interest with minimal latency. CEP delivers high-speed processing of many events, identifying the most meaningful events within the event cloud, analyzing their impact,
and taking subsequent action in real time. Some typical examples of applications of CEP are in finance (algorithmic trading, fraud detection, risk management), business process management and automation (process monitoring, reporting exceptions, operational intelligence),
network and application monitoring (intrusion detection, SLA monitoring), and sensor network applications
(RFID reading, scheduling and control of fabrication
lines) [7].
Relational databases or message-based systems such
as JMS make it very difficult to deal with temporal data
and real-time queries. By contrast, Esper provides a
higher abstraction and intelligence and can be thought of
as a database turned upside-down: instead of storing the
-----
Figure 1: PQA architecture
data and running queries against stored data, Esper stores queries and runs the data through them. The response
from the Esper engine is real-time, occurring when conditions
match user-defined queries. The execution model is
thus continuous, rather than evaluated only when a query is submitted. It is for this precise reason that we have chosen Esper as
our persistent query engine.
The Esper Event Processing Language (EPL) allows
registering queries into the engine. A listener class,
which is a plain Java object, is called by the engine
when the EPL condition is matched as events flow in.
The EPL enables to express complex matching conditions that include temporal windows, joining of different event streams, as well as filtering, aggregation, and
sorting. Esper EPL statements can also be combined together with “followed by” conditions, thus deriving complex events from simpler events. Events can be represented as Java classes, JavaBean classes, XML documents or java.util.Map, which promotes reuse of existing
systems acting as message publishers. Esper offers a mature API with features like
- Event stream processing and pattern matching - Esper provides (a) Sliding windows: time, length, sorted, ranked, accumulating, etc., (b) Named windows with explicit sharing of data windows between statements, (c) Stream operations like grouping, aggregation, sorting, filtering, merging, splitting or duplicating of event streams, (d) Familiar SQL-standard-based continuous query language using insert into, select, from, where, group-by, having, order-by, and distinct clauses, (e) Joins of event streams and windows, and so on. Esper provides logical and temporal event correlation, and pattern-matched events are provided to listeners.
- Event representations - Esper supports event-type inheritance and polymorphism as provided by the Java language, for Java object events as well as for Map-type and object-array type events. Esper events can be plain Java objects, XML, object-array (Object[]) and java.util.Map, including nested objects and hierarchical maps.
We have leveraged the Esper engine in our design of
PQA. The PQA monitoring clients construct simple Java
object based Esper events and inject them into the Esper engine. The Esper EPL queries concerning these
monitoring events are injected into the Esper engine by
the query management module. The trigger listeners are
registered with the Esper engine as callbacks for performance event triggers.
###### 4.2 XMPP Publish Trigger Listeners
The XMPP pubsub specification [1] defines an XMPP
protocol extension for generic publish-subscribe functionality. The protocol enables XMPP entities to create nodes (topics corresponding to relevant events) at a
pubsub service and publish information at those nodes;
-----
an event notification (with or without payload) is then
broadcast to all entities that have subscribed to the
node and are authorized to learn about the published information.
We have leveraged the XMPP pubsub mechanism to
publish triggers corresponding to events of interest, as
registered by client/application queries. UpdateListeners or trigger listeners are Esper entities that are invoked
when queries get satisfied. UpdateListeners are pluggable entities in the Esper system, which can peek into
event streams and are free to act on the values observed
on the event streams. There can be two types of UpdateListeners - (a) static UpdateListeners that are tightly
integrated with the server side of the Esper engine, and
(b) dynamic client side UpdateListeners that can be provided by clients any time and injected into the Esper
system. These ClientSideUpdateListeners can be tailored to queries of interest. When queries get registered
into the PQA, the pubsub node handle is passed back to
the client, and is used to seed the ClientSideUpdateListener. When the query gets satisfied, the ClientSideUpdateListener uses the pubsub node handle to publish values observed on the event streams. Depending on the
design of the ClientSideUpdateListener, it might choose
to apply any function (max, current, average etc.) on
these values, or ignore some of them. When new values are published on the pubsub nodes, the clients are
notified because they subscribe to the same pubsub node
handle. The clients/applications can take adaptation actions based on occurrences of event notifications. The
ClientSideUpdateListeners have publishing rights on the
pubsub nodes and the clients are granted subscribe rights
on the nodes. New ClientSideUpdateListeners can be
implemented using existing templates in a reasonably
straightforward manner, although the currently available
set of UpdateListeners, as implemented in PQA, are sufficient for simple use cases.
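The trigger path can be modeled without any XMPP machinery. The following Python sketch is a minimal in-process stand-in assuming only the publish-subscribe pattern described above: `publish` plays the role of a ClientSideUpdateListener pushing to a pubsub node, and `subscribe` plays the role of a client registering interest in a node handle. The real PQA uses XMPP pubsub nodes, not this toy broker.
```
from collections import defaultdict
from typing import Any, Callable

class ToyPubSub:
    """In-process stand-in for an XMPP pubsub service (illustration only)."""

    def __init__(self) -> None:
        self._subscribers = defaultdict(list)  # node handle -> callbacks

    def subscribe(self, node: str, callback: Callable[[Any], None]) -> None:
        self._subscribers[node].append(callback)

    def publish(self, node: str, payload: Any) -> None:
        # Notify every subscriber of this node, as an event notification would.
        for cb in self._subscribers[node]:
            cb(payload)

broker = ToyPubSub()
broker.subscribe("query-42", lambda p: print("trigger received:", p))
# An update listener would publish when its query is satisfied:
broker.publish("query-42", {"avgMemUtil": 83.5})
```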
###### 4.3 Query management
In the PQA architecture, the clients or applications are
interested in specific patterns of events. They might be
interested in events where values of certain metrics exceed or drop below a threshold, or where a complex condition is met with respect to values of multiple metrics.
PQA allows the clients/applications to express these in
terms of queries into the PQA system.
PQA exposes a simple API for registering and deleting
such queries. The current implementation uses a simple
XML-RPC mechanism to expose this API to the clients.
The clients/applications can register their queries of interest with PQA and PQA provides a pubsub node handle
to the clients corresponding to the registered query. The
query management system in PQA hashes these queries
and pushes them onto the Esper engine for continuous
monitoring of event streams. The queries are injected
using a management interface provided by Esper. The
clients/applications can then subscribe to the provided
pubsub node handle and be notified by the XMPP pubsub
mechanism when their queries get satisfied. The query
management system is responsible for managing queries
from multiple clients. Although not implemented in the
current prototype, query management can be extended to
handle client authentication over SSL using certificates,
as implemented in a separate context by the same authors
[2].
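As a concrete client-side view of this interface, here is a hedged sketch using Python's standard xmlrpc.client module. The endpoint URL and the registerQuery/deleteQuery method names are placeholders: the report specifies an XML-RPC interface but not its exact signatures.
```
import xmlrpc.client

# Placeholder endpoint; the report does not publish the real PQA address.
pqa = xmlrpc.client.ServerProxy("http://pqa.example.org:8000/")

epl = ("select avg(memutil) as avgMemUtil "
       "from MemUtilEvent.win:time(20 sec) "
       "having avg(memutil) > 70")

# Hypothetical method names: register the query, receive a pubsub node handle.
node_handle = pqa.registerQuery(epl)
print("subscribe to pubsub node:", node_handle)

# ... later, when notifications are no longer needed:
pqa.deleteQuery(node_handle)
```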
In PQA, the queries are expressed using the Esper
Event Processing Language (EPL), which is a declarative language for dealing with high frequency time-based
event data. EPL statements derive and aggregate information from one or more streams of events, to join or
merge event streams, and to feed results from one event
stream to subsequent statements. EPL is similar to SQL
in it’s use of the “select” clause and the “where” clause.
However EPL statements use event streams and views
instead of tables. Similar to tables in an SQL statement,
views define the data available for querying and filtering.
Views can represent windows over a stream of events.
Views can also sort events, derive statistics from event
properties, group events or handle unique event property
values.
The following is an example EPL statement that computes the average memory utilization on a node for the
last 20 seconds and generates an event of interest when
the average memory utilization exceeds 70%. Note that the aggregate condition is expressed in a having clause: as in SQL, a select alias of an aggregate cannot be referenced in the where clause.
```
"select avg(memutil) as avgMemUtil
from MemUtilEvent.win:time(20 sec)
where avgMemUtil > 70"
```
When a client registers a query with PQA, it is coupled
with a ClientSideUpdateListener that publishes relevant
metrics from the event stream when the query is satisfied. In the previous example, the ClientSideUpdateListener may choose to publish the avgMemUtil value, or
the instantaneous value that triggered the threshold to go
above 70.
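To see what this continuous query computes, the following self-contained Python sketch emulates the 20-second sliding time window with an average threshold. It illustrates the EPL semantics only; it is not Esper's API, and the event values are made up for the demonstration.
```
import time
from collections import deque

class SlidingAvgQuery:
    """Emulates: select avg(memutil) from MemUtilEvent.win:time(20 sec)
    having avg(memutil) > 70 -- calling a listener on each match."""

    def __init__(self, window_sec=20.0, threshold=70.0, listener=print):
        self.window_sec = window_sec
        self.threshold = threshold
        self.listener = listener
        self.events = deque()  # (timestamp, memutil) pairs

    def on_event(self, memutil, now=None):
        now = time.time() if now is None else now
        self.events.append((now, memutil))
        # Expire events that have slid out of the time window.
        while self.events and self.events[0][0] < now - self.window_sec:
            self.events.popleft()
        avg = sum(v for _, v in self.events) / len(self.events)
        if avg > self.threshold:
            self.listener({"avgMemUtil": avg})

q = SlidingAvgQuery()
for sample in (50, 65, 80, 95):  # rising memory utilization; fires on the last
    q.on_event(sample)
```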
A more complex example would be a query using joins
of several performance metrics from multiple domains.
```
"select
b.metricName as metricName1, b.metricValue as metricValue1,
m.metricName as metricName2, m.metricValue as metricValue2
from
BWUtilization.win:length(1) as b,
MemoryUtilization.win:length(1) as m
where b.metricValue > 1.40012E9 and m.metricValue > 70"
```
Here the query concerns instantaneous metric values for
bandwidth between two end points and memory utilization at an endpoint. The trigger is raised when both the
conditions are met.
-----
###### 4.4 PQA Monitoring Clients
Distributed application execution entails cross-aggregate
performance monitoring because a global insight is required to identify performance bottlenecks. It is important to monitor the performance of not only the system and network entities in the different aggregates,
but also specific application performance metrics as observed when applications are executing. One of the goals
of the PQA tool is to be able to gather these diverse
performance metrics from multiple measurement sources
belonging to different aggregates. This makes it possible to correlate and filter different observed metrics in an
application specific manner through use of queries into
PQA.
To this end, PQA includes different monitoring clients
that continuously gather data from different sources, both system- and application-specific. The monitoring clients
follow a simple design. They interact with measurement
sources using their respective native APIs, and collect
the metric data. They then construct Esper events corresponding to the observed metric and push event streams
into the Esper engine. As of the current implementation,
PQA includes perfSONAR and XMPP-based clients. It
is possible to add new kinds of monitoring clients.
**4.4.1** **perfSONAR clients**
The perfSONAR service responsible for storing measurement information is called a measurement archive
(MA). MAs contain two types of information: data and
metadata. Data represents the stored measurement results, which are mostly obtained by perfSONAR measurement points (MP). This includes active measurements such as bandwidth and latency, and passive measurements such as SNMP counter records. Metadata is
an object that has data associated with it. For example, a bandwidth test identified by its parameters (i.e.
endpoints, frequency, duration) is the metadata associated with bandwidth measurement. The MA exposes
a web-service interface so that web service clients can
query for data/metadata stored in the MA. The PQA
perfSONAR clients obtain measurement data by querying available perfSONAR MA services, and then construct Esper events that get continuously inserted as event
streams into the Esper engine.
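The shape of such a client is a simple poll-wrap-inject loop. In the sketch below, fetch_latest_bandwidth is a hypothetical stand-in for the MA web-service exchange (here it just returns a simulated reading), and inject stands for pushing the constructed event into the CEP engine.
```
import random
import time

def fetch_latest_bandwidth(ma_url: str) -> float:
    # Stand-in for the perfSONAR MA web-service query; a real client would
    # perform the MA's XML request/response exchange here.
    return random.uniform(0.5e9, 2.0e9)

def run_ps_client(ma_url, inject, poll_sec=30.0, iterations=3):
    """Poll a measurement archive and inject each sample as an event."""
    for _ in range(iterations):
        value = fetch_latest_bandwidth(ma_url)
        inject({"stream": "BWUtilization", "metricValue": value,
                "timestamp": time.time()})
        time.sleep(poll_sec)

run_ps_client("http://ma.example.org/perfSONAR_PS/services/snmpMA",
              inject=print, poll_sec=0.1)
```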
**4.4.2** **XMPP based clients**
Measurement information can be published by applications or system monitoring entities using the XMPP pubsub mechanism, so that interested third parties (other applications, decision engines, workflow tools) get notified
of those measurements. This is a general method to disseminate instantaneous performance information. The
XMPP based PQA monitoring clients subscribe to relevant pubsub nodes for measurement streams based on
configured events. On event notifications on the pubsub
nodes, these clients construct Esper events and continuously insert event streams into the Esper engine. Note
that these XMPP based PQA monitoring clients are different from application clients that query the PQA and
subscribe to XMPP pubsub node handles corresponding
to events of interest.
###### 5 Use Cases
The persistent query agent can be used in a multitude
of scenarios that require distributed monitoring. These
include data-intensive distributed scientific workflow applications running on networked clouds, as in Figure 2,
where it is important to monitor the computational performance on nodes and network performance to diagnose any performance problems, both inside the application and at the infrastructure level. For example, third
party clients like workflow engines would be able to
query PQA about existence of a combination of performance metric thresholds, and would get notified when
such conditions arise. This enables efficient feedback
for the workflow engine to steer the execution of rest
of the workflow. PQA can also be used exclusively at
the infrastructure level, monitoring health of distributed
infrastructure, and triggering events to relevant infrastructure owners when critical events occur. This entails running continuous health queries, so analysis happens in real time and no archiving is required. Other cloud
based distributed applications like cloud oriented content
delivery networks could leverage PQA to monitor different performance metrics with respect to latency and
service rates. PQA would be useful for network monitoring to detect end-to-end bottlenecks, when network
paths span multiple domains, and measurement events
are made available to PQA.
###### 6 Conclusions and Future Work
We have presented the design, architecture and implementation of a persistent query agent (PQA). PQA enables federated performance monitoring by interacting
with multiple aggregates and performance monitoring
sources. The PQA implementation leverages an open
source complex event processing engine called Esper.
The applications/clients register their events of interest
using declarative queries expressed in EPL, an SQL-like
standard query language. PQA processes event streams
and asynchronously sends triggers to applications/clients
using an XMPP pubsub mechanism when relevant performance events occur. PQA is scalable - instead of
-----
Figure 2: PQA scientific workflow use case
storing all monitoring data for future analysis, PQA observes performance event streams in real-time, and runs
persistent queries over streams of events generated from
the various performance monitoring sources. The realtime performance feedback is useful in a variety of use
cases like workflow scheduling, resource provisioning,
anomaly and failure detection etc.
In the future, we plan to extend PQA in different directions. We plan to improve the ability to plug in new kinds
of monitoring sources dynamically. We are also working
on extending the system so that clients can add
custom update listeners, thereby managing
what information gets published when an event trigger
happens. Our future plans also include coming up with
measurement ontologies so that it becomes easier to describe, register and discover new metrics.
###### Acknowledgments
This work is supported by the Department of Energy
award #: DE-FG02-10ER26016/DE-SC0005286.
###### References
[1] The XMPP Standards Foundation. XEP-0060:
Publish-Subscribe. http://xmpp.org/extensions/xep-0060.html.
[2] I. Baldine, Y. Xin, A. Mandal, P. Ruth, C. Heermann, and J. Chase. Exogeni: A multi-domain
infrastructure-as-a-service testbed. In TRIDENT_COM, pages 97–113, 2012._
[3] A. Belghith, B. Cousin, S. Lahoud, and S. Ben
Hadj Said. Proposal for the configuration of multi
domain network monitoring architecture. In In_formation Networking (ICOIN), 2011 International_
_Conference on, pages 7 –12, jan. 2011._
[4] J. W. Boote, E. L. Boyd, J. Durand, A. Hanemann,
L. Kudarimoti, R. Lapacz, N. Simar, and S. Trocha.
Towards multi-domain monitoring for the european
research networks. In TNC, 2005.
[5] E. Boschi, S. D’Antonio, P. Malone, and C. Schmoll.
Intermon: An architecture for inter-domain monitoring, modelling and simulation. In R. Boutaba,
K. Almeroth, R. Puigjaner, S. Shen, and J. Black,
editors, NETWORKING 2005. Networking Tech_nologies, Services, and Protocols; Performance of_
_Computer and Communication Networks; Mobile_
_and Wireless Communications Systems, volume_
3462 of Lecture Notes in Computer Science, pages
1397–1400. Springer Berlin Heidelberg, 2005.
[6] P. Calyam, J. Pu, W. Mandrawa, and A. Krishnamurthy. Ontimedetect: Dynamic network anomaly
notification in perfsonar deployments. In Modeling,
_Analysis Simulation of Computer and Telecommu-_
_nication Systems (MASCOTS), 2010 IEEE Interna-_
_tional Symposium on, pages 328 –337, aug. 2010._
[7] EsperTech. http://www.espertech.com, 2013.
[8] G. Jourjon, T. Rakotoarivelo, and M. Ott. A portal to support rigorous experimental methodology
in networking research. In 7th International ICST
_Conference on Testbeds and Research Infrastruc-_
_tures for the Development of Networks and Com-_
_munities (Tridentcom), page 16, Shanghai/China,_
April 2011.
[9] P. Kanuparthy and C. Dovrolis. Pythia: Distributed Diagnosis of Wide-area Performance Problems. Technical report, Georgia Institute of Technology, 2012.
[10] E. Kissel, A. El-Hassany, G. Fernandes, M. Swany,
D. Gunter, T. Samak, and J. Schopf. Scalable integrated performance analysis of multi-gigabit networks. In Network Operations and Management
_Symposium (NOMS), 2012 IEEE, pages 1227 –_
1233, april 2012.
[11] I. Legrand, H. Newman, R. Voicu, C. Cirstoiu,
C. Grigoras, C. Dobre, A. Muraru, A. Costan,
M. Dediu, and C. Stratan. MonALISA: An agent
based, dynamic service system to monitor, control and optimize distributed systems. _Computer_
_Physics Communications, 180:2472–2498, Dec._
2009.
-----
[12] A. Margara and G. Cugola. Processing flows of information: from data stream to complex event processing. In Proceedings of the 5th ACM interna_tional conference on Distributed event-based sys-_
_tem, DEBS ’11, pages 359–360, New York, NY,_
USA, 2011. ACM.
[13] B. Tierney, J. Boote, E. Boyd, A. Brown, M. Grigoriev, J. Metzger, M. Swany, M. Zekauskas, Y.-T.
Li, and J. Zurawski. Instantiating a Global Network Measurement Framework. Technical Report
LBNL-1452E, Lawrence Berkeley National Lab,
2009.
[14] J. White, G. Jourjon, T. Rakotoarivelo, and M. Ott.
Measurement architectures for network experiments with disconnected mobile nodes. In _International ICST Conference on Testbeds and Re-_
_search Infrastructures for the Development of Net-_
_works and Communities (TridentCom), pages 315–_
330, Berlin, May 2010. Springer-Verlag.
[15] J. Zurawski, J. Boote, E. Boyd, M. Glowiak,
A. Hanemann, M. Swany, and S. Trocha. Hierarchically federated registration and lookup within
the perfsonar framework. In Integrated Network
_Management, 2007. IM ’07. 10th IFIP/IEEE International Symposium on_, pages 705–708, May 2007.
-----
| 7,262
|
{
"disclaimer": "Notice: The following paper fields have been elided by the publisher: {'references', 'abstract'}. Paper or abstract available at https://api.unpaywall.org/v2/10.1109/MCOM.2014.6815907?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1109/MCOM.2014.6815907, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": null,
"status": "CLOSED",
"url": ""
}
| 2,014
|
[
"JournalArticle"
] | false
| 2014-05-19T00:00:00
|
[] | 7,262
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/002d8c2a85305e43d8bc8f58c8f2ef34eca415f5
|
[
"Computer Science"
] | 0.89443
|
A jumping mining attack and solution
|
002d8c2a85305e43d8bc8f58c8f2ef34eca415f5
|
Applied intelligence (Boston)
|
[
{
"authorId": "1883428277",
"name": "Muchuang Hu"
},
{
"authorId": "2128675624",
"name": "Jiahui Chen"
},
{
"authorId": "3045042",
"name": "Wensheng Gan"
},
{
"authorId": "34842653",
"name": "Chien‐Ming Chen"
}
] |
{
"alternate_issns": null,
"alternate_names": null,
"alternate_urls": null,
"id": null,
"issn": null,
"name": null,
"type": null,
"url": null
}
|
Mining is the important part of the blockchain used the proof of work (PoW) on its consensus, looking for the matching block through testing a number of hash calculations. In order to attract more hash computing power, the miner who finds the proper block can obtain some rewards. Actually, these hash calculations ensure that the data of the blockchain is not easily tampered. Thus, the incentive mechanism for mining affects the security of the blockchain directly. This paper presents an approach to attack against the difficulty adjustment algorithm (abbreviated as DAA) used in blockchain mining, which has a direct impact on miners’ earnings. In this method, the attack miner jumps between different blockchains to get more benefits than the honest miner who keep mining on only one blockchain. We build a probabilistic model to simulate the time to obtain the next block at different hash computing power called hashrate. Based on this model, we analyze the DAAs of the major cryptocurrencies, including Bitcoin, Bitcoin Cash, Zcash, and Bitcoin Gold. We further verify the effectiveness of this attack called jumping mining through simulation experiments, and also get the characters for the attack in the public block data of Bitcoin Gold. Finally, we give an improved DAA scheme against this attack. Extensive experiments are provided to support the efficiency of our designed scheme.
|
## A Jumping Mining Attack and Solution
Muchuang Hu [1], Jiahui Chen [2], Wensheng Gan [3] *, and Chien-Ming Chen [4]
1 *Department of Science and Technology, People’s Bank of China Guangzhou, Guangzhou 510120, China*
2 *School of Computer, Guangdong University of Technology, Guangzhou 510006, China*
3 *College of Cyber Security, Jinan University, Guangzhou 510632, China*
4 *College of Computer Science and Engineering, Shandong University of Science and Technology, Qingdao 266590, China*
*Email: [email protected], [email protected], [email protected], [email protected]*
**Abstract**
Mining is an important part of blockchains that use proof of work (PoW) for consensus: miners
look for a matching block by testing a large number of hash calculations. In order to attract more
hash computing power, the miner who finds the proper block obtains a reward. These
hash calculations are in fact what ensures that the data of the blockchain is not easily tampered with. Thus, the incentive
mechanism for mining directly affects the security of the blockchain. This paper presents an attack
against the difficulty adjustment algorithm (abbreviated as DAA) used in blockchain mining,
which has a direct impact on miners' earnings. In this method, the attacking miner jumps between different
blockchains to obtain more benefit than an honest miner who keeps mining on only one blockchain. We
build a probabilistic model to simulate the time to obtain the next block at different levels of hash computing power,
called hashrate. Based on this model, we analyze the DAAs of the major cryptocurrencies, including
Bitcoin, Bitcoin Cash, Zcash, and Bitcoin Gold. We further verify the effectiveness of this attack, called
jumping mining, through simulation experiments, and we also find the characteristic traces of the attack in the public
block data of Bitcoin Gold. Finally, we give an improved DAA scheme against this attack. Extensive
experiments are provided to support the efficiency of our designed scheme.
*Keywords:* Proof of Work; Difficulty Adjustment Algorithm; Hashrate Simulation; Jumping Mining
Attack
**1. Introduction**
In 2009, Nakamoto [1] first proposed the original concept of blockchain in the Bitcoin-based peer-to-peer electronic cash scheme. Since its release, blockchain has been extensively researched and developed globally, and its success has attracted many organizations in recent years to investigate how to use
blockchain technology. So far, there are more than 1,300 blockchain cryptocurrencies
in the world, such as Bitcoin, Ethereum [2], Ripple [3], etc. According to incomplete estimates, the
cryptocurrency market is currently worth more than 150 billion USD. There is no doubt that we should
pay attention to the security of blockchain-based systems; how these systems can defend against current or
future attacks from classical (non-quantum) and quantum adversaries is quite important.
The blockchain is essentially a distributed consensus storage system [1], with consensus protocols
between nodes used to reach agreement on the contents of the storage. These protocols ensure that the ledger stored by each
node in the distributed network is always consistent. Consensus protocols are, therefore, one of the
key technologies in blockchain [4]. In fact, with the development of blockchain, many projects have
proposed different consensus protocols, including proof of work (PoW) [1], proof of stake (PoS) [2],
delegated proof of stake (DPoS) [5], practical Byzantine fault tolerance (PBFT) [6], etc. For the details
of these consensus protocols, we refer the reader to the review paper [7]. The proof of work (PoW)
used by Bitcoin is adopted by many public blockchain projects and systems. The principle of PoW is to
achieve consensus by solving a mathematical problem: the miners who want to generate the next
block in the blockchain, packaging new transactions, must solve this problem.
∗ Corresponding author. Email: [email protected]
*Preprint submitted to Applied Intelligence* *August 20, 2020*
-----
Generally speaking, the computing problem is to find a proper hash value Hash(X)
which is less than the target value PoW Target, i.e., Hash(X) ≤ PoW Target, where Hash is a cryptographic
hash function. Here X is a value determined by a nonce and the hash value of the previous block.
For example, the block header of Bitcoin contains fields including Version, HashPrevBlock [1], HashMerkleRoot [1], Time [1], Bits, and
Nonce [1]. HashPrevBlock is the hash value of the previous block; if the
previous block is modified, this is easily detected via the hash value, which ensures that the
historical block data is immutable. Bits encodes the difficulty of mining: the calculated hash value
must be less than or equal to the target hash value it represents. Nonce is a random number, and mining consists of modifying
this random number so that the hash value of the entire block meets the target hash value. In fact, mining
is a contest, with the nodes involved in this puzzle competing against each other, and whoever first finds a hash
value smaller than PoW Target can generate the next block and get a reward. These nodes are the
so-called miners. Algorithms that can be used as the hash function Hash include SHA256 [8], Scrypt
[9], Ethash [10], Cryptonight [11], Equihash [12], X11 [13], etc.
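To make the search concrete, here is a minimal Python sketch of the puzzle loop described above. The header bytes and target are illustrative values only; Bitcoin itself hashes an 80-byte serialized header with double SHA256 and encodes the target in the Bits field.
```
import hashlib

def mine(header: bytes, target: int) -> int:
    """Search for a nonce such that SHA256(header || nonce) <= target."""
    nonce = 0
    while True:
        digest = hashlib.sha256(header + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") <= target:
            return nonce  # a valid proof of work
        nonce += 1

# Require 16 leading zero bits, i.e., difficulty D = 2**16 expected hashes.
target = 2 ** (256 - 16)
print("found nonce:", mine(b"example-block-header", target))
```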
Generating a hash value is a random process, and the target value PoW Target sets the difficulty of
this computing problem. The task of looking for a Hash(X) ≤ PoW Target is actually a probability
problem. When the number of hash calculations that a miner can perform per unit time increases, the
probability for him/her to find a matching hash grows. This means that the higher the hash computing power
(the so-called hashrate), the shorter the time to get the next block. Usually, a difficulty adjustment algorithm (DAA) [1] is used to ensure that the time for generating a block remains at a relatively stable value.
When the average time for generating a block decreases, the difficulty can be increased by changing the
PoW Target value, and vice versa. The DAA determines when to adjust the PoW Target value and
by how much. Thus, the DAA directly affects the time it takes for a miner to find the next block
and get his/her mining rewards; a concrete retargeting rule in this spirit is sketched below.
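For reference, the sketch below mirrors the shape of Bitcoin's retargeting rule: adjust every 2016 blocks toward a 600-second block interval, clamping each adjustment to a factor of 4. The constants are Bitcoin's published parameters; the function simplifies away the compact Bits encoding of the target.
```
RETARGET_INTERVAL = 2016   # blocks between adjustments (Bitcoin's value)
TARGET_SPACING = 600       # desired seconds per block (Bitcoin's value)
EXPECTED_TIMESPAN = RETARGET_INTERVAL * TARGET_SPACING

def retarget(old_target: int, actual_timespan: int, max_target: int) -> int:
    """Bitcoin-style difficulty adjustment: scale the target by the ratio of
    observed to expected timespan, clamped to a factor of 4 per adjustment."""
    timespan = min(max(actual_timespan, EXPECTED_TIMESPAN // 4),
                   EXPECTED_TIMESPAN * 4)
    new_target = old_target * timespan // EXPECTED_TIMESPAN
    return min(new_target, max_target)  # never easier than the limit target
```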
Hash calculation is actually the security mechanism to protect the blockchain. According to the
longest chain consensus principle, if someone wants to tamper with the data in a blockchain, s/he needs
to construct a blockchain that is longer than the existing one. To achieve this, s/he needs to recalculate
the hash of the tampered block and all its previous blocks at a faster hashrate than the total value of the
existing normal miners. If s/he controls less than 51% of the hashrate of the entire network of miners,
it is theoretically infeasible for her/him to do so. Therefore, it is important to attract more honest miners
to participate in mining and to give the whole network a high level of hashrate protection. As mentioned
above, the DAA directly affects miners' benefit. If there are deficiencies in the DAA, they can cause
fluctuations in the normal hashrate of the blockchain and threaten its security.
The main goal of our study is to explore the interaction between PoW schemes and efficient DAA, so
that future systems can achieve better fairness and better protection. We also intend to raise awareness
that new attacks are possible for PoW schemes, and that the assets protected by their deployments should
be carefully valued. The methodology and contributions of this paper are as follows:
- We first build a hashrate simulation model. The model helps us observe how a DAA adjusts the difficulty according to changes in the hashrate. At the same time, we analyze several DAAs of the major cryptocurrencies. We find that these cryptocurrencies either cannot react quickly or overreact when the whole hashrate changes.
- Using the hashrate simulation model, we propose an attack method named the jumping mining attack for different cryptocurrencies. The main idea of the jumping mining attack is to switch hashrate between different cryptocurrencies that use a similar hash algorithm (e.g., SHA256, Scrypt, Ethash, Cryptonight, Equihash, X11, etc.), so that the attacker's benefit is maximized. This attack leads to an unstable hashrate, which directly affects the security of the blockchain.
- Furthermore, in order to verify the effectiveness of the attack, we conduct a number of simulated attack experiments on three famous cryptocurrencies based on our attack method. The experimental results prove that our scheme is effective, and we obtain further validation through analysis of the public block data of Bitcoin Gold.
- Finally, we propose an improved DAA to resist the attack by summarizing the characteristics of the jumping mining attack. Similarly, we perform several experiments on the improved DAA and verify its effectiveness.
The rest of the paper is organized as follows: Section 2 describes the related work. In Section 3, we
describe the hashrate simulation model, analyze several DAAs of the major cryptocurrencies, and point
out their weakness. In Section 4, we propose the jumping mining attack and validate it through some
simulation experiments. In Section 5, we provide an improved DAA against this attack. Finally, we
conclude this paper in Section 6.
**2. Related Works**
The blockchain technology has attracted much attention since it was first proposed in Nakamoto’s
original bitcoin paper [1]. There are many use cases built around this technology. However, it has also
attracted a lot of speculation because of the lack of regulation. Due to its openness and rapid economic
growth, it has attracted many people to study its security. Gervais et al. [14] showed that the scalability
measures adopted by Bitcoin come at odds with the security of the system. Mayer et al. [15] discussed
the security level of the signature scheme implemented in Bitcoin and Ethereum. After that, Moubarak
et al. [16] also exposed numerous possible attacks on the network. They evaluated blockchain security
by summarizing its current state.
In these studies, some have focused on how miners' mining strategies affect their income in PoW
blockchains. Nicolas et al. [17] looked at miner strategies, with particular attention paid to subversive
and dishonest strategies. After that, Kiayias et al. [18] studied the stochastic game that underlies these
strategies. In the game, when the computational power of a miner is large, s/he deviates from the expected
behavior, and other Nash equilibria arise.
DAA plays a key role in the mining process of PoW blockchains in order to maintain a consistent
inter-arrival time between blocks. It is the core algorithm that influences the miner’s strategies. Several
studies have analyzed how DAA affects mining. Aggarwal et al. [19] compare the equilibrium behavior
of miners between Bitcoin’s DAA and Bitcoin Cash’s [20] emergency difficulty adjustment algorithm.
Following Bitcoin Cash, considerable effort has been devoted to improving the DAAs of PoW blockchains.
Kraft [21] and Fullmer [22] each proposed an alternative DAA.
In addition to affecting miners' income, the DAA is also closely related to the security of the
blockchain. Since Bitcoin was split into Bitcoin and Bitcoin Cash (BCH) in August 2017, miners have had a choice between different cryptocurrencies because they have compatible proof-of-work algorithms. Several attacks focus on the famous cryptocurrencies. A double-spend attack through
the hashrate leasing market was proposed by Budish [23] in 2018. Biryukov et al. [24] analyzed two
privacy and security issues for the privacy-oriented cryptocurrency Zcash, introducing two attacks
called the Danaan-gift attack and the Dust attack. After that, Auer [25] showed that in the long run PoW's security will mainly come from transaction fees. In future research, the theoretical
analysis of such attacks can be tested against their observed impact on costs.
**3. Our Hashrate Simulation Model**
In this section, we give the simulation model used to observe how a DAA reacts to changes in the network
hashrate. There are at least two obstacles to analyzing how the DAA of a public blockchain project
behaves when the hashrate changes. On the one hand, it takes a lot of time to generate enough block data
for analysis even on a test-net. On the other hand, it is hard to get enough hashrate for
testing. Therefore, an effective model can greatly improve the efficiency of the analysis. The simulation
code is available on GitHub [1].
*3.1. Hashrate Simulation Algorithm*
In general, mining is like a puzzle game. First, let the input be X and the output be Hash(X),
where Hash is a cryptographic hash function. The puzzle is then to find an answer X for which the value of
Hash(X) is less than a specified target value PoW Target. Take Bitcoin as an example: the hash function
is SHA256. The miners apply a 256-bit cryptographic hash algorithm [26] to an 80-byte block header
and a nonce. The puzzle is solved if the resulting value is less than a known target value PoW Target,
where 0 < PoW Target < 2^256 and Hash(X) ≤ PoW Target. Here the input X contains the 80-byte block
header and the nonce. Finding a suitable X is a process of exhaustive search, carried out by raising the value of the
nonce. Due to the randomness of the output of the hash function, finding a suitable input value X is
actually a probabilistic process. The more hash calculations are performed, the greater the probability
of finding a suitable solution. The hashrate of a miner actually refers to how many times s/he can perform
1 `https://github.com/humuchuang/jumping-mining`
-----
hash calculations per unit time. The higher the hashrate a miner has, the easier it is to find a suitable block.
Thus, we can build a probabilistic model to simulate the time for generating a block at different hashrates.
The probabilistic derivation of our hashrate simulation model is presented below. Again,
we take Bitcoin as an example. As we know, the specified target value PoW Target of Bitcoin is
a 256-bit number. The maximum value of PoW Target, denoted as Max Target, is 2^256. Usually, a
blockchain will have an initial target value to limit the minimum mining difficulty. Taking Bitcoin as an
example again, the limit target value is 2^224. Each cryptocurrency sets a different limit target
value based on its network hashrate. Here we assume that PoW Target equals 2^248. For ease of
understanding how to calculate the probability of finding a suitable answer, the assumed PoW Target can
be represented in hexadecimal as 0x00ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff. The probability of
getting a suitable X in one hash calculation is p = 2^248 / 2^256. Let D be the average number of hash calculations
required to find X. Because finding a hash value smaller than the target value PoW Target is an
independent, repeated probability event, we have D = 1/p = 2^256 / 2^248 = 256.
Generally, the difficulty of finding a matching hash value can be defined as follows:

*D* = *Max Target* / *PoW Target*,    (1)

where *Max Target* denotes the maximum value of *PoW Target* when solving the hash problem. Let *m* be the number of leading zero bits of *Max Target*; for example, the maximum target value for the main network of Bitcoin, 0x00000000ffff…, has 8 leading zero hex digits, i.e., *m* = 32 bits. The difficulty can then be defined as follows:

*D* = 2^(256 − *m*) / *PoW Target*.    (2)

In the following we assume *m* = 0. The meaning of the difficulty *D* is the expected number of hashes to be calculated; for example, if *D* = 2^40, then a hash value with 40 leading zero bits must be found. The probability of matching the target in one hash calculation is *P*. Hence, we have

*P* = 1/*D*.    (3)
The probability that the target value is found exactly on the *n*-th attempt is *p*(1 − *p*)^(*n*−1), namely the first *n* − 1 attempts all fail and the last one succeeds. Given the difficulty, the cumulative probability function *P*(*n*) gives the probability of finding a target block hash within the first *n* attempts:

*P*(*n*) = \sum_{k=1}^{n} p(1-p)^{k-1} = p \cdot \frac{1-(1-p)^n}{1-(1-p)} = 1-(1-p)^n.    (4)
According to the above derivation, the attempt number *n* at which the target value is found can be simulated in the following steps:
- **I**. Draw a random number *Rand* from the uniform distribution on (0, 1).
- **II**. Set up the inequality *P*(*n* − 1) < *Rand* ≤ *P*(*n*).
- **III**. Solve for *n*.
Hence, we have:

n = \left\lceil \frac{\log(1-Rand)}{\log(1-p)} \right\rceil.    (5)
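Spelling out the inversion behind Eq. (5) (a short derivation we add for completeness; the inequalities flip when dividing by log(1 − p) < 0):

```latex
P(n-1) < Rand \le P(n)
\iff (1-p)^{\,n} \le 1 - Rand < (1-p)^{\,n-1}
\iff n-1 < \frac{\log(1-Rand)}{\log(1-p)} \le n
\iff n = \left\lceil \frac{\log(1-Rand)}{\log(1-p)} \right\rceil .
```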
Assume the average time for the honest miners to generate a block is *T* seconds; the hashrate of the honest miners can then be defined as *HR* = *D*/*T*, the hashrate being, as above, the number of hash calculations performed per unit time. After obtaining the number of hash calculations *n* required to match the target, the time of this process can be calculated as

*solvetime* = *n* / *HR*.    (6)
Finally, we use this hashrate simulation routine to obtain the solve time for generating a block at any given difficulty, as described in Algorithm 1.
**Algorithm 1** GetSolveTime(HR, Rand, D)
**Input:** HR: the total hashrate of the blockchain;
Rand: a random number drawn from the uniform distribution on (0, 1);
D: the current difficulty to generate a block.
**Output:** ST: the time it takes to generate the next block.
1: Set the baseline of the difficulty Lz = 2^40; // like the limit target value explained before, the baseline ensures a minimum difficulty
2: Calculate the probability of matching the target in one hash calculation: p = 1/(D ∗ Lz);
3: ST = n/HR = ceil( log(1 − Rand) / log(1 − p) ) ∗ (1/HR);
4: **return** ST
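A direct Python transcription of Algorithm 1 (a sketch under our own naming; `LZ` is the 2^40 baseline from line 1, and the `max(1, …)` guard is ours, covering the edge case `Rand = 0`):

```python
import math
import random

LZ = 2 ** 40  # baseline of the difficulty: ensures a minimum difficulty, like the limit target

def get_solve_time(hr: float, rand: float, d: float) -> float:
    """Algorithm 1: sample the time (in seconds) to generate the next block."""
    p = 1.0 / (d * LZ)  # success probability of a single hash calculation
    n = max(1, math.ceil(math.log(1.0 - rand) / math.log(1.0 - p)))  # hashes needed, Eq. (5)
    return n / hr       # hr is in hash calculations per second

# Sanity check: with difficulty 4 and HR = 4 * LZ / 600, the mean solve time should be ~600 s.
hr = 4 * LZ / 600
samples = [get_solve_time(hr, random.random(), 4) for _ in range(20000)]
print(sum(samples) / len(samples))
```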
*3.2. Analysis of the DAAs of Different Cryptocurrencies*
In this section, we mainly analyze the problems with the DAAs of several major cryptocurrencies
that currently adopt the PoW mechanism.
As can be seen from the Bitcoin Core code [27], Bitcoin's DAA is not complicated. Bitcoin adjusts its difficulty every 2016 blocks. If the whole hashrate is stable, the average time for generating a block is 10 minutes, which means the difficulty adjusts roughly once per two weeks. When the adjustment point comes, the blockchain calculates the time taken to generate the previous 2016 blocks. If this time is less than two weeks, the difficulty of the next blocks is increased in proportion; conversely, if it is greater than two weeks, the difficulty is reduced. For example, if the previous 2016 blocks were generated in about one week, the difficulty should double in the next adjustment cycle. In addition, the proportional limit for difficulty adjustment is [0.25, 4], in order to avoid over-regulation. Under normal circumstances, this strategy is relatively stable. However, when there are large fluctuations in the hashrate of the blockchain network, the response of this strategy lags. For example, an attacker can choose to enter during a low-difficulty cycle; having a relatively high hashrate, he can generate many blocks quickly. When the adjustment point comes, the difficulty increases drastically and the attacker exits. As a result, the honest miners who keep mining will mine at a higher difficulty for a long time, causing severe delays in block generation.
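A minimal sketch of this retargeting rule (our simplification of the logic in [27]; names are ours, targets are handled as integers, and a larger target means a lower difficulty):

```python
TARGET_TIMESPAN = 14 * 24 * 60 * 60  # two weeks, in seconds

def bitcoin_retarget(old_target: int, actual_timespan: int, pow_limit: int) -> int:
    """Every 2016 blocks, scale the target by actual/expected timespan,
    clamped so the difficulty changes by at most a factor of 4 either way."""
    clamped = min(max(actual_timespan, TARGET_TIMESPAN // 4), TARGET_TIMESPAN * 4)
    new_target = old_target * clamped // TARGET_TIMESPAN
    return min(new_target, pow_limit)  # never exceed the minimum-difficulty target

# Example: the last 2016 blocks took one week -> target halves, i.e., difficulty doubles.
assert bitcoin_retarget(2 ** 224, TARGET_TIMESPAN // 2, 2 ** 224) == 2 ** 223
```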
Bitcoin Cash [20] started as a hard fork of Bitcoin with new features, and it improved its difficulty adjustment algorithm after the fork. Bitcoin Cash's DAA works on a similar principle to BTC's, using N previous blocks as a reference and adjusting so as to generate a block every 10 minutes. The difference is that Bitcoin Cash adjusts the difficulty block by block, and the number of past reference blocks N is 144 rather than 2016. Likewise, to prevent over-adjustment, Bitcoin Cash limits the proportional difficulty adjustment to [0.5, 2]. Although this DAA is more responsive to variations in the network hashrate, it also creates instability.
Another proposal points out that N should be made even smaller in strategies that use N past blocks as a reference, in order to reflect recent changes in the network hashrate more accurately.
Zcash [28] and Bitcoin Gold [29] are cryptocurrencies that use this proposal. The DAA of Zcash, called DigiShield, uses N = 17. Under this scheme, the most recently generated blocks best reflect the current state of the network's hashrate. An even better scheme is to assign weights to the reference blocks. Bitcoin Gold's latest difficulty adjustment algorithm [30] uses such a weighted adjustment: the newer the block, the higher its weight. Let N be the number of reference blocks and let h be the current height of the blockchain. When calculating the difficulty of block h + 1, the h-th block is weighted N, the (h − 1)-th block is weighted N − 1, and so on, so the sum of the weights is Σ_{i=1}^{N} i = N(N + 1)/2. The average target over the past N blocks is avgTarget = (1/N) Σ_{i=1}^{N} Target(h − i), and the weighted average generation time is avgT = ( Σ_{i=1}^{N} (N − i + 1) · ST(h − i) ) / ( N(N + 1)/2 ). Suppose the expected time to generate a block is T. The target value of the next block is then newTarget = avgTarget · avgT · adjust / T, where adjust is an adjustment factor slightly less than 1.
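A sketch of this weighted calculation (our reconstruction of the scheme just described; the lists are ordered oldest first, and the default value of `adjust` is only illustrative):

```python
def weighted_next_target(targets, solvetimes, T, adjust=0.998):
    """Weight-based next target (Bitcoin Gold style): the newest block gets
    weight N, the oldest weight 1."""
    N = len(targets)
    weight_sum = N * (N + 1) / 2
    avg_target = sum(targets) / N                                             # avgTarget
    avg_t = sum((i + 1) * st for i, st in enumerate(solvetimes)) / weight_sum  # avgT
    return avg_target * avg_t * adjust / T                                    # newTarget
```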
Although this strategy seems relatively reasonable, it still has problems. It does not adequately account for hashrate that jumps in and out frequently, causing dramatic fluctuations. When a large amount of hashrate leaves the network, the difficulty of generating blocks remains high for some time; worse, because the remaining network hashrate is insufficient, blocks are generated with long delays, which in turn causes the difficulty of the next few blocks to drop significantly. Difficulty that drops too quickly raises a new problem: in the following section we show that as
long as the attacker has a certain scale of hashrate and chooses the right timing for the attack, he or she can still gain more than the honest miners under this difficulty adjustment algorithm.
**4. Jumping Mining Attack**
In this section, we introduce a scheme of jumping mining between different cryptocurrencies that
using the same hash function in its PoW consensus algorithm. Therefore, the miners using this method
can obtain higher mining revenue than the miners who keep mining on one cryptocurrency. The jumping
mining actually damages the earnings of honest miners who keep mining on one cryptocurrency. In the
future, it may cause miners unwilling to act as honest miners, resulting in fluctuations in the cryptocurrencies’ computing hashrate. Hashrate plays an important role in ensuring the security of the blockchain.
If the total hashrate of a certain cryptocurrency drops to a certain level, it may be subject to other attacks,
such as 51% attack. Therefore, this method of jumping mining against the problems of the difficulty
adjustment algorithm is actually a serious attack. Its worst case may shake the foundation of the security
of the blockchain and eventually lead to the demise of the cryptocurrency.
Furthermore, based on the hashrate simulation model of Section 3, we ran simulation experiments on the jumping mining attack. The experiments recorded not only the hashrate curve of the tested cryptocurrency but also the times at which the attacker moved hashrate into and out of mining. Hence, we can directly observe the block difficulty and the duration of each attack. Finally, we compared the average block generation time and the mining benefits of honest miners with those of the attacker to determine whether the attack is effective.
*4.1. The Attack Method*
According to the difficulty adjustment algorithm, the block difficulty varies with the total hashrate of the network. In the DAAs discussed above, the block difficulty is updated either too slowly or with an overreaction. For example, Bitcoin adjusts the difficulty every 2016 blocks, so a big change in the network hashrate is not reflected in time, while the other DAAs overreact. Our attack strategy is simple. We assume the attacker has a certain hashrate, denoted HRAttacker, while the honest miners on the cryptocurrency network have hashrate HRworker. The attacker can choose the moment to join the mining, which is the attack time stamp. By observing the block difficulty sequence of the network, the attacker starts to mine when the difficulty is at a low level and jumps out when the difficulty has been adjusted up to a certain higher level. In this way, the attacker's mining efficiency is higher than that of the honest miners. The key steps of the jumping mining attack are described in Algorithm 2, followed by a runnable sketch. It is worth mentioning that the difficulty-threshold parameters can be tuned according to the block difficulty fluctuation curve to obtain the best attack results.
In Algorithm 2, the attack timing is chosen according to difficulty thresholds. Note that other conditions can also be used to trigger entering and exiting the attack. Because Bitcoin's adjustment strategy is different, for Bitcoin we use the 2016-block adjustment cycle as the attack cycle rather than a difficulty threshold. Whatever the trigger, the fundamental goal is to mine while the block difficulty is low and to exit while it is high. The jumping mining attack on Bitcoin is therefore carried out as described in Algorithm 3.
*4.2. Attack Results*
We conducted attack experiments on the DAA of each cryptocurrency introduced earlier, separately simulating attackers whose hashrate equals that of the honest miners and attackers with three times that hashrate. As for the timing of the attack, for all coins except Bitcoin we enter when the block difficulty is at 95 percent of the base difficulty and exit when it reaches 1.45 times the base difficulty; Bitcoin is attacked according to its adjustment cycle. Note that the timing parameters can be selected based on actual attack data; our recommended strategy is to choose difficulty thresholds that maximize the time spent mining at low difficulty.
**Results on Bitcoin**. As shown in Figure 1, the blue curve is the difficulty sequence and the green curve the total hashrate; the peaks of the orange curve mark the attack periods.
- **I**. When the attacker's hashrate equals that of the honest miner, the average time for the attacker to mine a block is 703.8 s, while the honest miner needs 874.9 s. The attacker's mining efficiency is 0.001421 while the honest miner's is 0.001143. Here the efficiency is the reciprocal of the average block generation time, normalized by the miner's relative hashrate (e.g., 1/703.8 ≈ 0.001421). The benefits of the attacker are significantly higher than those of the honest miner.
**Algorithm 2** JMAttack
1: Set the baseline of the difficulty BaseD ∗ Lz = 4 ∗ 2^40;
2: Set the hashrate of the honest miners HRworker = BaseD ∗ Lz / T, where T is the expected average time for the honest miners to generate a block;
3: Set HRAttackerMulti = 1; // HRAttackerMulti can be changed as needed
4: Set the hashrate of the attacker HRAttacker = HRAttackerMulti ∗ HRworker;
5: Set the difficulty threshold AttackIn = 0.95; // when the block difficulty is 5 percent below the benchmark level, the attacker starts to attack
6: Set the difficulty threshold for exiting an attack AttackOut = 1.45; // when the block difficulty is 45 percent above the benchmark level, the attacker quits the attack
7: Let Dseri be the block difficulty sequence;
8: Let STseri be the time sequence of block generation;
9: Set the attack flag Attackposition = 0;
10: **for** i = 1 to n, where i is the block height **do**
11: **if** Dseri(i − 1) < AttackIn ∗ BaseD and Attackposition == 0 **then**
12: Attackposition = 1;
13: HRnow = HRAttacker + HRworker; // HRnow is the total hashrate of the entire blockchain network
14: **else if** Dseri(i − 1) > AttackOut ∗ BaseD and Attackposition == 1 **then**
15: Attackposition = 0;
16: HRnow = HRworker;
17: **end if**
18: Dseri(i) = GetNextDifficulty(Dseri(i − N : i − 1), STseri(i − N : i − 1), T, N); // GetNextDifficulty is the difficulty adjustment algorithm defined by each cryptocurrency
19: STseri(i) = GetSolveTime(HRnow, RndSeri(i), Dseri(i));
20: **end for**
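The loop of Algorithm 2 can be sketched in Python as follows (ours; `get_next_difficulty` stands for the DAA under test, while `LZ` and `get_solve_time` come from the earlier sketch of Algorithm 1). The Bitcoin variant of Algorithm 3 is obtained by replacing the two threshold conditions with `i % 2016 == 0`:

```python
import random

def simulate_jumping_attack(n_blocks, base_d, T, hr_multi,
                            attack_in=0.95, attack_out=1.45):
    """Algorithm 2 sketch: join when difficulty falls below attack_in * base_d,
    leave when it exceeds attack_out * base_d."""
    hr_worker = base_d * LZ / T
    hr_attacker = hr_multi * hr_worker
    d_seri, st_seri = [base_d], [T]
    attacking = False
    for _ in range(n_blocks):
        if not attacking and d_seri[-1] < attack_in * base_d:
            attacking = True
        elif attacking and d_seri[-1] > attack_out * base_d:
            attacking = False
        hr_now = hr_worker + (hr_attacker if attacking else 0.0)
        d = get_next_difficulty(d_seri, st_seri, T)           # DAA under test
        d_seri.append(d)
        st_seri.append(get_solve_time(hr_now, random.random(), d))
    return d_seri, st_seri
```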
**Algorithm 3** AttackOnBitcoin
1: Set the baseline of the difficulty BaseD ∗ Lz = 4 ∗ 2^40;
2: Set the hashrate of the honest miners HRworker = BaseD ∗ Lz / T, where T is the expected average time for the honest miners to generate a block;
3: Set the hashrate of the attacker HRAttacker as in Algorithm 2;
4: Let Dseri be the block difficulty sequence;
5: Let STseri be the time sequence of block generation;
6: Set the attack flag Attackposition = 0;
7: **for** i = 1 to n, where i is the block height **do**
8: **if** mod(i, 2016) == 0 and Attackposition == 0 **then**
9: Attackposition = 1; // Attackposition is the attack flag
10: HRnow = HRAttacker + HRworker; // HRnow is the total hashrate of the entire blockchain network
11: **else if** mod(i, 2016) == 0 and Attackposition == 1 **then**
12: Attackposition = 0;
13: HRnow = HRworker;
14: **end if**
15: Dseri(i) = GetNextDifficulty(Dseri(i − N : i − 1), STseri(i − N : i − 1), T, N); // GetNextDifficulty is the DAA of Bitcoin
16: STseri(i) = GetSolveTime(HRnow, RndSeri(i), Dseri(i));
17: **end for**
- **II**. When the attacker's hashrate is three times that of the honest miner, the average time for the attacker to mine a block is 261.6 s, while the honest miner needs 1472.4 s. The attacker's mining efficiency is 0.001421, and the honest miner's is 0.000679. As the attacker's hashrate increases, the attack efficiency becomes much higher.
Figure 1: Simulation Results on Bitcoin. (a) Equal Hashrate; (b) Three Times Hashrate (attack cycle: every 2016 blocks). Each panel plots the difficulty and log(total hashrate) against block height.
**Results on Bitcoin Cash**. As can be seen from Figure 2, the hashrate is more unstable and fluctuates more severely.
- **I**. When the attacker's hashrate equals that of the honest miner, the average time for the attacker to mine a block is 685.8 s, while the honest miner requires 707.8 s. The attacker's mining efficiency is 0.001458 while the honest miner's is 0.001413, so the attacker has a certain advantage.
- **II**. When the attacker's hashrate is three times that of the honest miner, the average time for the attacker to mine a block is 221.9 s, while the honest miner takes 733.2 s. The attacker's mining efficiency is 0.001502, and the honest miner's is 0.00136. Again, the higher the attack hashrate, the greater the efficiency.
Figure 2: Simulation Results on Bitcoin Cash. (a) Equal Hashrate; (b) Three Times Hashrate (attack entered at 0.95 and stopped at 1.45 times the base difficulty). Each panel plots the difficulty and log(total hashrate) against block height.

**Results on Zcash**. The attack remains valid and the unfair situation worsens, as shown in Figure 3.
- **I**. When the attacker's hashrate equals that of the honest miner, the average time for the attacker to mine a block is 178.0 s, while the honest miner requires 221.2 s. The attacker's mining efficiency is 0.005617 while the honest miner's is 0.004521. The attacker has a large advantage.
- **II**. When the attacker's hashrate is three times that of the honest miner, the average time for the attacker to mine a block is 54.9 s, while the honest miner takes 217.4 s. The attacker's mining efficiency is 0.006070, and the honest miner's is 0.004601. The higher the attack hashrate, the greater the efficiency.
**Results on Bitcoin Gold**. The situation is slightly better, but the attack still works, as shown in Figure 4.
- **I**. When the attacker's hashrate equals that of the honest miner, the average time for the attacker to mine a block is 667.0 s, while the honest miner requires 771.5 s. The attacker's mining efficiency is 0.001499 while the honest miner's is 0.001296. The attacker has a large advantage.
- **II**. When the attacker's hashrate is three times that of the honest miner, the average time for the attacker to mine a block is 221.4 s, while the honest miner takes 794.0 s. The attacker's mining efficiency is 0.001506, and the honest miner's is 0.001259. The higher the attack hashrate, the greater the efficiency.
**Analysis of the public block data of Bitcoin Gold**. We selected about 150 blocks from the public block data of Bitcoin Gold [31]. As shown in Figure 5, the x coordinate is the block height and the y coordinate is the block generation time. The orange bars show the generation time at different block heights, and the blue bars mark the attackable region. The characteristic pattern of a jumping mining attack is clearly visible.
Figure 3: Simulation Results on Zcash. (a) Equal Hashrate; (b) Three Times Hashrate (attack entered at 0.95 and stopped at 1.45 times the base difficulty). Each panel plots the difficulty and log(total hashrate) against block height.

*4.3. Summary*
The experimental results show that under every DAA considered above, the attacker profits to a greater or lesser extent from the jumping mining attack. Worse, as the attack hashrate increases, the attacker's efficiency gets higher. This may cause the honest miners of the blockchain
network to dwindle while attackers multiply. As a consequence, the security foundation of the cryptocurrency may be shaken and a 51% attack may occur. In fact, BTG was attacked on May 18, 2018 [32]; the official website issued an announcement on May 24, 2018, admitting the attack and explaining the situation and improvement plans. Of course, a miner's actual incentive to mine must also be weighed against the market value of the cryptocurrency: jumping to attack low-value cryptocurrencies may yield a lower income than continuously mining a high-value one. Furthermore, it is difficult for an attacker to amass a hashrate several times that of the honest miners of a whole large network. These may be among the reasons why no similar attack has yet been observed on BTC and BCH, the famous large cryptocurrencies using the existing DAAs. However, the results show that this attack scheme is effective, and for new cryptocurrencies that are not protected by a large amount of hashrate, such attacks are prone to happen.
**5. Anti-attack Scheme**
In this section, we propose an effective difficulty adjustment algorithm that resists the jumping mining attack. Attackers always look for blocks with relatively low difficulty values in the block difficulty distribution curve and avoid high-difficulty blocks as much as possible. From the attack analysis and experiments in Section 4, an effective attack has at least the following characteristics:
- **I**. The attacker owns hashrate of a certain scale.
- **II**. When the attacker enters, the difficulty does not increase fast enough to stop the attacker from generating blocks in bursts.
Figure 4: Results on Bitcoin Gold. (a) Equal Hashrate; (b) Three Times Hashrate (attack entered at D = 0.95 and stopped at D = 1.45 of the base difficulty). Each panel plots the difficulty and log(total hashrate) against block height.
- **III**. When the attacker jumps out, the block difficulty begins to adjust downward, but the reaction is too intense. After the attacker leaves, the difficulty of the next block is high for the remaining honest miners because the hashrate of the entire network has declined. If the adjustment continues to react too violently, it takes the honest miners a long time to find the next matching block; the block difficulty then drops quickly again, so the attacker can begin the next attack after only a few blocks have been generated.
*5.1. Our Improved DAA*
Based on the analysis above, we improve the weight-based difficulty adjustment algorithm. First, we continue to use the past N blocks as feedback data for the difficulty adjustment and give newer blocks a higher reference value through their weights. Second, we separately monitor the generation times of the last 5 and the last 10 blocks to determine whether the network hashrate has suddenly increased, and if so we raise the difficulty quickly. Third, to prevent the difficulty from decreasing rapidly after blocks with large delays, we limit the proportion by which the difficulty of the next block can be adjusted downward. In general, our solutions interfere with the attacker so that he cannot keep mining at a low difficulty level for long, and they increase the cost of jumping, thereby reducing the profit of the jumping attacker. The key steps of our anti-attack DAA are described in Algorithm 4, followed by a runnable sketch.
*5.2. Attack test on the Improved DAA*
Similarly, we conducted extensive simulation experiments on the improved DAA. The results show that under the improved DAA the attacker cannot gain any advantage. When the attacker's hashrate equals that of the honest miner, the average time for the attacker to mine a block
**Algorithm 4** GetNextDifficulty
**Input:** DiffSeri: the difficulty sequence of the last N blocks;
STseri: the sequence of the generation times of the last N blocks;
T: the average target time to generate a block.
**Output:** nextDifficulty: the difficulty of the next block.
1: **for** i = 1 to N (from the oldest to the newest reference block) **do**
2: solvetime = STseri(i);
3: sum_time = sum_time + solvetime ∗ i;
4: target = getTarget(DiffSeri(i)); // here target = 2^(256−32) / Difficulty
5: sum_target = sum_target + target;
6: **if** i ≥ N − 10 + 1 **then** // record the last ten blocks
7: sum_last10_time = sum_last10_time + solvetime; sum_last10_target = sum_last10_target + target;
8: **end if**
9: **if** i ≥ N − 5 + 1 **then** // record the last five blocks
10: sum_last5_time = sum_last5_time + solvetime; sum_last5_target = sum_last5_target + target;
11: **end if**
12: **end for**
13: **if** sum_time < N ∗ N ∗ T / 6 **then**
14: sum_time = N ∗ N ∗ T / 6; // keep sum_time reasonable in case strange solvetimes occur
15: **end if**
16: next_target = (2 ∗ sum_time / (N ∗ (N + 1))) ∗ (sum_target / N) ∗ adjust / T; // the target of the next block in the absence of an attack
17: avg_last5_target = sum_last5_target / 5;
18: avg_last10_target = sum_last10_target / 10;
19: **if** sum_last5_time ≤ 1.5 ∗ T **then**
20: **if** next_target > avg_last5_target / 4 **then** next_target = avg_last5_target / 4; **end if**
21: **else if** sum_last10_time ≤ 5 ∗ T **then**
22: **if** next_target > avg_last10_target / 2 **then** next_target = avg_last10_target / 2; **end if**
23: **else if** sum_last10_time ≤ 10 ∗ T **then**
24: **if** next_target > avg_last10_target ∗ 2 / 3 **then** next_target = avg_last10_target ∗ 2 / 3; **end if**
25: **end if**
26: last_target = getTarget(DiffSeri(end)); // the target of the previous block
27: **if** next_target > last_target ∗ 13 / 10 **then**
28: next_target = last_target ∗ 13 / 10; // in case the difficulty drops too fast compared with the last block
29: **end if**
30: **if** next_target > pow_limit **then**
31: next_target = pow_limit; // pow_limit is the maximum value of PoW Target set by the cryptocurrency
32: **end if**
33: nextDifficulty = getTarget(next_target); // the target-difficulty conversion is its own inverse
34: **return** nextDifficulty
Figure 5: Attack Character from Bitcoin Gold's Public Block Data

is 135.7 s, while the honest miner needs 135.1 s. The attacker's mining efficiency is 0.007369 while the honest miner's is 0.007403; the benefits of the honest miners are slightly higher than those of the attacker.
Figure 6: Attack test on the Improved DAA (three times hashrate; attack at D = 0.95, stop at D = 1.45).
**6. Conclusions**
Mining is one of the most critical parts of PoW-based cryptocurrencies, as it determines the security of the blockchain consensus. Whether miners' profits match the hashrate they provide is crucial: if honest miners devote their hashrate but cannot obtain a reasonable reward, they will gradually withdraw hashrate from the network, and hashrate is the basis for ensuring the security of the blockchain. Our research starts from this key point. In the mining activities of PoW blockchains, the difficulty adjustment algorithm directly affects the income of the miners.
There are at least two obstacles to analyzing how the DAA of a public blockchain project behaves when the hashrate changes: on the one hand, it takes a long time to generate enough block data for analysis, even on a testnet; on the other hand, it is hard to obtain enough hashrate for testing. A convenient and efficient research method is therefore needed. We first propose a simulation model that can effectively capture the relationship between the network hashrate and the block generation time. With this model, we analyze the DAAs of several mainstream cryptocurrencies and, by observing their characteristics, propose an attack scheme that makes the attacker's income higher than that of the honest miners. Furthermore, we conducted a large number of simulation experiments to verify the effectiveness of the attack scheme. In addition, we also analyzed BTG's historical block data to verify
the existence of the corresponding attackable features. Finally, we propose an effective anti-attack scheme and verify it through simulation experiments as well.
At present, our research is still at a preliminary stage, and the following limitations remain: (i) our anti-attack method targets only the jumping mining attack and does not consider other mining attacks, such as selfish mining [33]; (ii) we have not fully considered the impact of the attacker's switching costs and of the market prices of the cryptocurrencies on miner behavior. In future work, we would like to consider these factors and address these limitations.
**References**
[1] S. Nakamoto, Bitcoin: A peer-to-peer electronic cash system, Tech. rep., Manubot (2019).
[2] G. Wood, et al., Ethereum: A secure decentralised generalised transaction ledger, Ethereum Project Yellow Paper 151 (2014)
(2014) 1–32.
[3] F. Armknecht, G. O. Karame, A. Mandal, F. Youssef, E. Zenner, Ripple: Overview and outlook, in: International Conference
on Trust and Trustworthy Computing, Springer, 2015, pp. 163–180.
[4] E. K. Wang, R. Sun, C.-M. Chen, Z. Liang, S. Kumari, M. K. Khan, Proof of x-repute blockchain consensus protocol for iot
systems, Computers & Security (2020) 101871.
[5] F. Schuh, D. Larimer, Bitshares 2.0: general overview. Accessed June 2017. [Online]. Available: http://docs.bitshares.org/downloads/bitshares-general.pdf.
[6] M. Castro, B. Liskov, Practical byzantine fault tolerance and proactive recovery, ACM Transactions on Computer Systems
20 (4) (2002) 398–461.
[7] M. Du, X. Ma, Z. Zhang, X. Wang, Q. Chen, A review on consensus algorithm of blockchain, in: IEEE International
Conference on Systems, Man, and Cybernetics, IEEE, 2017, pp. 2567–2572.
[8] N. T. Courtois, M. Grajek, R. Naik, Optimizing sha256 in bitcoin mining, in: International Conference on Cryptography and
Security Systems, Springer, 2014, pp. 131–144.
[9] D. Watkins, Scrypt mining with asics (2017).
[10] Ethereum Wiki, Ethash, https://github.com/ethereum/wiki/wiki/Ethash.
[11] M. Seigen, T. Jameson, N. Nieminen, A. Juarez, Cryptonight hash function, in: CryptoNote Standard 008, 2013.
[12] A. Biryukov, D. Khovratovich, Equihash: Asymmetric proof-of-work based on the generalized birthday problem, Ledger 2
(2017) 1–30.
[13] E. Duffield, X11 white paper. Available: https://github.com/dashpay/dash/wiki/Whitepaper.
[14] A. Gervais, H. Ritzdorf, G. O. Karame, S. Capkun, Tampering with the delivery of blocks and transactions in bitcoin, in:
Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, 2015, pp. 692–705.
[15] H. Mayer, Ecdsa security in bitcoin and ethereum: a research survey, CoinFaabrik, June 28 (2016) 126.
[16] J. Moubarak, E. Filiol, M. Chamoun, On blockchain security and relevant attacks, in: IEEE Middle East and North Africa
Communications Conference, IEEE, 2018, pp. 1–6.
[17] N. T. Courtois, L. Bahack, On subversive miner strategies and block withholding attack in bitcoin digital currency, arXiv
preprint arXiv:1402.1718.
[18] A. Kiayias, E. Koutsoupias, M. Kyropoulou, Y. Tselekounis, Blockchain mining games, in: Proceedings of the ACM Conference on Economics and Computation, 2016, pp. 365–382.
[19] V. Aggarwal, Y. Tan, A structural analysis of bitcoin cash’s emergency difficulty adjustment algorithm, Available at SSRN
3383739.
[20] Bitcoin Cash, https://www.bitcoincash.org/.
[21] D. Kraft, Difficulty control for blockchain-based consensus systems, Peer-to-Peer Networking and Applications 9 (2) (2016)
397–413.
[22] D. Fullmer, A. S. Morse, Analysis of difficulty control in bitcoin and proof-of-work blockchains, in: IEEE Conference on
Decision and Control, IEEE, 2018, pp. 5988–5992.
[23] E. Budish, The economic limits of bitcoin and the blockchain, Tech. rep., National Bureau of Economic Research (2018).
[24] A. Biryukov, D. Feher, G. Vitto, Privacy aspects and subliminal channels in zcash, in: Proceedings of the ACM SIGSAC
Conference on Computer and Communications Security, 2019, pp. 1813–1830.
[25] R. Auer, Beyond the doomsday economics of 'proof-of-work' in cryptocurrencies.
[26] HashCash, https://en.bitcoin.it/wiki/Hashcash. (Last retrieved June 2019).
[27] Bitcoin Core DAA, https://github.com/bitcoin/bitcoin/blob/master/src/pow.cpp (Last retrieved June 2019).
[28] Zcash, https://z.cash/technology/.
[29] Bitcoin GOLD, http://bitcoingold.org/.
[30] Bitcoin GOLD DAA, https://github.com/BTCGPU/BTCGPU/blob/master/src/pow.cpp (Last retrieved Jan 2020).
[31] Bitcoin GOLD public data, https://btg.tokenview.com/en/block (Last retrieved Jan 2020).
[32] News about Bitcoin GOLD, https://news.bitcoin.com/bitcoin-gold-51-attacked-network-loses-70000-in-double-spends.
[33] I. Eyal, E. G. Sirer, Majority is not enough: Bitcoin mining is vulnerable, in: International conference on financial cryptography and data security, Springer, 2014, pp. 436–454.
_Jurnal RAK (Riset Akuntansi Keuangan) Vol 9 No 1_
**JURNAL RAK (RISET AKUNTANSI KEUANGAN)**
URL: https://journal.untidar.ac.id/index.php/rak
**Tinjauan Financial Technology dalam Sektor Perbankan: Sebuah Studi Bibliometrik**
**_OVERVIEW_** **_OF_** **_FINANCIAL_** **_TECHNOLOGY_** **_IN_** **_BANKING_** **_SECTOR:_** **_A_** **_BIBLIOMETRIC_** **_STUDY_**
**Alfiana Nur Fadhilah[1*], An Nurrahmawati[2 ]**
1, 2 Universitas Sebelas Maret, Surakarta
[[email protected]](mailto:[email protected])
**ARTICLE INFORMATION**
_Article history:_
_Received date: January, 2024_
_Accepted: March, 2024_
_Available online: May, 2024_
**ABSTRACT**
This study aims to provide empirical evidence regarding the growth and trends of fintech-related publications in the banking sector and to examine which variables are often associated with fintech. Utilizing bibliometric analysis via the VOSviewer application, this study analyzes 816 articles published on Scopus from 2013 to 2023. The results indicate that research publications on fintech in the banking sector have been widely carried out in various countries. Several variables related to fintech and banking topics include technology, industry, covid, future, and blockchain. Among the most frequently encountered authors are M.K. Hassan, H. Banna, and M.R. Rabbani, who can be considered as references. The study's limitation lies in its inability to provide an overview of variable usage trends in 2023. This research is expected to provide implications for the development of future research, especially related to fintech and banking.
**_Keywords: Fintech; Banking; Bibliometrics_**
©2024 Akuntansi UNTIDAR. All rights reserved.
*Corresponding author. Address: Universitas Sebelas Maret. E-mail: [email protected]
P-ISSN: 2541-1209; [E-ISSN: 2580-0213](http://u.lipi.go.id/1493993173)
**INTRODUCTION**
The banking sector is the backbone of the
global economy and plays a central role in
supporting a country’s growth and financial
stability. DPR (2021) stated that as one of the
main pillars of the country’s economic structure,
the banking sector also plays an important role
in allocating financial resources and facilitating
investment and consumption. As a financial
institution that provides various essential
services such as lending, investment, and fund
management, banks have a strategic role in
allocating financial resources to support
productive projects and economic growth.
A company that employs technology,
specifically automated information processing,
and the internet, to deliver financial solutions is
known as financial technology or fintech (Gabor
& Brooks, 2017; Milian et al., 2019; Zavolokina
et al., 2016; Alt et al., 2018; Gomber et al., 2018;
Puschmann, 2017). Broadly, fintech is defined as
financial technology innovation that produces
new business models, applications, processes,
or products with material effects related to
financial institutions and the provision of
financial services (Financial Stability Board,
2017). This innovation in the financial industry
has led to improved business operations, high
efficiency, speed, flexibility, and cost reduction.
(Zavolokina et al., 2016; Lee & Shin, 2018;
Thakor, 2020). Based on these various
definitions, it can be concluded that fintech is a
financial industry that applies technology to
provide services and increase financial
activities. In this study, the term fintech is
associated with the use of technology that helps
the financial and banking industries in providing
services to the public to increase financial
activities.
Fintech incorporates innovation into retail banking, cryptocurrency, investments, financial literacy, and education (Gomber et al., 2018).
Due to the automation of most services,
business models have changed to allow for the
provision of individualized services to customers
without regard to time zone or location. Fintech
has additionally aided in disintermediation
(Thakor, 2020) and provided online platforms
for trading, lending (crowdfunding and peer-to-peer, P2P), and asset management, for
example, robo-advising (Gomber et al., 2018; Alt
et al., 2018; Lee & Shin, 2018; Puschmann,
2017). The growth of infrastructure, data
analysis, big data, and mobile devices are
further methods used to accomplish this
intermediation (Lee & Shin, 2018).
There are three periods of fintech evolution according to Arner et al. (2016): fintech 1.0 (1866-1967), characterized by the invention of the ATM and of telegraph technology allowing rapid transmission of information and financial transactions; fintech 2.0 (1967-2008), dominated by electronic payments, clearing systems, ATMs, and online banking services; and fintech 3.0 (2008-present), in which established technology companies provide financial products and services directly to businesses and the general public via online platforms. There are at least two main factors behind this evolution in fintech innovation (Awrey, 2013). First is the shift in
people’s preferences, especially the millennial
generation, who grew up in a digital
environment. In addition, the ease of internet
access drives expectations for the convenience,
speed, cost and ease of financial
services. Second is the emergence of businesses
that use technologies such as big data, artificial
intelligence, blockchain, and cryptocurrencies
which are currently growing rapidly (Frame et
al., 2018).
The success of fintech industry innovation
requires transparent and clear regulations for
new start-ups, the banking industry, and
financial innovation companies (Muhammad &
Sari, 2020). Several research results show the
importance of the role of regulators (state) in
providing a platform for fintech companies to
promote innovation in the field of financial
services as well as safeguard the interests of
consumers and investors.
In Indonesia, there are Peraturan Otoritas
_Jasa_ _Keuangan_ (POJK) Number
10/POJK.05/2022 concerning Information
Technology-Based Joint Funding Services (POJK
LPBBTI/Fintech P2P Lending) and POJK Number
13/POJK.02/2018 concerning Digital Financial
Innovation in the Financial Services Sector. In
addition, there is also Peraturan Bank Indonesia
Number 18/40/PBI/2016 concerning the
Implementation of Payment Transaction
Processing. In the European Union, Payment
Services Directive 2 (PSD2) was enacted to
regulate electronic payment services and
strengthen the security of electronic
transactions. In Singapore, the Monetary
Authority of Singapore (MAS) enforces
regulations to grant licenses to fintech
companies that meet certain requirements. This
license covers a wide range of business models,
including payment services, e-money, and
digital asset services. The Financial Conduct
Authority (FCA) in the UK issued regulations to
oversee crowdfunding activities, including
licensing requirements, limits on
investment amounts, and consumer protection.
In a modern era driven by technological
innovation, fintech has become a revolutionary
force in the banking sector. In the ever-evolving
digital era, fintech has become the driving force
of transformation in the financial
industry. Fintech summarizes various
technological innovations that are changing the
way money is managed, transferred, and
invested (Mauline, 2022). Along with these
developments, the banking sector has also
undergone significant transformation.
Innovations such as digital banking applications,
application-based payment services, peer-to-peer lending (P2P lending), advanced security
technologies, and online investment platforms
are clear examples of how fintech has changed
the way traditional banking operates. Fintech in
the banking sector is a very interesting and
relevant topic in the context of a changing
global economy.
Fintech presents new financial solutions
supported by modern technology such
as artificial intelligence, big data analytics, and
application-based financial services (mobile
finance) (Ma’ruf, 2021). These innovations not
only facilitate customer access to financial
services but also create operational efficiencies
and trigger breakthroughs in risk
management. By driving operational efficiency,
improving the accessibility of financial services,
and providing a better customer experience,
fintech creates new opportunities and
challenges for the banking sector. Rapid changes
in fintech technology force traditional banks to
continue to adapt in order to remain
competitive and relevant (Ayu, 2023).
Therefore, it is important to know how fintech
develops in the banking sector.
Fintech is supported by the public for its
ease in financial transactions compared to rigid
and convoluted conventional banking
administrative processes (Kristianti & Tulenan,
2021). Complicated administrative processes
and strict regulations are some of the reasons
why banking has not been optimal for financial
penetration. Fintech’s presence in the banking
sector has created broader financial
inclusion. Fintech has opened the door for
individuals and small businesses to access
previously hard-to-reach financial services,
expanding accessibility significantly. Fintech
makes low-income people able to access
financial services such as low-interest loans
more easily (Ramlah, 2021).
This article aims to present a
comprehensive literature review of the growth
and trends of fintech publications within the
banking sector and the variables attributed to
fintech. An overview of fintech within the
banking sector is not only important to
understand these industry trends, but also to
help banks anticipate and adapt to ever-changing technological developments. By
analyzing various empirical research, conceptual
frameworks, and case studies, this article
explains how fintech is developing in the
banking sector. Thus, the role of fintech in the
banking sector has become very relevant in
responding to the demands of the ever-changing and growing financial ecosystem.
Some bibliometric analysis has already
been carried out on fintech trends. However,
this paper contributes to the literature as it
focuses specifically on fintech over ten years
from 2013 to 2023 in the banking sector.
**RESEARCH** **METHODS**
This study employs bibliometric analysis
(bibliometrics), a type of literature analysis that
is a component of the research assessment
technique. Bibliometric analysis can be carried out with a dedicated methodology on a wide variety of published literature (Ellegaard & Wallin, 2015). The research method uses the
VOSViewer application which involves a series
of steps for the analysis and visualization of
bibliometric data. Bibliographic data relevant to
the research topic is downloaded from the
Scopus scientific database, which is assumed to cover publications at the international level. This data is then
imported into VOSViewer for network analysis.
The research period is from 2013 to 2023.
Furthermore, the data processing steps include
filtering to limit the time range, subject area, as
well as the language used. At this stage,
VOSviewer builds a network of co-citations or co-authorship based on the interrelationships between documents or authors. Cluster analysis is also carried out to
identify similar thematic groups or research
focuses. During this process, special features of
VOSViewer, such as layout and coloring, are
used to improve understanding of structure and
trends in bibliometric networks. The results of
this analysis can help identify research
developments, collaboration between authors,
and key concepts in the literature. This method
provides a strong visual and deep insight into
the existing knowledge framework in a
particular research domain.
**RESULTS AND DISCUSSION**
The initial data obtained in this study amounted to 1,149 papers, which were then screened down to 816 papers. The data were taken from journal publications on the Scopus page, retrieved using the keywords fintech and banking through the screening stages presented in Figure 1 below.
**Figure 1. Data Selection Process**
_Source: data processed by author (2023)_
Figure 1 illustrates the data selection process from the Scopus page, which yielded 1,149 papers in the initial search. The period 2013-2023 is used as the second screening stage because fintech publications developed substantially during that period. The third screening stage restricts the data to publications in journals in the subject areas of Business, Management, & Accounting; Economics, Econometrics, & Finance; and Social Sciences, because these areas fit the aims and objectives of the study. The data used in this study are publications in English and Indonesian, amounting to 816 papers. The distribution and development of these publications are shown in Figure 2.
**Figure 2. Graph of Research Development**
Related to Fintech and Banking Topics
[Source: https://www.scopus.com/](https://www.scopus.com/)
Figure 2 shows that, in general, the number of published articles related to fintech and banking has increased since 2013. In 2014, only 1 document was published on Scopus; the number then rose steadily, reaching 66 documents in 2019. A significant jump occurred in 2020, with 121 documents, and the growth continued until 2023, by which time 235 documents had been published on Scopus in that year. The distribution of fintech and banking topics by country is presented in Figure 3.
**Figure 3. Publication Graph on Fintech and**
Banking Topics By Country
[Source: https://www.scopus.com/](https://www.scopus.com/)
Figure 3 shows the 10 countries contributing the most paper publications on fintech and banking topics. The United States ranks highest with 109 documents, followed by India with 93 documents, and so on, which suggests that the United States in particular pays attention to fintech issues. Among these 10 countries, Indonesia occupies the seventh position with 37 documents published in Scopus, indicating that Indonesia also has a fairly high concern for fintech. Table 1 presents the 10 publication articles with the highest citation counts on the Scopus page.
**Table 1. Ten Papers with The Highest Citations on Fintech in The Banking Sector**

| No | Title | Author(s) | Year | Journal | Scopus Quartile | Citations |
|----|-------|-----------|------|---------|-----------------|-----------|
| 1 | Fintech: Ecosystem, business models, investment decisions, and challenges | I. Lee, Y.J. Shin | 2018 | Business Horizons | Q1 | 600 |
| 2 | Fintech and banking: What do we know? | A.V. Thakor | 2020 | Journal of Financial Intermediation | Q1 | 337 |
| 3 | Taming the beast: A scientific definition of fintech | P. Schueffel | 2016 | Journal of Innovation Management | Q2 | 242 |
| 4 | Fintech and regtech: Impact on regulators and banks | I. Anagnostopoulos | 2018 | Journal of Economics and Business | Q2 | 217 |
| 5 | Fintech investments in European banks: a hybrid IT2 fuzzy multidimensional decision-making approach | G. Kou, Ö. Olgu Akdeniz, H. Dinçer, S. Yüksel | 2021 | Financial Innovation | Q1 | 187 |
| 6 | Do fintech lenders penetrate areas that are underserved by traditional banks? | J. Jagtiani, C. Lemieux | 2018 | Journal of Economics and Business | Q2 | 173 |
| 7 | Banking goes digital: The adoption of FinTech services by German households | M. Jünger, M. Mietzner | 2020 | Finance Research Letters | Q1 | 128 |
| 8 | Does fintech innovation improve bank efficiency? Evidence from China's banking industry | C.C. Lee, X. Li, C.H. Yu, J. Zhao | 2021 | International Review of Economics and Finance | Q1 | 127 |
| 9 | Data security and consumer trust in FinTech innovation in Germany | H. Stewart, J. Jürjens | 2018 | Information and Computer Security | Q2 | 114 |
| 10 | Can fintech improve the efficiency of commercial banks? An analysis based on big data | Y. Wang, S. Xiuping, Q. Zhang | 2021 | Research in International Business and Finance | Q1 | 112 |

Source: Scopus (processed)
Table 1 shows the 10 papers with the highest
citations related to fintech and banking topics.
The number of citations is an indicator of the
impact or influence a paper has on other papers
related to fintech and banking. A high citation
count is often taken as an indication that the
paper is important or influential in the scientific
community. Authors who receive many citations
are often considered to have made a significant
contribution to knowledge in a particular field.
A high citation rate can also affect the reputation
of the author, the ranking of the journal in which
the author publishes, and the selection of
publications as references in the scientific
literature. Furthermore, the variables related to
the fintech and banking topics are presented in
Figure 4.
**Figure 4. VOSviewer Results: Network of Variables in Publications Related to Fintech and Banking**
Source: data processed by author (2023)
Figure 4 shows the variables that frequently
surface in conjunction with discussions of fintech
and banking, including technology, industry,
digital transformation, blockchain, fintech
innovation, financial innovation, future, and
covid. This indicates that these variables are
widely examined by authors in their studies and
can therefore be said to be closely related to
fintech. The relationship between the variables
in the published studies is indicated by the lines
connecting them. For example, in some studies
the industry variable is associated with digital
transformation, fintech innovation, and Islamic
fintech, while in other studies the relationship
variable is associated with fintechs and
determinants. The relationships between these
variables are illustrated in Table 2, which
presents the cluster linkages between the
variables.
**Table 2. Variables Used According to The Cluster**

| **Variables** | **Cluster** | **Total link strength** | **Occurrences** | **Average Year** |
|---|---|---|---|---|
| technology | 1 | 25 | 52 | 2020 |
| opportunity | 1 | 15 | 20 | 2020 |
| fintech adoption | 1 | 8 | 15 | 2022 |
| open banking | 1 | 9 | 10 | 2021 |
| business | 1 | 5 | 9 | 2020 |
| industry | 2 | 24 | 35 | 2020 |
| digital transformation | 2 | 11 | 18 | 2021 |
| blockchain technology | 2 | 5 | 10 | 2022 |
| financial sector | 2 | 10 | 9 | 2020 |
| fintech innovation | 2 | 5 | 9 | 2021 |
| covid | 3 | 26 | 21 | 2022 |
| lending | 3 | 16 | 19 | 2021 |
| pandemic | 3 | 16 | 11 | 2022 |
| implication | 3 | 10 | 10 | 2020 |
| digital financial inclusion | 3 | 8 | 8 | 2021 |
| intention | 4 | 20 | 25 | 2022 |
| trust | 4 | 11 | 8 | 2020 |
| empirical study | 4 | 5 | 7 | 2022 |
| fintech service | 4 | 5 | 6 | 2021 |
| fintech services | 4 | 6 | 5 | 2022 |
| future | 5 | 10 | 16 | 2020 |
| islamic finance | 5 | 10 | 14 | 2019 |
| financial innovation | 5 | 7 | 11 | 2020 |
| fintech company | 5 | 4 | 10 | 2020 |
| islamic bank | 5 | 3 | 6 | 2021 |
| blockchain | 6 | 20 | 29 | 2021 |
| cryptocurrency | 6 | 4 | 8 | 2020 |
| disruption | 6 | 7 | 8 | 2020 |
| fintech industry | 6 | 1 | 6 | 2019 |
| insurance | 6 | 2 | 5 | 2021 |
| comparative study | 7 | 2 | 6 | 2020 |
| determinant | 7 | 6 | 11 | 2021 |
| fintechs | 7 | 3 | 6 | 2022 |
| relationship | 7 | 4 | 8 | 2022 |
| traditional bank | 7 | 1 | 5 | 2020 |

Source: data processed by author (2023)
Table 2 shows the groups of variables found as a
result of data processing with VOSviewer, the
application used in this study. According to the
processed publication data, the variables fall into
seven clusters. This suggests that there are
groups of variables that tend to be related to
each other and that often appear together in
published research. The grouping is not
exclusive, meaning that a variable in one cluster
may also correlate with variables outside its
cluster.

The number in the total link strength column
shows how strongly one variable is associated
with the others: the greater the number, the
more frequently that variable is related to other
variables. The number in the occurrences column
indicates how many studies used the variable;
the numbers in these two columns are largely
aligned. The number in the average year column
indicates the average year in which publications
using the variable appeared.
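To make the occurrence and total link strength statistics concrete, the sketch below reproduces this kind of VOSviewer-style counting on a toy set of keyword lists. The documents and keywords are hypothetical, and VOSviewer's own normalization is not reproduced.

```python
# A toy reproduction (hypothetical documents and keywords) of the
# statistics in Table 2: "occurrences" counts how many records mention a
# keyword, and "total link strength" sums that keyword's co-occurrences
# with every other keyword.
from collections import Counter
from itertools import combinations

documents = [
    {"technology", "industry", "blockchain"},
    {"technology", "fintech adoption"},
    {"blockchain", "industry", "covid"},
]

occurrences = Counter()
link_strength = Counter()  # (keyword_a, keyword_b) -> co-occurrence count

for keywords in documents:
    occurrences.update(keywords)
    for pair in combinations(sorted(keywords), 2):
        link_strength[pair] += 1

def total_link_strength(term: str) -> int:
    """Sum of co-occurrence counts between `term` and all other terms."""
    return sum(n for pair, n in link_strength.items() if term in pair)

for term in sorted(occurrences):
    print(term, occurrences[term], total_link_strength(term))
```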
An overview of the trends in the variables that
surfaced over the years covered by this study is
shown in Table 3.

**Table 3. Trend Variables Used**

| **Year** | **Trend of Variables Used** |
|---|---|
| 2019 | Fintech industry; islamic finance; introduction |
| 2020 | Disruption; opportunity; future; financial innovation; implication; business; financial sector; cryptocurrency; trust; rise; shadow banking; comparative study; digital technology; mobile banking; traditional bank |
| 2021 | Islamic bank; technology; industry; blockchain; lending; digital transformation; determinant; fintech company; open banking; fintech innovation; peer; digital financial inclusion; age; smes; islamic fintech; advancement; fintech service; insurance |
| 2022 | Covid; intention; fintech adoption; pandemic; blockchain technology; relationship; sustainability; empirical study; nexus; fintechs |

Source: Data processed by author (2023)
Table 3 shows the trend of variables used in
research and publications over the 2019 to 2022
time frame. For the years 2013 to 2018 there
were still very few publications, so they do not
appear in the VOSviewer results. Meanwhile, in
2023 the variables used became increasingly
varied, so their individual occurrence counts were
still too low for VOSviewer to detect. The trend in
the emergence of these variables indicates the
movement of the problems being observed by
authors around the world. This trend can give
authors a broad overview from which to develop
further research, both from variables that are
already well known and from those that have not
yet been thoroughly investigated. This is
undoubtedly consistent with the evolution of
issues within organizations and in society at
large.
Some authors, either individually or in
collaboration with other writers, have published
research findings on subjects linked to this
research. Several authors' names and their
connections to other authors are presented in
Figure 5.
**Figure 5. Authors and Their Relationships with Other Authors**
Source: data processed by author (2023)
Figure 5 shows the names of authors who
published research related to this research topic.
The lines in Figure 5 indicate partnerships
between one author and others. For example,
Hassan partnered with Jreisat in conducting
research and publications, and in other studies
Hassan partnered with Mohammed. On another
occasion, Zhang partnered with Li, Liu, and Xu.
The larger the circle on an author's name, the
more publications that author has produced,
which may suggest that the more frequently an
author publishes, the more deeply that author is
delving into the subjects covered by this
research. Based on the results of the VOSviewer
data processing, the distribution of published
research related to the research topics by author
is presented in Table 4.
**Table 4. Authors Who Often Publish Research Related to Research Topics**

| **Author** | **Cluster** | **Total link strength** | **Documents** |
|---|---|---|---|
| Hassan, M.K. | 1 | 12 | 7 |
| Banna, H. | 4 | 6 | 4 |
| Rabbani, M.R. | 5 | 4 | 4 |
| Zhang, W. | 3 | 9 | 4 |
| Chen, Z. | 6 | 8 | 3 |
| Khan, S. | 5 | 2 | 3 |
| Rabbani, M. R. | 5 | 4 | 2 |
| Ahmad, R. | 4 | 5 | 2 |
| Alam, M.R. | 4 | 4 | 2 |
| Bashar, A. | 5 | 2 | 2 |
| Friedline, T. | 6 | 1 | 2 |
| Jreisat, A. | 1 | 2 | 2 |
| Li, J. | 2 | 5 | 2 |
| Li, W. | 3 | 1 | 2 |
| Li, X. | 2 | 6 | 2 |
| Li, Y. | 6 | 2 | 2 |
| Li, Z. | 4 | 2 | 2 |

Source: Data processed by author (2023)
Table 4 shows that the authors listed published
their research findings using variables belonging
to the clusters indicated in the cluster column.
The number in the total link strength column
shows how strongly, or how often, an author
partners with other authors in publishing on
topics relevant to this study. The number in the
documents column indicates how many
publications the VOSviewer application found
mentioning the author's name. The authors
listed in Table 4 can be used as references for
further research related to the topic: the more
often an author's name appears, the more that
author has studied and understands the intended
research topic.
Overall, the data processing findings obtained
with the VOSviewer application indicate that
several variables are utilized repeatedly across
numerous papers in this area of study. However,
VOSviewer cannot display all of the variables in
the processed publication data, particularly
those that occur only rarely. On the one hand,
this makes it difficult for the author to get a
more detailed picture of how relevant variables
were used in related studies. On the other hand,
it may indicate that the variables not shown have
not yet been thoroughly explored, leaving room
for novel future research.
**CONCLUSION**
This study aims to provide empirical evidence
regarding the growth and trends of fintech-related
publications in the banking sector and to examine
which variables are often associated with fintech.
The study found that research related to fintech
and the banking sector is highly varied, as
evidenced by the many publications related to
these topics. Several variables are associated
with fintech and banking topics, including
technology, industry, covid, future, and
blockchain. However, there are still many
unexplored variables that can be connected to
this research topic, such as advancement, digital
technology, shadow banking, mobile banking,
and insurance. These variables can be considered
as sources of novelty for future research.

Several authors related to the fintech and
banking topics, namely Hassan, Banna, and
Rabbani, have published the results of their
research on this topic several times; their work
can therefore be considered as a reference. A
limitation of this study is its inability to give a
broad picture of the usage of research variables
in 2023, including their trend direction, which
may have an impact on how future research is
developed, particularly in the areas of fintech
and banking.
**REFERENCES**
Alt, R., Beck, R., & Smits, M. T. (2018). FinTech
and the transformation of the financial
industry. _Electronic Markets,_ _28(3), 235–_
243. https://doi.org/10.1007/s12525-018-0310-9
Anagnostopoulos, I. (2018). Fintech and
Regtech: Impact on regulators and banks.
_Journal of Economics and Business, 100, 7–_
25.
https://doi.org/10.1016/j.jeconbus.2018.07.003
Arner, D. W., Barberis, J. N., Buckley, R. (2016).
The Evolution of Fintech: A New Post-Crisis
Paradigm. _Georgetown_ _Journal_ _of_
_International Law, 47(4), 1271–1320._
Awrey, D. (2013). Toward a supply-side theory
of financial innovation. _Journal_ _of_
_Comparative Economics,_ _41(2), 401–419._
https://doi.org/10.1016/j.jce.2013.03.011
Ayu, R. D. (2023). Mengenal Fintech: Pengertian,
Jenis, Manfaat, dan Aturan Terbarunya.
Retrieved November 13, 2023, from
[https://koran.tempo.co/read/ekonomi-dan-](https://koran.tempo.co/read/ekonomi-dan-bisnis/484994/mengenal-fintech-pengertian-jenis-manfaat-dan-aturan-terbarunya)
[bisnis/484994/mengenal-fintech-](https://koran.tempo.co/read/ekonomi-dan-bisnis/484994/mengenal-fintech-pengertian-jenis-manfaat-dan-aturan-terbarunya)
[pengertian-jenis-manfaat-dan-aturan-](https://koran.tempo.co/read/ekonomi-dan-bisnis/484994/mengenal-fintech-pengertian-jenis-manfaat-dan-aturan-terbarunya)
[terbarunya](https://koran.tempo.co/read/ekonomi-dan-bisnis/484994/mengenal-fintech-pengertian-jenis-manfaat-dan-aturan-terbarunya)
DPR. (2021). Sektor Perbankan jadi tulang
Punggung Pemulihan Ekonomi Nasional.
Retrieved November 13, 2023, from
[https://www.dpr.go.id/berita/detail/id/334](https://www.dpr.go.id/berita/detail/id/33437/t/Sektor%20Perbankan%20Jadi%20Tulang%20Punggung%20Pemulihan%20Ekonomi%20Nasional)
[37/t/Sektor%20Perbankan%20Jadi%20Tula](https://www.dpr.go.id/berita/detail/id/33437/t/Sektor%20Perbankan%20Jadi%20Tulang%20Punggung%20Pemulihan%20Ekonomi%20Nasional)
[ng%20Punggung%20Pemulihan%20Ekono](https://www.dpr.go.id/berita/detail/id/33437/t/Sektor%20Perbankan%20Jadi%20Tulang%20Punggung%20Pemulihan%20Ekonomi%20Nasional)
[mi%20Nasional](https://www.dpr.go.id/berita/detail/id/33437/t/Sektor%20Perbankan%20Jadi%20Tulang%20Punggung%20Pemulihan%20Ekonomi%20Nasional)
Ellegaard, O., & Wallin, J. A. (2015). The
bibliometric analysis of scholarly
production: How great is the impact?
_Scientometrics,_ _105(3),_ 1809–1831.
https://doi.org/10.1007/s11192-015-1645-z
Financial Stability Board. (2017). Financial
Stability Implications from Fintech:
Supervisory and Regulatory Issues that
Merit Authorities’ Attention. _Financial_
_Stability_ _Board,_ _June,_ 1–61.
www.fsb.org/emailalert
Frame, B., Lawrence, J., Ausseil, A. G., Reisinger,
A., & Daigneault, A. (2018). Adapting global
shared socio-economic pathways for
national and local scenarios. _Climate Risk_
_Management,_ _21(May),_ 39–51.
https://doi.org/10.1016/j.crm.2018.05.001
Gabor, D., & Brooks, S. (2017). The digital
revolution in financial inclusion:
international development in the fintech
era. _New Political Economy,_ _22(4), 423–_
436.
https://doi.org/10.1080/13563467.2017.1259298
Gomber, P., Kauffman, R. J., Parker, C., &
Weber, B. W. (2018). On the Fintech
Revolution: Interpreting the Forces of
Innovation, Disruption, and Transformation
in Financial Services. _Journal_ _of_
_Management Information Systems,_ _35(1),_
220–265.
https://doi.org/10.1080/07421222.2018.1440766
Jagtiani, J., & Lemieux, C. (2018). Do fintech
lenders penetrate areas that are
underserved by traditional banks? _Journal_
_of Economics and Business,_ _100, 43–54._
https://doi.org/10.1016/j.jeconbus.2018.03.001
Jünger, M., & Mietzner, M. (2020). Banking goes
digital: The adoption of FinTech services by
German households. _Finance Research_
_Letters,_ _34(July),_ 1–8.
https://doi.org/10.1016/j.frl.2019.08.008
Kou, G., Olgu Akdeniz, Ö., Dinçer, H., & Yüksel, S.
(2021). Fintech investments in European
banks: a hybrid IT2 fuzzy multidimensional
decision-making approach. _Financial_
_Innovation,_ _7(1)._
https://doi.org/10.1186/s40854-021-00256-y
Kristianti, I., & Tulenan, M. V. (2021). Dampak
financial technology terhadap kinerja
keuangan perbankan. Kinerja, 18(1), 57–65.
http://journal.feb.unmul.ac.id/index.php/K
INERJA/article/view/8254
Lee, C. C., Li, X., Yu, C. H., & Zhao, J. (2021). Does
fintech innovation improve bank
efficiency? Evidence from China’s banking
industry. International Review of Economics
_and Finance,_ _74(June 2020), 468–483._
https://doi.org/10.1016/j.iref.2021.03.009
Lee, I., & Shin, Y. J. (2018). Fintech: Ecosystem,
business models, investment decisions, and
challenges. _Business Horizons,_ _61(1), 35–_
46.
https://doi.org/10.1016/j.bushor.2017.09.003
Ma’ruf, M. (2021). Pengaruh Fintech Terhadap
Kinerja Keuangan Perbankan Syariah.
_Yudishtira Journal : Indonesian Journal of_
_Finance and Strategy Inside,_ _1(1), 42–61._
https://doi.org/10.53363/yud.v1i1.53
Mauline, R. (2022). Pengaruh Pertumbuhan
Perusahaan Financial Technology Terhadap
Kinerja Perbankan. _Jurnal_ _Economic,_
_Finance and Banking, 1(1), 143–155._
Milian, E. Z., Spinola, M. de M., & Carvalho, M.
M. d. (2019). Fintechs: A literature review
and research agenda. Electronic Commerce
_Research and Applications,_ _34(September_
2018).
https://doi.org/10.1016/j.elerap.2019.100833
Muhammad, H., & Sari, N. P. (2020). Pengaruh
Financial Technology Terhadap Perbankan
Syariah: Pendekatan ANP-BOCR (The
Influence of Financial Technology on
Islamic Banking: ANP-BOCR Approach).
_Perisai : Islamic Banking and Finance_
_Journal,_ _4(2),_ 113–125.
https://doi.org/10.21070/perisai.v4i2.868
Puschmann, T. (2017). Fintech. _Business and_
_Information Systems Engineering,_ _59(1),_
69–76. https://doi.org/10.1007/s12599-017-0464-6
Ramlah, R. (2021). Penerapan Fintech ( Financial
Technologi ) Pada PT. Bank Rakyat
Indonesia (Persero) Tbk KCP Slamet Riyadi
Makassar. _CEMERLANG :_ _Jurnal_
_Manajemen Dan Ekonomi Bisnis,_ _1(4), 81–_
91.
https://doi.org/10.55606/cemerlang.v1i4.466
Schueffel, P. (2016). Taming the beast: A
scientific definition of fintech. _Journal of_
_Innovation Management,_ _4(4), 32–54._
https://doi.org/10.24840/2183-0606_004.004_0004
Stewart, H., & Jürjens, J. (2018). Data security
and consumer trust in FinTech innovation
in Germany. _Information & Computer_
_Security, 26(1), 109–128._
Thakor, A. V. (2020). Fintech and banking: What
do we know? _Journal_ _of_ _Financial_
_Intermediation,_ _41(July_ 2019).
https://doi.org/10.1016/j.jfi.2019.100833
Wang, Y., Xiuping, S., & Zhang, Q. (2021). Can
fintech improve the efficiency of
commercial banks? —An analysis based on
big data. Research in International Business
_and_ _Finance,_ _55,_ 101338.
https://doi.org/10.1016/j.ribaf.2020.101338
Zavolokina, L., Dolata, M., & Schwabe, G. (2016).
The FinTech phenomenon: antecedents of
financial innovation perceived by the
popular press. _Financial Innovation,_ _2(1)._
https://doi.org/10.1186/s40854-016-0036-7
| 10,231
|
{
"disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.31002/rak.v9i1.1330?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.31002/rak.v9i1.1330, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "other-oa",
"status": "GOLD",
"url": "https://journal.untidar.ac.id/index.php/rak/article/download/1330/549"
}
| 2,024
|
[
"JournalArticle",
"Review"
] | true
| 2024-05-10T00:00:00
|
[] | 10,231
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Engineering",
"source": "external"
},
{
"category": "Engineering",
"source": "s2-fos-model"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/00323f5d22c03fe67fdfc1ba688f456ad14e397b
|
[
"Computer Science",
"Engineering"
] | 0.839152
|
Hybrid blockchain-enabled secure microservices fabric for decentralized multi-domain avionics systems
|
00323f5d22c03fe67fdfc1ba688f456ad14e397b
|
Defense + Commercial Sensing
|
[
{
"authorId": "144583532",
"name": "Ronghua Xu"
},
{
"authorId": "2144836470",
"name": "Yu Chen"
},
{
"authorId": "46748462",
"name": "Erik Blasch"
},
{
"authorId": "1917528",
"name": "Alexander J. Aved"
},
{
"authorId": "2116388691",
"name": "Genshe Chen"
},
{
"authorId": "145837605",
"name": "Dan Shen"
}
] |
{
"alternate_issns": null,
"alternate_names": null,
"alternate_urls": null,
"id": null,
"issn": null,
"name": null,
"type": null,
"url": null
}
|
Advancement in artificial intelligence (AI) and machine learning (ML), dynamic data driven application systems (DDDAS), and hierarchical cloud-fog-edge computing paradigm provide opportunities for enhancing multi-domain systems performance. As one example that represents multi-domain scenario, a “fly-by-feel” system utilizes DDDAS framework to support autonomous operations and improve maneuverability, safety and fuel efficiency. The DDDAS “fly-by-feel" avionics system can enhance multi-domain coordination to support domain specific operations. However, conventional enabling technologies rely on a centralized manner for data aggregation, sharing and security policy enforcement, and it incurs critical issues related to bottleneck of performance, data provenance and consistency. Inspired by the containerized microservices and blockchain technology, this paper introduces BLEM, a hybrid BLockchain-Enabled secure Microservices fabric to support decentralized, secure and efficient data fusion
|
## Hybrid Blockchain-Enabled Secure Microservices Fabric for Decentralized Multi-Domain Avionics Systems
### Ronghua Xu[a], Yu Chen*[a], Erik Blasch[b], Alexander Aved[b], Genshe Chen[c], and Dan Shen[c]
aBinghamton University, SUNY, Binghamton, NY, USA
bU.S. Air Force Research Laboratory, Rome, NY, USA
cIntelligent Fusion Tech, Inc, Germantown, MD, USA
#### ABSTRACT
Advancements in artificial intelligence (AI) and machine learning (ML), dynamic data driven application systems (DDDAS), and the hierarchical cloud-fog-edge computing paradigm provide opportunities for enhancing multi-domain system performance. As one example of a multi-domain scenario, a “fly-by-feel” system
utilizes DDDAS framework to support autonomous operations and improve maneuverability, safety and fuel
efficiency. The DDDAS “fly-by-feel” avionics system can enhance multi-domain coordination to support domain
specific operations. However, conventional enabling technologies handle data aggregation, sharing, and security policy enforcement in a centralized manner, which incurs critical issues related to performance bottlenecks,
data provenance and consistency. Inspired by the containerized microservices and blockchain technology, this paper introduces BLEM, a hybrid BLockchain-Enabled secure Microservices fabric to support decentralized, secure
and efficient data fusion and multi-domain operations for avionics systems. Leveraging the fine-granularity and
loose-coupling features of the microservices architecture, multidomain operations and security functionalities are
decoupled into multiple containerized microservices. A hybrid blockchain fabric based on two-level committee
consensus protocols is proposed to enable a decentralized security architecture and support immutability, auditability and traceability for data provenance in the existing multi-domain avionics system. Our evaluation results
show the feasibility of the proposed BLEM mechanism to support decentralized security service and guarantee
immutability, auditability and traceability for data provenance.
**Keywords: Blockchain, Microservices, Dynamic Data Driven Applications Systems (DDDAS), Multidomain**
Data Analytics, Fly-by-Feel Avionics
#### 1. INTRODUCTION
As a recent trend, data science has become essential in engineering, business, and medical applications thanks
to the advancements in artificial intelligence (AI), machine learning (ML), as well as information fusion technologies.[1] Developments in information fusion have moved from surveillance applications based on video and
text analytics[2][,] [3] towards that of the Internet of things (IoT) scenarios,[4][,] [5] multi-domain applications,[6] and battle
management.[7] As an example of multi-domain applications, avionics systems follow principles of layered sensing,[8][,] [9] where each layer represents data and information from different domains including space, air, ground,
and sea. With the plethora of information available in multi-domain avionics systems, the big data needs to be
considered in the 5-V dimensions: volume, velocity, variety, veracity, and value.[10]
As a conceptual framework that synergistically combines models and data in order to facilitate the analysis and prediction of physical phenomena,[11][,] [12] DDDAS developments in deep manifold learning,[13] nonlinear
tracking,[14][,] [15] and information fusion,[16][–][18] showing promise for advanced avionics assessments. The concept of a
DDDAS approach to “fly-by-feel” avionics systems is proposed for efficient multi-domain coordination through
leveraging modeling (data at rest), real-time control (data in motion) and analytics (data in use).[19] The design
of a multi-domain fly-by-feel avionics system could coordinate the space,[20] air,[21] ground,[22] subsurface[23] and
cyber domains to determine the mission needs for autonomous surveillance of a designated area.
Further author information: (Send correspondence to Yu Chen)
Yu Chen: E-mail: [email protected]
While DDDAS-based “fly-by-feel” avionics systems can enhance multi-domain coordination to support multi-intelligence information fusion, they also bring new architecture, performance, and security concerns. Multi-domain operations require coordination among different domain platforms with high heterogeneity, dynamics,
and non-standard development technologies. They need a scalable, flexible and efficient system architecture
to support fast development and easy deployment among participants. In addition, to make appropriate, timely
decisions in the multi-domain operations, the Android Team Awareness Kit (ATAK)[24] conveyed Situational
Awareness (SA) in a decentralized manner to the users at the edge of the network as well as at operations centers.
However, a conventional security and management framework relies on a centralized third-party authority, which
can be a performance bottleneck and is susceptible to a single point of failure in distributed SA scenarios, where
real-time SA information is shared among geographically scattered command centers and operational troops.
Furthermore, DDDAS combines structural health data from the on-board sensors with data from off-line sources
for feedback control. Therefore, the data in use should be consistent, unaltered and auditable throughout its entire
lifetime, which means that data quality should be ensured in terms of integrity, traceability and auditability.
In this paper, a hybrid BLockchain-Enabled secure Microservices fabric (BLEM) is proposed to support
decentralized, secure and efficient data fusion and multi-domain operations for avionics systems. Leveraging
the fine-granularity and loose-coupling features of the microservices architecture,[25][,] [26] multi-domain operations
and security functionalities are decoupled into multiple containerized microservices. Thus, challenges resulting
from the heterogeneity are addressed by allowing development and deployment by participants from different
domains, and those lightweight microservices are computationally affordable on resource-constrained IoT devices
used in SA scenarios. To enable a decentralized security architecture and support immutability, auditability and
traceability for data provenance, a hybrid blockchain fabric is integrated into the existing multi-domain avionics
concept by using two-level committee consensus protocols. Experimental results demonstrate the feasibility and
effectiveness of the proposed BLEM scheme.
The major contributions of this work are as follows:
1. A complete architecture of hybrid blockchain-enabled secure microservices fabric for decentralized multidomain avionics system is proposed, which includes multi-domain fly-by-feel system, secure microservices
layer, and a hybrid blockchain network;
2. Security policies, like authentication and access control, are implemented as separate containerized microservices, which utilize a smart contract to act as decentralized application (DApp);
3. A hybrid blockchain fabric, which consists of a two-level consensus protocol, intra-domain consensus and
inter-domain consensus, is proposed to improve the scalability and efficiency of consensus in the hierarchical
multi-domain network; and
4. A proof-of-concept prototype is implemented and tested on the Ethereum and Tendermint blockchain
network, and the evaluation results show that the proposed BLEM scheme provides a decentralized security
service and guarantees immutability, auditability and traceability for data provenance in multi-domain
scenarios.
The remainder of this paper is organized as follows: Section 2 reviews background knowledge of DDDAS
based multi-domain avionics systems, and the state of the art in blockchain-based decentralized solutions. Section 3 illustrates the details of the proposed hybrid blockchain fabric for multi-domain avionics systems. The
experimental results and evaluation are discussed in Section 4. Finally, the summary, current limitations and
future works are discussed in Section 5.
#### 2. STATE OF THE ART AND RELATED WORK
2.1 Dynamic Data Driven Applications Systems (DDDAS)
Dynamic Data Driven Applications Systems (DDDAS) is a conceptual framework that synergistically combines
models and data in order to facilitate the analysis and prediction of physical phenomena. In a broader context,
DDDAS is a variation of adaptive state estimation that uses a sensor reconfiguration loop as shown in Fig.
1.[27] This feedback loop seeks to reconfigure the sensors in order to enhance the information content of the
measurements. The sensor reconfiguration is guided by the simulation of the physical process. Consequently,
the sensor reconfiguration is dynamic, and the overall process is data driven.
Figure 1. Dynamic data-driven application systems (DDDAS) concept.[27]
The core of the DDDAS is the data assimilation loop, which uses sensor data error to drive the physical
system simulation so that the trajectory of the simulation more closely follows the trajectory of the physical
system. The data assimilation loop uses input data if input sensors are available. The innovative feature of
DDDAS paradigm is the additional sensor reconfiguration loop, which guides the physical sensors in order to
enhance the information content of the collected data. The data assimilation and sensor reconfiguration feedback
loops are computational rather than physical feedback loops. The simulation guides the sensor reconfiguration
and the collected data, and in turn, improves the accuracy of the physical system simulation. The “model-based simulated data” positive feedback loop is the essence of DDDAS. Key aspects of DDDAS include the
algorithmic and statistical methods that incorporate the measurement data with that of the high-dimensional
modeling and simulation. The power of DDDAS is to use simulated data from a high-dimensional model to
augment measurement systems for systems design to leverage statistical methods, simulation, and computation
architectures.[19]
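As a toy illustration of this assimilation idea (not taken from the paper), the sketch below nudges a simulated state toward hypothetical sensor readings using their error; the gain value is an arbitrary assumption.

```python
# A toy illustration of the data assimilation loop: the sensor data error
# "nudges" the simulation state so its trajectory follows the physical
# system. The gain of 0.3 is an arbitrary illustrative choice.
def assimilate(sim_state: float, measurement: float, gain: float = 0.3) -> float:
    """One assimilation step: correct the simulation with the sensor error."""
    error = measurement - sim_state
    return sim_state + gain * error

sim = 0.0
for z in [1.0, 1.2, 0.9, 1.1]:  # hypothetical sensor readings
    sim = assimilate(sim, z)
    print(round(sim, 3))
```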
The DDDAS concepts have developed over two decades, with the simulation methods including scientific theory,
domain methods and architecture design. Scientific theory utilizes modeling and analysis for enhancing the
phenomenology of science models by using measurement information and adaptive sampling incorporated into
multiphysics, for example avionics[28] and smart cities.[29] Domain methods utilize data assimilation and multimodal analysis to that of control and filtering for methods of tracking,[30][,] [31] situation awareness,[32] and context-enhanced information fusion.[18] Architecture design is mainly for designing scalable systems architectures and
cyber network analysis, with recent efforts in cloud computing based information fusion.[33][,] [34]
#### 2.2 Multi-domain Fly-by-Feel Avionics
In the fly-by-feel DDDAS approach,[35] the structures of the aircraft can provide real-time measurements to adjust
the flight control. The integration of on-line data with the off-line model creates a positive feedback loop, where
the model judiciously guides the sensor selection, sensor data collection, from which the sensor data improves
the accuracy of the flight control model. From the recent Handbook on Dynamic Data Driven Applications
_Systems,[11]_ multi-domain scenarios demonstrate techniques to incorporate physics models in support of domain
specific operations. Figure 2 illustrates a multi-domain fly-by-feel concept for future UAVs (or a swarm of UAVs),
which leverages DDDAS developments for multi-domain coordination among different platforms in space, air,
and ground domains.
Figure 2. Multi-domain coordination for fly-by-feel avionics system.
1. Space Domain: provides valuable functions for navigation, communication, data routing and services for
data in motion. In space situation awareness, space weather detection is important for the continuous
satellite operations,[36] and it can help mitigate the effects of threats to satellites supporting tracking,
communication, navigation, and remote sensing.[37][,] [38] Current DDDAS developments in situation awareness
focus on the results of weather effecting reliable communications.[39][–][41] Satellite health monitoring (SHM)
includes the power and electronics to control the satellite.[42][,] [43] Secure uplink and downlink services can
provide data in collect.[44][,] [45] The space domain is critical for multi-domain services such as the control and
positing of a UAV that provides situation awareness.
2. Air Domain: provides the coordinated autonomous actions on information fusion and control diffusion for
data in collect, working as a network of swarm UAVs.[46] A recent example of the multi-domain concept is fly-by-feel, which incorporates active sensing for flying.[47] To enable the fly-by-feel concept, various sensors need to
be designed[48] to leverage the other domains such as that of biological systems.[49] Aeroelastic sensing,[50][,] [51]
is evident as a DDDAS method to enhance real-time management and control in a fly-by-feel system. The
fly-by-feel techniques incorporate stochastic sensing and filtering as part of the on-line structural health of
the aircraft that is incorporated with the measurements of position and air fluid flow.[52][,] [53]
3. Ground Domain: The Android Team Awareness Kit (ATAK)[24] is a situation awareness tool that includes
many feature displays for a portable device that supports multi-domain operations. ATAK focuses on
improving the real-time SA of small units at the tactical edge. which means knowing where you are, where
the rest of your team is, and having a variety of ways to communicate with your team (and, if feasible
with reach-back, to operation centers).[24] While ATAK features the display of various data sources, for
multidomain operations; it could provide additional information to the user towards the health of the
systems for command and control.[54] The DDDAS rendering options support the design of a User Defined
Operating Picture (UDOP)[55] that can be displayed on the ATAK system. The ability to plot tracks,
discussions, and labels of objects[56][,] [57] enhances the situation understanding.[58][,] [59]
As Fig. 2 shows, multi-domain operations require cross-domain data sharing techniques, including data in
collect, data at rest, data in use, data in transit and data in motion. Data at Rest acts as long-term storage
service which provides structure (i.e., translations) between data for integration, analysis, and storage. Data
_in Collect leverages the power of modeling from which data is analyzed for information, delivered as knowledge,_
and supports prediction of data needs. Data in Transit works as a Data as a Service (DaaS) architecture that
incorporates contextual information, metadata, and information registration to support the systems-of-systems
design. Data in Motion utilizes feedback control loops to dynamically adapt to changing priorities, timescales,
and mission scenarios. The intersection of the information is Data in Use, which provides context-based human-machine interactions based on dynamic mission priorities, information needs, and resource availability.
#### 2.3 Microservices in IoT
The traditional service-oriented architecture (SOA) utilizes a monolithic architecture that constitutes different
software features in a single interconnected and interdependent application and database. Owing to the tightly
coupled dependence among functions and components, such a monolithic framework is difficult to adapt to
new requirements in an IoT-enabled system, such as scalability, service extensibility, data privacy, and cross-platform interoperability.[26] By encapsulating a minimal functional software module as a fine-grained and
independently executable unit, the microservices architecture allows for fast development and easy deployment
in multi-domain scenarios. The individual microservices communicate with each other through a lightweight
and asynchronous manner, such as HTTP RESTful API. Finally, multiple decentralized individual microservices
cooperate with each other to perform the functions of complex systems. The flexibility of microservices enables
continuous, efficient, and independent deployment of application function units. As two most significant features
of the microservices architecture, fine granularity means each of the microservices can be developed in different
frameworks and with minimal development resources, while loose coupling implies that functions of microservices
and its components are independent of each other’s deployment and development.[60]
Thanks to the fine-granularity and loose-coupling properties, the microservices architecture has been investigated in many smart developments to improve the scalability and security of IoT-based applications. The IoT
systems are advancing from a “things”-oriented ecosystem to a widely and finely distributed microservices-oriented ecosystem.[26] To enable a more scalable and decentralized solution for advanced video stream analysis for large
volumes of distributed edge devices, a system design for a robust smart surveillance system was proposed based
on microservices architecture and blockchain technology.[3][,] [61][,] [62] It aims at offering a scalable, decentralized and
fine-grained access control solution for smart public safety. BlendSM-DDM[63] was proposed, decoupling business logic functions and security services into multiple containerized microservices rather than using a monolithic
service architecture, and it supports loose-coupling, fine-granularity and easy-maintenance for decentralized data
marketing applications.
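As a minimal illustration of this microservices style, the sketch below implements one RESTful security microservice with Flask, the Python framework used in the BLEM prototype described in Section 4. The route, port, and response fields are illustrative assumptions, not an interface taken from the cited works.

```python
# A minimal sketch of one containerized security microservice exposing a
# RESTful endpoint over HTTP, in the loosely coupled style described above.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/identity/verify", methods=["POST"])
def verify_identity():
    # A real service would consult a registry smart contract; this stub
    # only echoes a decision for the supplied blockchain address.
    address = request.get_json(force=True).get("address", "")
    return jsonify({"address": address, "verified": bool(address)})

if __name__ == "__main__":
    # Bind to 0.0.0.0 so the service is reachable from outside its container.
    app.run(host="0.0.0.0", port=5000)
```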
#### 2.4 Blockchain and Smart Contract
As a fundamental technology of Bitcoin,[64] _blockchain initially was used to promote a new cryptocurrency that_
performs commercial transactions among independent entities without relying on a centralized authority, like
banks or government agencies. Essentially, the blockchain is a public ledger based on consensus rules to provide
a verifiable, append-only chained data structure of transactions. Blockchain relies on a decentralized architecture
in which data is verified, stored and updated in a distributed manner. In a blockchain network, a consensus mechanism is
enforced on a large number of distributed nodes called miners to maintain the sanctity of the data recorded on
the blocks. The transactions are validated by miners and recorded in the time-stamped blocks, and each block
is identified by a cryptographic hash and chained to preceding blocks in a chronological order. Thanks to the
trustless consensus protocol running on miners across the network, participants can trust the system of the public
ledger stored worldwide on many different decentralized nodes maintained by “miner-accountants”, as opposed
to having to establish and maintain trust with a transaction counter-party or a third-party intermediary.[65] Thus,
blockchain offers a prospective decentralized architecture to support secure distributed transactions among all
participants in a trustless multi-domain environment.
Emerging from the intelligent property, a smart contract allows users to achieve agreements among parties
through a blockchain network. By using cryptographic and security mechanisms, a smart contract combines
protocols with user interfaces to formalize and secure relationships over computer networks.[66] A smart contract
includes a collection of pre-defined instructions and data that have been saved at a specific address of blockchain
as a Merkle hash tree, a binary tree data structure constructed from the bottom up. By exposing public
functions or application binary interfaces (ABIs), a smart contract interacts with users to offer the predefined
business logic or contract agreement.
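For illustration, the hedged sketch below calls a deployed contract through its exposed ABI using the web3.py client library; the RPC endpoint, contract address, ABI fragment, and function name are hypothetical placeholders, not the contract used in this work.

```python
# A hedged sketch of interacting with a deployed smart contract through
# its ABI using web3.py. All identifiers below are illustrative.
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("http://127.0.0.1:8545"))  # assumed local node

abi = [{
    "name": "getIndex",
    "type": "function",
    "stateMutability": "view",
    "inputs": [{"name": "key", "type": "string"}],
    "outputs": [{"name": "", "type": "string"}],
}]
address = Web3.toChecksumAddress("0x0000000000000000000000000000000000000000")
contract = w3.eth.contract(address=address, abi=abi)

# A read-only call through the exposed ABI; no transaction is broadcast.
print(contract.functions.getIndex("record-42").call())
```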
The blockchain and smart contract enabled security mechanism for applications has been a hot topic and
some efforts have been reported recently, for example, smart surveillance system,[4][,] [61][,] [62] social credit system,[67]
decentralized data marketing,[63][,] [68] space situation awareness,[20] biomedical imaging data processing,[69] and access
control strategy.[70][,] [71] Blockchain and smart contract together are promising to provide a decentralized solution
to support secure data sharing and access in multi-domain avionics systems.
#### 3. BLEM SYSTEM ARCHITECTURE
The design of a multi-domain fly-by-feel avionics system requires operation coordination and data exchange
across boundaries of space, air, ground and the cyber domain. Such a multi-domain system is deployed in a heterogeneous network environment with high dynamics and diverse technologies. In addition, advancements
in edge computing based SA, like ATAK, also requires a lightweight and scalable architecture to enable services
on a large volume of resource-constrained IoT devices. Virtualization technologies, like virtual machines (VMs)
and containers, are platform independent and provide resource abstraction and isolation features, making them
ideal for system architecture designs that address the heterogeneity challenge in multi-domain scenarios. Compared
to VMs, containers are more lightweight and flexible, with operating system (OS)-level isolation, making them an
ideal choice for service deployment on edge computing platforms.
The widely used ATAK technology can improve accuracy and real-time decision making for multi-domain tasks
through decentralized SA. However, existing security and management frameworks normally rely on a centralized
authority, which can be a performance bottleneck or susceptible to a single point of failure. Furthermore, cross-domain
data sharing technology is essential for DDDAS operations like feedback control, so the data
should be consistent, unaltered and auditable throughout its entire lifetime. To address the above issues, blockchain
and smart contract offer a promising solution to enable a decentralized trust network and secure data sharing
service, where data and its history are reliable, immutable and auditable.
Figure 3. Architecture of BLEM: a Hybrid Blockchain Fabric for Multi-domain Fly-by-Feel Avionics.
Figure 3 illustrates the system architecture of the proposed BLEM scheme, a hybrid blockchain-enabled fabric
for multi-domain fly-by-feel avionics system. The whole system consists of (i) a multi-domain fly-by-feel system
that relies on DDDAS method to increase maneuverability, safety and fuel efficiency in avionics scenario, (ii) a
blockchain-enabled security services layer that leverages microservices and smart contract to support flexible,
efficient and secure multidomain operations, and (iii) a hybrid blockchain fabric as the fundamental network
infrastructure that utilizes lightweight consensus protocols and distributed ledger to enable decentralized security
mechanism.
#### 3.1 Multi-Domain Fly-by-Feel System
The multi-domain fly-by-feel avionics system measures the aerodynamic forces (wind, pressure, temperature) for
physics-based adaptive flight control to increase maneuverability, safety and fuel efficiency. The upper left of Fig.
3 presents a DDDAS method that identifies safe flight operation platform position needs from which models,
data, and information are invoked for effective flight control. Context, measurement and cyber/info awareness
are three methods to support a combined systems awareness analysis.
1. Measurement awareness includes signal and structure awareness based on air, fluid, and structural analysis.
For structure awareness, structures of the aircraft can provide real-time measurements, such as strain and
temperature, to adjust the flight control. Given the data collected by the sensors, signal awareness can provide
estimates of initial conditions, boundary conditions, inputs, parameters, and states to enhance the accuracy
of the model.
2. Context awareness methods include space and situation awareness. Space awareness generally consists
of two major areas: satellite operations and space weather. The satellite operations are focused on the local
perspective to enable continuous operations by understanding the space environment and building models to
support satellite health monitoring (SHM).[20] For context situation awareness, target tracking, pattern
classification, and coordinated control are components of information fusion which can be applied to video
tracking and wide area motion imagery.
3. Cyber/info awareness uses security, power, and scene (data) modeling of the system to enable energy and
process awareness. These functions operate over the layered domain operations as DDDAS-based resilient
cyber battle management services.
The above fly-by-feel air platform concept leverages modeling (data at rest), real-time control (data in motion)
and analytics (data in use) for multi-domain coordination. Given information gathered from space (e.g., GPS), air
(e.g., aircraft measurements), and ground Automatic Dependent Surveillance Broadcast (ADS-B), the DDDAS
system based on multi-domain coordination can determine the mission needs for autonomous surveillance of a
designated area.
#### 3.2 Security Microservices
The blockchain-enabled security services layer, as shown in the right part of Fig. 3, acts as a fundamental microservices-oriented infrastructure to support a decentralized security mechanism. The key elements and operations are
described below.
1. Service Policy Management: acts as the security service manager responsible for entity registration
and smart contract authorization. To join the network, a participant submits its blockchain address to the
entity registration process, which associates the entity's unique blockchain account address with a Virtual ID
(VID).[20] For smart contract authorization, domain owners or the system administrator deploy the smart contracts that encapsulate security functions, like data integrity and access control. After the smart contracts
have been deployed successfully on the blockchain network, only authorized participants can interact
with them through Remote Procedure Call (RPC) interfaces.
2. Data Integrity: to support DDDAS multi-domain tasks, data fusion between online (data in motion) and
offline (data at rest) sources is needed, and the intersection of this information is data in use. Thus, it is necessary
to ensure data integrity when combining those data in decision-making tasks. Data integrity technologies mainly
ensure reliable and immutable data access while avoiding the storage of a huge amount of redundant
data in the blockchain. The data integrity microservice provides dynamic data synchronization and
efficient verification through a hashed-index authentication process implemented by a smart contract.[4] Data owners
simply save the hashed index of the data to the distributed ledger through authorized ABI functions of the smart
contract. In the verification process, a data user fetches the key-value index from the distributed ledger and compares
it with the calculated hash value of the received data (a sketch of this check is given after this list).
3. Identity Authentication: Since each blockchain account is uniquely indexed by its address, which is derived
from its owner's public key, the account address is ideal for the identity authentication needed by other security
microservices, such as data integrity and access control. Once an identity verification service request is
acknowledged, the identity authentication decision-making process checks the requester's identity profile
by querying other microservices-based service providers through the RESTful API for identity
verification results.
4. Access Control: The domain administrator and data owners can transcode access control (AC) models
and policies into a smart contract-based access control (AC) microservice.[62][,] [70][,] [71] To successfully access
data or execute a task in multi-domain coordination, a user initially sends an access right request to the AC
microservice to obtain a capability token. Given the results of the identity verification and access right decision-making
processes, the AC microservice issues a capability token encoding the authorized access rights and
updates the token data in the smart contract (an illustrative token flow is sketched below).
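The hashed-index check described in the Data Integrity service above can be summarized in a few lines. The sketch below is illustrative only: `ledger_get` stands in for the authorized smart contract ABI call, and the key and payloads are assumptions.

```python
# A minimal sketch (assumed interfaces) of the hashed-index integrity
# check: the owner stores only a hash of each record on the ledger, and
# a user verifies received data against it.
import hashlib

def hashed_index(data: bytes) -> str:
    """The value stored on-chain in place of the raw record."""
    return hashlib.sha256(data).hexdigest()

def verify_record(key: str, received: bytes, ledger_get) -> bool:
    # ledger_get stands in for the smart contract ABI call that returns
    # the key-value index from the distributed ledger.
    return ledger_get(key) == hashed_index(received)

# Toy usage with an in-memory stand-in for the ledger:
ledger = {"record-42": hashed_index(b"sensor payload")}
print(verify_record("record-42", b"sensor payload", ledger.get))    # True
print(verify_record("record-42", b"tampered payload", ledger.get))  # False
```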
The security microservices allow service providers and data owners to deploy their own security policies as
smart contracts instead of relying on a centralized third-party authority. This provides a decentralized security
mechanism for distributed multi-domain scenarios.
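Similarly, the capability token flow of the Access Control microservice can be illustrated as below. The token fields and rights strings are assumptions; in BLEM the token data would be anchored in the smart contract rather than held in a local dictionary.

```python
# An illustrative sketch (not the papers' implementation) of the
# capability token flow: the issued token encodes authorized rights, and
# the resource side checks them before serving a request.
import time

def issue_token(subject: str, rights: set, lifetime_s: int = 300) -> dict:
    """Capability token encoding the authorized access rights."""
    return {"sub": subject, "rights": set(rights), "exp": time.time() + lifetime_s}

def authorize(token: dict, action: str) -> bool:
    """Grant the action only if the token is fresh and carries the right."""
    return time.time() < token["exp"] and action in token["rights"]

token = issue_token("0xabc...", {"read:telemetry"})
print(authorize(token, "read:telemetry"))  # True
print(authorize(token, "write:config"))    # False
```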
#### 3.3 Hybrid Blockchain Fabric
The hybrid blockchain fabric is responsible for the consensus protocol and persistent storage, which are the enabling
technologies for the decentralized security mechanism. As the core of the blockchain, the consensus protocol mainly
maintains the integrity, consistency, and ordering of data in the distributed ledger across the trustless multi-domain
network. To improve the scalability and efficiency of executing consensus protocols in a multi-domain network
with heterogeneity and dynamics, a two-level consensus protocol is proposed: intra-domain consensus and inter-domain consensus, as shown at the bottom of Fig. 3. For an individual domain, a classical Byzantine Fault
Tolerant (BFT)[72] based intra-committee consensus protocol is executed among committee members to validate a
disjoint set of transactions within the domain. For multi-domain coordination, an inter-domain consensus protocol
is responsible for validating blocks across domain boundaries and finalizing a global distributed ledger. Key
components and workflows are explained as follows:
1. Permissioned committee network : Following the idea of delegation, only a small subset of the nodes in the
network are selected as validators who form a committee and perform the consensus protocol. Permissioned
networks provide basic security primitives, such as public key infrastructure (PKI), identity authentication and access control, etc. Public key cryptography is used to secure communication and transactions
validation, e.g., digital signatures.
2. Intra-domain consensus: BFT replication consensus protocols, like Practical BFT (PBFT),[73] execute
the consensus algorithm among a small group of nodes that are authenticated by the network administrator. They are well adopted in permissioned blockchain networks in which access control strategies
for network management are enforced. For each domain, data transactions within the domain are broadcast
among validators, who record verified transactions in blocks. Consensus agreement is achieved once a
proposed intra-domain block is signed by no less than 2/3 of the validators in the committee (this commit
rule is sketched after this list). Owing to the small size of the intra-domain committee, only a limited
network delay is introduced by message propagation, ensuring a high transaction throughput in intra-domain
scenarios, which require high data transaction rates and fast responses to service requests.
3. Inter-domain consensus: To jointly address several critical issues such as pseudonymity, scalability and
poor synchronization in an open-access inter-domain network environment, a proof-based consensus
mechanism, such as Proof-of-Work (PoW), is adopted as the inter-domain consensus protocol. The inter-domain
committee is responsible for verifying data transactions across domain boundaries, proposing new blocks containing
the verified transactions, and finalizing blocks in a global distributed ledger. The security of the consensus
protocol requires that the majority (51%) of the nodes are honest and correctly execute the
consensus protocol. The inter-domain consensus aims to support scalability and probabilistic
finality in the partially synchronous multi-domain network environment.
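The intra-domain commit rule stated above (a block is final once at least 2/3 of the committee's validators have signed it) reduces to a simple supermajority check. The sketch below is a toy version that abstracts away signature verification.

```python
# A toy sketch of the intra-domain commit rule: a proposed block is final
# once at least 2/3 of the committee's validators have signed it.
from fractions import Fraction

def committed(signers: set, committee: set) -> bool:
    """BFT-style supermajority: |valid signers| / |committee| >= 2/3."""
    valid = signers & committee  # ignore signatures from non-members
    return Fraction(len(valid), len(committee)) >= Fraction(2, 3)

committee = {"v1", "v2", "v3", "v4"}
print(committed({"v1", "v2", "v3"}, committee))  # True:  3/4 >= 2/3
print(committed({"v1", "v2"}, committee))        # False: 2/4 <  2/3
```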
#### 4. IMPLEMENTATION AND EVALUATION
To verify the proposed BLEM scheme, a proof-of-concept prototype is implemented in a real physical network
environment. The security microservices have been implemented as Docker containers, which are deployed both
on the edge (Raspberry Pi) and fog (desktop) units. The web service application development is built on the Flask
framework[74] using Python. For the blockchain part, we use Ethereum[75] for inter-domain operations, while
Tendermint[76] is used for the intra-domain consensus mechanism. Smart contract development uses Solidity,[77]
a contract-oriented, high-level language for implementing smart contracts.
#### 4.1 Experimental Setup
Table 1 shows the configurations of the nodes used in the experiments. In this prototype, laptops act as domain
administrators, which take the role of oracles to manage the domain network. All desktops work as fog computing nodes,
while Raspberry Pis run as edge computing nodes. The inter-domain network is built on an Ethereum private
network that includes six desktops as miners and two Raspberry Pis as nodes. The security microservices
are hosted on both fog and edge computing nodes. All devices use Go-Ethereum[78] as the client application to
interact with the Ethereum network. The intra-domain network is built on a private Tendermint network that uses
16 Raspberry Pis as validators.
Table 1. Configurations of Experimental Nodes.

| **Device** | Dell Optiplex 760 | Raspberry Pi 3 Model B+ |
|---|---|---|
| **CPU** | 3 GHz Intel Core TM (2 cores) | Broadcom ARM Cortex A53 (ARMv8), 1.4 GHz |
| **Memory** | 4 GB DDR3 | 1 GB SDRAM |
| **Storage** | 250 GB HDD | 32 GB (microSD card) |
| **OS** | Ubuntu 16.04 | Raspbian GNU/Linux (Jessie) |
#### 4.2 Performance Evaluation
To evaluate the performance of the microservices-based security mechanism, a service access experiment is carried
out in a physical network environment by simulating service requests and acknowledgments. A Raspberry Pi works as
a client that sends service requests, while the server side is a service provider hosted on both Raspberry Pi
(edge) and desktop (fog) nodes. For the blockchain fabric evaluation, we focus on transaction rate and throughput
by measuring the transaction commit time on the Tendermint network.
**4.2.1 Security Service Overhead**
To evaluate the overhead of running microservices on the host machine, the key security microservices, including
the identity verification, access control, and data integrity microservices, are deployed on three Raspberry Pis and
three desktops separately. Fifty test runs were conducted based on the proposed test scenario, in which the
client sends a data query request to the server side for access permission. Figure 4 shows the computation
overhead incurred by running each microservice on the different platforms. The results show that the computation
overhead increases as the task complexity grows.
Compared with data integrity, access control and identity verification involve more cryptographic and
authentication operations. Therefore, they incur higher computation overhead on both the Raspberry Pi and
the desktop. Since the identity verification microservice involves multiple smart contract interactions, like registry
reference and identity authentication, it takes longer to query the data in the blockchain.
Figure 4. Performance of running security microservices.
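A timing harness of the following kind can produce per-microservice overhead numbers like those in Fig. 4; the service URL and payload are hypothetical, and only the wall-clock measurement pattern is meant to be representative.

```python
# A hedged sketch of the overhead measurement: each of the 50 runs times
# one data-query request to a security microservice over HTTP.
import statistics
import time
import requests

URL = "http://192.168.1.10:5000/integrity/verify"  # assumed service endpoint

samples = []
for _ in range(50):  # 50 test runs, matching the experiment
    start = time.perf_counter()
    requests.post(URL, json={"key": "record-42"}, timeout=10)
    samples.append(time.perf_counter() - start)

print(f"mean {statistics.mean(samples) * 1e3:.1f} ms, "
      f"stdev {statistics.stdev(samples) * 1e3:.1f} ms")
```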
**4.2.2 Network Latency**
For an intra-domain committee, validators receive and verify transactions and execute BFT consensus to guarantee the security of the distributed ledger. The consensus protocol and ledger storage process inevitably introduce
extra delays into normal service requests and operations. Figure 5 shows the network latency when a validator
publishes a transaction within the domain and waits until it is committed on the ledger. The network latency is
measured by committing fixed-size transaction data in the domain committee at different transaction rates. The
transaction size used in the test is 1 KB to reduce the influence of data size on network performance. Given a test
Tendermint network with 16 Raspberry Pi validators, we evaluated the end-to-end delay with a validator
sending multiple transactions per second (TPS), varying from one to 100 TPS. Owing to the communication complexity of broadcasting transactions, the latency of committing transactions scales almost linearly with the
transaction rate, varying from 2.5 s to 3.7 s. For the inter-domain scenario, sixty blocks were appended to
the blockchain and the average block confirmation time was 7.7 s on our Ethereum private network.
Figure 5. Delay with different transaction rate.
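The commit latency in Fig. 5 can be measured with Tendermint's `broadcast_tx_commit` RPC, which returns only after the transaction has been included in a block; the sketch below assumes the default RPC port and a locally reachable validator.

```python
# A hedged sketch of the latency measurement: wall-clock time around a
# blocking broadcast_tx_commit call approximates the end-to-end commit
# latency for one transaction.
import os
import time
import requests

RPC = "http://127.0.0.1:26657"  # assumed Tendermint RPC endpoint

def commit_latency(payload: bytes) -> float:
    tx = "0x" + payload.hex()
    start = time.perf_counter()
    r = requests.get(f"{RPC}/broadcast_tx_commit", params={"tx": tx})
    r.raise_for_status()
    return time.perf_counter() - start

# One 1 KB random transaction, as in the reported test.
print(f"committed in {commit_latency(os.urandom(1024)):.2f} s")
```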
**4.2.3 Throughput Evaluation**
Figure 6 shows the time it takes for an intra-domain committee to complete an entire consensus protocol run
with variable transaction sizes between 1 KB and 256 KB. The transaction rate in this test is 1 TPS to reduce the
influence of data traffic on network performance. The transaction data throughput is specified in M/h, meaning
Mbytes per hour. The results obtained with the various data sizes are shown in Table 2. Given a fixed
transaction rate of 1 TPS, increasing the transaction size allows more data to be committed to the distributed ledger,
and therefore reaches a higher throughput, which maximizes the system capability.
Figure 6. Throughput evaluation.
Table 2. Data Throughputs vs. Transaction Data Sizes.

|Transaction Size|1K|16K|32K|64K|128K|256K|
|---|---|---|---|---|---|---|
|Throughput (M/h)|1.4|20.9|40.6|71.9|114.5|151.5|
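For reference, the ideal (transport-only) upper bound on this throughput follows directly from the transaction size and rate; the toy computation below contrasts that bound with Table 2, whose measured values are lower because of consensus and ledger storage overhead.

```python
# Ideal data throughput at a fixed transaction rate, ignoring consensus
# and storage overhead; compare against the measured values in Table 2.
def ideal_throughput_mb_per_hour(tx_size_kb: float, tps: float = 1.0) -> float:
    return tx_size_kb * tps * 3600 / 1024  # KB/s -> MB/h


for size_kb in (1, 16, 32, 64, 128, 256):
    print(f"{size_kb:>3} KB -> "
          f"{ideal_throughput_mb_per_hour(size_kb):7.1f} M/h (ideal)")
```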
#### 5. CONCLUSIONS
In this paper, BLEM, a hybrid blockchain-enabled secure microservices fabric, is proposed to enable decentralized security mechanisms and to support secure and efficient data fusion and multi-domain operations for multi-domain avionics systems. A comprehensive overview of the system architecture is presented, and its critical elements are illustrated. A proof-of-concept prototype has been developed and verified in a physical network environment. The experimental results demonstrate the feasibility of the proposed solutions for addressing performance and security issues in multi-domain avionics systems.

While the reported work has shown great potential, there are still open questions to be addressed before a practical decentralized security solution can be deployed in real-world multi-domain avionics applications. Future efforts include further simulation and development towards a prototype for multi-domain avionics scenarios.
| 13,514
|
{
"disclaimer": "Notice: Paper or abstract available at https://arxiv.org/abs/2004.10674, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": null,
"status": "GREEN",
"url": "https://arxiv.org/pdf/2004.10674"
}
| 2020
|
[
"JournalArticle"
] | true
| 2020-04-16T00:00:00
|
[
{
"paperId": "92b889406beb784a7a9f98750612648c51b33f58",
"title": "A Study of Lightweight DDDAS Architecture for Real-Time Public Safety Applications Through Hybrid Simulation"
},
{
"paperId": "20faceefdfa8a7f93e60f8c385f1c208e67d2f90",
"title": "A survey on security issues in services communication of Microservices‐enabled fog applications"
},
{
"paperId": "7947fe4227fa9e7cdc97bde8da3dffbdfacfd4ee",
"title": "BlendSM-DDM: BLockchain-ENabled Secure Microservices for Decentralized Data Marketplaces"
},
{
"paperId": "f81f7fa67a0ebc34b49e9cd7b84a5194f72f973b",
"title": "Blockchain Methods for Trusted Avionics Systems"
},
{
"paperId": "720f6260c42b818b339ae8b4f227b2bf3e704a37",
"title": "Dynamic data driven analytics for multi-domain environments"
},
{
"paperId": "379f0e0665a49d1fa59aa139d589b5b40bc3b9cf",
"title": "Trinity: A Byzantine Fault-Tolerant Distributed Publish-Subscribe System with Immutable Blockchain-based Persistence"
},
{
"paperId": "9de105b956eca2c6ea8db171be032543a371d1e8",
"title": "Decentralized smart surveillance through microservices platform"
},
{
"paperId": "d1156ab2d59e6c5f9cf8c4098ec2b1a294bc7ce5",
"title": "BlendMAS: A Blockchain-Enabled Decentralized Microservices Architecture for Smart Public Safety"
},
{
"paperId": "915e97adcccc230c3273a3e1236465a2643fbc3b",
"title": "Decentralized autonomous imaging data processing using blockchain"
},
{
"paperId": "a12eebe883629c2584ad6213ad8f0d924bb8f375",
"title": "Data-driven Stochastic Identification for Fly-by-feel Aerospace Structures: Critical Assessment of Non-parametric and Parametric Approaches"
},
{
"paperId": "799139ed091b1316b62b7cee3185b023751fd19b",
"title": "Exploration of blockchain-enabled decentralized capability-based access control strategy for space situation awareness"
},
{
"paperId": "78f5fd4c1db1e493e72b050d67adb27606c1747c",
"title": "Constructing Trustworthy and Safe Communities on a Blockchain-Enabled Social Credits System"
},
{
"paperId": "7105d0ece78c6e3d0650041cf8c7367d0038a5ae",
"title": "A joint manifold leaning-based framework for heterogeneous upstream data fusion"
},
{
"paperId": "1e96dbe3e74a303143847f25a5880bab86fe6e38",
"title": "A Microservice-enabled Architecture for Smart Surveillance using Blockchain Technology"
},
{
"paperId": "733335154c907d49062945fad5d67d7909fb9cf5",
"title": "Real-Time Index Authentication for Event-Oriented Surveillance Video Query using Blockchain"
},
{
"paperId": "83bc879120207d575fa92dbbc1d34f40351e6085",
"title": "BlendCAC: A Smart Contract Enabled Decentralized Capability-Based Access Control Mechanism for the IoT"
},
{
"paperId": "5897f14e9c923f4b178da1c7e57fc95cd3adbb3c",
"title": "Next-Generation, Data Centric and End-to-End IoT Architecture Based on Microservices"
},
{
"paperId": "e4707bce530e81d57612e00509bc3e0511351481",
"title": "DDDAS for space applications"
},
{
"paperId": "78c5dcc0e93b0ff44036841f7709ef15a8edef19",
"title": "BlendCAC: A BLockchain-Enabled Decentralized Capability-Based Access Control for IoTs"
},
{
"paperId": "74943a9a22d125dbf59760df049b954d7737093e",
"title": "Fly-by-Feel Control of an Aeroelastic Aircraft Using Distributed Multirate Kalman Filtering"
},
{
"paperId": "c6236730753a93cb62cd5b63e023dcca72b92526",
"title": "Airplane flight safety using error-tolerant data stream processing"
},
{
"paperId": "008c347a6ac39f90cac9fb901cfe061b9b3fc3cf",
"title": "Panel summary of cyber-physical systems (CPS) and Internet of Things (IoT) opportunities with information fusion"
},
{
"paperId": "cd56f55d6e3c9a581f3b6c795a267833abd85e59",
"title": "Dynamic data driven application systems for smart cities and urban infrastructures"
},
{
"paperId": "def302ca1310c19d72cbf7a7fd25876749bf5251",
"title": "A Container-Based Elastic Cloud Architecture for Pseudo Real-Time Exploitation of Wide Area Motion Imagery (WAMI) Stream"
},
{
"paperId": "630e6dc6494a6faaf3d98561e250249ede3dfceb",
"title": "Joint transmission power control in transponded SATCOM systems"
},
{
"paperId": "4bb8340a8c237efd80fac258398d180f9aa32887",
"title": "Microservices approach for the internet of things"
},
{
"paperId": "78b9eb80d61ab8f675bac04f1f9a39402d075082",
"title": "Cooperative space object tracking using space-based optical sensors via consensus-based filters"
},
{
"paperId": "996a4944fbf0f45d73177e0d8a7370c6bdf05ea6",
"title": "Agile battle management efficiency for command, control, communications, computers and intelligence (C4I)"
},
{
"paperId": "c824aea7a4c14a00449c5e07973917bc280f11d0",
"title": "Mitigation of weather on channel propagation for satellite communications"
},
{
"paperId": "0dc8139071d537282f0df280450f998c6c8ab723",
"title": "The QuEST for multi-sensor big data ISR situation understanding"
},
{
"paperId": "e0dd87d6ea55da7647f0f217f31bd80ba862461d",
"title": "Stochastic global identification of a bio-inspired self-sensing composite UAV wing via wind tunnel experiments"
},
{
"paperId": "29518b157f7c3e2d2ed250a87bfa6eb221035842",
"title": "Performance evaluation of SATCOM link in the presence of radio frequency interference"
},
{
"paperId": "68a13138a3fbfc3f3cf42fcab032b33e44afd9e3",
"title": "A real-time orbit SATellites Uncertainty propagation and visualization system using graphics computing unit and multi-threading processing"
},
{
"paperId": "ede0152e10229cdde459f88a301d67f337f5b355",
"title": "Control of a Nonlinear Wing Section using Fly-by-Feel Sensing"
},
{
"paperId": "280991d5728d9be4b9ddca373ed548d23d0c4c80",
"title": "Random-point-based filters: analysis and comparison in target tracking"
},
{
"paperId": "3742d234eef4c644b165a1629c2716c6b748e7a0",
"title": "Improving situation awareness with the Android Team Awareness Kit (ATAK)"
},
{
"paperId": "c63782e024ac74e62f5cdafae3debfd0193c80d4",
"title": "Sensors, and Command, Control, Communications, and Intelligence (C3I) Technologies for Homeland Security, Defense, and Law Enforcement XIV"
},
{
"paperId": "97fddbbfd681bce9eeb8e0a013353b4d5b2ba0db",
"title": "Blockchain: Blueprint for a New Economy"
},
{
"paperId": "90921e25247de1b56dc6c6eaefcb66f3bb43f794",
"title": "Information fusion in a cloud computing era: A systems-level perspective"
},
{
"paperId": "4bd48ec6e5e11251468245e1f39aa52662e12409",
"title": "Automatic Association of Chats and Video Tracks for Activity Learning and Recognition in Aerial Video Surveillance"
},
{
"paperId": "451e1e085db8f4a0605f7c1f38d53efc85bbd7e0",
"title": "QuEST for Information Fusion in Multimedia Reports"
},
{
"paperId": "6b09758d474b1da6beab2f0463ab6793b1ab1c63",
"title": "Fly-by-Feel Sensing and Control: Aeroservoelasticity"
},
{
"paperId": "ed333bff68f10dce2fab89c9e73981362ff66710",
"title": "Decisions-to-Data using Level 5 information fusion"
},
{
"paperId": "70544c6521306a9a90f7947efc2a9ebf962e0aa2",
"title": "An adaptive process-based cloud infrastructure for space situational awareness applications"
},
{
"paperId": "7d05ff4bdc6d08ceddf501d16990797885292cb6",
"title": "Mobile positioning via fusion of mixed signals of opportunity"
},
{
"paperId": "a71028b5bc5fcc68ba8ce0d84e27e06e3462537b",
"title": "Cooperative space object tracking via multiple space-based visible sensors with communication loss"
},
{
"paperId": "28f1af51c8a78bce45ba495d7cff8bdc96d8827a",
"title": "Enhanced air operations using JView for an air-ground fused situation awareness udop"
},
{
"paperId": "5d6723bed52eaedb4b83b18d8a7768adc6b4034e",
"title": "Revisiting the JDL model for information exploitation"
},
{
"paperId": "4e67eedf010f0fca22bb739eafcee790ac495ac4",
"title": "On effectiveness of routing algorithms for satellite communication networks"
},
{
"paperId": "154c9572b97ecf5ef46928ccc7fbb979e4959af7",
"title": "High Level Information Fusion (HLIF): Survey of models, issues, and grand challenges"
},
{
"paperId": "9408fdbfc86d000903395b0336421e922e371eb7",
"title": "Wide-area motion imagery (WAMI) exploitation tools for enhanced situation awareness"
},
{
"paperId": "3dc6ca5de7da07154befa9902079c8207c71a966",
"title": "Bio-inspired intelligent sensing materials for fly-by-feel autonomous vehicles"
},
{
"paperId": "a2e6f1f7a508ac03099965de8ea87cb9affaea64",
"title": "Orbital satellite pursuit-evasion game-theoretical control"
},
{
"paperId": "d93bfab04077e22adc4316a2295f0666f30a0ad3",
"title": "Jamming/anti-jamming game with a cognitive jammer in space communication"
},
{
"paperId": "3fbce4c54c981af1fbb3d7ca74f23223bf690539",
"title": "Models in frequency-hopping-based proactive jamming mitigation for space communication networks"
},
{
"paperId": "9cf4d175f03b9a9c526dc1e0933daaced263c001",
"title": "Submarine tracking via fusing multiple measurements based on Gaussian sum mixture approximation"
},
{
"paperId": "79af2ed4a95840bf8b677c06e369d2694a2005ed",
"title": "Sensor-based allocation for path planning and area coverage using UGSs"
},
{
"paperId": "1ff0607cad532d8f2398a290861067038a5b3c16",
"title": "Video image registration evaluation for a layered sensing environment"
},
{
"paperId": "5be43f4045d2b7b60d6c8528741dfe5163ad570c",
"title": "Performance-driven resource management in layered sensing"
},
{
"paperId": "d81005a2ba5d55faff4219ebf9f300abaaa9f919",
"title": "A novel framework for command and control of networked sensor systems"
},
{
"paperId": "11a9d645d6d082c987147273bbbde49f222f0d3b",
"title": "Issues and challenges of knowledge representation and reasoning methods in situation assessment (Level 2 Fusion)"
},
{
"paperId": "1b797af91468e362116be850219323abdd1a605d",
"title": "Pose angular-aiding for maneuvering target tracking"
},
{
"paperId": "18b9f5cefaac428c4c5fb67ebdd8fc711e3c6b0e",
"title": "Ten Methods to Fuse GMTI and HRRR Measurements for Joint Tracking and Identification"
},
{
"paperId": "8132164f0fad260a12733b9b09cacc5fff970530",
"title": "Practical Byzantine fault tolerance"
},
{
"paperId": "5b4cf1e37954ccd1ca6b315986d45904f9d2f636",
"title": "Formalizing and Securing Relationships on Public Networks"
},
{
"paperId": "dbf9e65c9a51624603730ea25436ba2037b969bf",
"title": "Feel Systems and Flying Qualities"
},
{
"paperId": "1689f401f9cd18c8fd033d99d1e2ce99b71e6047",
"title": "The Byzantine Generals Problem"
},
{
"paperId": "da80e4a73eadf0316c79e2b37a23c06463474f6a",
"title": "Solidity"
},
{
"paperId": "c158b56625d138c4a3d8b5344ad289e7b87ef6b7",
"title": "Handbook of Dynamic Data Driven Applications Systems"
},
{
"paperId": "e6b6e4000be0e07b7c59ac10ee2d320800bb84e3",
"title": "Context-Enhanced Information Fusion"
},
{
"paperId": "7690c18b1571b388f0c934f0f04072f7836aeef4",
"title": "Static Versus Dynamic Data Information Fusion Analysis Using DDDAS for Cyber Security Trust"
},
{
"paperId": "df62a45f50aac8890453b6991ea115e996c1646e",
"title": "Tendermint : Consensus without Mining"
},
{
"paperId": "edf9feb93084ed12bf650912d72f27b989b7cc08",
"title": "Dynamic Data Driven Applications System Concept for Information Fusion"
},
{
"paperId": null,
"title": "Pliable smart sensor system"
},
{
"paperId": "ecdd0f2d494ea181792ed0eb40900a5d2786f9c4",
"title": "Bitcoin : A Peer-to-Peer Electronic Cash System"
},
{
"paperId": "8d18747df73cd53ee38aa8deeb47acb70b1e9ff6",
"title": "GRoup IMM Tracking utilizing Track and Identification Fusion"
},
{
"paperId": null,
"title": "Ethereum Homestead Documentation"
},
{
"paperId": null,
"title": "Go-ethereum"
},
{
"paperId": null,
"title": "Flask: A Pyhon Microframework"
},
{
"paperId": null,
"title": "Identity Authentication : Since each blockchain account is uniquely indexed by its address that is derived from his/her own public key, the account address is ideal"
},
{
"paperId": null,
"title": "Security policies, like authentication and access control, are implemented as separate containerized mi-croservices, which utilize a smart contract to act as decentralized application (DApp)"
}
] | 13,514
|
en
|
[
{
"category": "Engineering",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Engineering",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/0033139bb93e9c5860d7a390beccddbb589c9563
|
[
"Engineering"
] | 0.857525
|
A performance-aware Public Key Infrastructure for next generation connected aircrafts
|
0033139bb93e9c5860d7a390beccddbb589c9563
|
Digital Avionics Systems Conference
|
[
{
"authorId": "9259308",
"name": "Mohamed Slim Ben Mahmoud"
},
{
"authorId": "2793603",
"name": "N. Larrieu"
},
{
"authorId": "1799083",
"name": "Alain Pirovano"
}
] |
{
"alternate_issns": [
"2155-7209"
],
"alternate_names": [
"Digit Avion Syst Conf"
],
"alternate_urls": [
"http://ieeexplore.ieee.org/xpl/conhome.jsp?punumber=1000202"
],
"id": "591cf945-1817-43c7-a41c-44e40654e9df",
"issn": "2155-7195",
"name": "Digital Avionics Systems Conference",
"type": null,
"url": "https://www.ieee.org/about/index.html"
}
| null |
# A performance-aware Public Key Infrastructure for next generation connected aircrafts
## Mohamed-Slim Ben Mahmoud, Nicolas Larrieu, Alain Pirovano
To cite this version:
Mohamed-Slim Ben Mahmoud, Nicolas Larrieu, Alain Pirovano. A performance-aware Public Key
Infrastructure for next generation connected aircrafts. DASC 2010, 29th IEEE/AIAA Digital
Avionics Systems Conference, Oct 2010, Salt Lake City, United States. pp 3.C.3-1 - 3.C.3-16,
10.1109/DASC.2010.5655369. hal-01022208
## HAL Id: hal-01022208
https://enac.hal.science/hal-01022208
Submitted on 9 Sep 2014
## A PERFORMANCE-AWARE PUBLIC KEY INFRASTRUCTURE FOR NEXT GENERATION CONNECTED AIRCRAFTS
### Mohamed Slim Ben Mahmoud, Nicolas Larrieu, Alain Pirovano
French Civil Aviation University (ENAC), LEOPART Laboratory, Toulouse, France
## Abstract

This paper aims to illustrate the feasibility of a scalable Public Key Infrastructure (PKI) adapted for upcoming network-enabled aircrafts, with a particular emphasis on the revocation and verification procedures: many techniques are discussed and their benefits in terms of resulting overheads are underlined through a performance assessment study. The proposed PKI is also used to secure a negotiation protocol for the supported and common security mechanisms between two end entities. The PKI presented in this paper is a sub-task of an overall security architecture for the FAST (Fiber-like Aircraft Satellite Telecommunications) project, co-funded by the Aerospace Valley pole and the French government (Direction Générale de la Compétitivité, de l'Industrie et des Services – DGCIS, Fonds Unique Interministériel – FUI). The purpose behind the project is to demonstrate the feasibility of high-capacity aircraft-earth communications through a low-cost satellite antenna technology. The project federates both industrial (EADS Astrium, Axess Europe, Vodea and Medes) and academic/institutional (ISAE, ENAC, LAAS-CNRS, Telecom Bretagne) partners.

## Introduction and Problem Statement

### Characteristics of the Future Aeronautical Communication Systems

Over the last decade, safety and security have been considered the highest priority concerns in the air transport industry. Although physical security remains the major issue in people's thoughts, researchers and experts concur in their concern to focus on digital information security for the future network-enabled aircrafts. This is due, partially, to the increasingly heterogeneous nature of air-ground communications (Air Traffic Services – ATS, Operational Control Services – AOC, and Aeronautical Passenger Communication Services – APC) and the expected shift from voice to data communications in future Air Traffic Management (ATM): the worldwide airspace will become more and more congested, as traffic is forecast to increase steadily over the next ten years. Consequently, European and American international programs such as SESAR¹ and NextGen² have been created to modernize the ATM and integrate innovative approaches into the aviation world.

¹ Single European Sky ATM Research (SESAR) is the Single European Sky (SES) technological and operational program initiative to meet future capacity and air safety needs.
² NextGen is the American program for ongoing evolution of the American National Airspace System (NAS) from a ground-based system of ATC to a satellite-based system of ATM.

Moreover, airlines aim to offer a better flight experience to passengers by deploying a variety of additional services, mainly through broadband Internet access and In-Flight Entertainment (IFE), while reducing the design and maintenance costs of such equipment. Other services can be imagined, such as duty-free credit card purchasing or cellular phone usage. Consequently, the use of Commercial Off-The-Shelf (COTS) components becomes necessary to maintain high efficiency and interoperability at reduced overall cost. Such evolutions in the civil aviation industry may engender many potential security threats which have to be carefully addressed.

**_PKI Considerations in Future ATM Systems_**

For this purpose, a PKI can be an effective solution to cope with these emerging security issues. A PKI is usually defined as a set of practices, technologies, and policies involved in several processes such as the deployment, management, storage, and revocation of public key certificates when asymmetric cryptography is used. The aim is to create a “chain of trust” for securing digital data and authenticating end entities. In ground-based networks, PKIs are often deployed whenever a large group of users must communicate securely without necessarily knowing or trusting each other directly
(e.g. securing emails, remote access, or web
applications). The PKI concept has been modified in
many ways to take into consideration the
management of public keys, certificates, or digital
identities in different networks such as wireless or
mobile (e.g. 3G, MANET) networks.
In the aeronautical context, some works have
relied on PKI to secure communication protocols [1]
or to address electronic distribution of airplane
software [2], for instance. Recommendations and best
practices are also being defined in the Air Transport
Association (ATA) specification 42 “Aviation
Industry Standards for Digital Information Security”
document [3], proposed by the Digital Security
Working Group (DSWG). The ATA DSWG group
develops industry specifications to facilitate the implementation of information security practices and technologies in the civil aviation community.
This document deals with digital identity
management and specifies standard digital certificate
profiles for the air transport industry. PKIs are also
intended to be used in the future commercial
connected aircrafts such as AIRBUS A350 and
BOEING B787, where many digital applications are
deployed either for cabin facilities, or AOC specific
applications such as the Electronic Flight Bag³ (EFB) application.
However, with the increasing number of aircraft in the worldwide airspace, some scaling issues, not yet addressed, arise: long-term forecast studies predict an average air traffic growth of up to 3.5% per year between 2007 and 2030 [4]. Moreover, a single airplane is expected to carry miscellaneous embedded end entities, ranging from avionic systems to on-board users (e.g. a passenger accessing various Internet services). The 53rd edition of the World Air Transport Statistics (WATS) document of the International Air Transport Association (IATA) [5] reported a worldwide passenger growth of +22.1% between 1999 and 2008: as the number of aircrafts/passengers/systems using security grows, it is apparent that the amount of key pairs and digital certificates handled by the PKI increases. Also, the management of the PKI credentials gets more complicated because of the typically constrained network capacity of air-ground technologies: both signaling and data messages induced by the PKI have to be performed at lower cost. The air-ground link will probably no longer be a problem in the future, since SATCOM technologies will offer high capacities for effective PKI processing, but retrieving large certificate revocation lists (CRLs), for instance, can be an issue if aircrafts do not use caching mechanisms onboard.

³ EFB is an electronic display system used to perform AOC flight management tasks and intended to replace paper-based documents used by the crew.
The certificate format is another aspect which needs to be investigated in detail: certificate parameters have to be tailored to the applications in which they are used (APC, AOC, and ATS) and to the certificate owner (aircraft, passenger, avionic system, etc.). Also, aircraft networks are mobile communication systems, and thus some mobility considerations are important when a PKI is used:
since the aircraft should get seamless service before
landing, mutual authentication with an entity of
another airline, airport or domain should be possible.
Because different aviation organizations may have
different security policies in their own PKIs, complex
inter-working and roaming schemes between the
aircrafts, end entities, or airlines are required. In such
a system, deploying a “classical” PKI model becomes
a difficult task, then, a great challenge lies in finding
a well-suited PKI for the next-generation connected
aircrafts.
This paper aims to illustrate the feasibility of a novel PKI adapted for upcoming network-enabled aircrafts. This is a performance-aware model using a combination of hierarchical Certificate Authorities (CAs) in order to minimize the air-ground exchanges caused by any PKI-related operational process (checking and revoking certificates, for instance). The PKI model we propose in this paper works across three levels: the first level is relevant to ground-CA interactions. The second level is related to the communications between airline CAs and the subordinate CAs on each aircraft. The last level deals with the onboard users and the subordinate CAs. The different phases of the certification process and key management are also described. Online Certificate Status Protocol (OCSP) [6] and CRL servers are discussed to emphasize their benefits in terms of resulting network and computation overheads. The PKI model is finally applied to an ad-hoc protocol we proposed in the FAST project for the negotiation of the commonly Supported Security Protocols (SSP) between two end entities.
## Nomenclature
Table 1 contains the notations used in the following sections:

**Table 1. Notations**

|Notation|Description|
|---|---|
|K_i^+|The public key of an entity i|
|K_i^-|The private key of an entity i|
|N_C|Total number of certificates|
|N_f|Number of flights at time t|
|Size_C|Average size of a certificate|
|t_C|Certificate validity period (in days)|
|t_S|SSP validity period (in days)|
|h_S|Digest using a hash function|
|Nonce_i|i-th randomly generated number|
|l_sig|Digital signature length|
|l_sn|Certificate serial number length|
|C_sig|Signature generation time|
|C_v|Signature verification time|
|M|Exchanged data|
|{i, K_i^+}K_CA^-|Certificate of i issued by the CA|
|R_C|Percentage of revoked certificates|
|N_R|Certificate revocation status check messages per day|
|N_U|Revocation information update messages per day|
|N_C,CA|Average number of certificates handled by one CA|
|C_U^Net|Network cost to update a certificate between the CA and the CMSE⁴|
|C_U,CA^CPU|Computation cost at the CA to update a certificate|
|C_U,CMSE^CPU|Computation cost at the CMSE to update a certificate|
|C_R^Net|Network cost to check a certificate between the CMSE and a verifier|
|C_R,CA^CPU|Computation cost at the CA to check a certificate|
|C_R,CMSE^CPU|Computation cost at the CMSE to check a certificate|
|C_R,V^CPU|Computation cost at the verifier to check a certificate|

⁴ CMSE: Certificate Management Subordinate Entity; see section “Hierarchical PKI Model for Next Generation Connected Aircrafts” for details.
## Introduction to Basic PKI Concepts
In this section, we present a non-exhaustive overview of the basic PKI concepts commonly used. More details about PKIs can be found in [7].
### Security Services
A PKI is intended to offer the following security
features:
- _Confidentiality of communications:_ only authorized persons are able to read encrypted messages;
- _Non-repudiation:_ the sender cannot deny to a third party that he sent a given message;
- _Integrity of communications:_ the recipient of a message is able to determine whether the message content was altered during its exchange;
- _Authentication of the sender:_ the recipient is able to identify the sender of a message and to demonstrate to a third party, if required, that the sender was properly identified.
### PKI Cryptographic Resources
When a PKI is deployed, the following fundamental cryptographic elements are used:

- _Public and private keys:_ also known as asymmetric key pairs. Every end entity holds two keys; the public key is made publicly available to all the other entities of the system, while the private key is kept secret. Encryption with these keys is one-way in the sense that it is computationally infeasible to decrypt a message encrypted with one key of the pair without holding the other key. The keys are mathematically related: if a message M is encrypted using the public key K_i^+, only the private key K_i^- allows the message to be revealed:

  {{M}K_i^+}K_i^- = M

  The reciprocal relation also holds: if M is encrypted with the private key K_i^-, the public key K_i^+ is used to recover the message:

  {{M}K_i^-}K_i^+ = M
  RSA (Rivest, Shamir, Adleman) [8] is a well-known asymmetric algorithm based on public/private key cryptography;

- _Digital Certificates:_ this is a central element in the use of the asymmetric key pair technique. A certificate is a data structure used to bind a public key to an end entity in an authentic way. The certificate has to be signed by a trusted third party (cf. PKI entities below), and it ensures that the public key really belongs to the entity that is stated in the certificate. A certificate aggregates various pieces of information, such as a unique certificate number, the issuer identifier, the owner identifier, the public key, the algorithm used to generate the signature, or a validity period. Other information fields can be included, depending on the type and the purpose of the certificate. The ITU-T X.509 format is the best-known and most widely used certificate format in Internet applications [9];
- _Hash values:_ (also known as checksums or digests) a hash value is a piece of data computed using a hash function. A hash function is a mathematical function which takes variable-size data and returns a fixed-size value. When used in cryptography, a hash function has to be one-way (computationally hard to invert), collision-free (computationally infeasible to find the same hash for two different data inputs), and of fixed-length output (the function always produces output of the same length). SHA-1 (Secure Hash Algorithm) [10] is an example of a hash function and can be used to compute 160-bit hashes. In a PKI, hashes are used to produce digital signatures;
- _Digital signatures:_ a digital signature is the output of a cryptographic process used to certify the signer's identity and also the integrity of the data being signed. A digital signature is produced as follows: a checksum of the data is computed and then encrypted using the private key K_i^- of the signer. The resulting digital signature is attached to the signed data together with the signer's certificate. In order to verify a digital signature, the first condition is the validity of the signer's digital certificate (i.e. not expired and not revoked). A relying party decrypts the signature using the public key K_i^+ of the signer (bound to the certificate) to get the signer's hash value. Then, the relying party computes the hash of the data itself and compares the two hashes; if they match, data integrity can be assumed. A minimal code sketch combining these primitives is given after this list.
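The following is a minimal sketch, using Python's third-party `cryptography` package, that combines the primitives above: an RSA key pair is generated, a SHA-256 digest of a message is signed with the private key K_i^-, and the signature is verified with the public key K_i^+. The key size, padding scheme, and message are illustrative choices only; this is not the PKI design proposed in this paper.

```python
# Toy demonstration of asymmetric signing and verification.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Key pair (K+, K-) of an end entity i.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"example air-ground message M"

# Sign: hash the message and encrypt the digest with the private key K-.
signature = private_key.sign(message, padding.PKCS1v15(), hashes.SHA256())

# Verify: recompute the digest and compare it with the one recovered
# from the signature using the public key K+.
try:
    public_key.verify(signature, message, padding.PKCS1v15(), hashes.SHA256())
    print("signature valid: integrity and signer identity confirmed")
except InvalidSignature:
    print("signature invalid: data altered or wrong signer")
```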
### PKI Components
A PKI is composed of the following entities:
- _Certification Authority (CA): this is the_
core component of a PKI since it is the
only entity that can issue public key
certificates. Any digital certificate is
signed by the issuing CA, which makes it
the very foundation of a security
architecture using a PKI. If CRL issuance has not been delegated to an autonomous CRL issuer, CAs can also be responsible for issuing the CRLs;
- _Registration Authority_ (RA): this is an
optional component that verifies the users’
identity and requests the CA to issue an
adequate digital certificate;
- _End Entities:_ an end entity is a generic term used to denote a user, a device, or a piece of software that needs a digital
certificate. In the aeronautical context, an
end entity can be a passenger, an aircraft,
an airline or an operator for instance;
- _Repository: this is also an optional_
component since it denotes a storage
device for certificates and CRLs so that
they can be retrieved by end entities.
### Certificate Life Cycle Management
The management of the certificate life cycle is the primary function of a PKI; the main steps are the following:

- _Registration and public/private key generation (RK):_ the first step is the end entity registration and identity establishment. The registration procedure depends on which component has to generate the public/private keys. If the CA generates the key pair, then the private key is securely passed to the registering end entity through an Out-Of-Band⁵ (OOB) mechanism; if the end entity generates the key pair, then the public key is passed to the CA, which verifies possession of the corresponding private key by means of proof mechanisms. The digital signature, which is generated using the private key and verified using the corresponding public key, can be such a mechanism;
- _Certificate generation and distribution (CGD):_ after the end entity registration and key pair generation, a certificate is issued and distributed respectively to the end entity and the certificate repository (a toy issuance sketch is given after this list);
- _Certificate regeneration (CRG):_ when a certificate expires, the corresponding end entity informs the CA, which has to renew the certificate;
- _Certificate revocation (CRV):_ when a private key has been compromised, the certificate is no longer valid and has to be revoked;
- _Certificate retrieval (CRT):_ end entities retrieve certificates from the repository or may exchange certificates with each other (when Pretty Good Privacy⁶ (PGP) is used, for instance [11]);
- _Certificate validation (CV):_ end entities may retrieve the CRLs from a repository or may connect to an OCSP server to validate a certificate when needed.
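As a toy illustration of the issuance step (CGD), the sketch below uses Python's `cryptography` package to have a CA sign an X.509 certificate binding an end entity's public key to its name. The names, the one-year validity period (the t_C parameter), and the omission of extensions are illustrative simplifications.

```python
# Toy certificate issuance: a CA key signs a certificate binding an end
# entity's public key to its name. All values below are illustrative.
import datetime

from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

ca_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
entity_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

ca_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Toy Airline CA")])
entity_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Aircraft F-ABCD")])

now = datetime.datetime.utcnow()
cert = (
    x509.CertificateBuilder()
    .subject_name(entity_name)
    .issuer_name(ca_name)
    .public_key(entity_key.public_key())   # bind the entity's public key
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=365))  # validity t_C
    .sign(ca_key, hashes.SHA256())         # the CA's signature
)
print(cert.subject.rfc4514_string(), "valid until", cert.not_valid_after)
```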
Figure 1 shows how all the PKI components interoperate with each other.

**Figure 1. Basic PKI Environment**

The performance analysis we made focuses on the two most important certificate life cycle management steps: the certificate generation/distribution and certificate revocation processes.

In order to highlight the advantages of our PKI model, we describe the most used certificate revocation schemes in more detail in the following section.

⁵ OOB delivery can be offline or use a secure and trusted channel.
⁶ PGP is a protocol used to enhance the security of e-mail communications by providing cryptographic privacy and authentication mechanisms for exchanged data.
### Certificate Revocation Schemes

Certificate validation is the process of verifying that a certificate is still valid: the validity period is checked, and the process performs an integrity check based on the signature of the issuing CA and checks the revocation status to ensure that the certificate has not been revoked.

Certificate revocation is a different process, since it is the action of declaring a certificate invalid before its expiration. For instance, certificate revocation is required when the private key is compromised: the certificate becomes useless, since the public key attached to it is mathematically related to the private key.
In a safety-related context such as data link communications, we think that certificate revocation is an important process in certificate life cycle management: any implemented PKI necessarily has to deploy a mechanism for revoking certificates and informing all involved entities about the certificate status. There are several approaches to revoking a certificate. The traditional technique is to periodically publish a CRL containing the IDs of all the revoked certificates.
The shortcoming of this approach is that the list size grows for large domains with many end entities downloading the list, and thus the network load becomes heavy and unacceptable. Caching techniques can be used at the end entities, but it is difficult to define the frequency of CRL updates and keep the list as fresh as possible. Many modifications and extensions for improving CRL performance have been proposed, such as Delta CRLs, over-issued CRLs, or CRL distribution points [9].
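A minimal sketch of such a CRL lookup, assuming Python's `cryptography` and `requests` packages and a placeholder distribution point URL, makes the cost visible: the entire list must be transferred even though the verifier only cares about a single entry.

```python
# Hypothetical CRL-based revocation check; the distribution point URL is
# a placeholder. The full list is downloaded for a single lookup.
import requests
from cryptography import x509

CRL_URL = "http://ca.example-airline.test/latest.crl"  # placeholder


def is_revoked(serial_number: int) -> bool:
    crl = x509.load_der_x509_crl(requests.get(CRL_URL).content)
    entry = crl.get_revoked_certificate_by_serial_number(serial_number)
    return entry is not None
```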
The second standardized approach is to provide an online server and use a protocol to check the certificate revocation status in real time. Compared to CRLs, the main advantage is to request the status of a targeted certificate instead of a full revocation list in which only one entry matters to the verifier. OCSP is an example of an online revocation status checking protocol. The protocol has been designed to check the revocation status exclusively: an end entity requests the revocation information for one or more certificates by sending an OCSP request to the OCSP server. The OCSP responder checks the revocation status information and issues an OCSP response containing the certificate ID and the certificate status to the end entity.
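The exchange can be sketched as follows, again assuming the `cryptography` and `requests` packages and a placeholder responder URL; note that a real verifier must also check the responder's signature on the response, which is precisely the overhead discussed next.

```python
# Hypothetical OCSP status check; the responder URL is a placeholder and
# the issuer certificate is assumed to be available to the verifier.
import requests
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.x509 import ocsp

OCSP_URL = "http://ocsp.example-airline.test"  # placeholder responder


def certificate_status(cert: x509.Certificate, issuer: x509.Certificate):
    req = (
        ocsp.OCSPRequestBuilder()
        .add_certificate(cert, issuer, hashes.SHA1())
        .build()
    )
    resp = requests.post(
        OCSP_URL,
        data=req.public_bytes(serialization.Encoding.DER),
        headers={"Content-Type": "application/ocsp-request"},
    )
    ocsp_resp = ocsp.load_der_ocsp_response(resp.content)
    # A real verifier must also validate the responder's signature here.
    return ocsp_resp.certificate_status  # GOOD, REVOKED, or UNKNOWN
```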
The problem with this approach is that the server response has to be signed (which means processing and network overheads for each response). Another issue is that the server is always connected, which makes it vulnerable to Denial of Service (DoS) attacks. As for CRLs, there are proposals to add functionalities to OCSP and avoid these kinds of issues, such as OCSP-X [12]. The Server-Based Certificate Validation Protocol (SCVP) [13] is another online protocol, but slightly different from OCSP since it fully validates a certificate using all certificate validation criteria (expiration lifetime, issuer ID, etc.). Since classical CRLs and the online OCSP protocol are the two revocation mechanisms recommended in the ATA Spec 42 document [3], we perform a comparative analysis using only these two revocation schemes, but the study can be extended to other revocation mechanisms in further work.
## PKI Activities in Civil Aviation
Many research works have been carried out on PKIs to enhance the security of next generation connected aircrafts. For instance, [14] investigated an authentication protocol for Controller-Pilot Data Link Communications (CPDLC). Since public keys and certificates are needed (the protocol is based on elliptic curve primitives), a PKI was used, and the authors assumed that a CA exists to create and distribute the credentials between the aircraft applications and the ground CPDLC applications. However, no cost or performance considerations were given when the PKI was presented. Moreover, the PKI described there is specific to one particular protocol.
[1] proposed a secure version of the Aircraft Communications Addressing and Reporting System (ACARS). The ACARS system is used worldwide by commercial airlines for air-ground operational communications and over oceanic regions when radar coverage is no longer available. The messages are transferred over Radio Frequency (RF) channels in readable form: it is thus possible to determine the aircraft position, cargo content, or operational details of the flight using low-cost eavesdropping techniques⁷. The AMS (ACARS Message Security) protocol is a security solution for ACARS and uses cryptographic algorithms that have been validated by the industry, together with a PKI for the key and certificate management life cycle. Unfortunately, ACARS is intended to be replaced progressively over the years by the ATN (Aeronautical Telecommunication Network) over IPS (Internet Protocol Suite) system.

⁷ Acarsd is a free ACARS decoder for Linux and Windows OS which attempts to decode ACARS transmissions in real-time using soundcard devices.
Besides, the use of data networks creates opportunities to corrupt safety-critical airplane software: [2] presented a security framework for a specific aeronautical network application, namely the Electronic Distribution of Software (EDS). First, the authors introduced a new approach called the Airplane Assets Distribution System (AADS) to model the information asset exchanges between the entities. They identified safety and business threats, then suggested using digital signatures and a PKI to secure the model, but they considered the PKI security solution too complex (because of the mandatory certification procedure) and proposed to investigate a lightweight alternative to PKI.
[15] addresses some of the emerging challenges for network-enabled airplanes that use public key cryptography-based applications. Two approaches were presented: an ad-hoc technique without trust chains between certificates, and a structured approach employing a PKI for EDS on commercial airplanes.

7 Acarsd is a free ACARS decoder for Linux and Windows which attempts to decode ACARS transmissions in real time using sound card devices.

The ad-hoc approach
consisted in pre-loading trusted certificates on the airplane via an out-of-band (OOB) mechanism: the main advantage of this solution is its simplicity and reduced cost; the big drawback is that it does not address the scaling issues discussed before. The structured PKI solution seems much more appropriate and offers long-term benefits in terms of scalability, but it is considered more expensive than the ad-hoc solution, specifically because of the setup and maintenance costs of the PKI.
The paper also discussed the main certificate revocation techniques: the authors suggested using CRLs for checking certificates at the airplane to avoid the need for direct connectivity to external networks, which is a condition imposed by the use of an online OCSP server. In our study, we evaluate these techniques according to the induced network and computational overheads and give design and implementation suggestions based on the results obtained at the end of the paper.
[16] depicts in general how a PKI supports an ATM environment, with an emphasis on the ATN and the Federal Aviation Administration (FAA) facilities and devices (routers, end systems, LANs, etc.). The authors suggested the use of cross certification to handle inter-domain certification. Cross certification is basically an extension of the third-party trust concept: when different CAs are deployed in separate domains, they are able to establish and maintain trustworthy certification processes and interact seamlessly with each other. In this way, users are able to trust any CA as much as their own CA and can communicate with users not necessarily attached to the same CA. However, the key distribution and certification processes were not described in this paper.
## Hierarchical PKI Model for Next Generation Connected Aircrafts
### Standard PKI Model
Several types of PKI have been defined for ground networks [17]. As a reference (and because it is a widely deployed model), we have chosen a single CA model as the standard PKI model for the performance study.

Figure 2 shows the entities involved with a single root CA; these CAs are deployed by the airlines on the ground:
- _Verifier_: an end entity which aims to verify the validity of a certificate;
- _Owner_: an end entity which possesses the digital certificate to be verified;
- _Certificate Management Subordinate Entity_[8] (CMSE): an entity through which the verifier is able to check the certificate status and its validity (e.g., an OCSP server). In most cases, the CMSE is merged with the CA.
It is important to note that both the verifier and the certificate owner can be either onboard or ground-located. The computation and network overheads are also depicted in figure 2 (cf. Table 1 for the notation). The owner is denoted _O_, the verifier _V_, and the ground CA _GCA_ in the equations.
**Figure 2. Standard PKI model** (message flows between the Owner, the Verifier, the Ground CA, and the CMSE, annotated with the processing costs $C_{R,V}^{CPU}$, $C_{R,CMSE}^{CPU} + C_{U,CMSE}^{CPU}$, $C_{U,CA}^{CPU} + C_{R,CA}^{CPU}$ and the network costs $C_U^{Net} + C_R^{Net}$)
### Hierarchical PKI Model
In this section, we propose a PKI model adapted to future aeronautical air-ground communications. Figure 3 illustrates the model and the function of each entity.

8 Depending on the terminology used, the CMSE might have a different name.
**Figure 3. Hierarchical PKI Model for Future Aeronautical Communications** (ground root CAs, Root CA1 to Root CAx, linked by cross certification; one onboard sub-CA per aircraft, Sub-CA1 to Sub-CAv; end entities End Entity1 to End Entityn attached to each sub-CA)
The PKI model we propose works across three levels (a toy sketch of the resulting chain of trust follows this list):

- The first level is relevant to inter-CA communications: a ground-located root CA (RCA in the equations) is deployed for each airline and is responsible for all the end entities that belong to this airline. The end entity can be on the ground, such as an ATN router (out of scope of this paper), or an aircraft (see the second level of the hierarchy). As every root CA is independent of the others and has authority over the aircraft labeled within the airline domain, cross certification can be used between the root CAs. Thus, the autonomy of local ground CAs and interaction between end entities belonging to different airlines can always be provided;
- The second level is relevant to the communications between the root CA of an airline and the aircraft managed by this root CA: delegated (or subordinate) CAs (denoted SCA in the equations) are deployed onboard each aircraft and used to handle the onboard certified entities (see the last level of the hierarchy). Using a device as a CA in mobile networks is common practice, especially for performance purposes (in MANETs, for instance): we used this idea as a starting point to develop our scalable PKI model;
- The third and last level of the hierarchy concerns every end entity onboard the aircraft: the sub-CA is responsible for managing all the certificates of these entities. In the analysis performed below, only passengers are considered as end entities holding a certificate, but the study can be extended to avionic devices or AOC crew, for instance.
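The following toy sketch shows how a verifier would walk this chain of trust from a passenger certificate up to the airline root CA. The names and the dictionary-based "signatures" are illustrative only; a real deployment would verify actual signatures at each step:

```python
# Minimal sketch of the three-level chain of trust described above.
# "Signatures" are abstracted as issuer-keyed records instead of real crypto.
class CA:
    def __init__(self, name, issuer=None):
        self.name, self.issuer = name, issuer
        self.issued = {}                       # subject -> certificate record

    def issue(self, subject):
        cert = {"subject": subject, "issuer": self.name}
        self.issued[subject] = cert
        return cert

def verify_chain(cert, cas, trusted_root):
    """Walk issuer links upward until the trusted root is reached."""
    while cert["issuer"] != trusted_root:
        issuer_ca = cas[cert["issuer"]]
        if cert["subject"] not in issuer_ca.issued:
            return False
        cert = issuer_ca.issuer_cert           # climb one level
    return True

root = CA("AirlineRootCA")
sub = CA("SubCA-F-GKXL", issuer="AirlineRootCA")
sub.issuer_cert = root.issue("SubCA-F-GKXL")   # root certifies the onboard sub-CA
passenger_cert = sub.issue("passenger-42")     # sub-CA certifies a passenger
print(verify_chain(passenger_cert, {"SubCA-F-GKXL": sub}, "AirlineRootCA"))  # True
```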
## Performance Analysis
In this section, we compare the two PKI models in three different study cases, depending on the physical locations of the verifier, the certificate owner, and the CAs (the ground-to-ground case is out of scope of the study since no messages are exchanged on the air-ground link). The comparison is done for two PKI steps: the certificate generation and revocation procedures. The main goal of the study is to evaluate the network and computation overheads generated by the different PKI models according to the physical locations of the PKI entities defined for each scenario.
### Aircraft Source Data
Our study follows a passenger-based approach, which means we rely exclusively on the growing number of passengers to evaluate the benefits of the proposed model. For this purpose, it is appropriate to use real data for the performance study: we therefore used source traffic data issued from the DSNA-DTI (Direction des Services de la Navigation Aérienne - Direction de la Technique et de l'Innovation) databases. These are daily air traffic statistics for medium-range aircraft in the French airspace, structured by hour of flight, aircraft family label (e.g., B738), and ICAO (International Civil Aviation Organization) code.

In order to make this information more useful, we estimated the maximum number of passengers that each aircraft can carry, and then extrapolated the results by the total number of
aircraft. We used the EUROCONTROL performance database[9] V2.0 and some additional information about aircraft seating[10] to deduce the maximum capacity of each aircraft according to its ICAO code; we then synthesized the data and extracted the relevant information. As suggested by a recent DGAC[11] (Direction Générale de l'Aviation Civile) report [18], we used an average aircraft load factor (between 70% and 80%) instead of the maximum aircraft capacity. Also, since we deploy an airline-dedicated PKI (cross certification between airlines is out of scope of this paper), we concentrate our efforts on the largest airline in the source data, namely Air France.
**Figure 4. Daily Passenger and Aircraft Statistics (Air France Airline)**
Figure 4 shows the global number of flights handled per hour (an average of 38 aircraft) and the total number of passengers per hour (an average of 4200 passengers). These statistics will be used later to study the certificate management procedures and the network and computational costs.
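As an illustration of the extrapolation step, the snippet below recomputes an hourly passenger figure from per-type seat counts and the 70-80% load factor quoted above. The seat and flight numbers are placeholders of our own, not the DSNA-DTI data:

```python
# Illustrative reconstruction of the extrapolation: seat counts per ICAO type
# are hypothetical placeholders; 0.75 is the midpoint of the 70-80% load factor
# range quoted from the DGAC report.
seats = {"B738": 189, "A320": 180, "A319": 142}        # max capacity per type
flights_per_hour = {"B738": 14, "A320": 16, "A319": 8} # from the traffic data
load_factor = 0.75

passengers_per_hour = sum(
    seats[icao] * n * load_factor for icao, n in flights_per_hour.items()
)
print(f"{sum(flights_per_hour.values())} flights/h, "
      f"~{passengers_per_hour:.0f} passengers/h")
```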
### Experimental Scenarios
**Scenario 1: Ground-Verifier/on Board-Owner**
This is a typical case where a passenger sends a (signed) email to a ground entity which wants to verify the certificate. Figures 5 and 6 show the exchanged data in this scenario for the two PKI models; the dashed line is the air-ground separation.
9 www.elearning.ians.lu/aircraftperformance/
10 www.seatguru.com
11 The DGAC is the French civil aviation authority.
**Figure 5. Scenario 1 – Standard Model** (the ground CA issues the owner's certificate $\{O, K_O^+\}_{K_{GCA}^-}$ and publishes $K_{GCA}^+$; the onboard owner sends $M \mid \{M\}_{K_O^-} \mid \{O, K_O^+\}_{K_{GCA}^-}$ to the ground verifier)
**Figure 6. Scenario 1 – Hierarchical Model** (the onboard sub-CA receives $K_O^+$ and issues $\{O, K_O^+\}_{K_{SCA}^-}$; the owner sends $M \mid \{M\}_{K_O^-} \mid \{O, K_O^+\}_{K_{SCA}^-}$ to the ground verifier; the root CA provides $K_{RCA}^+ \mid \{SCA, K_{SCA}^+\}_{K_{RCA}^-}$)
**Scenario 2: On Board-Verifier/Ground-Owner**

In this scenario, the certificate owner (e.g., an email sender) is on the ground and the verifier is on board (see figures 7 and 8).
**Figure 7. Scenario 2 – Standard Model** (the ground CA issues $\{O, K_O^+\}_{K_{GCA}^-}$ to the ground owner and publishes $K_{GCA}^+$; the owner sends $M \mid \{M\}_{K_O^-} \mid \{O, K_O^+\}_{K_{GCA}^-}$ to the onboard verifier)

**Figure 8. Scenario 2 – Hierarchical Model** (the root CA issues $\{O, K_O^+\}_{K_{RCA}^-}$ to the ground owner; the owner sends $M \mid \{M\}_{K_O^-} \mid \{O, K_O^+\}_{K_{RCA}^-}$ to the onboard verifier, whose sub-CA holds $K_{SCA}^+ \mid \{RCA, K_{RCA}^+\}_{K_{SCA}^-}$)

**Scenario 3: Both Verifier and Owner Are on Board**

In the last scenario, the verifier and the owner are on board two different aircraft, as shown in figures 9 and 10. Intra-airline AOC information exchange is a direct application of this specific scenario.

**Figure 9. Scenario 3 – Standard Model** (the ground CA issues $\{O, K_O^+\}_{K_{GCA}^-}$ and publishes $K_{GCA}^+$; the onboard owner sends $M \mid \{M\}_{K_O^-} \mid \{O, K_O^+\}_{K_{GCA}^-}$ to the onboard verifier over the air-ground link)

**Figure 10. Scenario 3 – Hierarchical Model** (the owner's sub-CA, Sub-CA1, issues $\{O, K_O^+\}_{K_{SCA_1}^-}$; the root CA certifies both sub-CAs; the verifier's sub-CA, Sub-CA2, provides $K_{SCA_2}^+ \mid \{RCA, K_{RCA}^+\}_{K_{SCA_2}^-}$)
### Results
**Certificate Generation and Distribution Process**
In order to assess the network and processing costs according to the two PKI models and the three different scenarios previously introduced, some assumptions have to be made:
- RSA is used for the key pairs and the digital signature, with a signature key length $l_{sig} = 256$ Bytes. For simplicity, we use the notation $l_{sig}$ for both the signature length and the public key length;
- The average certificate length is $Size_C = 1$ KBytes (based on the average X.509 certificate length);
- The exchanged data $M$ is not counted, since the study aims to measure only the additional overhead of the PKI mechanisms.
Here are the two network cost equations, respectively for the standard and the hierarchical PKI models (scenario 1):

$$N_C \cdot \big(K_O^+ + \{O, K_O^+\}_{K_{GCA}^-} + M \mid \{M\}_{K_O^-} \mid \{O, K_O^+\}_{K_{GCA}^-}\big) \cong 2 \cdot N_C \cdot (l_{sig} + Size_C)$$

and

$$N_f \cdot K_{SCA}^+ + N_C \cdot \big(M \mid \{M\}_{K_O^-} \mid \{O, K_O^+\}_{K_{SCA}^-}\big) \cong N_f \cdot l_{sig} + N_C \cdot (l_{sig} + Size_C)$$
Each passenger is assumed to send one request for certificate generation. We extrapolate the equations with the results obtained from the aircraft and passenger statistics (cf. the Aircraft Source Data section).
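A minimal sketch of this extrapolation for scenario 1, assuming the hourly averages of Figure 4 and the byte sizes stated above (the other scenarios follow the same pattern); with these rounded averages the computed saving is of the same order as the roughly 55% gap visible in figure 11:

```python
# Sketch (our own, not the authors' tooling) evaluating the scenario 1
# approximations above with the Figure 4 hourly averages.
l_sig, size_c = 256, 1024       # bytes: RSA signature/public key, X.509 cert
N_C, N_f = 4200, 38             # passengers/hour, aircraft/hour (Figure 4)

standard = 2 * N_C * (l_sig + size_c)
hierarchical = N_f * l_sig + N_C * (l_sig + size_c)
print(standard, hierarchical,
      f"saving: {100 * (standard - hierarchical) / standard:.0f}%")
```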
**Figure 11. Scenario 1 – Network Costs**
As shown in figure 11, the hierarchical PKI model clearly consumes less network capacity than the standard model; the difference between the two models' costs is about 55%. The hierarchical model is also better in the scenario 2 configuration; the network cost equations for the standard and hierarchical PKI models are:
$$N_C \cdot \big(K_{GCA}^+ + M \mid \{M\}_{K_O^-} \mid \{O, K_O^+\}_{K_{GCA}^-}\big) \cong N_C \cdot (2 \cdot l_{sig} + Size_C)$$

and

$$N_f \cdot K_{RCA}^+ + N_C \cdot \big(M \mid \{M\}_{K_O^-} \mid \{O, K_O^+\}_{K_{RCA}^-}\big) \cong N_f \cdot l_{sig} + N_C \cdot (l_{sig} + Size_C)$$
**Figure 12. Scenario 2 - Network Costs**
Figure 12 illustrates the network costs; the difference between the two PKI models is 20%. In the last scenario, the network cost equations are:
$$N_C \cdot (N_f - 1) \cdot \big(K_O^+ + \{O, K_O^+\}_{K_{GCA}^-} + K_{GCA}^+ + M \mid \{M\}_{K_O^-} \mid \{O, K_O^+\}_{K_{GCA}^-}\big) \cong N_C \cdot (N_f - 1) \cdot (3 \cdot l_{sig} + 2 \cdot Size_C)$$

and

$$(N_f - 1) \cdot \big(K_{SCA_1}^+ + K_{RCA}^+ \mid \{SCA_1, K_{SCA_1}^+\}_{K_{RCA}^-}\big) + N_C \cdot \big(M \mid \{M\}_{K_O^-} \mid \{O, K_O^+\}_{K_{SCA_1}^-}\big) \cong (N_f - 1) \cdot (2 \cdot l_{sig} + Size_C) + N_C \cdot (l_{sig} + Size_C)$$
The hierarchical model's network cost always remains below that of the standard model, as can be seen in figure 13. We used a logarithmic scale for this figure to better show the difference between the two models: the average difference in network costs is about 92% per hour for all the passengers.
**Figure 13. Scenario 3 - Network Costs**
As both the cost equations and the figures show, the hierarchical model benefits from the smaller total number of certificates that a root CA has to manage; deploying the sub-CAs minimizes the air-ground exchanges of PKI credentials (public keys, signatures, and certificates). In the standard model, all these credentials are handled by a single ground-located CA, so the amount of air-ground data is much larger.
**Certificate Revocation Process**

In this section, we perform the same comparison (using the same scenarios for both the standard and the hierarchical PKI models) for the revocation process, using two techniques: CRLs and the OCSP protocol. Table 2 shows the value of each cost per revocation mechanism:
**Table 2. Network and Processing Costs for the Certificate Revocation Procedure**

| Cost | CRL | OCSP |
|---|---|---|
| $C_U^{Net}$ | $N_U \cdot \left(\frac{N_C \cdot R_C \cdot t_C \cdot l_{sn}}{2} + \frac{N_C \cdot l_{sig}}{N_{C,CA}}\right)$ | 0 |
| $C_{U,CA}^{CPU}$ | $N_U \cdot C_{sig}$ | 0 |
| $C_{U,CMSE}^{CPU}$ | $N_U \cdot C_v$ | 0 |
| $C_R^{Net}$ | $N_R \cdot \left(\frac{N_{C,CA} \cdot R_C \cdot t_C \cdot l_{sn}}{2} + l_{sig}\right)$ | $N_R \cdot l_{sig}$ |
| $C_{R,CA}^{CPU}$ | 0 | 0 |
| $C_{R,CMSE}^{CPU}$ | 0 | $N_R \cdot C_{sig}$ |
| $C_{R,V}^{CPU}$ | $N_R \cdot C_v$ | $N_R \cdot C_v$ |
As for the certificate generation process, we make some assumptions on the parameters used in the certificate revocation performance study:

- A passenger holds only one certificate, so the total number of certificates $N_C$ is equal to the total number of passengers (per hour);
- $N_R$ (the number of certificate revocation status check messages) depends on the total number of certificates: $N_R = N_C \cdot R_C$, where $R_C = 10\%$;
- $N_{C,CA}$ depends on the considered PKI model: in the standard model, $N_{C,CA} = N_C$ (equal to the total number of passengers per airline); in the hierarchical PKI model, $N_{C,CA} = 110$ (average number of passengers per sub-CA);
- The revocation information update frequency is one day: $N_U = 24$ (hours);
- RSA is again used for the key pairs and the digital signature: $l_{sig} = 256$ Bytes;
- The certificate serial number length is $l_{sn} = 20$ bits;
- The signature and verification times $C_{sig}$ and $C_v$ are respectively equal to 420 msec and 0.113 msec. These values were measured on an 8-core Intel Core i7 CPU at 2.67 GHz with 4 GB of RAM, running a Linux 2.6.26-2-64 kernel.
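The Table 2 formulas can be evaluated directly; the sketch below does so for the revocation request cost $C_R^{Net}$ under the assumptions above. The value $t_C = 1$ is our own placeholder, since the definition of $t_C$ is not given here:

```python
# Sketch of the Table 2 request-cost formulas with the stated assumptions.
# t_C is assumed to be 1 (certificate lifetime in CRL-update periods) purely
# for illustration.
l_sig, l_sn = 256, 20 / 8          # bytes (signature, serial number)
N_C, R_C, t_C = 4200, 0.10, 1
N_R = int(N_C * R_C)               # revocation status checks

def crl_request_cost(n_c_ca):      # C_R^Net, CRL column of Table 2
    return N_R * (n_c_ca * R_C * t_C * l_sn / 2 + l_sig)

ocsp_request_cost = N_R * l_sig    # C_R^Net, OCSP column

print("standard CRL     :", crl_request_cost(N_C))    # one ground CA
print("hierarchical CRL :", crl_request_cost(110))    # per-aircraft sub-CA
print("OCSP (both)      :", ocsp_request_cost)
```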
**Figure 14. Requested Network Capacity between CA and CMSE for Updating Certificate Revocation Information**
The CRLs are heavy, so the update operation is expensive for both PKI models: the difference is not significant. The OCSP approach is not represented in figure 14 because the server is usually co-located with the CA, so the required network capacity is null. The computational cost of the CRL approach is very low (up to 48 msec); for OCSP this cost is null.
**Figure 15. Requested Network Capacity between CA and Verifiers for Revocation Requests**
The benefits of the hierarchical PKI model are much clearer when the comparison is done for the revocation request messages: the standard model is penalized by the total number of certificates handled by one ground CA. For the
hierarchical PKI model, OCSP is better than the classic CRL approach: OCSP computes only one signature per request, whereas the CRL method is much more demanding in terms of network capacity (cf. Figure 15). The computational costs are nearly the same, except for the OCSP server (up to 9 ms versus 0 ms for the CRL).
As expected, the hierarchical PKI performs better than the standard PKI model. The CRL revocation method has advantages such as its simplicity, the large amount of information it carries, and a reduced risk. But, as shown in the experiments, the large size of the CRLs is a major issue, since the network capacity required to update and check the status of the certificates is extremely high. Also, for freshness purposes, every CRL contains the next update date of the revocation information: since all the verifiers will send CRL requests at the same time to retrieve the new CRL, the network might be overloaded at that moment. These consequences cannot be accepted in the aeronautical context, where air-ground network resources cannot be wasted; we therefore recommend the use of OCSP as the revocation method instead of the classic CRL approach.
## Securing a Negotiation Protocol of Supported Security Mechanisms
In a previous work, we introduced a negotiation protocol as a component of a whole security framework for aeronautical data link communications [19]. The aim of the proposal is to provide an adaptive security policy for APC, AOC, and ATS communications. A component called _SecMan_ (Security Manager) is designed to pick the best security mechanism, depending on real-time network and computational considerations. For the initiation of the adaptive algorithm, the onboard and ground servers have to negotiate the commonly supported ciphers before a secure connection can be established. Thus, we designed a negotiation protocol for the supported security mechanisms over air-ground communications. Initially, we proposed an unsecured version of the protocol, but we quickly realized that it was subject to many critical attacks, such as replay and Man-in-the-Middle (MITM) attacks. We therefore propose to use the PKI model to secure this negotiation protocol.
As an extension of the performance study discussed in this paper, we perform here the same comparison between the standard and hierarchical PKI models. We do not need to explain all the steps of the negotiation phase; the protocol is detailed in [19]. Instead, we focus only on the air-ground messages exchanged between the onboard security proxies (called SMP, for Security Manager Proxy) and the ground server: if a passenger requests a secure connection with a ground-located server, the SMP takes the lead and handles the negotiation with the server. To respect the terminology used above, the SMP is the _verifier_ and the ground server (noted _S_) is the certificate _owner_. This case study corresponds to the second scenario described before (an onboard _verifier_ and a ground _owner_). For simplicity, the study is done only for the initiation phase of the negotiation protocol, since the PKI credentials are mainly used in this step. Here are the numerical values used for the study:
- The Supported Security Protocols (SSP) set (together with its lifetime $t_S$) is stored in XML files and has a size of 400 Bytes;
- The hash $h_S$ is generated using SHA-1 and is 20 Bytes long;
- The nonce size is 16 Bytes;
- RSA is used for the digital signature, with a signature key length $l_{sig} = 256$ Bytes;
- The certificate length is 1 KBytes.
Figures 16 and 17 depict the messages exchanged during the initial negotiation protocol phase using the standard and hierarchical PKI models, respectively.
**Figure 16. Securing the Negotiation Protocol (Standard PKI Model)** (message exchange between the onboard SMP, the ground server, and the ground CA, using $K_{GCA}^+$ and the server certificate $\{S, K_S^+\}_{K_{GCA}^-}$)
**Figure 17. Securing the Negotiation Protocol (Hierarchical PKI Model)** (the onboard sub-CA provides $K_{SCA}^+ \mid \{RCA, K_{RCA}^+\}_{K_{SCA}^-}$ to the SMP; the server, certified by the root CA with $\{S, K_S^+\}_{K_{RCA}^-}$, exchanges $Nonce_1 \mid Nonce_2 \mid t_S$ and $SSP_S \mid \{SSP_S\}_{K_S^-} \mid \{S, K_S^+\}_{K_{RCA}^-} \mid h_S$ with the SMP)

The certificate revocation process is not addressed here, since we already recommended the use of OCSP, and OCSP behaves identically for both PKI models (cf. figure 15). The network cost for the standard PKI model is:

$$N_C \cdot \big(K_{GCA}^+ + SSP_S \mid \{SSP_S\}_{K_S^-} \mid \{S, K_S^+\}_{K_{GCA}^-} \mid t_S \mid h_S + 2 \cdot Nonce_1 + Nonce_2\big) \cong N_C \cdot (2 \cdot l_{sig} + Size_C + 3 \cdot Nonce + SSP_S + h_S)$$

The network cost for the hierarchical PKI model is:

$$N_f \cdot K_{RCA}^+ + N_C \cdot \big(SSP_S \mid \{SSP_S\}_{K_S^-} \mid \{S, K_S^+\}_{K_{RCA}^-} \mid t_S \mid h_S + 2 \cdot Nonce_1 + Nonce_2\big) \cong N_f \cdot l_{sig} + N_C \cdot (l_{sig} + Size_C + 3 \cdot Nonce + SSP_S + h_S)$$
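Evaluating the two approximations with the sizes listed earlier gives a quick feel for the overhead. The hourly averages $N_C$ and $N_f$ below are the Figure 4 values; the exact averaging behind Figure 18 is not reproduced here:

```python
# Sketch evaluating the initialization-phase overhead approximations above.
# All sizes in bytes; N_C and N_f are the Figure 4 hourly averages.
l_sig, size_c, nonce, ssp, h_s = 256, 1024, 16, 400, 20
N_C, N_f = 4200, 38

standard = N_C * (2 * l_sig + size_c + 3 * nonce + ssp + h_s)
hierarchical = N_f * l_sig + N_C * (l_sig + size_c + 3 * nonce + ssp + h_s)
print("standard     :", standard)
print("hierarchical :", hierarchical)
```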
Figure 18 shows the network cost comparison between the two models. The hierarchical PKI model is 20% less expensive than the standard model (the average difference in data size is about 1408 Bytes).

**Figure 18. Network Costs to Secure the Negotiation Protocol (Initialization Phase)**
## Conclusion
In this paper, we presented a new hierarchical PKI model for future ATM systems. We introduced the basic PKI concepts, then highlighted the advantages of our model through a performance analysis. We also performed a comparison between the CRL and OCSP revocation approaches.

As the final results have shown, it seems promising to deploy the hierarchical PKI with an online revocation status checking protocol like OCSP. This combination considerably improves network and system consumption in an ATM environment. Finally, we used the PKI to secure a negotiation protocol for the supported security mechanisms between two end entities and quantified the signaling overhead. Again, the hierarchical model performs better than the classical model.
However, some issues remain unsolved and the study can be extended with additional features. First, the OCSP server is vulnerable to DoS attacks: when a certificate revocation server is corrupted, end entities (aircraft, passengers, avionics systems) are unable to check the validity of the certificates, and the integrity of the communications will be compromised. Thus, some modifications are required
to enhance the security of the OCSP server in this respect. Also, because of aircraft mobility and roaming between two distinct domains, some interoperability problems arise: for instance, when a CA has to manage aircraft that do not belong to its domain. The first level of the hierarchical PKI model we proposed therefore has to be investigated to find solutions to this kind of issue.

Also, the performance study is limited to passengers (as end entities), but it might be interesting to perform some tests for the avionic systems and devices requiring digital certificates for air-ground communications. Finally, only the basic version of the CRL method and the OCSP protocol have been considered in the revocation scheme comparison: other alternatives such as SCVP or the CRL extensions could be added to this comparison in future work.
## References
[1] ARINC, 2007, Draft 1 of ARINC Project Paper
823 Data Link security, Part 1 – ACARS Message
Security (AMS).
[2] Richard V. Robinson, Mingyan Li, Scott A.
Lintelman, Krishna Sampigethaya, Radha
Poovendran, David Von Oheimb, Jens-Uwe Buber,
and Jorge Cuellar, 2007, Electronic Distribution of
airplane Software and the Impact of Information
Security on Airplane Safety, The 26th International
Conference on Computer Safety, Reliability and
Security (SAFECOMP 2007).
[3] Air Transport Association ATA, Revision 2009.1,
Aviation Industry Standards for Digital Information
Security ATA Spec 42.
[4] EUROCONTROL, 2008, Long-Term Forecast,
Flight Movements 2008-2030.
[5] International Air Transport Association (IATA),
2009, World Air Transport Statistics (WATS), 53rd
edition.
[6] M. Myers, R. Ankney, A. Malpani, S. Galperin,
and C. Adams, June 1999, X.509 Internet Public Key
Infrastructure, Online Certificate Status Protocol –
OCSP, IETF RFC 2560.
[7] Joel Weise, August 2001, Public Key
Infrastructure Overview, Sun Microsystems, Inc.
[8] R.L. Rivest, A. Shamir, and L. Adleman, 1978, A
Method for Obtaining Digital Signatures and Public-Key Cryptosystems, Communications of the ACM,
Vol. 21, Issue 2, Pages 120-126.
[9] D. Cooper, S. Santesson, S. Farrell, S. Boeyen, R.
Housley, and W. Polk, May 2008, Internet X.509
Public Key Infrastructure Certificate and Certificate
Revocation List (CRL) Profile, IETF RFC 5280.
[10] National Institute of Standards and Technology
(NIST), 2002, Federal Information Processing
Standards Publication (FIPS) 180-2, Secure Hash
Standard.
[11] J. Callas, L. Donnerhacke, H. Finney, D. Shaw,
and R. Thayer, November 2007, OpenPGP Message
Format, IETF RFC 4880.
[12] Phillip Hallam-Baker, 1999, OCSP Extensions,
Draft IETF PKIX OCSPX.
[13] T. Freeman, R. Housley, A. Malpani, D. Cooper,
and W. Polk, December 2007, Server-based
Certificate Validation Protocol (SCVP), IETF RFC
5055.
[14] Dawit Getachew and James H. Griner, 2005, An
Elliptic Curve Based Authentication Protocol for
Controller-Pilot Data link Communications,
International Journal of Computer Science and
Network Security.
[15] Richard V. Robinson, Mingyan Li, Scott A.
Lintelman, Krishna Sampigethaya, Radha
Poovendran, David Von Oheimb, and Jens-Uwe
Buber, 2007, Impact of Public Key Enabled
Applications on the Operation and Maintenance of
Commercial Airplanes, Aviation Technology
Integration and Operation (ATIO) Conference,
Belfast, Northern Ireland.
[16] Patel, V. and McParland, T., October 2001,
Public Key Infrastructure for Air Traffic
Management Systems, Digital Avionics Systems,
2001 (DASC), The 20th Conference, pages 7A5/1 –
7A5/7 vol.2.
[17] Perlman, R., 1999, An Overview of PKI Trust
Models, Network IEEE, Pages 38-43, Vol. 13.
[18] Direction Générale de l’Aviation Civile,
Direction du Transport Aérien, 2010, Observatoire de
l’Aviation Civile : Tendance et Derniers Résultats du
Transport Aérien International.
[19] Ben Mahmoud, MS. and Larrieu, N. and
Pirovano, A., 2010, An Adaptive Security
Architecture For Future Aircraft Communications,
Digital Avionics Systems Conference, 2010, Salt
Lake City, USA.
## Acknowledgements
We would like to thank Nicolas Staut and Antoine Saltel, students at ENAC, for their help and involvement in the performance analysis.
## Email Addresses
Mohamed Slim Ben Mahmoud:
[email protected]
Nicolas Larrieu: [email protected]
Alain Pirovano: [email protected]
### 29th Digital Avionics Systems Conference October 3-7, 2010
| 18,387
|
{
"disclaimer": "Notice: The following paper fields have been elided by the publisher: {'references', 'abstract'}. Paper or abstract available at https://api.unpaywall.org/v2/10.1109/DASC.2010.5655369?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1109/DASC.2010.5655369, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "other-oa",
"status": "GREEN",
"url": "https://enac.hal.science/hal-01022208/file/174.pdf"
}
| 2,010
|
[
"Conference"
] | true
| 2010-12-03T00:00:00
|
[] | 18,387
|
en
|
[
{
"category": "Mathematics",
"source": "external"
},
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Mathematics",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/0037a6039efa181511aa8f04e6dda1dae576b524
|
[
"Mathematics",
"Computer Science"
] | 0.85698
|
Weak Invertibiity of Finite Automata and Cryptanalysis on FAPKC
|
0037a6039efa181511aa8f04e6dda1dae576b524
|
International Conference on the Theory and Application of Cryptology and Information Security
|
[
{
"authorId": "145156707",
"name": "Z. Dai"
},
{
"authorId": "2164465",
"name": "Dingfeng Ye"
},
{
"authorId": "49535103",
"name": "Kwok-Yan Lam"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"ASIACRYPT",
"Int Conf Theory Appl Cryptol Inf Secur"
],
"alternate_urls": null,
"id": "62d378e9-dcaf-4ce7-9eeb-31175bbe3071",
"issn": null,
"name": "International Conference on the Theory and Application of Cryptology and Information Security",
"type": "conference",
"url": "https://www.iacr.org/meetings/asiacrypt/"
}
| null |
Weak Invertibility of Finite Automata and Cryptanalysis on FAPKC*

Zong-Duo Dai, Ding-Feng Ye, and Kwok-Yan Lam

Dept. of Math., State Key Lab. of Information Security, Graduate School, Academia Sinica, Beijing, China, [email protected]

Dept. of ISCS, National University of Singapore, Lower Kent Ridge Road, Singapore, [email protected], [email protected]
Abstract

FAPKC [, , , 0, ] is a public key cryptosystem based on weakly invertible finite automata (FAs). Weak invertibility of FAs is the key to understanding and analyzing this scheme. In this paper a set of algebraic terminologies describing FAs is developed, and the theory of weak invertibility of FAs is studied. Based on this, a cryptanalysis of FAPKC is made. It is shown that the keys proposed in [, , , 0, ] for FAPKC are insecure both for encrypting and for signing.

Keywords: finite automaton, public key cryptosystem, cryptanalysis
Introduction

A finite automaton (FA) is a widely used concept in computer science and has several definitions, slightly different from each other according to the application. In this context, it refers to a finite sequential state machine, which has been studied widely, for example in [-]. The action of such a machine is controlled by a clock which ticks with inputs, i.e., on receiving an input symbol, it produces an output symbol and its state transfers to a new one according to certain rules; thus, from an initial state and an input sequence of finite length it produces an output sequence of the same length. Hence a finite automaton is an analogue of a usual function when viewed as a transformation from input sequences to output sequences. A weakly invertible finite automaton (WIFA) with delay τ, or simply a τ-weakly invertible finite automaton, is an FA in which any input is uniquely determined by the corresponding state and output together with the subsequent τ outputs. That is, the input information can be recovered from the outputs after waiting τ steps, or in other words, with τ delays. WIFAs are similar to the usual injective functions in the respect that one can retrieve the input

* This work was supported by the Chinese Natural Science Foundation.
information from the outputs. However, the delay τ and the state dependence make it much more complicated to recover the input information than for usual injective functions. The first objective of this paper is to set up a systematic theory dealing with problems such as how to construct weakly invertible FAs and their weak inverses, and how to routinely retrieve input information from the outputs and the initial state.

FAPKC, which is a public key cryptosystem that can do both encrypting and signing, is based on weakly invertible finite automata (WIFAs). FAPKC was first introduced in [], named FAPKC0. Some versions were published in [], named FAPKC1 and FAPKC2. Then a new version was introduced in [0], named FAPKC3. Roughly speaking, in all these systems the private key consists of two FAs whose weak inverses can be constructed easily, and the public key is their composition. It is believed in [-] that it is hard to decompose the public key to get the two private FAs and that it is hard to get a weak inverse of the composed FA without knowing this decomposition; hence any user can encrypt messages or verify signatures using the public key, but can neither decrypt the ciphertexts nor forge signatures without knowing the two private components. To hide the components of the composed FA, it is proposed to use boolean functions to express the composition. Then how to maintain a moderate public key size becomes a big problem, as composition generally yields an exploding boolean expression when the outer component is nonlinear. The proposed method is to restrict the input set $X$ to be equal to or smaller than $F^l$, where $F$ is the binary field $GF(2)$, and to restrict the nonlinear degree of the components to be small. The early versions were analysed in some papers, for example in [, , , ].

The main contribution of this paper consists of two parts. In the first part we develop a set of algebraic terminologies to describe FAs and give a systematic treatment of the weak invertibility theory of the separable memory FAs. In the second part, based on the developed theory, we give a brief introduction to FAPKC and then a cryptanalysis of it. Our results show that all the keys proposed for FAPKC in [, , , 0, ] are insecure both for encrypting and for signing. Before coming to the main topic, we recall some basic definitions in the next section.

Due to lack of space, the proofs of all the lemmas and theorems in this paper are omitted.
Basic Definitions

For convenience, in this section we restate some basic concepts, which can be found in [] except for some concepts like the natural pairs and the right τ-weak inverses.

A finite automaton (FA) is a pentad $M = (X, Y, S, \delta, \lambda)$, where $X, Y$ are the input and output symbol sets respectively, $S$ is the state set, $X, Y$ and $S$ are all finite, $\delta : S \times X \to S$ is the next-state function, and $\lambda : S \times X \to Y$ is the output function. In the sequel, let $X^i = \{x_0 x_1 \cdots x_{i-1} \mid x_j \in X,\ 0 \le j < i\}$ be the set of all input sequences of length $i$, and similarly for $Y^i$.
For any $s \in S$, we use $M(s)$ and $\delta(s)$ to denote the function from $X^i$ to $Y^i$ and the function from $X^i$ to $S$ defined as
$$M(s)\,x_0 x_1 \cdots x_{i-1} = y_0 y_1 \cdots y_{i-1}, \qquad \delta(s)\,x_0 x_1 \cdots x_{i-1} = s_i,$$
where $s_0 = s$, $s_{j+1} = \delta(s_j, x_j)$, $y_j = \lambda(s_j, x_j)$, $x_j \in X$, $0 \le j < i$.
For any two FAs $M, M'$ which have the same input space $X$ and the same output space $Y$, we say a state $s$ in $M$ is equivalent to a state $s'$ in $M'$ if $M(s) = M'(s')$, denoted by $s \sim s'$; we say $M$ is a sub-automaton of $M'$, denoted $M \preceq M'$, if for any state $s$ in $M$ there exists a state $s'$ in $M'$ such that $s \sim s'$; we say $M$ and $M'$ are equivalent if $M \preceq M' \preceq M$. We do not distinguish equivalent FAs in the rest of this paper.
0
A FA M is called � -weakly invertible, if for any s S; xi
condition
i ; xi
X, the following
i
0
M (s)x0
0
x
0
� � � x�
= M (s)x0
x
� � � x
�
0
implies x0
of M .
Let M
) b e two FAs, de�ne
= x0
. The least such �, denoted by � (M ), is called information delay
; � )
) and M
= (Y ; Z ; S ;
; � ;
; �
= (X ; Y ; S
; �
the composition of M
the composition of M and M to b e the FA M � M = (X ; X ; S � S ; � � �,
� � � ) where
(� � � )((s ; s ); x) = � (s ; � (s ; x))
(� � � )((s ; s ); x) = (� (s ; � (s ; x)); � (s ; x))
(s ; s ) S � S ; x X ; we usually call M the inner component, M the
and M
outer component. It is true that (M
�
Let M = (X ; Y ; S; �; �) and M
� M
)(s
; s
�
) = M
�
(s )
)M
(s
).
; �
� �
; � )
�
) b e two FAs. For s
�
S; s
S
�
= (Y ; X ; S
�
�
, we say (s
; s) is a � -pair in M
�
� M, or s
is a left � -match of s, or
s is a right � -match of s
, if
�
�
� M )(s
(M � M )(s ; s)x0 x � � � xn+� � = w0 w � � � w� � x0 x � � � xn�
n+� �
for all x0 x � � � xn+� � X, where w0 w � � � w� � X may dep endent on
x0 x � � � x� � . If further w0 w � � � w� � is indep endent on x0 x � � � x� �, we say
�
that (s ; s) is a natural � -pair.
Let $M$ and $M^*$ be as above. $M^*$ is called a τ-weak inverse of $M$, and τ is called the recovery delay of $M^*$ (with respect to $M$), if for any $s \in S$ there exists an $s^* \in S^*$ such that $(s^*, s)$ is a τ-pair in $M^* \circ M$. It is clear that a τ-weak inverse of $M$ can recover the input sequence except for the last τ inputs.
�
the so-called right weak inverse of M . A FA M
is called a right � -weak inverse
�
of M, if for any state s in M, there exists a state s
�
�
�
, such that (s; s
) is
in M
a � -pair in M � M
.
-----
Input Memory FAs and the Quasi-Ring $\mathcal{F}$

From now on, we assume $X = Y = F^l$ (elements being written as column vectors), where $F = GF(2)$ is the binary field, though all the results in this paper hold true when $F$ is any finite field. We will concentrate on the so-called input memory FAs, whose states are determined by some number of past inputs (see below for the exact definition). Instead of investigating these FAs individually, we study them as a whole set (the quasi-ring $\mathcal{F}$) endowed with some algebraic structure. That is essential to our understanding of FAs. We begin with some definitions.
with some de�nitions.
� Y
0
k
Let � = � (t
�h ; � � � ; t0
; u�k ; � � � ; u
�
) b e a function: X
! Y . De�ne
+h
0
the memory order of � to b e the minimal integer pair (h
; k
) such that � is
0
0
� �; j - k
� �g, and denote it
irrelevant to all the variables ft�i
; u
�j ji - h
0
by m(� ) = (h
0
; k
).
0
This function � together with any integer pair (h; k ), h � h
; k � k, deter
0
(h;k )
mines a memory FA M (� )
h
k
; �� ;
� X
; ��
k
) of typ e (h; k ),
where for any state s0
= (x
= (X ; X ; S� = X � Y
h
� � � x� ; y�k � � � y� ) X
where for any state s0 = (x�h � � � x� ; y�k � � � y� ) X � X, which is made of
the past h inputs and the past k outputs, and any input x0 X,
�� (s0 ; x0) = � (x�h ; � � � ; x�; x0 ; y�k ; � � � ; y� )
�� (s0 ; x0) = (x�h+ � � � x0 ; y�k + � � � y� �� (s0 ; x0 ))
(h;k ) 0 0
Notice that all the FAs M (� )
; h � h ; k � k
, are equivalent to each other,
so we do not care the typ e (h; k ), and write them by the same notation M (� ),
or simply by � when there is no ambiguity.
If the function � is of the form
� = f (t
; � � � ; t0 )
) + g (u
; � � � ; u�
) ()
�h
�k
we say M (� ) is a separable memory FA, written also as Mf ;g . If g = 0, Mf ;g will
b e called a input memory FA and will b e written simply as Mf ; in this case, the
memory order of f = � is simply an integer h, will b e denoted by m(f ) = h.
It is clear that M
is so, and all the
f ;g
is � -weakly invertible if and only if Mf
problems on the weak invertibility of the separable memory FAs can b e reduced
to those of the input memory FAs. In order to understand the separable memory
FAs, it is enough to understand the input memory FAs, so, in this pap er we will
mainly care ab out input memory FAs.
l
Let F b e the set of all p ossible input memory FAs with X = Y = F
+h
:
F = ff jf = f (t�h ; � � � ; t�
t
; t
) : X
! X ; h � 0; g
0
Here t
�i;; � � � ; t�i;l
)
, where t means the transp ose, and t�i;j
is a
�i
= (t�i; ; t
variable taking the values from F .
Let $f = f(t_{-h}, \ldots, t_{-1}, t_0)$ and $g = g(t_{-h'}, \ldots, t_{-1}, t_0) \in \mathcal{F}$. Define the product of $f$ and $g$ as
$$fg = f\big(g(t_{-h-h'}, \ldots, t_{-h}),\ \ldots,\ g(t_{-h'}, \ldots, t_0)\big).$$
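The product can be read as substituting sliding windows of $g$-outputs into $f$; a toy sketch over $GF(2)$ with $l = 1$ (our own encoding of the terms as Python functions):

```python
# Sketch of the quasi-ring product: for input-memory functions f (memory h)
# and g (memory h'), (fg) has memory h + h' and substitutes a sliding window
# of g-outputs into f, as in the definition above.
def product(f, h, g, hp):
    def fg(window):                      # window = (t_{-h-h'}, ..., t_0)
        assert len(window) == h + hp + 1
        # i-th argument of f is g applied to the window ending at t_{-(h-i)}
        return f(*[g(*window[i:i + hp + 1]) for i in range(h + 1)])
    return fg

# Toy check over GF(2), l = 1: f = g = t0 + t_{-1}  =>
# fg = t0 + t_{-2}   (since (1+z)^2 = 1 + z^2 over GF(2)).
f = g = lambda a, b: a ^ b               # args ordered (t_{-1}, t_0)
fg = product(f, 1, g, 1)
print(fg((1, 0, 1)))                     # t_{-2}=1, t_{-1}=0, t_0=1 -> 0
```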
The FA $M_{fg}$ is denoted by the notation $C(M_g, M_f)$ in []. For any state $s = (a_{-h-h'} \cdots a_{-2} a_{-1}) \in X^{h+h'} = S_{fg}$, it is known [] that
$$s \sim (t, s') \in S_f \times S_g,$$
where
$$s' = (a_{-h'} \cdots a_{-2} a_{-1}) \in X^{h'} = S_g, \qquad t = M_g(a_{-h-h'} \cdots a_{-h-2} a_{-h-1})\, a_{-h} \cdots a_{-1} \in X^h = S_f;$$
hence $M_{fg}$ is a sub-automaton of $M_f \circ M_g$.

With the above multiplication and the usual addition, $\mathcal{F}$ forms a quasi-ring; that is, these operations satisfy the laws of a ring except the right-distribution law.
Let $M_{m,l}(F)$ denote the set of all $m \times l$ matrices over $F$, and similarly for $M_{m,l}(F[z])$, etc. Under the mapping
$$A = \sum_{0 \le i \le r} A_i z^i \ \mapsto\ \sum_{0 \le i \le r} A_i t_{-i}, \qquad A \in M_{l,l}(F[z]),\ A_i \in M_{l,l}(F),$$
the matrix ring $M_{l,l}(F[z])$ is embedded in $\mathcal{F}$ and becomes a subring of $\mathcal{F}$; it is exactly the set of all linear FAs in $\mathcal{F}$. $t_0$ is the identity of $\mathcal{F}$ and will be identified with the identity matrix $I$ (sometimes written as $1$). Similarly, $t_{-i}$ can be identified with the matrix $z^i I$ and sometimes written as $z^i$.

More generally, let $\mathcal{F}_{m,l}$ be the set of input memory FAs whose output and input spaces have dimension $m$ and $l$ respectively; the set of linear FAs in $\mathcal{F}_{m,l}$ can be identified with $M_{m,l}(F[z])$. We can similarly define products of elements of $\mathcal{F}_{n,m}$ and elements of $\mathcal{F}_{m,l}$ for any $n, m, l$. In particular, elements in $\mathcal{F}_{m,l}$ can be multiplied by elements in $M_{n,m}(F[z])$ for any $n, m, l$.
of F
n;m
expression of an element f = f (t�h ; � � � ; t0
) F can b e written as:
t
f = C T ; C M
(F [z ]); T = (T
; T
; � � � ; Tn
)
n;l
()
F
l;n
where Ti
; � i � n, are distinct standard monomials, here by a standard mono
mial we mean a monomial of the form:
Y
0�i�h
Y
�j �l
ai;j
t�i;j
; ai;j f0; g
0
such that there exists a j such that a0;j
= 0, where t�i:j
= , t�i:j
= t�i:j
.
Right Weak Inverses and $M_f$-Equations

In this section we study the problem of the existence of right weak inverses and the problem of solving the equation determined by the operator $M_f(s)$. The following lemma is critical in our studies. From Lemma 1 one may draw an analogy between a WIFA and a usual map, as it is well known that a map between two finite sets of the same size is injective if and only if it is surjective. To start with, we need to introduce a notion which generalizes the surjectiveness of usual functions. For a state $s$ of $M$, we say $M(s)$ is τ-surjective if
$$M(s)X^{\tau+k} = (M(s)X^{\tau}) \times X^{k}, \qquad k \ge 1,$$
where $M(s)X^{k} = \{M(s)x \mid x \in X^{k}\}$.

Lemma 1. Let $f \in \mathcal{F}$; then $f$ is τ-weakly invertible if and only if $M_f(s)$ is τ-surjective for all $s \in S_f$.
Theorem 1. Let $f \in \mathcal{F}$ and let $M^* = (X, X, S^*, \delta^*, \lambda^*)$ be a τ-weak inverse of $M_f$. Then $M^*$ is also a right τ-weak inverse of $f$. Moreover, if $(s^*, s)$ is a τ-pair in $M^* \circ M_f$, let $s^{**} = \delta^*(s^*)\,M_f(s)\,x^{\tau}$ for an arbitrary $x^{\tau} \in X^{\tau}$; then $(s, s^{**})$ is a natural τ-pair in $M_f \circ M^*$.

Remark 1. Based on the above theorem, we may concentrate only on the weak inverses.
Theorem 2. Let $f \in \mathcal{F}$ be weakly invertible with $\tau(f) = \tau$, let $M^* = (X, X, S^*, \delta^*, \lambda^*)$ be a $\tau'$-weak inverse of $M_f$, and let $(s^*, s)$ be a $\tau'$-pair in $M^* \circ M_f$. Then:

1. The $M_f$-equation
$$M_f(s)\,x = a, \qquad a = a_0 a_1 \cdots a_{n+\tau-1} \in X^{n+\tau},\quad x = x_0 x_1 \cdots x_{n+\tau-1},\quad n \ge 0,$$
has a solution $x \in X^{\tau+n}$ if and only if $a_0 a_1 \cdots a_{\tau-1} \in M_f(s)X^{\tau}$, and if it has a solution, then the first $n$ inputs $x_0 x_1 \cdots x_{n-1}$ are uniquely determined.

2. If the equation above has a solution, then $x$ is a solution if and only if it can be read out by applying $M^*(s^*)$ to $a\,\beta'$ for some $\beta' \in X^{\tau'}$ as follows:
$$\beta\, x = M^*(s^*)\,a\,\beta',$$
where $\beta \in X^{\tau'}$ is irrelevant data.
In the sequel, a separable memory FA is denoted naturally by $M_{f, zg}$, where $f \in \mathcal{F}$ and $g \in \mathcal{F}$.

Theorem 3.

1. For any $f \in \mathcal{F}$ and $g \in \mathcal{F}$, the equation $M_{f, zg}(s, r)\,x = a$ is equivalent to the equation $M_f(s)\,x = a'$, where $a' = M_{1+zg}(r)\,a$ (recall that $F$ has characteristic 2).

2. Let $f \in \mathcal{F}$, $\nu \ge \tau(f)$, $S_f = X^h$, $\bar{s} \in X^{h-\nu}$. Then the equation $M_f(\bar{b}\,\bar{s})\,x = \bar{c}\,a$, $a \in X^{n}$, always has a solution $x \in X^{\nu+n}$ for some $\bar{b} \in X^{\nu}$ and some $\bar{c} \in M_f(\bar{b}\,\bar{s})X^{\nu}$.

3. Assume $f = f_1 f_2$ and that $s$ is equivalent to the state $(s_1, s_2) \in S_{f_1} \times S_{f_2}$ (see the state decomposition above). Assume $M_1^*$ is a $\tau_1$-weak inverse of $M_{f_1}$ and $(s_1^*, s_1)$ is a $\tau_1$-pair in $M_1^* \circ M_{f_1}$. Then $x$ is a solution of the equation $M_f(s)\,x = a$ if and only if it satisfies
$$a' = M_{f_2}(s_2)\,x,$$
where $a'$ is obtained as follows:
$$\beta\,a' = M_1^*(s_1^*)\,a\,\beta', \qquad \beta, \beta' \in X^{\tau_1}.$$
Constructing WIFAs

Denote the set consisting of all possible weakly invertible elements in $\mathcal{F}$ by $\mathcal{F}^*$, and the set consisting of all possible τ-weakly invertible elements in $\mathcal{F}$ by $\mathcal{F}^*_{\tau}$. In this section, we study how to construct the elements in $\mathcal{F}^*$, how to construct their weak inverses, and the related state pairs. The last problem is considered in Theorem 4. As will be shown, there are two types of primitive weakly invertible elements, namely weakly invertible linear FAs and 0-weakly invertible FAs; they and their weak inverses can be constructed systematically (Theorems 5 and 6). More elements in $\mathcal{F}^*$ can be generated from these two types of primitive elements by making (a finite number of) multiplicative and some proper additive operations (Theorems 7 and 9). Note that 0-WIFAs make no contribution to the information delay in such constructions; it would be interesting if one could systematically construct nonlinear WIFAs with positive delays without using any linear FAs as ingredients, but it seems a hard task.

In the sequel we denote the group consisting of all invertible $l \times l$ matrices over $F[z]$ by $GL_l(F[z])$, and similarly for $GL_l(F)$, etc.
Theorem 4. Let $M^* = (X, X, S^*, \delta^*, \lambda^*)$ be a τ-weak inverse of $f \in \mathcal{F}^*$, and let $(b^*, b)$ be a given single τ-pair in $M^* \circ M_f$. Then for any state $s \in X^h = S_f$ of $M_f$, let
$$s^* = \delta^*(b^*)\,M_f(b)\,\alpha^d\,s, \qquad \alpha^d \in X^d,$$
where $d = 0$ if $h \ge \tau$ and $d = \tau - h$ if $\tau > h$; then $(s^*, s)$ is a natural τ-pair in $M^* \circ M_f$.

Remark 2. For any given input memory FA $M_f$ and a given τ-weak inverse $M^*$ of it, in order to be able to construct a τ-match for each of the states in $M_f$, it is enough to be able to construct a single τ-pair in $M^* \circ M_f$, according to the above theorem.
In order to describe all the linear elements in $\mathcal{F}^*$, we need the following kind of decomposition of matrices in $M_{l,l}(F[z])$. For any $0 \ne B \in M_{l,l}(F[z])$, by using the well-known algorithm [] for transforming a matrix over $F[z]$ into diagonal form, one can get a decomposition of $B$ of the form
$$B = P\,D\,Q\,(1 - z b),$$
where $P \in GL_l(F[z])$, $Q \in GL_l(F)$, $b \in M_{l,l}(F[z])$, and $D$ is an $l \times l$ diagonal matrix determined by a tuple $n = (n_0, n_1, \ldots, n_\nu)$ of integers:
$$D = \mathrm{diag}(I_{n_0}, z I_{n_1}, z^2 I_{n_2}, \ldots, z^{\nu} I_{n_\nu}, 0_{\bar{n}}), \qquad \bar{n} = l - \sum_{0 \le i \le \nu} n_i,\ \ \nu \ge 0,\ n_\nu \ne 0,\ n_i \ge 0\ (i < \nu),$$
where $I_n$ is the $n \times n$ identity matrix and $0_n$ is the $n \times n$ zero matrix. The tuple $n$ is uniquely determined by $B$ and will be called the structure parameter of $B$.
Theorem 5 [, , ]. Let $B \in M_{l,l}(F[z])$ be of the form above. Then:

1. $B$ is weakly invertible if and only if $\det(B) \ne 0$, which is equivalent to $l = \sum_{0 \le i \le \nu} n_i$.

2. If $B$ is weakly invertible, then $\tau(B) = \nu$.

3. If $\tau(B) = \nu$, then $M_{A, zb}$ is a ν-weak inverse of $B$, where $A = Q^{-1} C P^{-1}$, $C = z^{\nu} D^{-1}$; and $((0, 0), 0)$ is a ν-pair in $M_{A, zb} \circ M_B$.
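A toy instance of this theorem, assuming the reconstruction above: over $GF(2)$ with $l = 2$, $B = \mathrm{diag}(1, z)$ is already in the diagonal form with structure parameter $(1, 1)$, hence $\tau(B) = 1$, and $A = z\,D^{-1} = \mathrm{diag}(z, 1)$ recovers every input one step late:

```python
# Toy illustration (ours): B = diag(1, z), i.e. y_n = (x_n[0], x_{n-1}[1]),
# has delay tau = 1; its weak inverse A = diag(z, 1) outputs each input one
# step late: x_{n-1} = (y_{n-1}[0], y_n[1]).
def run_B(xs):
    prev, ys = (0, 0), []              # zero initial state
    for x in xs:
        ys.append((x[0], prev[1]))
        prev = x
    return ys

def run_A(ys):                         # delay-1 weak inverse
    prev, xs = (0, 0), []
    for y in ys:
        xs.append((prev[0], y[1]))     # x_{n-1} = (y_{n-1}[0], y_n[1])
        prev = y
    return xs

msg = [(1, 0), (1, 1), (0, 1)]
print(run_A(run_B(msg)))   # -> [(0, 0), (1, 0), (1, 1)]: msg shifted by one
```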
Theorem 6. Let $f = f(t_{-h}, \ldots, t_{-1}, t_0) \in \mathcal{F}$; then $f$ is 0-weakly invertible if and only if $f(a_{-h}, \ldots, a_{-1}, t_0)$ is a permutation on $X$ for each state $s = (a_{-h}, \ldots, a_{-1})$ in $M_f$, and in this case $f$ can be expressed in the following form:
$$f = \sum_{1 \le i \le n} c_i(t_{-h}, \ldots, t_{-1})\,P_i(t_0),$$
where $n \ge 1$, each $P_i$ is a permutation on $X$, the coefficient $c_i(t_{-h}, \ldots, t_{-1})$ is a function taking values in $\{0, 1\}$ on the understanding that $0 \cdot P_i(t_0) = 0$ and $1 \cdot P_i(t_0) = P_i(t_0)$, and moreover $\sum_{1 \le i \le n} c_i = 1$ (as an integer sum). Putting
$$\varphi = \sum_{1 \le i \le n} c_i(u_{-h}, \ldots, u_{-1})\,P_i^{-1}(t_0),$$
the memory FA $M(\varphi)$ is a 0-weak inverse of $M_f$; it has the same state set as $M_f$, and $(s, s)$ is a 0-pair in $M(\varphi) \circ M_f$ for any state $s$ in $M_f$. In particular, the following three types of elements are all 0-weakly invertible: 1. permutations on $X$; 2. $1 + z k$, $k \in \mathcal{F}$; 3. $1 + U k V$, where $U V = V U = 0$, $U, V \in M_{l,l}(F[z])$, and $k \in \mathcal{F}$.
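A minimal sketch of the first part of this theorem: the current input enters only through a state-dependent permutation (here XOR with a mask derived from the previous input, a choice of ours), so it can be recovered with zero delay once the state is known:

```python
# Sketch of a 0-weakly invertible FA of the form in the theorem above: for
# each remembered input, x0 -> f(prev, x0) is a permutation of X, so the
# current input is recovered with no delay.
MASK = 0b101                             # arbitrary constant, l = 3 bits

def f(prev, x0):                         # x0 -> x0 XOR h(prev): a permutation
    return x0 ^ (prev & MASK)

def invert(prev, y0):                    # the 0-delay inverse
    return y0 ^ (prev & MASK)

prev, x = 0b011, 0b110
assert invert(prev, f(prev, x)) == x
print("recovered with zero delay")
```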
It is known that $\mathcal{F}^*$ is closed under the multiplicative operation, i.e., if $f_i \in \mathcal{F}^*$ ($i = 1, 2$), then $f_1 f_2 \in \mathcal{F}^*$. Moreover, we have:

Theorem 7. Let $f_i \in \mathcal{F}$, $i = 1, 2$; then $f_1 f_2 \in \mathcal{F}^*$ if and only if $f_i \in \mathcal{F}^*$ for $i = 1$ and $2$. In this case $\tau(f_i) \le \tau(f_1 f_2) \le \tau(f_1) + \tau(f_2)$.
To describe the inverse of the composed FA $M_{f_1 f_2}$, the following construction is useful. Given $M = (X, X, S, \delta, \lambda)$, let
$$M^{(\nu)} = (X, X, S \times \{0, 1, \ldots, \nu\}, \delta^{(\nu)}, \lambda^{(\nu)}),$$
where
$$\delta^{(\nu)}((s, i), x) = \begin{cases} (s,\ i + 1), & 0 \le i < \nu, \\ (\delta(s, x),\ \nu), & i = \nu, \end{cases} \qquad \lambda^{(\nu)}((s, i), x) = \begin{cases} 0 \in X, & 0 \le i < \nu, \\ \lambda(s, x), & i = \nu. \end{cases}$$
The following theorem is well-known:
Theorem 8. Let $f_i \in \mathcal{F}^*$, $i = 1, 2$, and let $M_i^*$ be a $\nu_i$-weak inverse of $M_{f_i}$; then $M_2^{*(\nu_1)} \circ M_1^*$ is a $(\nu_1 + \nu_2)$-weak inverse of $M_{f_1 f_2}$. Moreover, for any state $s$ in $M_{f_1 f_2}$, let $(s_1, s_2)$ be the state in $M_{f_1} \circ M_{f_2}$ equivalent to $s$ (see the state decomposition above), and let $(s_i^*, s_i)$ be a $\nu_i$-pair in $M_i^* \circ M_{f_i}$, $i = 1, 2$; then $(((s_2^*, 0), s_1^*),\ s)$ is a $(\nu_1 + \nu_2)$-pair in $(M_2^{*(\nu_1)} \circ M_1^*) \circ M_{f_1 f_2}$.
f
The next result shows that F�
� +
is closed under the op eration adding the
� +
elements of the form z
g, g F . To see how the inverses of f + z
g is related
to that of f, we de�ne the circle product of M = (X ; Y ; S; �; �) and M
FA M � M�
= (X ; Y ; S � S�
h+
; �
k
cle product of M = (X ; Y ; S; �; �) and M� to b e the
� �
; � ) where � = � (t�h ; � � � ; t� ; t0 ; u�k ; � � � ; u� )
h k
to Y, S� = X � Y, and for any state (s0 ; r0 =
�
is a function from X
� Y
(x�h
�
; � � � ; x� ;
; y�k
; � � � ; y�
)) S � S�
, and any input x0, the functions �
and
�
are de�ned as
�
� ((s0 ; r0); x0 ) = �(s0 ; �� (r0 ; x0 ));
� �
� ((s0 ; r0); x0 ) = (� (s0 ; �� (r0 ; x0 )); �� (r0 ; � ((s0 ; r0 ); x0 )):
Theorem 9. Let $f \in \mathcal{F}^*_{\tau}$, and let $M^* = (X, X, S^*, \delta^*, \lambda^*)$ be a τ-weak inverse of $M_f$. Then:

1. $f + z^{\tau+1} g \in \mathcal{F}^*_{\tau}$ for any $g \in \mathcal{F}$; moreover $\tau(f + z^{\tau+1} g) = \tau(f)$.

2. $M^* \odot M_{t_0, z^{\tau+1} g}$ is a τ-weak inverse of $f + z^{\tau+1} g$. For any state $s$ in $f + z^{\tau+1} g$, if $(s^*, s)$ is a τ-pair in $M^* \circ M_f$, then $((s^*, s), s)$ is a τ-pair in $(M^* \odot M_{t_0, z^{\tau+1} g}) \circ M_{f + z^{\tau+1} g}$, where $s$ is considered naturally also as both a state of $M_{t_0, z^{\tau+1} g}$ and a state of $M_f$.
Brief Introduction of FAPKC

In this section we describe the scheme FAPKC [, , , 0, ] in the terminology developed above.

Choose two elements $f_0$ and $f_1$ in $\mathcal{F}^*$ whose weak inverses can be constructed easily, and let $M_i^*$ be the constructed $\tau_i$-weak inverse of $M_{f_i}$, $i = 0, 1$. Choose $g \in \mathcal{F}$. Write $f = f_0 f_1$, $\tau = \tau_0 + \tau_1$, and $M^* = M_1^{*(\tau_0)} \circ M_0^*$ (which is a τ-weak inverse of $M_f$, see Theorem 8). Write $h = h_0 + h_1$, where $h_i = m(f_i)$, and $k = m(zg)$. Choose $(s, r) \in X^h \times X^k$ and $(s', r') \in X^h \times X^k$; let $(s^*, s)$ be a τ-pair in $M^* \circ M_f$ (see Theorem 4), and let $(s', s^{**})$ be a τ-pair in $M_f \circ M^*$ (see Theorem 1). Write $s' = b\,\bar{s}'$, where $b \in X^{h-\tau}$. Let $f = C\,T$ be the boolean expression of $f$ (see above).
The keys and the algorithm in FAPKC are as below:

Public key: $C$, $T$, $g$, $s$, $r$, $s'$, $r'$, $\tau$.

Private key: $M^*$, $s^*$, $s^{**}$.

Encrypting: Suppose $p \in X^n$ is the plaintext sequence; select $x^{\tau} \in X^{\tau}$ randomly; then the ciphertext is $c = M_{CT, zg}(s, r)\,p\,x^{\tau} \in X^{n+\tau}$.

Decrypting: The plaintext $p$ can be read out from the equation $\beta\,p = M^*(s^*)\,M_{1+zg}(r)\,c$, where $\beta \in X^{\tau}$ is irrelevant data.

Signing: Suppose $m \in X^n$ is the message to be signed; select $x^{\tau} \in X^{\tau}$ randomly; then $d_1 d = M^*(s^{**})\,M_{1+zg}(r')\,m\,x^{\tau}$ is the digital signature for $m$.

Verifying the signature: The receiver verifies whether $M_{CT, zg}(s'\,d_1, r')\,d = m$. The receiver accepts $d_1 d$ as the legal signature if the equality holds, and rejects it otherwise.
Remark In the proposed schemes [, , , 0], there are some restrictions
�
on choosing the partial state s
. These restrictions are not necessary in order to
make the algorithm work, so al l these restrictions have been deleted in the above
description.
Now we list the keys which are proposed in [, , , 0, ] as follows.

Form 1 [, ]: $f_0$ is linear, $\tau(f_1) = 0$.

Form 2 []: $f_0$ is linear, $\tau(f_1) \ne 0$, $M_{f_1}$ has a weak inverse of the form $M_{A, zk}$ with $A \in M_{l,l}(F[z])$ (with a small fixed $l$), and $T$ is restricted to consist of the linear monomials $t_{0,1}, t_{0,2}, \ldots, t_{0,l}$ followed by certain quadratic monomials of the form $t_{0,j}\,t_{-i,j'}$.

Form 3 [0]: $f_0$ is linear, with small suggested values of $l = m$, $m(T)$ (the memory order of $T$), $\tau_0$, $\tau_1$ and $h_0$, with $\tau_1 + h_1 \ne 0$; but no examples for $f_1$ are given in [0].

Form 4 []: $f_0 = B_0 P_0 Q_0$, and $f_1 = B_1 P_1 Q_1$ or $f_1 = B_1$, where $B_i \in M_{l,l}(F[z])$, $Q_i \in M_{l,l}(F)$, and each $P_i$ is a permutation on $X$ determined by an exponential function of the form $x^{2^a + 2^b}$ defined over $GF(2^l)$, where $GF(2^l)$ is identified with $X = F^l$ in the natural way.
As the outer component $f_0$ is nonlinear, the composition $f_0 f_1$ causes an exploding boolean expression, even though the nonlinear degree of $f_0$ is just 2. In order to keep the public key size tolerable, the parameters have to be very small. The following table is copied from [] to illustrate the suggested parameters and the corresponding public key sizes, where $\tau_0 \le h_0 = m(f_0)$, $\tau_1 \le h_1 = m(f_1)$, and $N_1$ ($N_2$) is the corresponding public key size when $f_1$ is linear (nonlinear):

$(h_0, h_1)$: (,) (,) (, ) (0,0) (,) (0,) (,)

$N$ (Bits): 00 0 00 0
Remark 4. In describing the basic algorithm of FAPKC3, it is stated in one section of [0] that the outer component automaton of the public key is a memory finite automaton, which is not necessarily restricted to the above form 3. In this paper we consider only the latter (i.e., form 3), which is stated in the section of [0] describing an implementation of FAPKC3, because in [0] there is neither an example nor suggested parameters for the former except form 3. We guess it is hard to give such an example with a tolerable public key size.
It was shown that the encrypting is insecure when the key is of the form 1 in [] and of the form 2 in []. It was shown in [] that both the encrypting and signing are insecure when the key is of the form 2 without the restriction on $T$ shown above.
Cryptanalysis on FAPKC

In this section we keep the notations of the last section, and consider the following

Problem 1. How to decode the ciphertexts and how to forge the signatures without knowing the private key of FAPKC?

We will show that Problem 1 can be solved for any one of the keys of the forms 1-4 listed in the last section, and also for the keys of the form 2 without the restriction on $T$.

To decode the ciphertext is exactly to solve the equation $M_{f, zg}(s, r)\,p\,x^{\tau} = c$ (where $p\,x^{\tau}$ are the unknowns), which is reduced to the equation $M_f(s)\,p\,x^{\tau} = c'$, $c' = M_{1+zg}(r)\,c$, according to Theorem 3.1.

To forge a signature is exactly to solve the equation $M_{f, zg}(s'\,d_1, r')\,d = m$ (where $d_1\,d$ are the unknowns), which is reduced to the equation $M_f(s'\,d_1)\,d = m'$, $m' = M_{1+zg}(r')\,m$, according to Theorem 3.1, and further to $M_f(\bar{b}\,\bar{s}')\,d_1\,d = \bar{c}\,m'$ according to Theorem 3.2. Therefore Problem 1 is reduced to

Problem 2. How to solve the $M_f$-equation of the form $M_f(s)\,x = a$ for $f = f_0 f_1$?
�
, has a
routine decomp osition which will b e used to reduce Problem . We'll say two
elements f and g in F are similar if there exists G GLl
written as f ' g .
�
(F [z ]) such that f = Gg,
Theorem 0 Assume f = C T F
; C Ml;n
(F [z ]), then
. Using the wel l-known method [] to transform a matrix over F [z ] to a
diagonal form, we may get
C = B (I ; 0)Q; B Ml;l
(F [z ]); det(B ) = 0; Q GLn (F [z ]); ()
-----
where (I ; 0) is a matrix of size l � n, I is the identity matrix of l � l and
N
N
0 is the zero matrix of size l � (n � l ), let f
N
= (I ; 0)QT, then f = B f
,
where f
is uniquely determined up to the similarity.
N
We'l l cal l f
of f .
the T -nonlinear factor of f, and cal l B the T -linear factor
. For any weakly invertible linear A F, denote the T -nonlinear factor of
N N
Af by (Af ), then (Af )
N
N
and � ((Af )
) = � (f
N
' f
) � � (f ).
From Theorem 10 and Theorem 7 we get:

Corollary 1. Let $f^N$ be the $T$-nonlinear factor of $f$ defined as in Theorem 10; then $\tau(f^N) \le \tau_1$ for any one of the keys of the forms 1-4 listed in the last section, and also for the keys of the form 2 without the restriction on $T$.

Notice that the weak inverses of the linear factor of $f$ are easily constructed (see Theorem 5), so, based on Theorem 10 and Theorem 2, Problem 2 is reduced to

Problem 3. How to solve the equation of the form $M_{f^N}(s)\,x = a$, $a \in X^{n+\tau}$, $n \ge 1$ (where $f^N$ is the $T$-nonlinear factor of $f$ defined as in Theorem 10)?
One may try to solve Problem 3 case by case by means of the divide-and-conquer searching method, or, according to Theorem 2, try to solve it systematically by solving the following

Problem 4. How to construct a $\tau'$-weak inverse of $M_{f^N}$ (where $\tau'$ can be chosen arbitrarily)?

Problem 4 can be solved if we can decompose $f$ or $f^N$ into a product of several FAs, each of which can be inverted. This is the case when the key is of the form 2 without the restriction on $T$, as shown in the following theorem, which characterizes the so-called quasi-linear elements defined below.

Definition 1. The element $f$ in $\mathcal{F}$ is called quasi-linear if $M_f$ has a weak inverse of the form $M_{A, zk}$ with $A \in M_{l,l}(F[z])$, $k \in \mathcal{F}$.

Theorem 11. Let $f \in \mathcal{F}^*$. Then:

1. $f$ is quasi-linear if and only if $f$ has a decomposition
$$f = B\,(1 - z g), \qquad B \in M_{l,l}(F[z]),\ \det(B) \ne 0,\ g \in \mathcal{F}.$$
As a consequence, if $f$ is quasi-linear, so is $Af$ for any $A \in M_{l,l}(F[z])$ with $\det(A) \ne 0$.

2. If $f$ is quasi-linear, then a decomposition $f = B(1 - zg)$ of the above form and a weak inverse of $f$ can be obtained easily from its boolean expression $f = C\,T$ as follows. Assume $T = \binom{t_0}{T'}$, $C \in M_{l,n}(F[z])$, and correspondingly write $C = (C_0, C')$, $C_0 \in M_{l,l}(F[z])$, $C' \in M_{l,n-l}(F[z])$. Let $C_0 = P\,D\,Q\,(I - zb)$ be a decomposition of $C_0$ of the form above, and let $A = z^{\nu} Q^{-1} D^{-1} P^{-1}$; then $A \in M_{l,l}(F[z])$ and $A\,C' = z^{\nu+1} H$ for some $H \in M_{l,n-l}(F[z])$. Let $g = b + H\,T'$ and $B = P\,D\,Q$. Then $f = B\,(1 - zg)$, and $M_{A, zg}$ is a ν-weak inverse of $M_f$.
We claim that the search problem above can be solved case by case practically by means of the divide-and-conquer searching method when the key is any one of the forms listed in the last section. To see this, we consider how large $l\,\tau(f)$ should be in order to resist divide-and-conquer searching attacks on an equation of the form $M_f(s)\,x = a$, $a \in X^{n+\tau}$. Let us see first how to estimate the actual complexity of such an attack. For plain exhaustive searching, an obvious upper bound is $2^{l(1+\tau(f))}$, but the exact bound may be much smaller. When $f$ is linear, the logarithm of the bound to base 2 can be expressed by the structure parameters defined earlier, and the mean value of this expression is $l(1+\tau(f))/2$. There are no strong reasons why exhaustive searching with a nonlinear FA should be much harder than with a linear one, so using $2^{l(1+\tau(f))/2}$ to estimate the complexity of divide-and-conquer searching type attacks is not too pessimistic. Thus, to resist such attacks, we should require $l(1+\tau(f^N))/2$ to exceed a substantial security margin. Based on the Corollary, the parameter $l(1+\tau(f^N))/2$ is estimated below for each of the key forms listed in the last section, and one can see that none of them meets such a bound:

1. When the key is of the first two listed forms, $l(1+\tau(f^N))/2$ is a small constant.
2. When the key is of the next listed form, $l(1+\tau(f^N))/2$ is likewise small.
3. When the key is of the last listed form, the parameter is shown in the following table for the suggested parameters $(h_0, h)$: ( , ), ( , ), ( , ), (0, 0), ( , ), (0, ), ( , ).
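To make the order-of-magnitude estimate above concrete, here is a minimal Python sketch of plain exhaustive-search inversion of a toy memory finite automaton. It is an illustration only, not taken from the paper: the automaton `toy_fa`, the block size `l`, and the memory depth `tau` are invented assumptions, not the FAPKC automata.

```python
# Illustrative sketch only: plain exhaustive-search inversion of a toy
# memory finite automaton. The automaton, the block size l and the memory
# depth tau are invented for illustration; they are not the FAPKC automata.
from itertools import product

l, tau = 4, 2  # toy sizes; the text argues real keys need l(1+tau)/2 large

def toy_fa(inputs, state):
    """Toy memory FA: each output mixes the current input with tau past inputs."""
    assert len(state) == tau
    outputs, mem = [], list(state)
    for x in inputs:
        y = x ^ (mem[-1] & mem[-2])  # arbitrary nonlinear mixing, for demo only
        outputs.append(y)
        mem = mem[1:] + [x]
    return outputs

def brute_force_invert(target, state, steps):
    """Search all input sequences of the given length; cost is 2**(l*steps)."""
    for cand in product(range(2 ** l), repeat=steps):
        if toy_fa(cand, state) == target:
            return cand
    return None

state = [0b1010, 0b0110]            # tau initial memory blocks (known to attacker)
secret = (0b0011, 0b1111, 0b0101)   # "plaintext" blocks to recover
cipher = toy_fa(secret, state)
print(brute_force_invert(cipher, state, len(secret)))  # prints (3, 15, 5)
```

Here the search space is only $2^{l \cdot 3} = 4096$ sequences; the requirement above is precisely that $l(1+\tau(f^N))/2$ be large enough to push this kind of search beyond practical reach.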
From the above cryptanalysis, we see that all the keys proposed in [17, 18, 19, 20, 21] for FAPKC are insecure both in encrypting and in signing.
References
[1] Huffman D.A., Canonical Forms for Information Lossless Finite State Logical Machines, IRE Trans. Circuit Theory, special supplement, 1959.

[2] Massey J.L. and Sain M.K., Inverses of Linear Sequential Circuits, IEEE Trans. Comput., 1968.

[3] Massey J.L. and Sain M.K., A Modified Inverse for Linear Dynamical Systems, Proc. IEEE Adaptive Processes Symp.

[4] Massey J.L. and Sain M.K., IEEE Trans. Automatic Control.

[5] Forney G.D., Convolutional Codes I: Algebraic Structure, IEEE Trans. Inform. Theory, 1970.

[6] Tao R.J., Invertible Linear Finite Automata, Scientia Sinica.

[7] Kurmit A.A., Information Lossless Automata of Finite Order, New York: Wiley.

[8] Tao R.J., Invertibility of Finite Automata (in Chinese), Beijing: Science Press.

[9] Tao R.J., Invertibility of Linear Finite Automata over a Ring, in Automata, Languages and Programming (edited by Timo Lepistö and Arto Salomaa), Lecture Notes in Computer Science, Springer-Verlag.

[10] Lai X. and Massey J.L., Some Connections between Scramblers and Invertible Automata, Proc. Beijing Int. Workshop on Information Theory.

[11] Heino J., Finite Automata: A Layman Approach, text posted in the sci.crypt newsgroup, October, [email protected], University of Helsinki, Finland.

[12] Tao R.J., Generating a Kind of Nonlinear Finite Automata with Invertibility by Transformation Method, technical report, Laboratory for Computer Science, Institute of Software, Chinese Academy of Sciences, Beijing, China.

[13] Tao R.J., On Invertibility of Some Compound Finite Automata, technical report, Laboratory for Computer Science, Institute of Software, Chinese Academy of Sciences, Beijing, China.

[14] Dai Z.D., Invariants and Invertibility of Linear Finite Automata, Advances in Cryptology, ChinaCrypt (in Chinese), Science Press.

[15] Dai Z.D. and Ye D.F., Weak Invertibility of Linear Finite Automata over Commutative Rings: Classification and Enumeration (in Chinese), KEXUE TONGBAO (Bulletin of Science).

[16] Dai Z.D. and Ye D.F., Weak Invertibility of Linear Finite Automata I: Classification and Enumeration of Transfer Functions, Science in China (Series A).

[17] Tao R.J. and Chen S.H., A Finite Automaton Public Key Cryptosystem and Digital Signatures (in Chinese), Chinese J. of Computers.

[18] Tao R.J. and Chen S.H., Two Varieties of Finite Automaton Public Key Cryptosystem and Digital Signatures, J. of Comput. Sci. and Tech.

[19] Tao R.J., Conference report, ChinaCrypt, Xian.

[20] Tao R.J., Chen S.H. and Chen X.M., FAPKC3: A New Finite Automaton Public Key Cryptosystem, technical report, Laboratory for Computer Science, Institute of Software, Chinese Academy of Sciences, Beijing, China.

[21] Chen X.M., The Invertibility Theory and Application of Quadratic Finite Automata, Doctoral thesis, Laboratory for Computer Science, Institute of Software, Chinese Academy of Sciences, Beijing, China.

[22] Schneier B., Applied Cryptography, second edition, 1996.

[23] Dai D.W., Wu K. and Zhang H.G., Cryptanalysis on a Finite Automaton Public Key Cryptosystem (in Chinese), Science in China.

[24] Bao F. and Igarashi Y., Break Finite Automata Public Key Cryptosystem, in Automata, Languages and Programming, Lecture Notes in Computer Science, Springer, 1995.

[25] Qin Z.P. and Zhang H.G., Cryptanalysis of Finite Automaton Public Key Cryptosystems (in Chinese), ChinaCrypt, Science Press.

[26] Dai Z.D., A Class of Separable Memory Finite Automata: Cryptanalysis on FAPKC, ChinaCrypt, Science Press.

[27] Jacobson N., Basic Algebra I, W.H. Freeman and Company, San Francisco.

[28] Wan Z.X., Algebra and Coding Theory (in Chinese), Beijing: Science Press.
-----
| 16,055
|
{
"disclaimer": "Notice: The following paper fields have been elided by the publisher: {'abstract', 'references'}. Paper or abstract available at https://api.unpaywall.org/v2/10.1007/3-540-49649-1_19?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1007/3-540-49649-1_19, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": null,
"status": "BRONZE",
"url": "https://link.springer.com/content/pdf/10.1007%2F3-540-49649-1_19.pdf"
}
| 1,998
|
[
"JournalArticle",
"Conference"
] | true
| 1998-10-18T00:00:00
|
[] | 16,055
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Medicine",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/003caedfa295ca70bc3d37773ef552cf5b7be320
|
[
"Computer Science"
] | 0.894732
|
An Empirical Study of a Trustworthy Cloud Common Data Model Using Decentralized Identifiers
|
003caedfa295ca70bc3d37773ef552cf5b7be320
|
Applied Sciences
|
[
{
"authorId": "144828170",
"name": "Yun-Mi Kang"
},
{
"authorId": "38029758",
"name": "Jaehyuk Cho"
},
{
"authorId": "2110425502",
"name": "Young B. Park"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"Appl Sci"
],
"alternate_urls": [
"http://www.mathem.pub.ro/apps/",
"https://www.mdpi.com/journal/applsci",
"http://nbn-resolving.de/urn/resolver.pl?urn=urn:nbn:ch:bel-217814"
],
"id": "136edf8d-0f88-4c2c-830f-461c6a9b842e",
"issn": "2076-3417",
"name": "Applied Sciences",
"type": "journal",
"url": "http://www.e-helvetica.nb.admin.ch/directAccess?callnumber=bel-217814"
}
|
The Conventional Cloud Common Data Model (CDM) uses a centralized method of user identification and credentials. This needs to be solved in a decentralized way because there are limitations in interoperability such as closed identity management and identity leakage. In this paper, we propose a DID (Decentralized Identifier)-based cloud CDM that allows researchers to securely store medical research information by authenticating their identity and to access the CDM reliably. The proposed service model is used to provide the credential of the researcher in the process of creating and accessing CDM data in the designed secure cloud. This model is designed on a DID-based user-centric identification system to support the research of enrolled researchers in a cloud CDM environment involving multiple hospitals and laboratories. The prototype of the designed model is an extension of the encrypted CDM delivery method using DID and provides an identification system by limiting the use cases of CDM data by researchers registered in cloud CDM. Prototypes built for agent-based proof of concept (PoC) are leveraged to enhance security for researcher use of ophthalmic CDM data. For this, the CDM ID schema and ID definition are described by issuing IDs of CDM providers and CDM agents, limiting the IDs of researchers who are CDM users. The proposed method is to provide a framework for integrated and efficient data access control policy management. It provides strong security and ensures both the integrity and availability of CDM data.
|
# applied sciences
_Article_
## An Empirical Study of a Trustworthy Cloud Common Data Model Using Decentralized Identifiers
**Yunhee Kang** **[1]** **, Jaehyuk Cho** **[2,]*** **and Young B. Park** **[3]**
1 Division of Computer Engineering, Baekseok University, Cheonan 31065, Korea; [email protected]
2 Department of Electronic Engineering, Soongsil University, Seoul 06978, Korea
3 Department of Software Science, Dankook University, Yongin 16891, Korea; [email protected]
***** Correspondence: [email protected]
**Featured Application: The proposed DID-based service model is designed as an agent that is**
**based on a platform with DID. It provides interoperability, privacy, and efficiency to manage**
**identity in cloud CDM.**
**Abstract: The Conventional Cloud Common Data Model (CDM) uses a centralized method of user**
identification and credentials. This needs to be solved in a decentralized way because there are
limitations in interoperability such as closed identity management and identity leakage. In this
paper, we propose a DID (Decentralized Identifier)-based cloud CDM that allows researchers to
securely store medical research information by authenticating their identity and to access the CDM
reliably. The proposed service model is used to provide the credential of the researcher in the
process of creating and accessing CDM data in the designed secure cloud. This model is designed
on a DID-based user-centric identification system to support the research of enrolled researchers
in a cloud CDM environment involving multiple hospitals and laboratories. The prototype of the
designed model is an extension of the encrypted CDM delivery method using DID and provides
an identification system by limiting the use cases of CDM data by researchers registered in cloud
CDM. Prototypes built for agent-based proof of concept (PoC) are leveraged to enhance security
for researcher use of ophthalmic CDM data. For this, the CDM ID schema and ID definition are
described by issuing IDs of CDM providers and CDM agents, limiting the IDs of researchers who
are CDM users. The proposed method is to provide a framework for integrated and efficient data
access control policy management. It provides strong security and ensures both the integrity and
availability of CDM data.
**Keywords: common data model; collaborative work; identity; distributed ledger; credential; decen-**
tralized identifiers
**1. Introduction**
Today, the key issue of medical services is moving forward from treatment to prevention and management of diseases [1]. Medical institutions and companies have been
promoting technology development in related fields to provide services based on artificial
intelligence and big data technology using medical data [2–4]. Clinical studies based on
patient data from numerous hospitals can provide more meaningful results. However,
since each hospital uses a different structure of Hospital Information System (HIS), the
need for a CDM is recognized for systematic data management and integrated research [5].
CDM is a data structure defined to efficiently utilize hospitals’ data. It is composed based
on international standard terms and has different components depending on the purpose.
Through CDM, various data structures and meanings for each institution are converted
to have the same structure and meaning, and various difficulties caused by different data
structures between institutions can be solved when conducting multi-institutional joint
research.
-----
However, despite the advantage of being able to efficiently manage data, it still has
problems such as regulation and protection of personal information due to the fundamental
characteristics of medical data. In the existing CDM, identity management methods have
mainly been isolated, centralized, and federated. These methods have limitations in interoperability due to closed identity management, Identifier (ID) leakage, and subordination to external ID management subjects [6–8]. In cloud CDM, it is necessary to design a
secure cloud on a permission-type block chain in which the access control of the authorized
and registered researcher is established [9].
In order to use the CDM data, the request of access permission from the researcher and
Institutional Review Board (IRB) approval are required in the data supervision process, and
the results of the process are maintained in the block chain. When various hospitals and
research institutes take part in cloud CDM, an access control system is required to prove the
researcher’s permission to participate in the research as well as the interoperability of the
participating institution’s systems. In the operating organization of cloud CDM, a stepwise
qualification process is required according to the roles of CDM provider, CDM consumer,
researcher, and IRB. In the cloud CDM environment, verifiable identities are essential to
handle CDM data securely and ensure the system supports the reliable and tamper-evident
nature of the subject’s identity. It allows the development of independent digital identities
rooted on a distributed ledger [10,11]. It also helps bring building applications with a
solid digital foundation of trust by enabling the verifiable credentials model. For identity,
verifiable credentials are derived from a registry.
Due to restrictions of the domestic medical law, sharing medical information outside
the medical institutions in a domestic medical information utilization environment is restricted except when the patient himself/herself requests his/her own records for personal
information. Because the data management system is fragmented and centralized, the
exchange and use of medical information is limited, and the information management is
insufficient, making it difficult for cooperative research [12,13].
One of important points about data sharing in this regulatory aspect is the IRB. For
clinical studies of medical data, researchers must comply with the conditions set forth in
the Research Participation Regulations. In the cloud CDM environment, the researcher
has special requirements that the researcher’s affiliated institution may be different from
the CDM data provider. To solve this problem, the researcher must obtain permission to
participate in the research from the IRB of the institution that provides clinical information
and controls the conduct of the research.
This paper describes the application of decentralized identifiers (DID) to prove user
identity in the cloud CDM environment. DIDs’ transactions are configured using Hyperledger Indy, and CDM subjects are configured as agents based on Hyperledger Aries [14,15]
to evaluate the behavior of CDM use cases. Here, we design and prototype a DID-based
user-centric identification system to support the research of registered researchers in the
cloud CDM environment involving multiple hospitals and research institutions. The prototype is an extension of the delivery method of encrypted CDM using DID and provides the
identification system by limiting the use case of the CDM data of the researcher registered
in the cloud CDM. The prototype constructed for agent-based PoC (proof of concept) is
utilized for enhanced security of researcher use of ophthalmic CDM data. In this paper,
the CDM identity schema and its definition are described by limiting the identity of main
entities.
This proposed method aspires to provide a unified and efficient data access control
policy management framework. It provides strong security and ensures both the integrity
and the availability of CDM data. It aims to build upon and improve existing data governance processes between different organizations, translating the information sharing
policies they already apply in their current operational interactions into electronically
enforceable rules embedded in credentials.
The main contributions of our work can be summarized as follows:
-----
1. DID-based user-centric identification is the first approach to support researchers in verifying their identity autonomously, with a verified proof and without a third party holding central authority in the cloud CDM.
2. We propose a service model that extends the basic DID model in order to solve the structural problem that makes it difficult for external researchers to participate in hospital research subject to IRB approval.
3. We validate user access control by applying the DID service model in the safe data transfer process between hospitals in Korea.
4. Our service model provides high interoperability by operating the prototype of identity proof in a standard messaging environment based on DIDComm.
**2. Related Works**
_2.1. CDM_
With the widespread adoption of electronic health records (EHRs) in healthcare systems, clinical data are entering the digital era [16]. Large-scale EHR data analysis has
produced influential discoveries, which have enabled the practice of precision healthcare [17]. However, there are many barriers that limit the usefulness of EHR data, primarily
revolving around available expertise. Since EHR data are large and typically stored in
relational databases, many clinical experts and scientists have no experience, lack sufficient
time to spare, and need knowledge of Structured Query Language (SQL) programming.
Moreover, the structure and data components of an EHR system are complex and require
strong familiarity to be utilized most effectively. Many people solve this problem by
building effective collaboration across multiple disciplines (e.g., doctors working with
data science teams) but enabling more researchers to directly work with data is important.
Thus, CDM facilitates the interoperability of EHR data for research, and it requires strong
familiarity to enable many researchers to handle the data directly [18].
CDM is a typical database model of medical information standardization for clinical
data-based research [19]. Simultaneous multi-center analysis can be performed in the form
of standardized schemas and vocabulary systems and has been continuously updated
considering existing limitations. This allows us to transform different data structures and
meanings from an institution to have the same structure and meaning, thus solving the
difficulties due to difference in data structures for each institute [20].
There are various CDMs such as the Observational Medical Outcomes Partnership (OMOP) CDM, Sentinel CDM, and the national-scale clinical research network (PCORnet) CDM [21].
In particular, the OMOP CDM is a common data model developed and operated by the
Observational Health Data Sciences and Informatics (OHDSI) international consortium,
with more than 200 organizations from 14 countries participating in the transition to
CDM [22]. OMOP-CDM uses a common medical terminology system called the OMOP
code as well as the same data structure, enabling an integrated analysis of clinical healthcare
databases across multiple institutions. A CDM database built in each institution has the
advantage of being able to perform more efficient and systematic analysis using the already
developed CDM-based open-source standard analysis methods and analysis programs
from libraries and web bases [23].
_2.2. Blockchain and Its Application in CDM_
Blockchain is a technology in which a number of transaction details are bundled to
form a block. Additionally, several blocks are connected like a chain using hashes, and then
a number of people copy and store them in a distributed manner [24,25]. It allows anyone to
make reliable and secure transactions. Blockchain can not only be used for cryptocurrency
but also for all data processing that has online transaction history and requires history
management. Blockchain-based smart contracts, logistics management systems, document
management systems, medical information management systems, copyright management
systems, social media management systems, game item management systems, electronic
voting systems, identity verification systems, etc., can be used in various ways [25].
-----
Although the demand for the use of medical data is increasing, medical information
contains personal information, so there are restrictions on its use. In the domestic health
and medical field, as the need to provide medical services tailored to patient characteristics
by integrating genomic information, treatment, clinical information, and lifestyle information is increasing, it is essential to secure a security system for the safe use of medical
information.
Blockchain has a function that can be used for safe and clean data distribution of CDM
data in a collaborative research environment in which multiple institutions participate. In
a collaborative research environment, CDM providers and consumers operate blockchain
nodes and manage the process of transactions related to access to CDM data by researchers.
SimplyVital [26] uses a private blockchain network to share patients’ personal medical
data with multiple medical research institutions but maintains it on the distributed ledger
of the patient’s medical information provider. However, maintaining medical information
on the blockchain is an illegal matter of domestic medical information, and there is a
limitation to applying it to the joint use of medical information. OmniPHR [27] operates
based on a blockchain to convert and maintain the dispersed personal medical information
of patients into a standardized data format and manage the authority to access the data
from any device. As access control is performed, there is a limitation that access control
must be performed for each individual researcher.
In this paper, we present a study on blockchain technology for user identity management of medical data and a model for cross-institutional CDM data access control.
The proposed model is based on a decentralized identity to provide self-sovereignty, and
through this, proves the qualifications of researchers in an environment in which multiple
institutions participate. The DID model for credentialing is currently an effective approach
for accessing CDM data, a unique use case in the medical field.
_2.3. Decentralized Identifier (DID)_
From a security standpoint, an identity is an entity such as a user, virtual group, or
organization that can be used to define permissions on a security item. The two main
functions of an identity are accountability and access control. The identifier is used to
uniquely identify entities and give unique names to data to express its characteristics.
DID is a technology that allows individuals to have complete control over their
information, unlike the existing identification method controlled by a central system. By
using DID, if an individual interacts with a specific institution, the owner of the identity
information can control whether or not the information is provided so that the identity
information can be managed transparently. We classified the ID management techniques
required for handling the CDM between cooperative organizations into four types and
compared their characteristics [28,29]. These management types are isolated, centralized,
federated, and self-sovereign.
In the isolated type, the identity of the user is managed by service, and the user has to
go through the self-authentication (membership registration) and identity authentication
(authentication) procedures for each service. This is to securely establish and operate identities within a single institution. However, it is not suitable for the secure service operation
of multi-participant cloud CDM and requires significant costs for user authentication and
access control [30].
The centralized identity management is a method that centrally manages the user’s
identity in terms of efficiency, and the construction and operation of the identity management system are superior to the isolated type based on individual identity management.
When users register their IDs in the central management system, they can access and use
various services through the central identity management server. The centralized type with
these technical characteristics is suitable for centrally managed and controlled single cloud CDM operation. However, if the single central management system fails, a single point of failure arises and the entire service becomes unavailable. It also has limitations in terms of scalability and interoperability.
-----
In the federated type, different service providers form a trust relationship and jointly manage the user's identity [29]. Hospitals participating in the cloud CDM network can operate by applying standards such as SAML. However, federated identity management is required to establish a trust relationship between the hospital and the cloud CDM first. This has the constraint that it has to depend on a specific service provider through ID management, and there is also a single point of failure problem.

In the self-sovereign identity (SSI) type, individual information can be controlled by the individual himself/herself and is based on Distributed Ledger Technology (DLT) without the intervention of a third party [30]. The identity information required for the service can be selectively submitted through the channel, and the reliability of the submitted information can also be proven without the intervention of a third party [22,23]. A representative example is DID, which is being standardized by the W3C. Individuals are independent of any single organization because they provide their identity as their identity provider. A self-sovereign identity system can use blockchain to look up distributed identifiers without a central directory. When you register your ID in the blockchain, your ID proof based on the blockchain is issued and you sign the ID proof. When a general establishment presents their blockchain ID proof, the establishment verifies that the user's identity information is appropriate by inquiring about their ID to the blockchain.

DIDs are a hashed form of a public key. The private keys for DIDs are stored in a wallet. The wallet is used for allowing any user to store their digital information securely on a personal device [29–31]. An agent is any application that stores and uses DIDs. It is the software that interacts with other entities via DIDs. The verifiable DID model consists of three roles: issuer, holder, and verifier. Figure 1 presents a basic DID model proposed by the World Wide Web Consortium (W3C). The statements of verifiable credentials are generated by an issuer. The presentations based on the credential are sent to the verifier to attest the authenticity of verifiable credentials issued by the issuer.

**Figure 1. The DID model proposed by W3C.**
Hyperledger Indy is a public and permissioned blockchain platform tailored to build DID. The following describes the main characteristics of Indy:

- It provides individuals with independent control over their personal data.
- It has to allow interoperability with other decentralized ledgers.
- It supports the attribute and claims schema system written to the ledger for dynamic discovery of claim types.

Hyperledger Aries provides a library for handling verifiable digital credentials. The envelope of the messages between agents has been standardized in the form of the DIDComm protocol. DIDComm describes how messages should be encrypted and decrypted in transport. The agent is an entity working in the cloud CDM, where it interprets messages on behalf of its organization and executes a command to support secure access to the CDM service. The agent has secure storage that is used for all the information collected by it. In this paper, we design a researcher as one of the agents in the cloud CDM.
In summary, in terms of information security, identity management plays an important role in preventing illegal access from the outside. However, the traditional identity
management model relies on a third-party central system for information management.
-----
In this approach, it is difficult for the central system to have complete reliability, and it
involves the problem of information exposure to the outside. Identities must not be held
by a single third party, even if it is a temporarily trusted entity [32,33]. Additionally, the central system carries a single point of failure risk [34,35]. To solve the above problems, the proposed
model is based on SSI, where individual researchers maintain their identity, and supports
secure CDM data transmission and solves the privacy problem.
The proposed model is used for safe CDM data transmission and access control of
transmitted data in the cloud CDM environment. Table 1 compares the traditional identity
management technology and the DID in the following five aspects.
**Table 1. Comparison of traditional identity management (isolated, centralized, federated) and DID.**

| | Isolated | Centralized | Federated | DID |
|---|---|---|---|---|
| Privacy and protection | Low | Low | Low | High |
| User control and consent | Low | Low | Moderate | High |
| Dependency | Moderate | High | High | Low |
| Fault tolerance | High | High | Moderate | Low |
| Usability | Low | Low | Moderate | High |
- Privacy and protection: The rights of users must be protected in tasks such as handling data. The users must be able to choose their privacy model. When personal data are disclosed, the disclosure involves the minimum amount of information required to complete the given task.
- User control and consent: In the cloud CDM, researchers must control their identities. They may refer to their own identity, update it, and access their own data. Their identities must not be held by a single third-party entity in the cloud CDM.
- Dependency: Each of the organizations runs as an independent corporation without dependency in the cloud CDM.
- Fault tolerance: A system handling user identity must continue working despite failures or malfunctions in the cloud CDM.
- Usability: Access rights are granted to users according to the domain policy. The researcher's experience must be consistent with their expectations in a research process.
**3. The Extended Identity Management Scheme for Cloud CDM**
_3.1. The Cloud CDM Model_
In order to collect and integrate clinical data of multiple hospitals, it is required to
solve the heterogeneity of data structure and format, differences in quality and quantity of
data, technical limitations of interoperability, and security issues. CDM should support the
linking of common analysis codes for electronic medical record (EMR) resource linkage to
support integrated data analysis of research institutions, without leaking sensitive personal
information.
Data extracted from EMRs tend to be stored in different relational database schemas.
Figure 2 illustrates the conventional concept of CDM and its operation scheme derived
from several sources of EMR in hospitals.
-----
**Figure 2. Conventional concept of the Common Data Model (CDM) and operation scheme.**

The cloud CDM reference model shown in Figure 3 is a partial result from the previous works and consists of several CDM providers and CDM consumers participating [10]. Using this presented reference model, clinical researchers can isolate and securely distribute CDM data.

- Cryptography can be used for protecting information, using a hash value to maintain management of large-capacity CDMs. Encryption can be used to protect information using symmetric and asymmetric keys to maintain the management of large-capacity CDMs (see the sketch after Figure 3).
- A distributed ledger is used to provide data integrity and share information through a CDM signature.
- In the process of data creation and use, the distributed ledger guarantees data integrity, and transparently signed CDM can be accessed.

**Figure 3. Concept of the Secure-Cloud Common Data Model (SC-CDM).**
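As a concrete illustration of the hash-plus-hybrid-encryption idea in the list above, the following is a minimal Python sketch, assuming the widely used `cryptography` package; the payload `cdm_bytes` and the key choices are illustrative assumptions, not the SC-CDM implementation.

```python
# Minimal sketch of the hybrid scheme described above (assumed design, not
# the SC-CDM implementation): a hash for integrity, symmetric encryption for
# the bulk CDM data, and asymmetric encryption to wrap the symmetric key.
import hashlib
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

cdm_bytes = b"...serialized CDM tables..."        # illustrative payload
digest = hashlib.sha256(cdm_bytes).hexdigest()    # hash kept as integrity metadata

sym_key = Fernet.generate_key()                   # symmetric key for large CDM data
cipher_cdm = Fernet(sym_key).encrypt(cdm_bytes)   # encrypted CDM for the repository

consumer_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = consumer_key.public_key().encrypt(sym_key, oaep)  # key for consumer

# The CDM consumer unwraps the key and decrypts; the digest is re-checked.
recovered_key = consumer_key.decrypt(wrapped_key, oaep)
assert Fernet(recovered_key).decrypt(cipher_cdm) == cdm_bytes
assert hashlib.sha256(cdm_bytes).hexdigest() == digest
```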
_3.2. The Operation Scheme for Trustworthiness in CDM Cloud_
In this paper we are focused on how to guarantee the trustworthiness using DID
among the entities in cloud CDM. Hence, this model has no consideration of authentication
and authorization based on in-person and group verification of cloud CDM. In cloud CDM,
it is necessary to design a secure cloud on a permission-type blockchain in which the access
control of authorized and registered researcher is established. In order to use the CDM data,
the request of access permission from the researcher and IRB approval are required in the
data supervision process, and the results of the process are maintained in the blockchain.
The following shows the process for uploading the CDM derived from the researcher’s
query in Figure 4:
1. A researcher registered in a medical institution, Hospital B, sends a query to the EMR DB managed by Hospital A.
2. The researcher requests the trust manager of Hospital A for CDM to hold the cloud
CDM based on the result of the query.
3. The trust manager of Hospital A obtains the IRB’s approval for the request for the
EMR data with the credential for identifying the researcher.
4. The trust manager in Hospital A builds the approved EMR data into CDM data and its metadata associated with encryption keys, and stores the encrypted CDM for distribution to a repository in cloud CDM.
5. The trust manager in Hospital A uploads the encrypted data to the cloud CDM.
**Figure 4. The overall concept of authentication and authorization in cloud CDM.**
We assume the requirements of authentication and authorization as the research background. Authentication is the basic process of verifying that the entities (researcher, IRB, CDM provider, CDM consumer) are who they claim to be before allowing access. In the context of cloud CDM, authorization determines the entitlement of an entity to perform tasks that are authorized within the system. A user's authorization and authentication are initially activated by an identity provider (IRB) and provide CDM data about the person granted by the IRB.
-----
_3.3. The Basic DID Model for Cloud CDM_
In the basic model, the identity information necessary for the information subject to
receiving the desired service from the verification agency is issued and submitted by the
personalization agency. To ensure the validity of the identity information issued by the
personalization agent, the certificate of the personalization agent is stored in a verifiable
data registry. The verification body that has received the proof of identity verifies the
proof in the registry and provides services. A credential is an attestation of qualification,
competence, or authority issued to an entity (e.g., an individual or organization) by a third
party with a relevant or de facto authority or assumed competence to do so.
If research involves human subjects or is regulated by the Food and Drug Administration (FDA), it requires review and approval from an institutional review board (IRB) or
the Human Subjects Office. It is the responsibility of all faculty and students to obtain IRB
approval or Exempt determination before initiating any human subjects research projects.
Hence, the IRB uses a public DID published globally. The IRB plays a role as a verifiable
credential issuer. Since the researcher as holder of the credential may present the credential
to anyone, the identity (via the public DID) of the issuer must be part of what the verifier
learns from the presentation. The verifier can investigate (as necessary) to decide if they
trust the issuer. The public DID of IRB is put on a blockchain so that it can be globally
resolved. It is used to establish secure, point-to-point messaging channels between the
agents of the participants. With a verifiable credential, DIDs are used as the identifier for
IRB as the issuer in cloud CDM.
IRB (the issuer) DID is used to uniquely identify the issuer and is resolved to obtain a
public key related to the DID. That public key is then used to verify that the data in the
verifiable credential did indeed come from the issuer. This public DID ensures that the
verifier knows who issued the credential a holder presents.
Figure 5 shows the basic DID model for cloud CDM. Node A represents a CDM provider, and Node B represents a CDM consumer. Two trust managers located in the service broker play the role of agents of the CDM provider operated in Node A and the CDM consumer operated in Node B for trustworthy delivery of the CDM (represented as CDM A→B in Figure 3). The verifiers may not fully trust the researcher without a verifiable credential (VC) and want to share only a subset of data or respond with data retrieved from a particular query. They might also want to share different subsets of data with the researcher. The grant of access may also need to be revoked, updated, or set to expire.
**Figure 5. The DID-based trust model for cloud CDM.**

In cloud CDM, credentials need to be issued and verified through the following application use cases:
-----
- A researcher is a member of a group of researchers of a specific subject on which he or she wants to conduct research, is assigned a role as a research participant through IRB approval, and is registered. Through the IRB, researchers are provided with a certificate of research participation (issuing a research participation certificate through the IRB).
- CDM users apply for the creation of CDM data, encryption of the generated CDM data, and proof of access service for use in distributed storage.
- CDM users apply for access service verification for decryption and distributed storage of CDM data in the process of accessing the created CDM data.
The following is assumed as the operating environment:

- For CDM use, researchers are registered with the CDM provider or user organization. Through the registration process, it is assumed that the mutual trust relationship of the cloud CDM participating organizations can be established and managed through the certificate authority (CA).
- IRB approval documents are used for the purpose of qualification proof for CDM provision and use (users who have received credentials from the IRB use DID to identify themselves).
- The researcher is provided with the ID of the CDM provider through the approval of the IRB.
- The CDM provider decides to provide the CDM through verification of the researcher's identity certificate. After qualification verification, the CDM provider performs encryption and distributed storage of CDM data.
- CDM users access the encrypted and distributed CDM data through verification of the researcher's identity certificate via the CDM consumer.
- The researcher's research participation certificate maintains the research period as an attribute and allows access to CDM services and data limited to the valid period.
The overall process of issuing and verifying credentials when handling CDM in use cases is shown in Figure 6.

**Figure 6. The process of issuing and verifying credential when handling CDM.**
-----
_3.4. Credential Definition of Identity_
Self-sovereign identity consists of an identifier and identifier data. In cloud CDM,
identifiers use DID, and identifier data consists of several attribute information. The
main attribute information for identity consists of personal information, credentials, and
verifiable presentation. A legal entity’s identity (i.e., an individual or an organization) can
be represented using a set of attributes associated with the entity (such as name and role).
The identity of the CDM providing and consuming institutions and the participant of these
institutions is expressed in various attribute information. Identity management provides
the functions for maintaining the identity data and their access control. IRB identity is
defined based on its schema. The identity certificate is issued by the IRB provider. Figure 7 is the schema definition for CDM identity stored in Indy DLT.

**Figure 7. Schema definition for CDM identity issued by IRB.**

The following shows the schema defined for the issued CDM identity stored in the DLT. It shows that the schema was created by the IRB through the credential definition ID.
- Schema ID: T8j4DNmf7Us8tTzpvoK6No:2:IRB schema:51.1.53
- Cred def ID: T8j4DNmf7Us8tTzpvoK6No:3:CL:38:irb.agent.IRB_schema
- Type: CRED_DEF
- Reference: 38
- Signature type: CL
- Tag: irb.agent.IRB_schema
- Attributes: affiliation, approved_date, gcp, irb_no, master_secret, name, role, timestamp
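A minimal sketch of how a controller could register this schema and a credential definition through the ACA-Py admin REST API is shown below; the admin URL is an assumption for a local deployment, and the response shape can vary across ACA-Py versions.

```python
# Sketch of an assumed controller action: register the IRB schema and a
# credential definition on the ledger through the ACA-Py admin REST API.
# The admin URL is an assumption; response shapes vary across ACA-Py versions.
import requests

ADMIN_URL = "http://localhost:8021"  # assumed IRB agent admin endpoint

schema_body = {
    "schema_name": "IRB schema",
    "schema_version": "51.1.53",
    "attributes": ["affiliation", "approved_date", "gcp", "irb_no",
                   "master_secret", "name", "role", "timestamp"],
}
schema_id = requests.post(f"{ADMIN_URL}/schemas",
                          json=schema_body).json()["schema_id"]

cred_def_body = {"schema_id": schema_id, "tag": "irb.agent.IRB_schema"}
cred_def_id = requests.post(f"{ADMIN_URL}/credential-definitions",
                            json=cred_def_body).json()["credential_definition_id"]
print(schema_id, cred_def_id)
```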
After the IRB agent starts up, the researcher agent establishes a trust channel with the IRB agent, and then the IRB performs DID exchange with the researcher. Algorithm 1 describes the steps for establishing a connection between these agents.

**Algorithm 1 Establishing Trusted Connections**

1: Researcher agent exchanges DIDs with the IRB agent to establish a DIDComm channel.
2: IRB offers an audited researcher credential over this channel.
3: Researcher accepts and stores the credential in their wallet.
† Audited researcher credential is specified by IRB.
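A minimal sketch of how step 1 of Algorithm 1 could be driven from the two controllers using the ACA-Py connection endpoints follows; both admin URLs are assumptions for a local deployment.

```python
# Sketch of an assumed pair of controllers driving step 1 of Algorithm 1:
# the IRB agent creates a connection invitation and the researcher agent
# receives it, after which ACA-Py completes the DID exchange (with
# auto-accept enabled) and a DIDComm channel is established.
import requests

IRB_ADMIN = "http://localhost:8021"         # assumed IRB agent admin API
RESEARCHER_ADMIN = "http://localhost:8031"  # assumed researcher agent admin API

invitation = requests.post(
    f"{IRB_ADMIN}/connections/create-invitation").json()["invitation"]

conn = requests.post(
    f"{RESEARCHER_ADMIN}/connections/receive-invitation", json=invitation).json()
print(conn["connection_id"], conn["state"])  # state advances toward "active"
```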
_3.5. Issuing IRB Credential_

With a connection with the researcher's agent established, the IRB issuer can interact with that agent. It might ask for a presentation to confirm the identity of the researcher.
-----
Eventually, it will reach the point of needing to issue a credential to the researcher. To
do that, the controller passes to the framework the type of the credential, the data for
the claims, and the connection identifier for the researcher, and the framework (for the
most part) takes care of issuing the credential for the given research subject. Note that
after offering the credential to the researcher, the response might not come back for hours.
This is not an issue, the issuer framework will just wait. Once the credential is issued, an
identifier for the credential is given back to the controller, which again stores that with the
rest of the information it keeps on the researcher. To issue an Indy credential, the simplest
instance of the protocol must have three steps:
- The issuer sends the holder an offer message.
- The holder responds with a request message.
- The issuer completes the exchange by sending the holder an issue message containing the verifiable credential.
The access policy defines programmatically the requirements for authorization to
access CDM. The access policy defines these rules based on the CDM, user/group assignments, and ownership assignments. The IRB credential represents the access policy of
CDM. Algorithm 2 describes the steps for issuing credential, and the detailed issuing flow
is as follows.
1. The holder sends a proposal to the issuer (issuer receives proposal). When the holder starts with sending a proposal, it uses the /issue-credential-2.0/send-proposal endpoint.
2. The issuer sends an offer to the holder based on the proposal (holder receives offer). The issuer receives the proposal and can respond with an offer using the /issue-credential-2.0/records/{id}/send-offer endpoint. After this offer, the flow continues with the holder responding with a request.
3. The holder sends a request to the issuer (issuer receives request). If the holder automatically accepts offers and turns them into requests, then the issuing of credentials would be completely automated. That improves privacy, making the user in control of when and with whom to share information.
4. The issuer sends credentials to the holder (holder receives credentials). The issue
credential protocol is used to enable an issuer to provide a holder with a verifiable
credential. In this protocol:
- There are two participants (issuer, holder).
- There are four message types (propose, offer, request, and issue).
- There are four states (proposed, offered, requested, and issued).
5. The holder stores credentials and sends acknowledgement to the issuer. Verifiable
credentials are issued to the user and stored in his/her digital wallet, and the user
decides when and where to use them.
6. The issuer receives acknowledgement.
**Algorithm 2 Issuing credential**
1: for each Researcher agent do
2: Initiate DID Exchange with CDM provider agent to establish DIDComm channel.
3: Researcher agent delivers the CDM selected to CDM provider agent via DIDComm channel.
4: CDM provider offers Verified CDM token credential over DIDComm.
5: Researcher agent accepts and stores the credential
6: CDM provider encrypts the CDM and delivers the cipher CDM to CDM consumer agent with the
IRB number approved by IRB
7: end for
† The CDM is derived from the EMR of the CDM provider.
† Verified CDM token credential is specified by the PROVIDER.
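Continuing the flow above, a minimal controller sketch for step 2 (the IRB issuer sending an offer through /issue-credential-2.0/send-offer) might look as follows; the connection identifier and the attribute values are illustrative placeholders, not data from the prototype.

```python
# Sketch of an assumed issuer controller action for flow step 2: the IRB
# offers the audited researcher credential over the established connection.
# The connection id and attribute values are illustrative placeholders.
import requests

IRB_ADMIN = "http://localhost:8021"  # assumed IRB agent admin API

offer_body = {
    "connection_id": "<connection-id-from-DID-exchange>",  # placeholder
    "filter": {"indy": {"cred_def_id":
        "T8j4DNmf7Us8tTzpvoK6No:3:CL:38:irb.agent.IRB_schema"}},
    "credential_preview": {
        "@type": "https://didcomm.org/issue-credential/2.0/credential-preview",
        "attributes": [
            {"name": "name", "value": "researcher-01"},       # illustrative
            {"name": "affiliation", "value": "Hospital B"},   # illustrative
            {"name": "role", "value": "research participant"},
            {"name": "irb_no", "value": "IRB-2021-001"},      # illustrative
        ],
    },
}
record = requests.post(f"{IRB_ADMIN}/issue-credential-2.0/send-offer",
                       json=offer_body).json()
print(record["cred_ex_id"], record["state"])  # e.g., "offer-sent"
```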
-----
_3.6. Proof the Credential_
Privacy is important when dealing with CDM. The entities using DIDs will be able to
express only the portions of their credentials. This expression of a subset of one’s credential
is called credential presentation. Specifically, the presentation refers to the verifiable data
received by a verifier. Instead of typing in the name, address, and government ID, a
presentation of that information is provided from verifiable credentials issued from IRB by
an authority trusted by the verifiers, CDM provider, and CDM consumer. The verifiers can
automatically accept the claims in the presentation (if they trust the issuer) without any
further checking.
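A minimal sketch of how a verifier could request such a presentation through the ACA-Py /present-proof-2.0/send-request endpoint is given below; the admin URL, connection identifier, and requested attributes are illustrative assumptions.

```python
# Sketch of an assumed verifier controller action: request a presentation
# proving IRB-issued attributes from the researcher's agent. The admin URL,
# connection id, and attribute selection are illustrative assumptions.
import requests

VERIFIER_ADMIN = "http://localhost:8041"  # assumed verifier agent admin API

proof_body = {
    "connection_id": "<connection-id-with-researcher>",  # placeholder
    "presentation_request": {
        "indy": {
            "name": "Proof of IRB approval",
            "version": "1.0",
            "requested_attributes": {
                "0_role_uuid": {"name": "role"},
                "0_irb_no_uuid": {"name": "irb_no"},
            },
            "requested_predicates": {},
        }
    },
}
record = requests.post(f"{VERIFIER_ADMIN}/present-proof-2.0/send-request",
                       json=proof_body).json()
print(record["pres_ex_id"], record["state"])  # e.g., "request-sent"
```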
Instead of obtaining the data directly from the issuer IRB, the data from the issuer comes from the holder (the researcher), and the cryptographic material to verify the authenticity of the data comes from the distributed ledger. This reduces the number of integrations
that have to be implemented between issuers and verifiers. A researcher can be issued
a professional accreditation credential from the relevant authority (e.g., the College of
Physicians and Surgeons) and the claims verified (and trusted) by medical facilities in real
time.
Should the doctor lose his or her accreditation, the credential can be revoked, which would take effect immediately. This holds true for any credentialed profession, be it lawyers, engineers, nurses, tradespeople, real estate agents, and so on.
**4. Implementation**
_4.1. Experimental Setup_
In this section, the design of the experiments is introduced. Detailed information on our hardware and software configurations is given in Table 2. To run the von-network and the agents, the containers are managed by a Docker engine. Each container runs as a lightweight virtual machine.
**Table 2. Hardware and software configuration.**

Item             Model
CPU              Intel(R) Xeon(R) E-2134
RAM              16 Gbyte
OS               Linux 3.1.0
Docker           19.03.8
Docker-compose   1.21.0
Hyperledger Indy node management is permissioned: it has its own ledger, and public information is stored in/read from the distributed ledger by reliably elected nodes. The nodes communicate to agree (reach consensus) on which transactions should be written and in what order. To start Hyperledger Indy nodes, a von-network is used. It is a portable development version of Hyperledger Indy with a ledger browser, and it plays the role of a Hyperledger Indy public ledger sandbox instance. In this work, it runs locally in Docker.
Figure 8 shows the von-network with four nodes for identity management in cloud
CDM. The von-webserver has a web interface that allows you to browse the transactions in
the blockchain.
Before issuing a credential, a credential definition as well as its schema needs to be created. Both the schema and the credential definition are recorded on the von-network. Hyperledger Aries Cloud Agent Python (ACA-Py) is a foundation for building a verifiable credential (VC) ecosystem [35]. It operates in the second (DIDComm Peer-to-Peer Protocol) and third (Data Exchange Protocols) layers of the Trust over IP framework, using DIDComm messaging and Hyperledger Aries protocols, as shown in Figure 9.
**Figure 8. The von-network running in cloud CDM.**
**Figure 9. Trust over IP framework [36]. (a) Trust over IP governance stack; (b) Trust over IP technology stack.**
A business logic controller is written for the development of a given use case, and the created controller uses the ACA-Py library based on AIP (Aries Interop Profile) 2.0. AIP 2.0 protocols are used for issuing, verifying, and holding VCs that work with a Hyperledger Indy distributed ledger. The von-network is used to support a credential format named AnonCreds (Anonymous Credentials), a concrete implementation of zero-knowledge proof (ZKP) support.

A ZKP is a cryptographic method, and its use in blockchain appears promising in cases where existing blockchain technologies can adapt a ZKP to address specific business requirements focusing on data privacy [37]. It proves attributes of an entity (a person, organization, or thing) without exposing a correlatable identifier about that entity. Claims from verifiable credentials can be selectively disclosed, meaning that just some data elements from credentials, even across credentials, can (and should) be provided in a single presentation. By providing them in a single presentation, the verifier knows that all the credentials were issued to the same entity.
Four agents, the researcher, the IRB, the CDM provider, and the CDM consumer, are developed. These agents are written in Python using the ACA-Py library. An agent that receives a message from another entity posts a webhook internally over HTTP, allowing the controller to respond appropriately. Note that the response can include requesting the agent to send further messages in reply. More details are given in Table 3.
**Table 3. Participating entities and their endpoints.**

Name           HTTP Port   Admin API Port   Webhook Port
Researcher     8030        8031             8032
IRB            8010        8011             8012
CDM Provider   8050        8051             8052
CDM Consumer   8060        8061             8062
ACA-Py can also notify its controller when an event has occurred. It supports webhooks that allow the controller to obtain an update of what happened immediately. Requests and responses between controllers configured through ACA-Py are transmitted as HTTP requests, and webhook notifications are delivered as a result of processing. A webhook is an asynchronous HTTP callback on an event occurrence. It is a simple server-to-server communication mechanism for reporting a specific event that occurred on a server. The server on which the event occurred fires an HTTP POST request to another server, on a URL that is provided by the receiving server.
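As a minimal illustration of this callback pattern (our sketch, not the paper's controller), the Flask handler below receives ACA-Py webhook notifications. It assumes the agent was started with a webhook URL pointing at this server and that events arrive as POSTs under /topic/<topic>/; the port follows the researcher's webhook port in Table 3.

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

# ACA-Py fires an HTTP POST to <webhook-url>/topic/<topic>/ for each event,
# e.g. topics such as "connections" or "issue_credential_v2_0".
@app.route("/webhooks/topic/<topic>/", methods=["POST"])
def handle_webhook(topic: str):
    event = request.get_json(force=True)
    if topic == "connections" and event.get("state") == "active":
        print("DIDComm connection established:", event.get("connection_id"))
    elif topic == "issue_credential_v2_0" and event.get("state") == "done":
        print("Credential stored, exchange id:", event.get("cred_ex_id"))
    # Returning 200 acknowledges the event to the agent.
    return jsonify({}), 200

if __name__ == "__main__":
    # Webhook port of the researcher controller, per Table 3.
    app.run(host="0.0.0.0", port=8032)
```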
In this paper, each of the cloud CDM subjects operates its own agent acting as a peer, and transactions between peers are maintained in a distributed ledger. Agent-to-agent communication is based on the DIDComm specification to support bilateral communication through a trusted channel.
_4.2. Experimental Result_
The simulation environment setup starts with the registration of the researcher entity, named Alice, at the IRB. To establish the connection between the IRB and Alice, the IRB advertises invitation data, Alice delivers the invitation message to the IRB, and the IRB responds with an accept message associated with the invitation. For peer-to-peer communication, Aries Interop Profile (AIP) 2.0 is used. AIP is used to establish a connection between agents, exchange identity certificates, and transmit data through command delivery. After the identity is verified, the exchange of the user's CDM data credential is performed.
After processing the registration information, the IRB sends a unique connection invitation message to Alice, as represented in Figure 10. The connection request message is used to communicate the DID document of the invitee (Alice) to the inviter (IRB). The @type attribute is a required string value that denotes that the received message is a connection request. After receiving the connection request, the IRB evaluates the provided DID and DIDDoc according to the DID Method Spec.
**Figure 10. JSON format of IRB invitation attribute.**
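Since Figure 10 is an image, the payload below sketches what such an invitation looks like, written as a Python dict. The field names follow the Aries connections protocol (RFC 0160); every value shown is an illustrative placeholder rather than the paper's actual data.

```python
# Illustrative IRB connection invitation (all values are placeholders).
irb_invitation = {
    "@type": "did:sov:BzCbsNYhMrjHiqZDTUASHg;spec/connections/1.0/invitation",
    "@id": "3fa85f64-5717-4562-b3fc-2c963f66afa6",
    "label": "IRB",
    "recipientKeys": ["FWRmE5XGxDyneglSBnWtRcTTMhpVDohb2oaYBAGyE9NF"],
    "serviceEndpoint": "http://localhost:8010",  # IRB HTTP port from Table 3
}
```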
When the IRB and researcher agents want to connect with each other, they establish a connection by DIDComm, a series of messages that go back and forth to establish a connection and exchange information. In Figure 11, connection_id is used to send a message between two agents.

**Figure 11. JSON format of the accepted message associated with the invitation.**
In answer to the connection invitation, the IRB issues and offers the researcher a VC, represented in Figure 12 (a segment of the issued credential), to be used to prove his/her identity when connecting to the CDM provider. The VC is issued according to its schema definition in Figure 6. The credential is stored in the wallet of the researcher. The credential is generated based on IRB records including the IRB number, name, affiliation, the status of GCP, etc. GCP stands for good clinical practice; it means that the clinical studies using CDM satisfy the clinical trial management criteria through the IRB.
**Figure 12. Researcher VC offered by the IRB upon registration.**
Similarly, using the CDM provider VC schema, the same setup is performed for the CDM provider. The aim is to identify the CDM providers, so the IRB issues and provides the CDM provider with a VC that allows the researcher to verify the CDM provider. Upon receiving a CDM access request, the CDM provider requires the researcher to present a valid verifiable credential (issued by the IRB), containing GCP in the allowed status, as in Figure 13. In response, the researcher presents a valid VC, with the allowed GCP granting the researcher permission to access CDM data at that CDM provider. As shown in Figure 13, the result of the process is handled by the researcher. The proof from the researcher is validated by the CDM provider. Using AnonCreds, the validation process is based on the GCP attribute in the VC.
**Figure 13. The result of processing the proof by using ZKP.**
Figure 14 shows a proof, which is part of the credential issued by the IRB, provided by the researcher to the IRB and the CDM provider showing that the researcher is qualified. The IRB verifies the qualification with the ZKP method based on the properties of the provided proof. Using the proof, the IRB grants permission for the CDM data request when the attribute value of GCP is greater than 0.
**Figure 14. The proof.**
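To show how the "GCP greater than 0" condition can be phrased as a ZKP predicate, the sketch below writes an AnonCreds-style proof request as a Python dict. The attribute names, nonce, and credential definition restriction are illustrative assumptions, not the paper's exact request.

```python
# Illustrative AnonCreds proof request: the researcher proves GCP >= 1
# (i.e. GCP > 0) without revealing the GCP value itself.
proof_request = {
    "name": "cdm-access-qualification",
    "version": "1.0",
    "nonce": "1234567890",
    "requested_attributes": {
        "attr_irb_number": {"name": "irb_number"},  # disclosed attribute
    },
    "requested_predicates": {
        "pred_gcp": {
            "name": "gcp",
            "p_type": ">=",  # AnonCreds integer predicate
            "p_value": 1,    # satisfied exactly when GCP > 0
            # Restrict to credentials issued under the IRB's credential definition.
            "restrictions": [{"cred_def_id": "<IRB-cred-def-id>"}],
        },
    },
}
```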
_4.3. Discussion_
Privacy, security, and usability: healthcare data are sensitive by nature, and they need maximum security against data breaches and privacy disclosure when exchanging the data, especially after enabling third parties' medical services to interact with the system. Medical data formats such as CDM for joint use have been developed for the participation of multiple hospitals and research institutions, and a stronger response method that is not vulnerable in terms of security is needed. In order to further improve usability, in this paper, reliable cloud CDM research is conducted using DID based on blockchain.

In the construction, operation, and utilization of CDM, the data are generally used only within the computer network of individual hospitals, so that they are maintained at the same security level as general medical information. However, information leakage may occur due to insufficient systems or regulations to take responsibility for information security and to prepare countermeasures in multi-institutional combined research. In addition, although CDM is mainly built on a cloud basis, security for the conversion and de-identification of personal information in the hospital information system is not ensured by a clear solution or system. Instead, CDM is verified by a business procedure in which the programmer and system manager who perform the conversion confirm or pledge not to leak personal information, which is a very weak structure. Therefore, clinical information in hospitals usually has to go through the consent of the patient who is the data subject and approval by the IRB. In addition, there is a restriction that researchers must use medical data only inside the hospital.
Figure 15 shows the flow of access control in cloud CDM. When a manager with authority sends a plaintext inquiry to the CDM (① ②), the access control list and CDM data are transmitted to the cloud CDM (③). The cloud CDM performs the following detailed steps and then sends the encrypted request result and ACL. The user, data approval range, period of use, etc., are subject to IRB review, and if approved (④ ⑤ ⑥), the user finally performs the analysis with the CDM result value (⑦). During this series of processes, data are encrypted, and unauthorized users' access is blocked so that the contents cannot be checked.

**Figure 15. Flow of access control in cloud CDM. ① Search in trust manager. ② Request data from the hospital where the data are available. ③ Request for CDM data, attachment of access control list. ④ Request result, ACL. ⑤ IRB approval (user, data approval range, period of use). ⑥ Approval notice. ⑦ User analysis.**
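The paper does not detail the cipher behind the cipher CDM; purely as an illustration of the encrypt-before-transmission in ③ and the analyze-after-approval in ⑦, a symmetric-key sketch follows, with a shared Fernet key standing in for whatever key establishment the actual secure delivery scheme uses.

```python
from cryptography.fernet import Fernet

# Illustrative only: a shared symmetric key stands in for the key
# establishment of the actual secure delivery scheme.
key = Fernet.generate_key()
cipher = Fernet(key)

def deliver_cipher_cdm(cdm_bytes: bytes) -> bytes:
    """Step 3: the CDM provider encrypts CDM data before sending it onward."""
    return cipher.encrypt(cdm_bytes)

def analyze_after_approval(cipher_cdm: bytes, irb_approved: bool) -> bytes:
    """Step 7: only an IRB-approved user can recover the CDM result value."""
    if not irb_approved:
        raise PermissionError("access blocked: IRB approval missing")
    return cipher.decrypt(cipher_cdm)
```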
**5. Conclusions**

Some businesses, including those that analyze CDM in public health research, which deals with sensitive information, may require a certain level of privacy and security. CDM for data sharing and utilization across medical institutions requires access to various patient medical information. It is used for disease research and customized medical care. Intrinsically, the CDM data are highly sensitive, and they need maximum security against data breaches and privacy disclosure when exchanging data. The cloud CDM provides interoperability for the participation of multiple hospitals and serves as an information-based study for customized and user-centered healthcare. However, reliable management of safe and transparent personal medical information is required.

The proposed cloud CDM applies DID and blockchain technology for the secure access control that occurs when a researcher accesses it. The proposed service model is used to provide the credential of the researcher in the process of creating and accessing the CDM data of the designed secure cloud CDM. It does not consider interaction with existing systems for establishing the initial trust of the entities participating in the cloud CDM, and instead shows that the DID is used as the method for identification.

The prototype is an extension of the delivery of encrypted CDM using DID and describes the identification by limiting the use case to the CDM data of a researcher registered in the cloud CDM. The proposed method aspires to provide a unified and efficient data access control policy management framework. The designed model was verified by applying the ophthalmic CDM data of domestic hospitals. It provides strong security and ensures both the integrity and the availability of CDM data.
**Author Contributions: Conceptualization, Y.B.P. and Y.K.; methodology, Y.B.P.; software, Y.K.; vali-**
dation, Y.K., Y.B.P. and J.C.; formal analysis, Y.K.; investigation, J.C.; resources, Y.B.P.; data curation,
Y.B.P.; writing—original draft preparation, Y.K.; writing—review and editing, J.C.; visualization, Y.K.;
supervision, J.C.; project administration, J.C.; funding acquisition, J.C. All authors have read and
agreed to the published version of the manuscript.
**Funding: This research was funded by Korea Environmental Industry & Technology Institute (KEITI),**
grant number RE202101551 and The APC was funded by Ministry of Environment (ME).
**Institutional Review Board Statement: Not applicable.**
**Informed Consent Statement: Not applicable.**
**Acknowledgments: This work was supported by a Korea Environmental Industry & Technology Institute (KEITI) grant funded by the Korea government (Ministry of Environment), Project No. RE202101551, the development of IoT-based technology for collecting and managing big data on environmental hazards and health effects.**
**Conflicts of Interest: The authors declare no conflict of interest.**
**References**
1. Shivade, C.; Raghavan, P.; Fosler-Lussier, E.; Embi, P.J.; Elhadad, N.; Johnson, S.B.; Lai, A.M. A review of approaches to identifying
[patient phenotype cohorts using electronic health records. J. Am. Med. Inform. Assoc. 2014, 21, 221–230. [CrossRef] [PubMed]](http://doi.org/10.1136/amiajnl-2013-001935)
2. Ferreira, J.C.; Ferreira da Silva, C.; Martins, J.P. Roaming service for electric vehicle charging using blockchain-based digital
[identity. Energies 2021, 14, 1686. [CrossRef]](http://doi.org/10.3390/en14061686)
3. Liu, B.; Yuan, X.-T.; Yu, Y.; Liu, Q.; Metaxas, D. Decentralized Robust Subspace Clustering. Proc. AAAI Conf. Artif. Intell. 2016, 30,
[3539–3545. Available online: https://ojs.aaai.org/index.php/AAAI/article/view/10473 (accessed on 1 July 2021).](https://ojs.aaai.org/index.php/AAAI/article/view/10473)
4. Xia, S.; Zheng, S.; Wang, G.; Gao, X.; Wang, B. Granular ball sampling for noisy label classification or imbalanced classification.
_[IEEE Trans. Neural Netw. Learn. Syst. 2021. [CrossRef]](http://doi.org/10.1109/TNNLS.2021.3105984)_
5. You, S.C.; Lee, S.; Cho, S.Y.; Park, H.; Jung, S.; Cho, J.; Yoon, D.; Park, R.W. Conversion of National Health Insurance Service-National Sample Cohort (NHIS-NSC) database into Observational Medical Outcomes Partnership-Common Data Model (OMOP-CDM). Stud. Health Technol. Inf. 2017, 245, 467–470.
6. Chadwick, D.W. Federated identity management. In Foundations of Security Analysis and Design V; Lect. Notes Comput. Sci. 2009, 5705, 96–120.
7. Jayaraman, I.; Mohammed, M. Secure Privacy Conserving Provable Data Possession (SPC-PDP) framework. Inf. Syst. E-Bus.
_[Manag. 2019, 1–27. [CrossRef]](http://doi.org/10.1007/s10257-019-00417-8)_
8. Xiong, L.; Li, F.G.; Zeng, S.K.; Peng, T.; Liu, Z.C. A Blockchain-based privacy-awareness authentication scheme with efficient
[revocation for multi-server architectures. IEEE Access 2019, 7, 125840–125853. [CrossRef]](http://doi.org/10.1109/ACCESS.2019.2939368)
9. Cho, J.H.; Kang, Y.; Park, Y.B. Secure delivery scheme of common data model for decentralized cloud platforms. Appl. Sci. 2020,
_[10, 7134. [CrossRef]](http://doi.org/10.3390/app10207134)_
10. Pãnescu, A.T.; Manta, V. Smart contracts for research data rights management over the ethereum blockchain network. Sci. Technol.
_[Libr. 2018, 37, 235–245. [CrossRef]](http://doi.org/10.1080/0194262X.2018.1474838)_
11. Androulaki, E.; Barger, A.; Bortnikov, V.; Cachin, C.; Christidis, K.; de Caro, A.; Enyeart, D.; Ferris, C.; Laventman, G.; Manevich,
Y.; et al. Hyperledger fabric: A distributed operating system for permissioned blockchains. In Proceedings of the Thirteenth EuroSys
_Conference, Porto, Portugal, 23–26 April 2018; ACM: New York, NY, USA, 2018; p. 30._
12. Dagher, G.G.; Mohler, J.; Milojkovic, M.; Marella, P.B. Ancile: Privacy-preserving framework for access control and interoperability
[of electronic health records using blockchain technology. Sustain. Cities Soc. 2018, 39, 283–297. [CrossRef]](http://doi.org/10.1016/j.scs.2018.02.014)
13. Silberschatz, A.; Korth, H.F.; Sudarshan, S. Database System Concepts; McGraw-Hill: New York, NY, USA, 1997.
14. [Hyperledger/Aries-Cloudagent-Python. Available online: https://github.com/hyperledger/aries-cloudagent-python (accessed](https://github.com/hyperledger/aries-cloudagent-python)
on 1 April 2021).
15. Reed, D.; Sporny, M.; Longley, D.; Allen, C.; Grant, R.; Sabadell, M. Decentralized Identifiers (DIDs) v1.0—Core Architecture, Data
Model, and Representations. IT Security and Privacy—A Framework for Identity Management (ISO/IEC 24760-1). Available
[online: https://www.w3.org/TR/did-core/ (accessed on 1 March 2021).](https://www.w3.org/TR/did-core/)
16. Blumenthal, D.; Tavenner, M. The “meaningful use” regulation for electronic health records. N. Engl. J. Med. 2010, 363, 501–504.
[[CrossRef]](http://doi.org/10.1056/NEJMp1006114)
17. Jensen, P.B.; Jensen, L.J.; Brunak, S. Mining electronic health records: Towards better research applications and clinical care. Nat.
_[Rev. Genet. 2012, 13, 395–405. [CrossRef] [PubMed]](http://doi.org/10.1038/nrg3208)_
18. Glicksberg, B.S.; Oskotsky, B.; Giangreco, N.; Thangaraj, P.M.; Rudrapatna, V.; Datta, D.; Butte, A.J. ROMOP: A light-weight R
[package for interfacing with OMOP-formatted electronic health record data. JAMIA Open 2019, 2, 10–14. [CrossRef] [PubMed]](http://doi.org/10.1093/jamiaopen/ooy059)
19. Reps, J.M.; Schuemie, M.J.; Suchard, M.A.; Ryan, P.B.; Rijnbeek, P.R. Design and implementation of a standardized framework
to generate and evaluate patient-level prediction models using observational healthcare data. J. Am. Med. Inf. Assoc. 2018, 25,
[969–975. [CrossRef] [PubMed]](http://doi.org/10.1093/jamia/ocy032)
20. Voss, E.A.; Makadia, R.; Matcho, A.; Ma, Q.; Knoll, C.; Schuemie, M.; Ryan, P.B. Feasibility and utility of applications of the
[common data model to multiple, disparate observational health databases. J. Am. Med. Inf. Assoc. 2015, 22, 553–564. [CrossRef]](http://doi.org/10.1093/jamia/ocu023)
21. Garza, M.; Del Fiol, G.; Tenenbaum, J.; Walden, A.; Zozus, M.N. Evaluating common data models for use with a longitudinal
[community registry. J. Biomed. Inform. 2016, 64, 333–341. [CrossRef]](http://doi.org/10.1016/j.jbi.2016.10.016)
22. Hripcsak, G.; Duke, J.D.; Shah, N.H.; Reich, C.G.; Huser, V.; Schuemie, M.J.; Ryan, P.B. Observational Health Data Sciences and
Informatics (OHDSI): Opportunities for observational researchers. Stud. Health Technol. Inform. 2015, 216, 574.
23. Yoon, D.; Ahn, E.K.; Park, M.Y.; Cho, S.Y.; Ryan, P.; Schuemie, M.J.; Park, R.W. Conversion and data quality assessment of
electronic health record data at a Korean tertiary teaching hospital to a common data model for distributed network research.
_[Healthc. Inform. Res. 2016, 22, 54–58. [CrossRef] [PubMed]](http://doi.org/10.4258/hir.2016.22.1.54)_
24. Nakamoto, S. Bitcoin: A peer-to-peer electronic cash system. Decent. Bus. Rev. 2008, 21260–21268.
25. Alamri, B.; Javed, I.T.; Margaria, T. A GDPR-compliant framework for IoT-based personal health records using blockchain. In
Proceedings of the 2021 11th IFIP International Conference on New Technologies, Mobility and Security (NTMS), Paris, France,
19–21 April 2021; pp. 1–5.
26. [Simply Vital Health. Available online: https://www.simplyvitalhealth.com/ (accessed on 29 December 2018).](https://www.simplyvitalhealth.com/)
27. Roehrs, A.; da Costa, C.A.; da Rosa Righi, R. OmniPHR: A distributed architecture model to integrate personal health records. J.
_[Biomed. Inform. 2017, 71, 70–81. [CrossRef]](http://doi.org/10.1016/j.jbi.2017.05.012)_
28. Landau, S.; Le van Gong, H.; Wilton, R. Achieving privacy in a federated identity management system. In Financial Cryptography
_and Data; Dingledine, R., Golle, P., Eds.; Security 2009. Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany,_
[2009; Volume 5628. [CrossRef]](http://doi.org/10.1007/978-3-642-03549-4_4)
29. [Allen, C. The Path to Self-Sovereign Identity. Life with Alacrity. Available online: http://www.lifewithalacrity.com/2016/04/the-](http://www.lifewithalacrity.com/2016/04/the-path-to-self-soverereign-identity.html)
[path-to-self-soverereign-identity.html (accessed on 1 July 2021).](http://www.lifewithalacrity.com/2016/04/the-path-to-self-soverereign-identity.html)
30. Hardjono, T.; Pentland, A. Verifiable anonymous identities and access control in permissioned blockchains. _arXiv 2019,_
arXiv:1903.04584.
31. Shrestha, A.K.; Vassileva, J. Blockchain-based research data sharing framework for incentivizing the data owners. In Proceedings
of the International Conference on Blockchain, Seattle, WA, USA, 25–30 June 2018; Lecture Notes in Computer Science. Springer:
Berlin/Heidelberg, Germany, 2018; Volume 10974, pp. 259–266.
32. Augot, D.; Chabanne, H.; Chenevier, T.; George, W.; Lambert, L.; Augot, D.; Chabanne, H.; Chenevier, T.; George, W.; Lambert, L.
A user-centric system for verified identities on the Bitcoin blockchain. In Data Privacy Management, Cryptocurrencies and Blockchain
_Technology; Springer: Oslo, Norway, 2017; Volume 10436, pp. 390–407._
33. Halpin, H. NEXTLEAP: Decentralizing identity with privacy for secure messaging. In Proceedings of the 12th International
Conference on Availability, Reliability and Security, Reggio Calabria, Italy, 29 August–1 September 2017; pp. 1–10.
34. Babkin, S.; Epishkina, A. Authentication protocols based on one-time passwords. In Proceedings of the 2019 IEEE Conference of
Russian Young Researchers in Electrical and Electronic Engineering (EIConRus), Saint Petersburg, Russia, 28–31 January 2019;
pp. 1794–1798.
35. [Zhang, R.; Xue, R.; Liu, L. Security and privacy on blockchain. ACM Comput. Surv. 2019, 52, 1–34. [CrossRef]](http://doi.org/10.1145/3316481)
36. [Taking the Sovrin Foundation to a Higher Level: Introducing SSI as a Universal Service. Available online: https://sovrin.org/](https://sovrin.org/taking-the-sovrin-foundation-to-a-higher-level-introducing-ssi-as-a-universal-service/)
[taking-the-sovrin-foundation-to-a-higher-level-introducing-ssi-as-a-universal-service/ (accessed on 10 August 2020).](https://sovrin.org/taking-the-sovrin-foundation-to-a-higher-level-introducing-ssi-as-a-universal-service/)
37. Meralli, S. Privacy-preserving analytics for the securitization market: A zero-knowledge distributed ledger technology application.
_[Financ. Innov. 2020, 6, 1–20. [CrossRef]](http://doi.org/10.1186/s40854-020-0172-y)_
## Federated Learning without Full Labels: A Survey
#### Yilun Jin[†] Yang Liu[‡] Kai Chen[†] Qiang Yang[†]
#### † Department of CSE, HKUST, Hong Kong, China
[email protected], {qyang,kaichen}@cse.ust.hk
#### ‡ Institute for AI Industry Research, Tsinghua University, Beijing, China
[email protected]
Abstract
Data privacy has become an increasingly important concern in real-world big data applications such
as machine learning. To address the problem, federated learning (FL) has been a promising solution
to building effective machine learning models from decentralized and private data. Existing federated
learning algorithms mainly tackle the supervised learning problem, where data are assumed to be fully
labeled. However, in practice, fully labeled data is often hard to obtain, as the participants may not have
sufficient domain expertise, or they lack the motivation and tools to label data. Therefore, the problem
of federated learning without full labels is important in real-world FL applications. In this paper, we
discuss how the problem can be solved with machine learning techniques that leverage unlabeled data.
We present a survey of methods that combine FL with semi-supervised learning, self-supervised learning,
and transfer learning methods. We also summarize the datasets used to evaluate FL methods without
full labels. Finally, we highlight future directions in the context of FL without full labels.
### 1 Introduction
Deep learning (DL) algorithms have achieved great success in the past decade. Powered by large-scale data
such as ImageNet [1], ActivityNet [2], BookCorpus [3], and WikiText [4], deep learning models have been
successfully applied to image classification [5], object detection [6], and natural language understanding [7].
However, the success of DL relies on large-scale, high-quality data, which is not always available in practice for
two reasons. On one hand, collecting and labeling data is costly, making it difficult for a single organization to
accumulate and store large-scale data. On the other hand, it is also infeasible to share data across organizations
to build large-scale datasets, as doing so leads to potential leakage of data privacy. In recent years, a series of
laws and regulations have been enacted, such as the General Data Protection Regulation (GDPR) [8] and the
California Consumer Privacy Act (CCPA) [9], imposing constraints on data sharing. Therefore, how to jointly
leverage the knowledge encoded in decentralized data while protecting data privacy becomes a critical problem.
Federated Learning (FL) [10, 11] is a promising solution to the problem and has received great attention from
both the industry and the research community. The key idea of FL is that participants (also known as clients or
parties) exchange intermediate results, such as model parameters and gradients, instead of raw data, to jointly
train machine learning models. As the raw data never leave their owners during model training, FL becomes an
attractive privacy-preserving solution to the problem of decentralized machine learning. Up to now, a plethora
of FL techniques has been proposed, focusing primarily on addressing the issues of data heterogeneity [13, 15],
system heterogeneity [14, 17], data privacy and security [16, 18], and communication efficiency [12, 19].
Despite the significant research efforts, there is still one important yet under-explored topic in FL, which is
how to effectively leverage unlabeled data to learn better federated models. In existing efforts of FL [12, 13, 14],
it is assumed that all data held by all participants are fully labeled, and that a supervised learning problem is to
be solved. However, the assumption may not hold in practice for two reasons. First, participants may not be
sufficiently motivated to label their data. For example, suppose a sentiment classification model is to be trained
with FL, smartphone users would be unwilling to spend time and effort to label all sentences typed in the phone.
Second, participants may not have sufficient expertise to label their data. For example, wearable devices record
various data (e.g. heart rate, breath rate, etc.) about the user’s physical conditions, labeling which would require
domain expertise in medical science and cannot be done by ordinary users. Based on the above observations, we
argue that unlabeled data widely exist in real-world FL applications, and that the problem of Federated Learning
without Full Labels is an important problem to study.
There are generally three learning paradigms in centralized machine learning (ML) that tackle the problem
of learning without full labels, semi-supervised learning [20, 21], self-supervised learning [24, 23], and transfer
learning [22], all of which have drawn much attention from researchers. Among them, semi-supervised learning
aims to leverage unlabeled data to assist the limited labeled data [25, 26, 27]. Self-supervised learning aims to
learn indicative feature representations from unlabeled data, which are then used to assist downstream supervised
learning tasks [28, 29, 30]. Transfer learning aims to use sufficient data from a source domain to assist learning
in a target domain with insufficient data [31, 32, 33], where the target domain commonly contains unlabeled
data. However, despite the large number of existing works in these areas, it is not straightforward to apply them
in FL due to the following challenges.
- Isolation of labeled and unlabeled data. In traditional semi-supervised learning and transfer learning,
the server has access to both labeled and unlabeled data. However, in FL without full labels, it is common
for a participant to have unlabeled data only. For example, a medical institute may not have the expertise
to diagnose a complex illness, leaving all its data unlabeled. Moreover, it is not allowed in FL to exchange
labeled data to solve the problem. The isolation of labeled and unlabeled data may compromise the overall
performance. As observed in [34, 35], training with only unlabeled data leads to forgetting the knowledge
learned from labeled data, which negatively impacts the overall performance. Therefore, it is important to
bridge the knowledge between labeled and unlabeled data, without data exchange.
- Privacy of labeled data. In the problem of FL without full labels, the amount of labeled data is often limited. Therefore, participants have to repetitively access and exchange information about the labeled data to exploit the knowledge in the labels. This leads to risks of privacy leakage of the labeled data. For example, semi-honest participants can learn to reconstruct the labeled data via gradient inversion attacks [36].
- Data heterogeneity. Data heterogeneity, i.e. the local data held by different participants have different
data distributions, is an important property in FL that causes accuracy degradation [13, 14]. Similarly,
data heterogeneity also poses challenges in the problem of FL without full labels. For example, as the
number of labeled data is limited, local models tend to overfit the local data more easily, which causes a
greater amount of weight divergence [37] and performance degradation.
- Balancing performance and efficiency. The large-scale unlabeled data in the problem creates a tradeoff between performance and efficiency. Specifically, while large-scale unlabeled data are available for training, their impact on the model performance may be marginal, and the overall efficiency can be improved by sampling a fraction of the unlabeled data without compromising model performance.
In this paper, we present a survey of the problem of FL without full labels and its existing solutions. The
rest of the paper is organized as follows. Section 2 presents necessary backgrounds about FL as well as machine
learning paradigms without full labels, including semi-supervised learning, self-supervised learning, and transfer
learning. Sections 3, 4, 5 then review methods on federated semi-supervised learning, federated self-supervised
learning, and federated transfer learning, respectively. Section 6 summarizes the datasets used for evaluating FL
methods without full labels. Section 7 analyzes the similarities and differences between our work and related
surveys. Finally, Section 8 presents an outlook on potential directions in the context of FL without full labels.
### 2 Preliminaries
In this section, we formally introduce backgrounds about FL, as well as the machine learning paradigms leveraging unlabeled data: semi-supervised learning, self-supervised learning, and transfer learning.
#### 2.1 Federated Learning (FL)
Federated Learning aims to virtually unify decentralized data held by different participants to train machine
learning models while protecting data privacy. Depending on how the data is split across participants, FL can be
divided into horizontal federated learning (HFL) and vertical federated learning (VFL) [10]. In HFL, participants
own data with the same feature space (e.g. participants own image data from different users), while in VFL,
participants own data with the same user space but different feature spaces (e.g. a financial institute owns
transaction records of a user, while an e-commerce corporation owns purchase records). In this paper, following
the majority of existing research efforts, we primarily focus on HFL[1], i.e. all participants share the same feature
space. Formally, we consider an FL scenario with C participants, denoted as 1, . . ., C. Each participant i owns a
dataset Di = {Xij, yij}[N]j=1[i] [, where][ N][i][ =][ |D][i][|][ is the number of data held by participant][ i][, and][ X][ij][, y][ij][ denote the]
features and the label of the j-th sample from client i, respectively. We use pi(X, y), pi(X), pi(y|X) to denote
the joint distribution, marginal distribution, and conditional distribution of client i, respectively. Denoting the
model parameters as θ ∈ R[d], the overall optimization objective of FL is as follows,
Ni
�
l (Xij, yij; θ), s.t. Mp(θ) < εp, (1)
j=1
min ffl(θ) = [1]
θ C
C
�
ffl,i(θ), where ffl,i(θ) = [1]
Ni
i=1
where ffl,i(θ) is the local optimization objective of participant i, and l(X, y; θ) is a loss function, such as
the cross-entropy loss for classification problems. In addition, Mp(θ) denotes a metric measuring the privacy
leakage of θ (e.g. the budget in differential privacy (DP) [72]), and εp is a privacy constraint.
The training process of FL generally involves multiple communication rounds, each of which contains two steps: local training and server aggregation.
- In the local training stage, a subset of all participants is selected. They are given the latest global model
and will train the model with their local data for several epochs.
- In the server aggregation stage, participants upload their updated parameters to the server. The server aggregates received parameters via weighted averaging to obtain the global model for the next round (a minimal sketch of both steps is given below).
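The following is our minimal FedAvg-style sketch of these two steps, not any specific system: plain NumPy, with a least-squares toy problem standing in for the generic local loss of Eqn. (1), and weighted averaging for the aggregation.

```python
import numpy as np

def local_training(theta, X, y, lr=0.1, epochs=5):
    """Local training: a participant refines the global model on its own data.
    A least-squares loss stands in for the generic loss l(X, y; theta)."""
    theta = theta.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ theta - y) / len(y)
        theta -= lr * grad
    return theta

def server_aggregation(local_thetas, num_samples):
    """Server aggregation: weighted average of the uploaded parameters."""
    weights = np.array(num_samples) / sum(num_samples)
    return sum(w * t for w, t in zip(weights, local_thetas))

# A few communication rounds over C = 3 simulated participants.
rng = np.random.default_rng(0)
datasets = [(rng.normal(size=(50, 4)), rng.normal(size=50)) for _ in range(3)]
theta = np.zeros(4)
for _ in range(10):
    updates = [local_training(theta, X, y) for X, y in datasets]
    theta = server_aggregation(updates, [len(y) for _, y in datasets])
```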
Depending on the properties of participants, FL can be categorized into cross-device FL and cross-silo FL
[11]. Participants of cross-device FL are commonly smart devices (e.g. phones, sensors, wearables) connected
with wireless networks, while participants of cross-silo FL are commonly large organizations with connected
datacenters, implying the following differences:
1Unless otherwise specified, we will use FL to refer to HFL throughout this paper.
- Computation/communication capability. Participants in cross-device FL commonly have limited computation (e.g. small memory, limited power supply) and communication capability (e.g. wireless network).
- Stability. Participants in cross-device FL are not stable and may drop out due to network breakdown.
- Participant states. In general, participants in cross-device FL cannot carry state vectors, in that they may
only participate in one round of FL, and then drop out indefinitely.
#### 2.2 Machine Learning with Unlabeled Data
2.2.1 Semi-supervised Learning
In semi-supervised learning, there are two datasets, a labeled dataset $L = \{X_j, y_j\}_{j=1}^{|L|}$ and an unlabeled dataset $U = \{X_k\}_{k=1}^{|U|}$, with $|L| \ll |U|$. In addition, the marginal distributions of $L, U$ are the same, i.e. $p_L(X) = p_U(X)$. The goal of semi-supervised learning, involving both labeled and unlabeled data, is as follows,

$$\min_\theta f_{semi}(\theta) = \frac{1}{|L|} \sum_{j=1}^{|L|} l_s(X_j, y_j; \theta) + \frac{1}{|U|} \sum_{k=1}^{|U|} l_u(X_k; \theta), \tag{2}$$

where $l_s, l_u$ denote the losses for labeled (supervised) and unlabeled data, respectively.
We then introduce some widely adopted techniques in semi-supervised learning.
Pseudo-Labeling [79]. Pseudo-labeling is a simple but effective trick for semi-supervised learning. Specifically, for each unlabeled data sample, its pseudo-label is taken as the class with the highest predicted probability,

$$\hat{y}_k = \arg\max_c g_\theta(X_k)_c, \tag{3}$$

where $g_\theta$ is the model with parameter $\theta$, and $g_\theta(X_k)_c$ denotes the predicted probability of class $c$ for $X_k$. There is often a confidence threshold $\tau$, such that pseudo-labels are only taken on confident samples with $\max_c g_\theta(X_k)_c > \tau$. After that, the pseudo-labels are used to supervise learning on unlabeled data, i.e.

$$l_u(X_k; \theta) = l_s(X_k, \hat{y}_k; \theta). \tag{4}$$
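A minimal NumPy sketch of Eqns. (3)-(4); the threshold value and the single-forward-pass setup are illustrative simplifications (in practice the pseudo-label often comes from a separate, weakly augmented view).

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def pseudo_label_loss(logits_u, tau=0.95):
    """Cross-entropy on unlabeled data against their own confident predictions."""
    probs = softmax(logits_u)        # g_theta(X_k) over classes
    conf = probs.max(axis=1)         # confidence of the top prediction
    y_hat = probs.argmax(axis=1)     # Eqn. (3): pseudo-labels
    mask = conf > tau                # keep confident samples only
    if not mask.any():
        return 0.0
    # Eqn. (4): supervised loss with pseudo-labels as targets.
    return -np.log(probs[mask, y_hat[mask]]).mean()
```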
Teacher-student Models [25]. Teacher-student models in semi-supervised learning leverage two networks, a teacher model $\theta_{tea}$ and a student model $\theta_{stu}$. On one hand, the student model is trained to be consistent with the teacher model to enhance its robustness,

$$l_u(X_k; \theta) = d\left(g_{\theta_{stu}}(X_k), g_{\theta_{tea}}(X_k)\right), \tag{5}$$

where $d(\cdot, \cdot)$ is a distance metric. On the other hand, the teacher model is updated with a moving average (parameterized by $\alpha$) over the student model after each iteration,

$$\theta_{tea} \leftarrow (1 - \alpha)\theta_{tea} + \alpha\theta_{stu}. \tag{6}$$
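A minimal sketch of the moving-average update in Eqn. 6 follows, assuming the parameters are stored as a dict of numpy arrays:

```python
def update_teacher(theta_tea, theta_stu, alpha=0.01):
    """Eqn. 6: theta_tea <- (1 - alpha) * theta_tea + alpha * theta_stu, per tensor."""
    return {name: (1.0 - alpha) * theta_tea[name] + alpha * theta_stu[name]
            for name in theta_tea}
```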
2.2.2 Self-supervised Learning
Self-supervised learning aims to learn good feature representations from unlabeled data to facilitate downstream
machine learning tasks. There are in general two ways to perform self-supervised learning, generative learning,
and contrastive learning [24]. Generative learning trains the model to reconstruct the original data X from
masked data to learn the internal semantics within X, while contrastive learning trains the model to distinguish between 'positive' and 'negative' samples.

| Machine Learning Paradigm | Train-test i.i.d. (assumption) | Labeled data (assumption) | Application | Limitations |
|---|---|---|---|---|
| Supervised Learning | ✓ | ✓ | Sufficient labeled data | Labeled data is hard to obtain. |
| Semi-supervised Learning | ✓ | Insufficient | A few labeled data + large-scale unlabeled data | Labeled and unlabeled data should have the same distribution. |
| Self-supervised Learning | ✓ | × | Large-scale unlabeled data | Cannot directly perform supervised tasks. |
| Transfer Learning | × | From another domain | Unlabeled data + labeled data from another domain | Hard to select a helpful source domain. Potential negative transfer. |

Table 1: A comparison between supervised learning, semi-supervised learning, self-supervised learning, and transfer learning.

In this survey, we primarily focus on contrastive learning, whose
objective is given as follows.
$$\min_{\theta} f_{ctr}(\theta) = \sum_{X \in U} \left[ d\left(g_\theta(X), g_\theta(X_+)\right) - \lambda \cdot d\left(g_\theta(X), g_\theta(X_-)\right) \right], \tag{7}$$
where U is the unlabeled dataset, gθ is a neural network parameterized by θ, X+, X− are positive and negative
samples sampled for data X, λ is the weight for negative samples, and d(·, ·) is a distance metric. By minimizing
fctr, the model gθ learns to minimize the distance between positive samples in the feature space, while maximizing the distance between negative ones. Some representative contrastive learning methods include SimCLR
[45], MoCo [46], BYOL [48], and SimSiam [47]. We briefly explain their similarities and differences.
Similarities. All four methods employ a Siamese structure – two networks with the same architecture. One
of them is called the online network θo and the other is called the target network θtar. The main difference is
that the online network is directly updated via gradient descent, while the target network is generally not.
Differences. The differences between existing self-supervised learning methods are generally three-fold.
1. Architecture. In SimCLR and MoCo, the online and the target networks have the same architecture. On the contrary, for SimSiam and BYOL, the online network contains an additional predictor, i.e. $\theta_o = (\theta_o^f, \theta_o^p)$. The predictor aims to transform features between different views, enabling additional diversity.
2. Target Network Parameter. For SimCLR and SimSiam, the target network shares the same parameters
as the online network θo = θtar, while for BYOL and MoCo, the target network is updated with an
exponential moving average similar to Eqn. 6.
3. Negative Samples. On one hand, SimCLR and MoCo require negative samples X−. MoCo generates
negative samples from previous batches, while SimCLR takes all other samples in the same batch as
negative samples. On the other hand, SimSiam and BYOL do not require negative samples (i.e. λ = 0).
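To make the objective in Eqn. 7 concrete, the following is a minimal numpy sketch of the per-sample contrastive loss with SimCLR-style in-batch negatives; the cosine-based distance and the default λ are illustrative assumptions rather than any particular method's exact choice.

```python
import numpy as np

def cosine_distance(a, b):
    """d(., .): 1 - cosine similarity between two feature vectors."""
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def contrastive_loss(z, z_pos, z_negs, lam=1.0):
    """Eqn. 7 for one anchor: d(z, z+) - lam * mean over negatives of d(z, z-)."""
    neg_term = np.mean([cosine_distance(z, zn) for zn in z_negs])
    return cosine_distance(z, z_pos) - lam * neg_term
```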
2.2.3 Transfer Learning
Both semi-supervised learning and self-supervised learning assume that the training and test data are independent and identically distributed (i.i.d.), regardless of whether labels are present. However, transfer learning [22] does not require this assumption. Specifically, transfer learning deals with multiple data distributions (also called domains) $p_i(X, y)$, $i = 1, 2, \ldots, T$, where the model is trained on one, and tested on another. Without loss of generality, we assume that $T = 2$. We denote $L_1 = \{X_{1i}, y_{1i}\}_{i=1}^{|L_1|} \sim p_1(X, y)$ as the source dataset, and $U_2 = \{X_{2j}\}_{j=1}^{|U_2|} \sim p_2(X)$ as the target dataset. The overall goal is to minimize the error on the target dataset. However, as there are no labeled target data, we resort to the abundant source data to learn a model that
generalizes well to the target dataset. A commonly studied optimization objective is as follows,
$$\min_{\theta_f, \theta_c} \underbrace{\sum_{i=1}^{|L_1|} l_s(X_{1i}, y_{1i}; \theta_f, \theta_c)}_{f_{cls}(L_1; \theta_f, \theta_c)} + \lambda \cdot \underbrace{d\left(g_{\theta_f}(L_1), g_{\theta_f}(U_2)\right)}_{f_{dom}(L_1, U_2; \theta_f)}, \tag{8}$$
where $\theta_f$, $\theta_c$ are parameters of the feature extractor and the classifier, respectively, $d(\cdot, \cdot)$ is a distance metric, $g_{\theta_f}(L_1) = \{g_{\theta_f}(X_{1i})\}_{i=1}^{|L_1|}$ denotes the set of source features extracted by $\theta_f$, and $f_{cls}$, $f_{dom}$ denote the classifier
loss on the source domain and the domain distance between domains, respectively. Intuitively, Eqn. 8 aims to
minimize the classification error on the source domain, while minimizing the distance between source domain
features and target domain features. In this way, the feature extractor θf is considered to extract domain-invariant
features, and the classifier can be reused in the target domain. Commonly used distance metrics d(·, ·) include
L2 distance, maximum mean discrepancy (MMD) [32] and adversarial domain discriminator [33].
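As an example of such a distance, the following is a minimal numpy sketch of the (biased, Gaussian-kernel) MMD between source and target feature matrices; the kernel bandwidth is an illustrative assumption.

```python
import numpy as np

def mmd(zs, zt, gamma=1.0):
    """Biased estimate of MMD^2 between feature sets zs (source) and zt (target)."""
    def rbf(a, b):  # kernel matrix k(a_i, b_j) = exp(-gamma * ||a_i - b_j||^2)
        sq_dists = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * sq_dists)
    return rbf(zs, zs).mean() + rbf(zt, zt).mean() - 2.0 * rbf(zs, zt).mean()
```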
In addition, if an additional labeled target dataset L2 is available, θf, θc can be further fine-tuned with L2.
Transfer learning can generally be categorized into homogeneous transfer learning and heterogeneous transfer learning [22]. Homogeneous transfer learning assumes that domains share the same feature and label space,
while heterogeneous transfer learning does not make such an assumption. For example, consider a movie recommender system that would like to borrow relevant knowledge from a book recommender system. If both
systems rely on text reviews and ratings for recommendation, then a homogeneous transfer learning problem is to be solved, with the shared feature space being the texts, and the shared label space being the ratings. However, if the
movie recommender wants to leverage additional video clips, then the problem becomes a heterogeneous transfer learning problem, as the book recommender does not have video features. Heterogeneous transfer learning
generally requires explicit cross-domain links to better bridge heterogeneous features and labels. For example,
a novel and its related movie products should have similar feature representations.
2.2.4 Summary and Discussion
We summarize the three learning paradigms involving unlabeled data in Table 1. As shown, supervised learning
has two key assumptions, the i.i.d. property between training and test data, and sufficient labeled data. Therefore,
supervised learning is not applicable when either the labeled data is insufficient, or the training and test data come
from different distributions. To address the drawback, semi-supervised learning, self-supervised learning, and
transfer learning are proposed to relax the two key assumptions.
- Semi-supervised learning relaxes the assumption of sufficient labeled data. With limited labeled data,
semi-supervised learning aims to exploit large-scale unlabeled data that have the same distribution as
labeled data with techniques such as pseudo-labeling or teacher-student models. The main limitation
of semi-supervised learning is the difficulty of obtaining i.i.d. unlabeled data. For example, for the task
of medical imaging, the images taken from multiple hospitals may follow different distributions due to
device differences, demographic shifts, etc.
- Self-supervised learning further relaxes the assumption of labeled data. It aims to learn meaningful feature
representations from the internal structures of unlabeled data, such as patches, rotations, and coloring in
images. The main limitation of self-supervised learning is that, although it does not require labels to learn feature representations, the learned representations cannot be directly used to perform supervised tasks (e.g. classification).
- Transfer learning further relaxes the assumption of i.i.d. train and test data. Given unlabeled data in a
domain, it aims to learn from a different but related domain with sufficient labeled data, and to transfer
helpful knowledge to the unlabeled data. The main limitation of transfer learning is that it commonly requires trial and error to select an adequate source domain. When inadequate source domains are chosen, negative transfer [83] may happen, which compromises model accuracy.
### 3 Federated Semi-supervised Learning
In this section, we present an overview of federated semi-supervised learning, whose main goal is to jointly use
both labeled and unlabeled data owned by participants to improve FL. Before introducing detailed techniques,
we first categorize federated semi-supervised learning into two settings following [35]:
- Label-at-client, where the labeled data are located at the clients, while the server only has access to
unlabeled data. For example, when a company would like to train an FL model for object detection
using images taken from smartphones, the company has no access to the local data of users, and labeling
can only be done by users. However, users are generally unwilling to label every picture taken from
their smartphones, creating a label-at-client setting for federated semi-supervised learning. Formally, the
objective function of this setting is as follows,
$$\min_{\theta} \frac{1}{C}\sum_{i=1}^{C} f_{semi,i}(\theta), \quad \text{s.t. } M_p(\theta) < \varepsilon_p, \tag{9}$$
where fsemi,i(θ) denotes the semi-supervised learning loss (Eqn. 2) evaluated on the dataset of participant
i, and Mp, εp follow Eqn. 1.
- Label-at-server, where the labeled data are located at the server, while clients have only unlabeled data.
For example, consider a company of wearable devices that would like to train a health condition monitoring model with FL. In this case, users generally do not have the expertise to label data related to health
conditions, leaving the data at clients unlabeled. The objective can be similarly formulated as
$$\min_{\theta} \frac{1}{|L|}\sum_{j=1}^{|L|} l_s(X_j, y_j; \theta) + \frac{1}{C}\sum_{i=1}^{C} \frac{1}{|U_i|}\sum_{k=1}^{|U_i|} l_u(X_{ik}; \theta), \quad \text{s.t. } M_p(\theta) < \varepsilon_p. \tag{10}$$
Methods for each federated semi-supervised learning setting are discussed in the following sections. We
also summarize existing methods in Table 2.
#### 3.1 The Label-at-client Setting
The label-at-client setting of federated semi-supervised learning is similar to conventional FL (Eqn. 1), in that
clients can train local models with their labeled data, and the updated parameters are aggregated by the server.
Therefore, the label-at-client setting inherits the challenges of data heterogeneity, data privacy, and efficiency
tradeoff from conventional FL. In addition, some clients may not have labeled data to train their local models,
causing the label isolation problem. We introduce how existing works address these problems in this section.
RSCFed [38] primarily focuses on the label isolation problem and the data heterogeneity problem in federated semi-supervised learning. For local training, the teacher-student model (introduced in Section 2.2.1) is
adopted for training on unlabeled data. To further address the data heterogeneity problem, RSCFed proposes
a sub-consensus sampling method and a distance-weighted aggregation method. In each round, several sub-consensus models are aggregated by independently sampling multiple subsets of all participants, such that each sub-consensus model is expected to contain participants with labeled data. Moreover, the local models are weighted according to their distance to the sub-consensus models, such that deviating models receive low weights and their impacts are minimized.

| Setting | Method | Label Isolation | Data Privacy | Data Heterogeneity | Efficiency Tradeoff |
|---|---|---|---|---|---|
| Label-at-client | RSCFed [38] | Teacher-student model | × | Sub-consensus models & distance-weighted aggregation | × |
| Label-at-client | FedSSL [39] | Pseudo-labeling | Differential privacy (DP) | Global generative model | × |
| Label-at-client | FedMatch [35] | Pseudo-labeling | × | Inter-client consistency | Disjoint & sparse learning |
| Label-at-client | FedPU [41] | Negative labels from other clients | × | × | × |
| Label-at-client | AdaFedSemi [40] | Pseudo-labeling | × | × | Tuning confidence threshold and participation rate |
| Label-at-client | DS-FL [42] | Ensemble pseudo-labeling | × | Entropy reduction averaging | Transmit logits, not parameters |
| Label-at-server | SemiFL [34] | Alternate training & pseudo-labeling | × | × | × |
| Label-at-server | FedMatch [35] | Pseudo-labeling & disjoint learning | × | Inter-client consistency loss | Disjoint & sparse learning |

Table 2: Summary of techniques for federated semi-supervised learning. × indicates that the proposed method does not focus on this issue.
FedSSL [39] tackles the label isolation problem, the data privacy problem, and the data heterogeneity problem. To facilitate local training of unlabeled clients, FedSSL leverages the technique of pseudo-labeling. Further,
to tackle the data heterogeneity problem, FedSSL learns a global generative model to generate data from a unified feature space, such that the data heterogeneity is mitigated by the generated data. Finally, to prevent privacy
leakage caused by the generative model, FedSSL leverages differential privacy (DP) to limit the information
leakage of the training data in the generative model.
FedMatch [35] proposes an inter-client consistency loss to address the data heterogeneity problem. Specifically, top-k nearest clients are sampled for each client, and on each data sample, the output of the local model is
regularized with those of the top-k client models to ensure consistency. In addition, FedMatch proposes disjoint
learning that splits the parameters for labeled and unlabeled data, and the parameters for unlabeled data are
sparse. Upon updates, clients with only unlabeled data upload sparse tensors, reducing the communication cost.
FedPU [41] studies a more challenging setting within semi-supervised learning, positive and unlabeled learning, in which each client has only labels in a subset of classes. In this setting, a client has only information about
a part of all classes, leading to a severe label isolation problem. To tackle the problem, FedPU derives a novel
objective function, such that the task of learning the negative classes of a client is delegated to other clients who
have labeled data in the negative class. In this way, each client is only responsible for learning the positive
classes and can do local training by itself. Empirically, the proposed FedPU outperforms FedMatch [35] in the
positive-and-unlabeled learning setting.
AdaFedSemi [40] proposes a system to achieve the tradeoff between efficiency and model accuracy in federated semi-supervised learning with server-side unlabeled data. For every round, the model is trained with
labeled data at clients and aggregated at the server. The server-side unlabeled data are incorporated into the
training process via pseudo-labeling. AdaFedSemi [40] identifies two key parameters to balance the tradeoff
between efficiency and performance: the client participation rate P, and the confidence threshold of pseudo-labels τ. A lower P reduces both the communication cost and the model accuracy, while a higher τ reduces the
server-side computation cost while also limiting the usage of unlabeled data. Therefore, AdaFedSemi designs a
tuning method based on multi-armed bandits (MAB) to tune both parameters as training proceeds. Experiments
show that AdaFedSemi achieves a good balance between efficiency and accuracy by dynamically adjusting P
and τ in different training phases.
DS-FL [42] tackles a similar problem to AdaFedSemi, where clients own labeled data while the server
owns unlabeled data. It proposes an ensemble pseudo-label solution to leverage the server-side unlabeled data.
Specifically, instead of a single pseudo-label $\hat{y}_k$ for a data sample $X_k$, it averages the pseudo-labels generated by all clients, i.e. $\hat{y}_k = \mathrm{MEAN}_{c=1}^{C}\, g_{\theta_c}(X_k)$. This creates an ensemble of client models and offers better
performance. Moreover, as only pseudo-labels are transmitted instead of model parameters, the communication
cost can be significantly reduced. In addition, DS-FL observes that training on pseudo-labels leads to a high
prediction entropy. It then proposes an entropy-reduced aggregation, which sharpens the local outputs gθc (Xk)
before aggregation.
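A minimal sketch of this ensemble-then-sharpen idea follows; `client_probs[c]` is assumed to hold client c's predicted probabilities on the shared unlabeled data, and temperature sharpening is used here as one common entropy-reduction operator, not necessarily DS-FL's exact one.

```python
import numpy as np

def ensemble_pseudo_labels(client_probs, T=0.5):
    """Sharpen each client's outputs, then average them into ensemble pseudo-labels."""
    sharpened = []
    for p in client_probs:
        q = p ** (1.0 / T)                          # reduce prediction entropy
        sharpened.append(q / q.sum(axis=1, keepdims=True))
    return np.mean(sharpened, axis=0)               # y_hat_k = MEAN_c g_theta_c(X_k)
```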
#### 3.2 The Label-at-server Setting
The label-at-server setting, where clients do not have any labeled data, is more challenging than the label-at-client setting. The reason is that all clients own unlabeled data only and cannot provide additional supervision
signals to the FL model. As shown in [35] and [34], training with only unlabeled data may lead to catastrophic
forgetting of the knowledge learned from labeled data, and thus compromises the model performance.
To address the isolation between labeled data and unlabeled data, FedMatch [35] proposes a disjoint learning
scheme that involves two sets of parameters for labeled and unlabeled data, respectively. The parameters for
labeled data are fixed when training on unlabeled data, and vice versa, to prevent the knowledge from being
overwritten. Disjoint learning brings additional benefits in communication efficiency, in that the parameters for
unlabeled data, which are transmitted between participants and the server, are set to be sparse. In addition, to
address the heterogeneous data held by different clients, FedMatch proposes an inter-client consistency loss,
such that local models from different participants generate similar outputs on the same data.
SemiFL [34] takes another approach to solving the challenges. It proposes to fine-tune the global model
with labeled data to enhance its quality and to alleviate the forgetting caused by unsupervised training at clients.
Furthermore, instead of regularizing model outputs across clients, SemiFL proposes to maximize the consistency
between client models and the global model. Specifically, the global model generates pseudo-labels for client-side unlabeled data, and the local models of clients are trained to fit the pseudo-labels. Empirical results show
that SemiFL yields more competitive results than FedMatch.
### 4 Federated Self-supervised Learning
In this section, we introduce how self-supervised learning can be combined with FL to learn with decentralized
and purely unlabeled data. Although there are two types of self-supervised learning, generative and contrastive
learning, so far only contrastive methods have been studied in the FL setting, and thus we limit our discussion to federated contrastive self-supervised learning. The objective function can be formalized as
$$\min_{\theta} \frac{1}{C}\sum_{i=1}^{C} f_{ctr,i}(\theta), \quad \text{s.t. } M_p(\theta) < \varepsilon_p, \tag{11}$$
where fctr,i denotes fctr (Eqn. 7) evaluated at participant i. Compared to FL with full supervision, federated
contrastive learning does not have globally consistent labels, and thus, the local contrastive objectives may
deviate from one another to a greater extent. Therefore, heterogeneous data poses a greater challenge to federated
contrastive learning. Table 3 summarizes existing works in federated contrastive self-supervised learning.
| Method | Label Isolation | Data Privacy | Data Heterogeneity | Efficiency Tradeoff |
|---|---|---|---|---|
| FedCA [49] | SimCLR | × | Dictionary & alignment module | × |
| SSFL [52] | SimSiam | × | Personalized models | × |
| FedU [44] | BYOL | × | Selective divergence-aware update | × |
| FedEMA [50] | BYOL | × | Moving average client update | × |
| FedX [51] | Local relation loss | × | Global contrastive & relation loss | × |
| Orchestra [43] | Rotation prediction & clustering | Sending local centroids instead of all representations | Bi-level clustering | × |

Table 3: Summary of techniques for federated self-supervised learning. × indicates that the proposed method does not focus on this issue.

FedCA [49], as one of the earliest works to study federated self-supervised learning, proposes a dictionary module and an alignment module to solve the feature misalignment problem caused by data heterogeneity. Extending SimCLR, the dictionary module in FedCA aims to use the global model to generate consistent negative
samples across clients, while the alignment module uses a set of public data to align the representations generated by local models. However, the alignment module of FedCA requires sharing a public dataset, which
compromises data privacy.
SSFL [52] addresses the data heterogeneity problem in federated self-supervised learning with a personalized FL framework [53, 54], in which each participant trains a unique local model instead of training a shared
size, which is hard to achieve on resource-limited edge devices.
FedU [44] designs a heterogeneity-aware aggregation scheme to address data heterogeneity in federated
self-supervised learning. As discussed in Section 2.2.2, there are generally two networks in contrastive learning,
an online network and a target network. Therefore, how to aggregate and update the two networks in FL with
data heterogeneity becomes an important research question. With empirical experiments, FedU discovers that
aggregating and updating only the online network yields better performances. Moreover, as FedU extends
BYOL with an additional predictor model, it is also necessary to design an update rule for it. FedU designs
a divergence-aware predictor update rule, which updates the local predictor only when its deviation from the
global predictor is low. These rules ensure that data heterogeneity is well captured by local and global models.
Extending FedU, FedEMA [50] presents an extensive empirical study on the design components of federated
contrastive learning. It performs experiments combining FL with MoCo, SimCLR, SimSiam, and BYOL, and
identifies BYOL as the best base method. Based on the results, FedEMA with a divergence-aware moving average update rule is proposed. The difference between FedEMA and FedU is that FedU overwrites the local online model with the global online model,

$$\theta_{o,c}^{r} = \theta_o^{r-1}, \tag{12}$$

where $\theta_{o,c}^{r}$ denotes the local online model at client $c$ and round $r$, and $\theta_o^{r-1}$ denotes the global online model aggregated at the previous round. On the contrary, FedEMA updates the local online model by interpolating between the global and local online models to adaptively incorporate global knowledge, i.e.

$$\theta_{o,c}^{r} = (1 - \mu)\theta_o^{r-1} + \mu\theta_{o,c}^{r-1}, \tag{13}$$

where $\mu$ is a parameter based on weight divergence,

$$\mu = \min(\lambda \|\theta_{o,c}^{r-1} - \theta_o^{r-1}\|, 1). \tag{14}$$
While FedU and FedEMA are simple and effective, they both require stateful clients to keep track of the divergence between local and global models, and are thus not applicable in cross-device FL.
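A minimal sketch of the divergence-aware interpolation in Eqns. 13-14 follows, with the models flattened into numpy vectors:

```python
import numpy as np

def fedema_update(theta_global, theta_local, lam=1.0):
    """Interpolate the local and global online models with divergence-based mu."""
    mu = min(lam * np.linalg.norm(theta_local - theta_global), 1.0)  # Eqn. 14
    return (1.0 - mu) * theta_global + mu * theta_local              # Eqn. 13
```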
Contrary to FedU and FedEMA, Orchestra [43] proposes a theoretically guided federated self-supervised
learning method that works with cross-device FL. Orchestra is based on the theory that feature representations
with good clustering properties yield low classification errors. Therefore, in addition to contrastive learning,
Orchestra aims to simultaneously enhance the clustering properties of all data representations. However, sharing all data representations for clustering may cause the problem of privacy leakage. Orchestra addresses the
problem with a bi-level clustering method, in which clients first cluster their data representations, and send only
the local centroids to the server. The server performs a second clustering on the local centroids to obtain global
clustering centroids, which are sent back to clients to compute cluster assignments. As local centroids reveal
less information than all data representations, this bi-level clustering method better preserves data privacy.
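The following is a minimal sketch of this bi-level scheme using scikit-learn's KMeans at both levels; the cluster counts are illustrative, and Orchestra's actual clustering procedure differs in detail.

```python
import numpy as np
from sklearn.cluster import KMeans

def bilevel_clustering(client_features, local_k=8, global_k=4):
    """Clients cluster locally and share only centroids; the server clusters those."""
    local_centroids = []
    for feats in client_features:                    # level 1: at each client
        km = KMeans(n_clusters=local_k, n_init=10).fit(feats)
        local_centroids.append(km.cluster_centers_)  # only centroids leave the client
    pooled = np.vstack(local_centroids)
    server_km = KMeans(n_clusters=global_k, n_init=10).fit(pooled)  # level 2: server
    return server_km.cluster_centers_                # sent back for cluster assignment
```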
Orthogonal to the above methods, FedX [51] proposes a versatile add-on module for federated self-supervised
learning methods. FedX consists of both local and global relational loss terms that can be added to various contrastive learning modules. The local relational loss aims to ensure that under the local model, two augmentations
of the same data sample have similar relations (similarities) to samples within the same batch B, i.e.
$$r_i^j = \frac{\exp(\mathrm{sim}(z_i, z_j))}{\sum_{k \in B} \exp(\mathrm{sim}(z_i, z_k))}, \quad r_{i+}^j = \frac{\exp(\mathrm{sim}(z_{i+}, z_j))}{\sum_{k \in B} \exp(\mathrm{sim}(z_{i+}, z_k))}, \tag{15}$$

$$L_{rel} = JS(r_i, r_{i+}), \tag{16}$$

where $z_i$, $z_{i+}$ denote the feature representations of the $i$-th sample (with different augmentations) in the batch, $r_i$ denotes the (normalized) similarity between the $i$-th sample and all other samples, and $JS$ denotes the Jensen-Shannon divergence. The global relational loss is similarly defined such that under the local model, two augmentations of the same data sample have similar relations to the global representations. Empirical results show
that FedX is versatile and can improve the performance of various contrastive learning methods in FL, including
FedSimCLR, FedMoCo, and FedU.
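A minimal numpy/scipy sketch of the local relational loss in Eqns. 15-16 follows; note that scipy's `jensenshannon` returns the JS distance, so it is squared here to obtain the divergence.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

def relation_vector(z, batch):
    """Eqn. 15: softmax-normalized cosine similarities of z to all batch samples."""
    sims = batch @ z / (np.linalg.norm(batch, axis=1) * np.linalg.norm(z))
    e = np.exp(sims)
    return e / e.sum()

def relational_loss(z_i, z_i_aug, batch):
    """Eqn. 16: Jensen-Shannon divergence between the two relation vectors."""
    r, r_aug = relation_vector(z_i, batch), relation_vector(z_i_aug, batch)
    return jensenshannon(r, r_aug) ** 2
```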
### 5 Federated Transfer Learning
In this section, we summarize efforts that combine FL with transfer learning (FTL). We categorize existing works
in FTL into homogeneous FTL and heterogeneous FTL, whose differences are introduced in Section 2.2.3.
#### 5.1 Homogeneous FTL
In this section, we introduce research works on homogeneous FTL. Assuming that there are S source domains
and T target domains, each of which is held by one participant, the objective of homogeneous FTL is as follows,
$$\min_{\theta_f, \theta_c} \sum_{i=1}^{S} f_{cls}(L_i; \theta_f, \theta_c) + \sum_{i=1}^{S}\sum_{j=1}^{T} \lambda_{ij} f_{dom}(L_i, U_j; \theta_f), \quad \text{s.t. } M_p(\theta_f, \theta_c) < \varepsilon_p, \tag{17}$$
where $L_i$, $U_j$ denote the labeled dataset held by the $i$-th source domain and the unlabeled dataset held by the $j$-th target domain, respectively, $f_{cls}$, $f_{dom}$
follow Eqn. 8, and λij are hyperparameters used to select helpful source domains. If source domain i and
target domain j are similar, we can assign a high λij, and vice versa. Depending on how many source or target
domains are involved, we can categorize existing works into two settings, single-source, and multi-source. In
the multi-source setting, selecting the most appropriate source domain poses an additional challenge compared
to the single-source setting. We introduce related works in both settings in the following sections.
5.1.1 Single-source Setting
The single-source setting in federated transfer learning commonly involves one server with labeled data, and
multiple clients with unlabeled data. As the clients themselves may have different data distributions, each client
creates a unique target domain, which requires a flexible adaptation method to tackle multiple targets.
To our knowledge, DualAdapt [56] is the first work to tackle the single-source, multi-target federated transfer
learning problem. DualAdapt extends from the maximum classifier discrepancy (MCD) method [61]. Specifically, MCD involves a feature extractor θf and two classifiers θc1, θc2, trained with the following steps iteratively:
- First, θf, θc1, θc2 are trained to minimize the error on the source domain L1.
- Second, given a target domain sample Xt, we fix the feature extractor θf and maximize the discrepancy
between the classifiers, i.e. maxθc1,θc2 Lcd = d(gθf,θc1(Xt), gθf,θc2(Xt)). This step aims to find target
samples that are dissimilar to the source domain.
- Third, the classifiers are fixed, and the feature extractor θf is trained to minimize Lcd to generate domain
invariant features.
The FL setting creates two challenges for MCD. First, Step 2 should be taken at clients, yet as no labels are
available, Step 2 may result in naive non-discriminative solutions. To address the problem, DualAdapt proposes
client self-training, where pseudo-labels generated by the server model are used to train the classifiers in addition
to Lcd. Second, to maintain a single feature extractor θf, Step 3 is done at the server, which has no access to target
samples Xt. DualAdapt proposes to use mixup [62] to approximate target samples Xt. To further mitigate the
impact of domain discrepancy, DualAdapt proposes to fit Gaussian mixture models (GMM) at each participant.
At each participant, samples from other participants are re-weighted via the fitted GMMs, such that the impacts of
highly dissimilar samples are mitigated.
FRuDA [58] proposes a system for single-source, multi-target federated transfer learning with DANN [33]. Similar to DualAdapt, it also considers the setting with multiple unlabeled target domains, for which it proposes
an optimal collaboration selection (OCS) method. The intuition of OCS is that, for a new target domain, instead
of always transferring from the only source domain, it is also possible to transfer from an existing target domain
that is closer to the new domain. To implement the intuition, OCS derives an upper bound for the transfer
learning error from one domain to another,
$$\varepsilon_{CE, D_2}(h, h') \le \theta_{CE}\left(\varepsilon_{L1, D_1}(h, h') + 2\theta W(D_1, D_2)\right), \tag{18}$$

where $\varepsilon_{M,D}(h, l)$ denotes the error, measured by metric $M$, of the hypothesis $h$ on data distribution $D$ with the label function $l$, $\theta_{CE}$, $\theta$ are constants, $h$, $h'$ are the source and target hypotheses, $D_1$, $D_2$ are the source and target data distributions, respectively, and $W(D_1, D_2)$ denotes the Wasserstein distance between $D_1$ and $D_2$. With Eqn. 18, the optimal collaborator of each target domain can be selected by minimizing the right-hand side. To further improve efficiency, a lazy update scheme, which exchanges discriminator gradients every $p$ iterations, is proposed.
5.1.2 Multi-source Setting
A more challenging setting of federated transfer learning is the multi-source setting, where multiple source
domains with labeled data are available to transfer knowledge to a single unlabeled target domain. In this
setting, it is necessary to select a source domain with helpful knowledge without directly observing source data.
To our knowledge, FADA [55] is the first work to tackle the multi-source federated transfer learning problem. FADA extends the adversarial domain adaptation [33] method, with a domain discriminator between each
source domain and the target domain. The domain discriminator aims to tell whether each feature representation belongs to the source and the target domain, and the feature extractor is then trained to fool the domain
discriminator to learn domain invariant features. To train the domain discriminator, FADA directly exchanges
feature representations from both domains, which may lead to potential privacy threats. In addition, to select the
most relevant source domain to transfer from, FADA proposes a source domain weighting method based on gap
statistics. Gap statistics [63] measures how well the feature representations are clustered,
$$I = \sum_{r=1}^{k} \frac{1}{2n_r} \sum_{i,j \in C_r} \|z_i - z_j\|_2, \tag{19}$$

where $z_i$ denotes the feature representation of the $i$-th sample, $C_1, \ldots, C_k$ denote the index sets of the $k$ clusters, and $n_r$ is the number of samples in cluster $r$. A low $I$ indicates that the feature representations can be clustered with low intra-cluster variance, which usually indicates good features. FADA then computes how much the gap statistic of the target domain drops after learning with each source domain, i.e.

$$I_i^{gain} = I_i^{r-1} - I_i^{r}, \tag{20}$$

where $r$ denotes the communication round, and $i$ denotes the source domain index. Finally, FADA applies weights on the source domains via $\mathrm{Softmax}(I_1^{gain}, I_2^{gain}, \ldots)$.

| Method | Homogeneous | # Source | # Target | Label Isolation | Data Privacy | Data Heterogeneity | Efficiency Tradeoff |
|---|---|---|---|---|---|---|---|
| DualAdapt [56] | ✓ | 1 | >1 | MCD & pseudo-labeling & MixUp approximation | × | GMM weighting | × |
| FRuDA [58] | ✓ | 1 | >1 | DANN [33] & optimal collaborator selection | × | Optimal collaborator selection | Lazy update |
| FADA [55] | ✓ | >1 | 1 | DANN [33] & representation sharing | × | Gap statistics weighting | × |
| FADE [57] | ✓ | >1 | 1 | DANN & squared adversarial loss | No representation sharing | CDAN | × |
| EfficientFDA [60] | ✓ | >1 | 1 | Maximum mean discrepancy (MMD) | Homomorphic encryption (HE) | × | Optimized HE operation |
| PrADA [86] | ✓ | 2 | 1 | Grouped DANN | Homomorphic encryption (HE) | × | × |
| SFTL [85] | × | 1 | 1 | Sample alignment loss | HE & secret sharing (SS) | Sample alignment loss | × |
| SFHTL [84] | × | 1 | >1 | Label propagation | Split learning [87] | Unified feature space | × |

Table 4: Summary of techniques for (unsupervised) federated transfer learning. × indicates that the proposed method does not focus on this issue. 'Homogeneous' indicates whether the work focuses on homogeneous FTL (✓) or heterogeneous FTL (×). # Source, # Target denote the number of source and target domains considered in the work, respectively.
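A minimal numpy sketch of this gap-statistic weighting (Eqns. 19-20) is given below; cluster assignments are assumed to be provided by any clustering routine.

```python
import numpy as np

def gap_statistic(z, assignments, k):
    """Eqn. 19: I = sum_r 1/(2 n_r) * sum over pairs in cluster r of ||z_i - z_j||_2."""
    total = 0.0
    for r in range(k):
        cluster = z[assignments == r]
        n_r = len(cluster)
        if n_r == 0:
            continue
        dists = np.linalg.norm(cluster[:, None, :] - cluster[None, :, :], axis=-1)
        total += dists.sum() / (2.0 * n_r)
    return total

def source_weights(I_prev, I_curr):
    """Eqn. 20 plus softmax: weight each source domain by its gap-statistic gain."""
    gains = np.asarray(I_prev) - np.asarray(I_curr)
    e = np.exp(gains - gains.max())          # numerically stable softmax
    return e / e.sum()
```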
FADE [57] improves over FADA by not sharing representations to learn the domain discriminator, thus better
protecting data privacy. Instead, the domain discriminator is kept local at each client, and is trained locally and
updated via parameter aggregation. FADE theoretically shows that the design leads to the same optimal values
as FADA, but empirically the design has negative impacts. The issues of the design are that the trained discriminator may suffer from low sensitivity (and thus take longer to converge) and from user mode collapse (and thus fail to represent heterogeneous data). To address the drawbacks, FADE presents two tricks. To tackle the low sensitivity issue,
FADE squares the adversarial loss such that it is more reactive under large loss values. To tackle the user
mode collapse issue, FADE proposes to maximize the mutual information between users (related to classes) and
representations, and implements the idea with conditional adversarial domain adaptation (CDAN) [80].
EfficientFDA [60] is another improvement over FADA in that source and target domain feature representations are encrypted with homomorphic encryption (HE) [64], and the maximum mean discrepancy (MMD) [32]
is computed over ciphertexts. As homomorphic encryption incurs large computation and communication costs,
EfficientFDA further proposes two ciphertext optimizations. First, ciphertexts in each batch of samples are aggregated to reduce communication overhead. Second, for computing gradients with ciphertexts, the chain rule
is applied to replace ciphertext computations with plaintexts to improve computational efficiency. Experiments
show that EfficientFDA achieves privacy in federated transfer learning, while being 10-100x more efficient than
naive HE-based implementations.
While the above works tackle the problem with multiple source domains with the same feature space, PrADA
[86] tackles a different problem, involving two source domains with different feature spaces. PrADA considers
a partially labeled target domain A, $\{X_l^A, y_l^A\} \cup \{X_u^A\}$, a labeled source domain B, $\{X^B \in \mathbb{R}^{N_B \times D}, y^B\}$, and a feature source domain C, $\{X_C^A \in \mathbb{R}^{N_A \times D_C}\} \cup \{X_C^B \in \mathbb{R}^{N_B \times D_C}\}$. Domains A and B share the same feature
space with different distributions, while domain C aims to provide rich auxiliary features for samples in both A
and B. PrADA presents a fine-grained domain adaptation technique, in which features from domain C are first
manually grouped into g tightly relevant feature groups. Each feature group is then assigned a feature extractor
and a domain discriminator to perform fine-grained, group-level domain adaptation. In addition, to protect data
privacy, the whole training process is protected with homomorphic encryption. Experiments show that with the
grouped domain adaptation, PrADA achieves better transferability and interpretability.
#### 5.2 Heterogeneous FTL
In this section, we introduce existing works about heterogeneous FTL. Compared to homogeneous FTL, the main
difference of heterogeneous FTL is that it commonly requires cross-domain links between data (e.g. different
features of the same user ID, the same features from different users, etc.) to bridge the heterogeneous feature
spaces. Formally, assuming a heterogeneous FTL setting with two parties, A and B, with data DA, DB, with
DAB = DA ∩DB being the overlapping dataset (i.e. cross-domain links), the objective of heterogeneous FTL is
$$\min_{\theta_A, \theta_B} L_A(D_A; \theta_A) + L_B(D_B; \theta_B) + \lambda L_{algn}(D_{AB}; \theta_A, \theta_B), \quad \text{s.t. } M_p(\theta_A, \theta_B) < \varepsilon_p, \tag{21}$$
where LA, LB are loss functions on dataset DA, DB, respectively, and Lalgn is an alignment loss that aims to
align the overlapping dataset DAB between domains. However, in FL, sharing sample features or labels poses
potential privacy threats. How to leverage the cross-domain sample links to transfer knowledge while preserving
privacy thus becomes a key challenge to solve.
To our knowledge, SFTL [85] is the first work to tackle the heterogeneous FTL problem. It considers a two-party setting and assumes that some user IDs IAB exist in both parties (with different features). SFTL proposes
an alignment loss to minimize the difference between features of the same users to achieve knowledge transfer,
$$L_{algn} = \sum_{i \in I_{AB}} d\left(g_{\theta_A}(X_i^A), g_{\theta_B}(X_i^B)\right), \tag{22}$$

where $I_{AB}$ denotes the overlapping user ID set, $g_{\theta_A}$, $g_{\theta_B}$ denote the neural network models of party A and party B, and $X_i^A$, $X_i^B$ denote the features of user $i$ held by party A and B, respectively. In addition, SFTL addresses the data
privacy problem by designing two secure protocols for SFTL, one based on homomorphic encryption, and the
other based on secret sharing (SS).
The drawbacks of SFTL are that it is limited to the two-party setting, and both A and B have only partial
models and cannot perform independent inference. To address these drawbacks, SFHTL [84] proposes an improved framework that supports multiple parties. The main difficulty in the multi-party heterogeneous FTL is
the lack of overlapping samples and labels. To address the lack of overlapping samples, SFHTL proposes a
feature reconstruction technique to complete the missing features of non-overlapping samples. Specifically, all parties
are trained to project their features into a unified latent feature space. Then, each party learns a reconstruction
function that projects the unified features to raw features. With the reconstruction functions, each party can
expand the feature spaces of non-overlapping samples, thus enlarging the training dataset. In addition, SFHTL
proposes a pseudo-labeling method based on label propagation [20] to address the lack of labels. Specifically, a
nearest neighbor graph based on feature proximity in the unified feature space is constructed, and the labels are
propagated from labeled samples to unlabeled samples via the graph. Finally, to protect the privacy of labels,
SFHTL is trained with split learning, such that labels are not directly shared with other parties.
### 6 Datasets and Evaluations
Benchmarking datasets are important for the development of machine learning research. In this section, we
introduce commonly used datasets and benchmarks for the problem of FL without full labels in the existing
literature. A summary of datasets can be found in Table 5.

| Dataset | Semi | Self | Trans. | Application | # Domains | # Samples | Partition |
|---|---|---|---|---|---|---|---|
| CIFAR-10 | ✓ [38, 39, 35, 40, 34] | ✓ [52, 44, 43, 51, 50] | × | CV | 1 | 60000 | Dirichlet & Uniform |
| CIFAR-100 | ✓ [38, 34] | ✓ [43, 44, 50] | × | CV | 1 | 60000 | Dirichlet & Uniform |
| SVHN | ✓ [40, 34] | ✓ [51] | × | CV | 1 | 73257 | Dirichlet & Uniform |
| Sent140 | ✓ [39] | × | × | NLP | 1 | 1600498 | Natural (Twitter user) |
| Reuters | ✓ [42] | × | × | NLP | 1 | 11228 | Dirichlet |
| IMDb | ✓ [42] | × | × | NLP | 1 | 50000 | Dirichlet |
| Landmark-23K | × | ✓ [52] | × | CV | 1 | 1600000 | Natural (Location) |
| Digit-Five | × | × | ✓ [55, 58] | CV | 5 | 107348 | Natural (Style) |
| Office-Caltech10 | × | × | ✓ [55, 58, 60] | CV | 4 | 2533 | Natural (Style) |
| DomainNet | × | × | ✓ [55, 58] | CV | 6 | 416401 | Natural (Style) |
| AmazonReview | × | × | ✓ [55] | NLP | 4 | 8000 | Natural (Product category) |
| Mic2Mic | × | × | ✓ [58] | Speech | 4 | 65000 | Natural (Device type) |
| GTA5 | × | × | ✓ [56] | CV | 4 | 25000 | Natural (Location) |

Table 5: Commonly used datasets for evaluating FL methods without full labels. The Semi, Self, and Trans. columns indicate whether the dataset has (✓, with the works using it) or has not (×) been used for evaluating federated semi-supervised, self-supervised, and transfer learning, respectively. # Domains, # Samples denote the number of domains and the total number of samples in the dataset. Datasets with multiple domains are more commonly used for unsupervised federated transfer learning.

We find that for both federated semi-supervised and unsupervised learning, existing works mainly partition (e.g. according to Dirichlet distributions) datasets
for centralized machine learning (e.g. CIFAR-10, CIFAR-100, SVHN) manually, and manually sample a subset
of labels. On the contrary, for federated transfer learning, datasets generally form natural partitions (e.g. city
in GTA5, product types in AmazonReview, etc.) based on different domains. We thus conclude that real-world
datasets representing realistic data heterogeneity and label isolation problems are still needed to credibly evaluate
federated semi-supervised and self-supervised methods.
### 7 Related Surveys
Federated learning has attracted the attention of researchers worldwide. Therefore, there have been many survey
papers that cover various aspects of FL. In this section, we summarize and analyze existing survey papers
compared to our work. Table 6 shows a summary of comparisons between related surveys and ours.
First, our work differs from general surveys on FL [11, 10, 88] in that they provide comprehensive reviews
on a wide range of FL aspects, including privacy preservation, communication reduction, straggler mitigation,
incentive mechanisms, etc. Among them, communication and privacy are also important issues in the problem
of FL without full labels and are covered in our survey. On the contrary, our survey is focused on a specific
aspect, namely how to deal with unlabeled data. Second, our work also differs from surveys on semi-supervised
learning [21], self-supervised learning [24], and transfer learning [22] in the centralized setting, in that while they
extensively summarize machine learning techniques for unlabeled data, they fail to cover FL-specific challenges,
such as label isolation, data privacy, etc. Finally, compared to surveys that focus on FL algorithms on non-i.i.d.
data [89, 90, 91], our work focuses on leveraging unlabeled data to assist FL, while these surveys focus on FL with data that are fully labeled but not independent and identically distributed. Nonetheless, these surveys are
related to our work in that non-i.i.d. data is an important challenge in all FL settings, and we also summarize
how existing works address the challenge in the problem of FL without full labels.
The most related survey to our work is [59], which surveyed FL techniques to tackle data space, statistical,
and system heterogeneity. Our work is similar to [59] in two ways. On one hand, statistical heterogeneity is a
key challenge in FL, and we also summarize how existing works address the challenge in FL without full labels.
On the other hand, homogeneous and heterogeneous FTL (Section 5) are powerful tools to solve statistical and
data space heterogeneity, respectively, which are also covered in Sections 3 and 4 in [59]. Nonetheless, the main
focus of [59] lies in supervised FL with labeled data, which is different from our work which additionally covers
federated semi-supervised and self-supervised methods.
| Survey Papers | Similarities | Differences |
|---|---|---|
| [10, 11, 88] | Similar to our survey, these papers review existing solutions to protect data privacy and reduce communication/computation overhead. | These papers cover a wide range of aspects in general FL, while our survey focuses on a specific problem of leveraging unlabeled data. |
| [22, 23, 24, 21, 92, 93] | Similar to our survey, these papers review machine learning methods for unlabeled data, including semi-supervised, self-supervised, and transfer learning. | These papers do not cover FL-specific challenges, such as labeled data isolation, data heterogeneity, data privacy, etc. |
| [90, 89, 91] | Similar to our survey, these papers review methods in FL that address the problem of non-i.i.d. data (i.e. data heterogeneity). | These papers primarily focus on optimization algorithms for fully supervised FL, while our work focuses specifically on leveraging unlabeled data. |
| [59] | Similar to our survey, [59] covers methods to tackle data heterogeneity. Also, [59] reviews existing works on homogeneous and heterogeneous FTL. | [59] primarily focuses on heterogeneity in supervised FL, while our work focuses on leveraging unlabeled data and covers federated semi-supervised and self-supervised learning. |

Table 6: Comparative analysis between our survey and related surveys.
| Learning Paradigm | Main Techniques | Advantages | Disadvantages |
|---|---|---|---|
| Federated Semi-supervised Learning | Enhancing methods in centralized settings with: 1. Label isolation: pseudo-labeling, domain alignment, etc.; 2. Privacy: DP, HE, etc.; 3. Data heterogeneity: source domain selection, divergence-aware update, etc.; 4. Efficiency tradeoff: sample selection, communication reduction, HE optimization, etc. (shared across all three paradigms) | Similar formulation to conventional FL. Can directly perform supervised tasks. | Data heterogeneity inherently violates the i.i.d. assumption. Large-scale unlabeled data creates an efficiency tradeoff. |
| Federated Self-supervised Learning | (shared, see above) | Full utilization of client data. Suitable for unsupervised tasks like retrieval, clustering, etc. | Data heterogeneity inherently violates the i.i.d. assumption. Need labels for supervised tasks. |
| Federated Transfer Learning | (shared, see above) | Models data heterogeneity, which is a key challenge in FL. Flexible formulation (heterogeneous FTL). | Source domain selection requires intricate design or manual effort. |

Table 7: A summary of techniques, advantages, and disadvantages of learning paradigms reviewed in this paper.
### 8 Conclusion and Future Directions
#### 8.1 Summary of the Survey
In this paper, we present a survey about the problem of federated learning without full labels. We introduce
three learning paradigms to solve the problem, federated semi-supervised learning, federated self-supervised
learning, and federated transfer learning. We further review existing works in these paradigms and discuss how
they address the crucial challenges, i.e. label isolation, privacy protection, data heterogeneity, and efficiency
tradeoff. Table 7 shows a summary of the main techniques, advantages, and disadvantages of learning paradigms
discussed in this paper. We finally present a summary of the datasets and benchmarks used to evaluate FL
methods without full labels.
#### 8.2 Future Directions
Compared to general FL with full supervision, the problem of FL without full labels is still under-explored. We
highlight the following future directions in the context of FL without full labels.
8.2.1 Trustworthiness
Trustworthiness is an important aspect in real-world machine learning systems like FL. Generally speaking,
users of machine learning systems would expect a system to be private, secure, robust, fair, and interpretable,
which is what trustworthiness means in the context of FL.
Unlabeled data can play an important role in enhancing trustworthiness from multiple aspects.
- Robustness: A robust system requires that its output should be insensitive to small noises added to the
input. A machine learning system that is not robust can significantly compromise its security in real-world
applications. For example, studies [69] have shown that it is possible to tweak physical objects to fool an
object detection model. In applications like autonomous driving, this property becomes a security threat.
Many research works have studied how to enhance robustness with unlabeled data [68, 67]. For example,
Carmon et al. and Uesato et al. [67, 70] show that pseudo-labeling, one of the most common semi-supervised learning techniques, can boost robustness by 3-5% over state-of-the-art defense models.
Deng et al. [68] additionally find that even out-of-distribution unlabeled data helps enhance robustness.
Therefore, how these techniques can be adapted in the FL setting with heterogeneous data is an interesting
future direction. Also, as common methods of learning robust models (i.e. adversarial training [81]) are
inefficient, it is promising to study whether FL methods without full labels can be an efficient substitute.
- Privacy: In real-world machine learning applications, labeling data itself is a compromise of data privacy,
as domain experts have to directly observe the data. Therefore, solving the FL problem without full labels
inherently leads to better data privacy. In addition, unlabeled data provides a better way of navigating
through the privacy-utility tradeoff in differential privacy (DP) [72]. For example, PATE [71] shows
that with an additional set of unlabeled data, it simultaneously achieves a higher model accuracy and a
tighter privacy bound compared to the state-of-the-art DPSGD method [73]. Therefore, how to select and
leverage unlabeled data to aggregate client knowledge privately while maintaining good model accuracy
is also a promising direction.
- Interpretability: Interpretability indicates that a machine learning system should be able to make sense
of its decision, which generally creates trust between users and system developers. There are many ways
to instill interpretability in machine learning, among which disentangled representation learning [74] is
a popular direction. Informally speaking, disentangled representation aims to map the inputs to latent
representations where high-level factors in the input data are organized in a structured manner in the
representations (e.g. brightness, human pose, facial expressions, etc.). Thus, disentangled representations
provide intuitive ways to manipulate and understand deep learning models and features.
Much progress has been made in unsupervised disentangled representation learning. For example, InfoGAN [75] learns disentangled representations by maximizing the mutual information between the features
and the output. Beta-VAE [76] disentangles features by adding an independence regularization on the feature groups. Therefore, it is promising to instill interpretability in FL via unlabeled data with disentangled
representations. In FL, the participants commonly hold data with varying data distributions. Therefore,
how to stably disentangle the heterogeneous feature distributions from multiple participants is a challenge
for interpretable FL without full labels.
- Fairness: As machine learning models are increasingly involved in decision-making in the daily lives
of people, the models should not discriminate one group of users against another (e.g. gender, race,
etc.). Informally speaking, the fairness of a machine learning model gθ over a sensitive attribute s can be
described as the difference between the model performances given different values of s,
$$\Delta_{s,\theta} = \|m(g_\theta | s = 1) - m(g_\theta | s = 0)\|, \tag{23}$$
where $\|\cdot\|$ is a distance, and $m(g_\theta | s = 1)$ is a performance metric stating how well the model performs when the sensitive attribute $s = 1$.
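As an illustration, a minimal sketch of Eqn. 23 with classification accuracy as the metric $m$ and a binary sensitive attribute $s$ follows:

```python
import numpy as np

def fairness_gap(y_true, y_pred, s):
    """Eqn. 23: |m(g|s=1) - m(g|s=0)| with accuracy as the performance metric m."""
    acc = lambda mask: np.mean(y_pred[mask] == y_true[mask])
    return abs(acc(s == 1) - acc(s == 0))
```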
When $m(g_\theta|s)$ does not involve labels (e.g. some groups have a higher probability of being predicted positive), FADE [57] provides a good solution to ensure group fairness. However, when $m(g_\theta|s = 1)$ requires
labeled data (e.g. classification accuracy is lower for under-represented groups), enforcing fairness with
unlabeled data remains an open problem, both for general machine learning and FL.
8.2.2 Generalization to Unseen Domains
All the techniques introduced in this paper require at least observing the test domain in order to work well on it. Even for federated transfer learning, some unlabeled samples in the target domain are still needed for
successful adaptation. However, in real-world applications, it is often required to adapt to completely unseen
domains. For example, FL models should try to adapt to new users that constantly join mobile applications,
who, at the time of joining, have no interaction data available. The problem setting triggers research in federated
domain generalization (FedDG). However, existing works in FedDG [77, 78] assume that all domains are fully
labeled, which, as stated in this survey, is not realistic. It is thus important to study the FedDG problem under
limited labeled data and large-scale unlabeled data.
8.2.3 Automatic FL without Full Labels
Automatic machine learning (AutoML) [82] is a class of methods that aim to achieve good model performances
without manual tuning (e.g. architecture, hyperparameters, etc.). In FL without full labels, as different participants may hold heterogeneous labeled or unlabeled data, it may not be optimal for them to share the same model
architecture. Integrating AutoML into FL without full labels thus enables participants to find personalized architectures to achieve the performance-efficiency tradeoff. However, participants with only unlabeled data cannot
independently evaluate the performance of the model, creating challenges to automatic FL without full labels.
### References
[1] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A Large-scale Hierarchical Image
Database. 2009 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 248–255, 2009.
[2] Fabian Caba Heilbron, Victor Escorcia, Bernard Ghanem, and Juan Carlos Niebles. ActivityNet: A Large-scale
Video Benchmark for Human Activity Understanding. 2015 IEEE Conference on Computer Vision and Pattern
Recognition (CVPR), 961–970, 2015.
[3] Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler.
Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books. 2015
IEEE International Conference on Computer vision (ICCV), 19–27, 2015.
[4] Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. Pointer Sentinel Mixture Models. International Conference on Learning Representations, 2017.
[5] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep Residual Learning for Image Recognition. 2016
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 770–778, 2016.
[6] Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick. Mask R-CNN. 2017 IEEE International Conference
on Computer Vision (ICCV), 2961–2969, 2017.
[7] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of Deep Bidirectional
Transformers for Language Understanding. 2019 Conference of the North American Chapter of the Association for
Computational Linguistics: Human Language Technologies (NAACL-HLT), 4171–4186, 2019.
[8] The European Parliament. General Data Protection Regulation. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:02016R0679-20160504, 27 April 2016.
[9] State of California Department of Justice. California Consumer Privacy Act. https://oag.ca.gov/privacy/ccpa, 2018.
[10] Qiang Yang, Yang Liu, Tianjian Chen, and Yongxin Tong. Federated Machine Learning: Concept and Applications.
ACM Transactions on Intelligent Systems and Technology (TIST), 10.2 (2019): 1-19.
[11] Peter Kairouz, H Brendan McMahan, Brendan Avent, Aurélien Bellet, Mehdi Bennis, Arjun Nitin Bhagoji, Kallista
Bonawitz, Zachary Charles, Graham Cormode, Rachel Cummings, and Others. Advances and Open Problems in
Federated Learning. Foundations and Trends® in Machine Learning, 14.1–2 (2021), 1–210.
[12] Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas. Communication-efficient Learning of Deep Networks from Decentralized Data. International Conference on Artificial Intelligence
and Statistics (AISTATS), 1273–1282, 2017.
[13] Tian Li, Anit Kumar Sahu, Manzil Zaheer, Maziar Sanjabi, Ameet Talwalkar, and Virginia Smith. Federated Optimization in Heterogeneous Networks. Proceedings of Machine Learning and Systems (MLSys), 429–450, 2020.
[14] Fan Lai, Xiangfeng Zhu, Harsha V. Madhyastha, and Mosharaf Chowdhury. Oort: Efficient Federated Learning
via Guided Participant Selection. USENIX Symposium on Operating Systems Design and Implementation (OSDI),
19–35, 2021.
[15] Sai Praneeth Karimireddy, Satyen Kale, Mehryar Mohri, Sashank Reddi, Sebastian Stich, and Ananda Theertha
Suresh. SCAFFOLD: Stochastic Controlled Averaging for Federated Learning. International Conference on Machine
Learning (ICML), 5132–5143, 2020.
[16] Keith Bonawitz, Vladimir Ivanov, Ben Kreuter, Antonio Marcedone, H Brendan McMahan, Sarvar Patel, Daniel
Ramage, Aaron Segal, and Karn Seth. Practical Secure Aggregation for Privacy-preserving Machine Learning. ACM
SIGSAC Conference on Computer and Communications Security (CCS), 1175–1191, 2017.
[17] Enmao Diao, Jie Ding, and Vahid Tarokh. HeteroFL: Computation and Communication Efficient Federated Learning
for Heterogeneous Clients. International Conference on Learning Representations (ICLR), 2021.
[18] Chengliang Zhang, Suyi Li, Junzhe Xia, Wei Wang, Feng Yan, and Yang Liu. BatchCrypt: Efficient Homomorphic
Encryption for Cross-Silo Federated Learning. USENIX Annual Technical Conference (ATC), 493–506, 2020.
[19] Amirhossein Reisizadeh, Aryan Mokhtari, Hamed Hassani, Ali Jadbabaie, and Ramtin Pedarsani. FedPAQ: A
Communication-efficient Federated Learning Method with Periodic Averaging and Quantization. International Conference on Artificial Intelligence and Statistics (AISTATS), 2021–2031, 2020.
[20] Xiaojin Jerry Zhu. Semi-supervised Learning Literature Survey. University of Wisconsin-Madison, 2005.
[21] Olivier Chapelle, Bernhard Scholkopf, and Alexander Zien. Semi-supervised Learning. IEEE Transactions on Neural
Networks, 20.3 (2009), 542–542.
[22] Sinno Jialin Pan and Qiang Yang. A Survey on Transfer Learning. IEEE Transactions on Knowledge Discovery and
Data Engineering, 22.10 (2010), 1345–1359.
[23] Longlong Jing and Yingli Tian. Self-supervised Visual Feature Learning with Deep Neural Networks: A Survey.
IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 43.11 (2020), 4037–4058.
[24] Xiao Liu, Fanjin Zhang, Zhenyu Hou, Li Mian, Zhaoyu Wang, Jing Zhang, and Jie Tang. Self-supervised Learning:
Generative or Contrastive. IEEE Transactions on Knowledge and Data Engineering (TKDE), 35.1 (2021), 857–876.
[25] Antti Tarvainen and Harri Valpola. Mean Teachers are Better Role Models: Weight-averaged Consistency Targets
Improve Semi-supervised Deep Learning Results. Advances in Neural Information Processing Systems (NeurIPS),
2017.
[26] Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, and Shin Ishii. Virtual Adversarial Training: A Regularization Method for Supervised and Semi-supervised Learning. IEEE Transactions on Pattern Analysis and Machine
Intelligence (TPAMI), 41.8 (2018), 1979–1993.
-----
[27] David Berthelot, Nicholas Carlini, Ian Goodfellow, Nicholas Papernot, Avital Oliver, and Colin A Raffel. MixMatch:
A Holistic Approach to Semi-supervised Learning. Advances in Neural Information Processing Systems (NeurIPS),
2019.
[28] Carl Doersch, Abhinav Gupta, and Alexei A Efros. Unsupervised Visual Representation Learning by Context Prediction. IEEE International Conference on Computer Vision (ICCV), 1422–1430, 2015.
[29] Spyros Gidaris, Praveer Singh, and Nikos Komodakis. Unsupervised Representation Learning by Predicting Image
Rotations. International Conference on Learning Representations, 2018.
[30] Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Doll´ar, and Ross Girshick. Masked Autoencoders are
Scalable Vision Learners. IEEE/CVF Conference on Computer Vision and Pattern Recognition, 16000–16009, 2022.
[31] Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson. How Transferable are Features in Deep Neural Networks?. Advances in Neural Information Processing Systems (NIPS), 2014.
[32] Mingsheng Long, Yue Cao, Jianmin Wang, and Michael Jordan. Learning Transferable Features with Deep Adaptation Networks. International Conference on Machine Learning (ICML), 97–105, 2015.
[33] Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, Franc¸ois Laviolette, Mario
Marchand, and Victor Lempitsky. Domain-adversarial Training of Neural Networks. Journal of Machine Learning
Research (JMLR), 17.1 (2016), 2096–2130.
[34] Enmao Diao, Jie Ding, and Vahid Tarokh. SemiFL: Semi-supervised Federated Learning for Unlabeled Clients with
Alternate Training. Advances in Neural Information Processing Systems (NeurIPS), 2022.
[35] Wonyong Jeong, Jaehong Yoon, Eunho Yang, and Sung Ju Hwang. Federated Semi-supervised Learning with Interclient Consistency & Disjoint Learning. International Conference on Learning Representations (ICLR), 2021.
[36] Jonas Geiping, Hartmut Bauermeister, Hannah Dro¨oge, and Michael Moeller. Inverting Gradients-How Easy Is It
to Break Privacy in Federated Learning? Advances in Neural Information Processing Systems (NeurIPS), 16937–
16947, 2020.
[37] Yue Zhao, Meng Li, Liangzhen Lai, Naveen Suda, Damon Civin, and Vikas Chandra. Federated Learning with
[Non-iid Data. arXiv preprint arXiv:1806.00582, 2018.](http://arxiv.org/abs/1806.00582)
[38] Xiaoxiao Liang, Yiqun Lin, Huazhu Fu, Lei Zhu, and Xiaomeng Li. RSCFed: Random Sampling Consensus Federated Semi-supervised Learning. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR),
10154–10163, 2022.
[39] Chenyou Fan, Junjie Hu, and Jianwei Huang. Private Semi-supervised Federated Learning. International Joint
Conference on Artificial Intelligence (IJCAI), 2009–2015, 2022.
[40] Lun Wang, Yang Xu, Hongli Xu, Jianchun Liu, Zhiyuan Wang, and Liusheng Huang. Enhancing Federated Learning
with In-cloud Unlabeled Data. IEEE International Conference on Data Engineering (ICDE), 136–149, 2022.
[41] Xinyang Lin, Hanting Chen, Yixing Xu, Chao Xu, Xiaolin Gui, Yiping Deng, and Yunhe Wang. Federated Learning
with Positive and Unlabeled Data. International Conference on Machine Learning (ICML), 13344–13355, 2022.
[42] Sohei Itahara, Takayuki Nishio, Yusuke Koda, Masahiro Morikura, and Koji Yamamoto. Distillation-based Semisupervised Federated Learning for Communication-efficient Collaborative Training with Non-iid Private Data. IEEE
Transactions on Mobile Computing (TMC), 22.1 (2021), 191–205.
[43] Ekdeep Lubana, Chi Ian Tang, Fahim Kawsar, Robert Dick, and Akhil Mathur. Orchestra: Unsupervised Federated
Learning via Globally Consistent Clustering. International Conference on Machine Learning (ICML), 14461–14484,
2022.
[44] Weiming Zhuang, Xin Gan, Yonggang Wen, Shuai Zhang, and Shuai Yi. Collaborative Unsupervised Visual Representation Learning from Decentralized Data. IEEE/CVF International Conference on Computer Vision (ICCV),
4912–4921, 2021.
[45] Ting Chen, Simon Kornblith, Mohammad Norouzi, Geoffrey Hinton. A Simple Framework for Contrastive Learning
of Visual Representations. International Conference on Machine Learning (ICML), 1597–1607, 2020.
-----
[46] Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum Contrast for Unsupervised Visual
Representation Learning. IEEE/CVF Conference on Computer Vision and Pattern Recognition, 9729–9738, 2020.
[47] Xinlei Chen and Kaiming He, Exploring Simple Siamese Representation Learning. IEEE/CVF Conference on
Computer Vision and Pattern Recognition, 15750–15758, 2021.
[48] Jean-Bastien Grill, Florian Strub, Florent Altch´e, Corentin Tallec, Pierre Richemond, Elena Buchatskaya, Carl Doersch, Bernado Avila Pires, and Others. Bootstrap Your Own Latent-A New Approach to Self-supervised Learning.
Advances in Neural Information Processing Systems, 21271–21284, 2020.
[49] Fengda Zhang, Kun Kuang, Zhaoyang You, Tao Shen, Jun Xiao, Yin Zhang, Chao Wu, Yueting Zhuang, and Xiaolin
[Li. Federated Unsupervised Representation Learning. arXiv prepring arXiv:2010.08982, 2020.](http://arxiv.org/abs/2010.08982)
[50] Weiming Zhuang, Yonggang Wen, and Shuai Zhang. Divergence-aware Federated Self-supervised Learning. International Conference on Learning Representations (ICLR), 2022.
[51] Sungwon Han, Sungwon Park, Fangzhao Wu, Sundong Kim, Chuhan Wu, Xing Xie, and Meeyoung Cha. FedX:
Unsupervised Federated Learning with Cross Knowledge Distillation. European Conference on Computer Vision
(ECCV), 691–707, 2022.
[52] Chaoyang He, Zhengyu Yang, Erum Mushtaq, Sunwoo Lee, Mahdi Soltanolkotabi, and Salman Avestimehr.
SSFL: Tackling Label Deficiency in Federated Learning via Personalized Self-supervision. arXiv preprint
[arXiv:2110.02470, 2021.](http://arxiv.org/abs/2110.02470)
[53] Alysa Ziying Tan, Han Yu, Lizhen Cui, and Qiang Yang. Towards Personalized Federated Learning. IEEE Transactions on Neural Networks and Learning Systems, Early Access, 2022.
[54] Tian Li, Shengyuan Hu, Ahmad Beirami, and Virginia Smith. Ditto: Fair and Robust Federated Learning through
Personalization. International Conference on Machine Learning (ICML), 6357–6368, 2021.
[55] Xingchao Peng, Zijun Huang, Yizhe Zhu, and Kate Saenko. Federated Adversarial Domain Adaptation. International
Conference on Learning Representations (ICLR), 2020.
[56] Chun-Han Yao, Boqing Gong, Hang Qi, Yin Cui, Yukun Zhu, and Ming-Hsuan Yang. Federated Multi-Target
Domain Adaptation. IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 1424–1433, 2022.
[57] Junyuan Hong, Zhuangdi Zhu, Shuyang Yu, Zhangyang Wang, Hiroko H. Dodge, and Jiayu Zhou. Federated Adversarial Debiasing for Fair and Transferable Representations. ACM SIGKDD Conference on Knowledge Discovery &
Data Mining (KDD), 617–627, 2021.
[58] Shaoduo Gan, Akhil Mathur, Anton Isopoussu, Fahim Kawsar, Nadia Berthouze, and Nicholas D. Lane. FRuDA:
Framework for Distributed Adversarial Domain Adaptation. IEEE Transactions on Parallel and Distributed Systems
(TPDS), 33.11 (2022), 3153–3164.
[59] Dashan Gao, Xin Yao, and Qiang Yang. A Survey on Heterogeneous Federated Transfer Learning. arXiv preprint
[arXiv:2210.04505, 2022.](http://arxiv.org/abs/2210.04505)
[60] Hua Kang, Zhiyang Li, and Qian Zhang. Communicational and Computational Efficient Federated Domain Adaptation. IEEE Transactions on Parallel and Distributed Systems (TPDS), 33.12 (2022), 3678–3689.
[61] Kuniaki Saito, Kohei Watanabe, Yoshitaka Ushiku, and Tatsuya Harada. Maximum Classifier Discrepancy for Unsupervised Domain Adaptation. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 3723–3732,
2018.
[62] Hongyi Zhang, Moustapha Cisse, Yann N. Dauphin, and David Lopez-Paz. MixUp: Beyond Empirical Risk Minimization. International Conference on Learning Representations (ICLR), 2017.
[63] Robert Tibshirani, Guenther Walther, and Trevor Hastie. Estimating the Number of Clusters in a Data Set via the
Gap Statistic. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 63.2 (2001), 411–423.
[64] Yoshinori Aono, Takuya Hayashi, Lihua Wang, Shiho Moriai, and Others. Privacy-preserving Deep Learning via
Additively Homomorphic Encryption. IEEE Transactions on Information Forensics and Security, 13.5 (2017), 1333–
1345.
-----
[65] Fan Lai, Yinwei Dai, Sanjay Singapuram, Jiachen Liu, Xiangfeng Zhu, Harsha Madhyastha, and Mosharaf Chowdhury. FedScale: Benchmarking Model and System Performance of Federated Learning at Scale. International
Conference on Machine Learning, 11814–11827, 2022.
[66] Sebastian Caldas, Sai Meher Karthik Duddu, Peter Wu, Tian Li, Jakub Koneˇcn`y, H. Brendan McMahan, Virginia
[Smith, and Ameet Talwalkar. LEAF: A Benchmark for Federated Settings arXiv preprint arXiv:1812.01097, 2018.](http://arxiv.org/abs/1812.01097)
[67] Yair Carmon, Aditi Raghunathan, Ludwig Schmidt, John C. Duchi, and Percy S. Liang. Unlabeled Data Improves
Adversarial Robustness. Advances in Neural Information Processing Systems (NeurIPS), 2019.
[68] Zhun Deng, Linjun Zhang, Amirata Ghorbani, and James Zou. Improving Adversarial Robustness via Unlabeled
Out-of-domain Data. International Conference on Artificial Intelligence and Statistics (AISTATS), 2845–2853, 2021.
[69] Alexey Kurakin, Ian J. Goodfellow, and Samy Bengio. Adversarial Examples in the Physical World. Artificial
Intelligence Safety and Security, 99–112, 2018.
[70] Jean-Baptiste Alayrac, Jonathan Uesato, Po-Sen Huang, Alhussein Fawzi, Robert Stanforth, and Pushmeet Kohli.
Are Labels Required for Improving Adversarial Robustness? Advances in Neural Information Processing Systems
(NeurIPS), 2019.
[71] Nicholas Papernot, Martin Abadi, Ulfar Erlingsson, Ian Goodfellow, and Kunal Talwar. Semi-supervised Knowledge
Transfer for Deep Learning from Private Training Data. Internation Conference on Learning Representations (ICLR),
2017.
[72] Cynthia Dwork and Aaron Roth. The Algorithmic Foundations of Differential Privacy. Foundations and Trends®in
Theoretical Computer Science, 9.3–4 (2014), 211–407.
[73] Martin Abadi, Andy Chu, Ian Goodfellow, H Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. Deep
Learning with Differential Privacy. ACM SIGSAC Conference on Computer and Communications Security (CCS),
308–318, 2016.
[74] Hsin-Ying Lee, Hung-Yu Tseng, Jia-Bin Huang, Maneesh Singh, and Ming-Hsuan Yang. Diverse Image-to-image
Translation via Disentangled Representations. European Conference on Computer Vision (ECCV), 35–51, 2018.
[75] Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. InfoGAN: Interpretable
Representation Learning by Information Maximizing Generative Adversarial Nets. Advances in Neural Information
Processing Systems (NeurIPS), 2016.
[76] Irina Higgins, Loic Matthey, Arka Pal, Christopher Burgess, Xavier Glorot, Matthew Botvinick, Shakir Mohamed,
and Alexander Lerchner. Beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework.
International Conference on Learning Representations (ICLR), 2017.
[77] Quande Liu, Cheng Chen, Jing Qin, Qi Dou, and Pheng-Ann Heng. FedDG: Federated Domain Generalization on
Medical Image Segmentation via Episodic Learning in Continuous Frequency Space. IEEE/CVF Conference on
Computer Vision and Pattern Recognition (CVPR), 1013–1023, 2021.
[78] A. Tuan Nguyen, Philip Torr, and Ser-Nam Lim. FedSR: A Simple and Effective Domain Generalization Method for
Federated Learning. Advances in Neural Information Processing Systems (NeurIPS), 2022.
[79] Dong-Hyun Lee. Pseudo-Label: The Simple and Efficient Semi-supervised Learning Method for Deep Neural Networks. Workshop on Challenges in Representation Learning, ICML, 2013.
[80] Mingsheng Long, Zhangjie Cao, Jianmin Wang, and Michael I. Jordan. Conditional Adversarial Domain Adaptation.
Advances in Neural Information Processing System (NIPS), 2018.
[81] Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards Deep
Learning Models Resistant to Adversarial Attacks. International Conference on Learning Representations (ICLR),
2018.
[82] Xin He, Kaiyong Zhao, Xiaowen Chu. AutoML: A Survey of the State-of-the-art. Knowledge-Based Systems, 212
(2021), 106622.
[83] Michael T. Rosenstein, Zvika Marx, Leslie Pack Kaelbling, and Thomas G. Dietterich. To Transfer or Not to Transfer.
NIPS 2005 Workshop on Transfer Learning, 2005.
-----
[84] Siwei Feng, Boyang Li, Han Yu, Yang Liu, and Qiang Yang. Semi-Supervised Federated Heterogeneous Transfer
Learning. Knowledge-Based Systems, 252 (2022), 109384.
[85] Yang Liu, Yan Kang, Chaoping Xing, Tianjian Chen, and Qiang Yang. A Secure Federated Transfer Learning
Framework. IEEE Intelligent Systems, 35.4 (2020), 70–82.
[86] Yan Kang, Yuanqin He, Jiahuan Luo, Tao Fan, Yang Liu, and Qiang Yang. Privacy-Preserving Federated Adversarial
Domain Adaptation over Feature Groups for Interpretability. IEEE Transactions on Big Data, 2022.
[87] Praneeth Vepakomma, Otkrist Gupta, Tristan Swedish, and Ramesh Raskar. Split Learning for Health: Distributed
[Deep Learning Without Sharing Raw Patient Data. arXiv preprint arXiv:1812.00564, 2018.](http://arxiv.org/abs/1812.00564)
[88] Tian Li, Anit Kumar Sahu, Ameet Talwalkar, and Virginia Smith. Federated Learning: Challenges, Methods, and
Future Directions. IEEE Signal Processing Magazine, 37.3 (2020), 50–60.
[89] Jianyu Wang, Zachary Charles, Zheng Xu, Gauri Joshi, H Brendan McMahan, Maruan Al-Shedivat, and Others. A
[Field Guide to Federated Optimization. arXiv preprint arXiv:2107.06917, 2021.](http://arxiv.org/abs/2107.06917)
[90] Qinbin Li, Yiqun Diao, Quan Chen, and Bingsheng He. Federated Learning on Non-iid Data Silos: An Experimental
Study. IEEE International Conference on Data Engineering (ICDE), 965–978, 2022.
[91] Hangyu Zhu, Jinjin Xu, Shiqing Liu, and Yaochu Jin. Federated Learning on Non-iid Data: A Survey. Neurocomputing, 465 (2021), 371–390.
[92] Jesper E. Van Engelen and Holger H. Hoos. A Survey on Semi-supervised Learning. Machine Learning, 109.2
(2020), 373–440.
[93] Fuzhen Zhuang, Zhiyuan Qi, Keyu Duan, Dongbo Xi, Yongchun Zhu, Hengshu Zhu, Hui Xiong, and Qing He. A
Comprehensive Survey on Transfer Learning. Proceedings of the IEEE, 109.1 (2020), 43–76.
-----
| 21,775
|
{
"disclaimer": "Notice: Paper or abstract available at https://arxiv.org/abs/2303.14453, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": null,
"status": "CLOSED",
"url": "http://arxiv.org/pdf/2303.14453"
}
| 2023
|
[
"JournalArticle",
"Review"
] | true
| 2023-03-25T00:00:00
|
[
{
"paperId": "e8b456c05c0aaa0d8c2e941001d3eede6ffe8595",
"title": "FedX: Unsupervised Federated Learning with Cross Knowledge Distillation"
},
{
"paperId": "cafc2226ee4acb64b0c1a9620bd5a96ceae5153f",
"title": "Private Semi-Supervised Federated Learning"
},
{
"paperId": "462061fe15a19e76b316755095aa7ae3febdac11",
"title": "Semi-Supervised Federated Heterogeneous Transfer Learning"
},
{
"paperId": "527abfac71e29321350c0249dbd4fca960c44c48",
"title": "Orchestra: Unsupervised Federated Learning via Globally Consistent Clustering"
},
{
"paperId": "7424e5bc67f6a848a7216153cb25ec5ff7a222af",
"title": "Enhancing Federated Learning with In-Cloud Unlabeled Data"
},
{
"paperId": "eb9104bb72f3856751d8c1b9c9573768d5f3df35",
"title": "Divergence-aware Federated Self-Supervised Learning"
},
{
"paperId": "ba6e14a84566387af24da80f5bb5812b8a3fa792",
"title": "RSCFed: Random Sampling Consensus Federated Semi-supervised Learning"
},
{
"paperId": "498d79cc61de1c0ea71de2e62a20b8aca3559d43",
"title": "FRuDA: Framework for Distributed Adversarial Domain Adaptation"
},
{
"paperId": "7f8b37f690c12ed190a7daef7f8a4dbcf0a4bddb",
"title": "Privacy-Preserving Federated Adversarial Domain Adaptation Over Feature Groups for Interpretability"
},
{
"paperId": "6351ebb4a3287f5f3e1273464b3b91e5df5a16d7",
"title": "Masked Autoencoders Are Scalable Vision Learners"
},
{
"paperId": "8273a0ae25ab9988c14fbb939d71bd44f197fc35",
"title": "SSFL: Tackling Label Deficiency in Federated Learning via Personalized Self-Supervision"
},
{
"paperId": "2c35be69e72c971d9e4953a00df99f630741e5eb",
"title": "Federated Multi-Target Domain Adaptation"
},
{
"paperId": "104ce1a96bf780ba4ffc44faf6935be4ec9e6ba8",
"title": "Collaborative Unsupervised Visual Representation Learning from Decentralized Data"
},
{
"paperId": "f42a8e15d11bcf5081b8e413410e9452e78d620a",
"title": "Federated Adversarial Debiasing for Fair and Transferable Representations"
},
{
"paperId": "412569269f13540080faa81620ea67eeb72f76b2",
"title": "A Field Guide to Federated Optimization"
},
{
"paperId": "6a7a7d38bf5b0e770bacd436e47d99ce7dd5dfbd",
"title": "Federated Learning with Positive and Unlabeled Data"
},
{
"paperId": "81c3c02f8d82c826833c7ac74746f65564930feb",
"title": "Federated Learning on Non-IID Data: A Survey"
},
{
"paperId": "759e354f11e50b40be85b82f4aeb87f235e956ed",
"title": "SemiFL: Semi-Supervised Federated Learning for Unlabeled Clients with Alternate Training"
},
{
"paperId": "f356b19619e2d9ebeb96839f0e87a07071248179",
"title": "FedScale: Benchmarking Model and System Performance of Federated Learning at Scale"
},
{
"paperId": "d6a6d8f56c5db24bfae7c614f21195dc0e89f4a8",
"title": "FedDG: Federated Domain Generalization on Medical Image Segmentation via Episodic Learning in Continuous Frequency Space"
},
{
"paperId": "481dd25896ac531707870c9b8c179cce20013401",
"title": "Towards Personalized Federated Learning"
},
{
"paperId": "dd30a98f5274541f018a762fba46a0730519606a",
"title": "Federated Learning on Non-IID Data Silos: An Experimental Study"
},
{
"paperId": "31949039a48961f939ac50440c3b4b8504fccceb",
"title": "Ditto: Fair and Robust Federated Learning Through Personalization"
},
{
"paperId": "0e23d2f14e7e56e81538f4a63e11689d8ac1eb9d",
"title": "Exploring Simple Siamese Representation Learning"
},
{
"paperId": "bf2ca8386bfc6a4c65a91f4628da7c49f931e9f2",
"title": "Federated unsupervised representation learning"
},
{
"paperId": "f9b3ed20d6da7dfbfbd8a58e2bde173e5e9c768c",
"title": "HeteroFL: Computation and Communication Efficient Federated Learning for Heterogeneous Clients"
},
{
"paperId": "10475b4d2467cfa15bb0fd149b34b295928002e4",
"title": "Distillation-Based Semi-Supervised Federated Learning for Communication-Efficient Collaborative Training With Non-IID Private Data"
},
{
"paperId": "cc8379abcda8faa78e2c5e17deb96785ea447461",
"title": "A Secure Federated Transfer Learning Framework"
},
{
"paperId": "51b61fbf3339433c7ffdbfa9c946185fb49317da",
"title": "Federated Semi-Supervised Learning with Inter-Client Consistency"
},
{
"paperId": "706f756b71f0bf51fc78d98f52c358b1a3aeef8e",
"title": "Self-Supervised Learning: Generative or Contrastive"
},
{
"paperId": "4c0b4fe0fb05daba6deb12cb042d8ba2829c853d",
"title": "Improving Adversarial Robustness via Unlabeled Out-of-Domain Data"
},
{
"paperId": "38f93092ece8eee9771e61c1edaf11b1293cae1b",
"title": "Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning"
},
{
"paperId": "698ab1cc02a79596a87f92d5a0882ab1a7aee266",
"title": "Inverting Gradients - How easy is it to break privacy in federated learning?"
},
{
"paperId": "7af72a461ed7cda180e7eab878efd5f35d79bbf4",
"title": "A Simple Framework for Contrastive Learning of Visual Representations"
},
{
"paperId": "07912741c6c96e6ad5b2c2d6c6c3b2de5c8a271b",
"title": "Advances and Open Problems in Federated Learning"
},
{
"paperId": "3448e3c55cf3b1f25aab4719eb094a95dbe7f05e",
"title": "A survey on semi-supervised learning"
},
{
"paperId": "add2f205338d70e10ce5e686df4a690e2851bdfc",
"title": "Momentum Contrast for Unsupervised Visual Representation Learning"
},
{
"paperId": "18d026ec5d0eebd17ee2c762da89540c0b3d7bde",
"title": "A Comprehensive Survey on Transfer Learning"
},
{
"paperId": "2f5e6ec8904e84738fdff37b39220c0c837529a1",
"title": "Federated Adversarial Domain Adaptation"
},
{
"paperId": "ba04a332de10e844752794c25c2dee35483d4ca3",
"title": "Semi-Supervised Learning"
},
{
"paperId": "fc7b1823bd8b59a590d0bc33bd7a145518fd71c5",
"title": "SCAFFOLD: Stochastic Controlled Averaging for Federated Learning"
},
{
"paperId": "0088cd8408c2a77101412f37bfada0a57669b8bc",
"title": "FedPAQ: A Communication-Efficient Federated Learning Method with Periodic Averaging and Quantization"
},
{
"paperId": "49bdeb07b045dd77f0bfe2b44436608770235a23",
"title": "Federated Learning: Challenges, Methods, and Future Directions"
},
{
"paperId": "3a7fa673ff8ec4ec2f322473de005f3cd09ea820",
"title": "AutoML: A Survey of the State-of-the-Art"
},
{
"paperId": "b3f1aa12dde233aaf543bb9ccb27213c494e0fd5",
"title": "Unlabeled Data Improves Adversarial Robustness"
},
{
"paperId": "6d12401822a24b2ff5542a7fa72158d891960c62",
"title": "Are Labels Required for Improving Adversarial Robustness?"
},
{
"paperId": "c42816f497d663c681df20d48a6e66a5632600d8",
"title": "MixMatch: A Holistic Approach to Semi-Supervised Learning"
},
{
"paperId": "4c94ee7df6bc2bfcac76703be4f059a79010f7e5",
"title": "Self-Supervised Visual Feature Learning With Deep Neural Networks: A Survey"
},
{
"paperId": "1284ed4bf6a043ecf8cebca09e4811f1e3b83b65",
"title": "Federated Optimization in Heterogeneous Networks"
},
{
"paperId": "bbabd8a25260bf2413befcd756077efa81b1c618",
"title": "Split learning for health: Distributed deep learning without sharing raw patient data"
},
{
"paperId": "8dcbcaaf337d7bd22e580f1bb7a795ed4bb604fd",
"title": "LEAF: A Benchmark for Federated Settings"
},
{
"paperId": "60bc358296ae11ac8f11286bba0a49ac7e797d26",
"title": "Diverse Image-to-Image Translation via Disentangled Representations"
},
{
"paperId": "9445423239efb633f5c15791a7abe352199ce678",
"title": "General Data Protection Regulation"
},
{
"paperId": "5cfc112c932e38df95a0ba35009688735d1a386b",
"title": "Federated Learning with Non-IID Data"
},
{
"paperId": "530a4ab0308bc98995ffd64207135ca0ae36db7f",
"title": "Privacy-Preserving Deep Learning via Additively Homomorphic Encryption"
},
{
"paperId": "aab368284210c1bb917ec2d31b84588e3d2d7eb4",
"title": "Unsupervised Representation Learning by Predicting Image Rotations"
},
{
"paperId": "0d725e4fea8bbaf332d6a8d424ebecbd547a3851",
"title": "Maximum Classifier Discrepancy for Unsupervised Domain Adaptation"
},
{
"paperId": "db0cc2f21b20cbc0ab8946090967399c25709614",
"title": "Practical Secure Aggregation for Privacy-Preserving Machine Learning"
},
{
"paperId": "4feef0fd284feb1233399b400eb897f59ec92755",
"title": "mixup: Beyond Empirical Risk Minimization"
},
{
"paperId": "7aa38b85fa8cba64d6a4010543f6695dbf5f1386",
"title": "Towards Deep Learning Models Resistant to Adversarial Attacks"
},
{
"paperId": "9171e83fb98299e14cbb3673437a0495a213767a",
"title": "Conditional Adversarial Domain Adaptation"
},
{
"paperId": "4b1c6f6521da545892f3f5dc39461584d4a27ec0",
"title": "Virtual Adversarial Training: A Regularization Method for Supervised and Semi-Supervised Learning"
},
{
"paperId": "1a0912bb76777469295bb2c059faee907e7f3258",
"title": "Mask R-CNN"
},
{
"paperId": "7493389667058116dbc7e808987f129325ee60d7",
"title": "Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results"
},
{
"paperId": "a90226c41b79f8b06007609f39f82757073641e2",
"title": "beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework"
},
{
"paperId": "e70b9a38fcf8373865dd6e7b45e45cca7ff2eaa9",
"title": "Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data"
},
{
"paperId": "efbd381493bb9636f489b965a2034d529cd56bcd",
"title": "Pointer Sentinel Mixture Models"
},
{
"paperId": "b544ca32b66b4c9c69bcfa00d63ee4b799d8ab6b",
"title": "Adversarial examples in the physical world"
},
{
"paperId": "e9a986c8ff6c2f381d026fe014f6aaa865f34da7",
"title": "Deep Learning with Differential Privacy"
},
{
"paperId": "eb7ee0bc355652654990bcf9f92f124688fde493",
"title": "InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets"
},
{
"paperId": "d1dbf643447405984eeef098b1b320dee0b3b8a7",
"title": "Communication-Efficient Learning of Deep Networks from Decentralized Data"
},
{
"paperId": "2c03df8b48bf3fa39054345bafabfeff15bfd11d",
"title": "Deep Residual Learning for Image Recognition"
},
{
"paperId": "0e6824e137847be0599bb0032e37042ed2ef5045",
"title": "Aligning Books and Movies: Towards Story-Like Visual Explanations by Watching Movies and Reading Books"
},
{
"paperId": "0a28efacb92d16e6e0dd4d87b5aca91b28be8853",
"title": "ActivityNet: A large-scale video benchmark for human activity understanding"
},
{
"paperId": "1d5972b32a9b5a455a6eef389de5b7fca25771ad",
"title": "Domain-Adversarial Training of Neural Networks"
},
{
"paperId": "fc1b1c9364c58ec406f494dd944b609a6a038ba6",
"title": "Unsupervised Visual Representation Learning by Context Prediction"
},
{
"paperId": "7340f090f8a0df5b109682e9f6d57e4b8ca1a2f7",
"title": "Learning Transferable Features with Deep Adaptation Networks"
},
{
"paperId": "081651b38ff7533550a3adfc1c00da333a8fe86c",
"title": "How transferable are features in deep neural networks?"
},
{
"paperId": "0023582fde36430c7e3ae81611a14e558c8f4bae",
"title": "The Algorithmic Foundations of Differential Privacy"
},
{
"paperId": "a25fbcbbae1e8f79c4360d26aa11a3abf1a11972",
"title": "A Survey on Transfer Learning"
},
{
"paperId": "d2c733e34d48784a37d717fe43d9e93277a8c53e",
"title": "ImageNet: A large-scale hierarchical image database"
},
{
"paperId": "a662a25c195d27df933e7236b583b0151ce54045",
"title": "European Parliament"
},
{
"paperId": "bbc1ad39d245ff3089542f7f9a7052d55107ac5b",
"title": "Communicational and Computational Efficient Federated Domain Adaptation"
},
{
"paperId": "75f38e2f68c089e73944581e9964944fa96b15d5",
"title": "FedSR: A Simple and Effective Domain Generalization Method for Federated Learning"
},
{
"paperId": "c783cdc03a32e5094affa7eef710459aac599aaf",
"title": "BatchCrypt: Efficient Homomorphic Encryption for Cross-Silo Federated Learning"
},
{
"paperId": "df2b0e26d0599ce3e70df8a9da02e51594e0e992",
"title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding"
},
{
"paperId": null,
"title": "State of California Department of Justice"
},
{
"paperId": "798d9840d2439a0e5d47bcf5d164aa46d5e7dc26",
"title": "Pseudo-Label : The Simple and Efficient Semi-Supervised Learning Method for Deep Neural Networks"
},
{
"paperId": "db8dbe07af7eebc1eed662be268592f00f4882e0",
"title": "To transfer or not to transfer"
},
{
"paperId": "89c8179cce5887300a8b588c86cfd3e6db0b2801",
"title": "Estimating the number of clusters in a dataset via the gap statistic"
},
{
"paperId": "529e2b6043ae282482ae435f10e1ba4bbcde81b3",
"title": "This paper is included in the Proceedings of the 15th USENIX Symposium on Operating Systems Design and Implementation."
},
{
"paperId": null,
"title": "Target Network Parameter"
}
] | 21,775
|
en
|
[
{
"category": "Medicine",
"source": "external"
},
{
"category": "Medicine",
"source": "s2-fos-model"
},
{
"category": "Environmental Science",
"source": "s2-fos-model"
},
{
"category": "Political Science",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/003dc0e124674827546850aff0a44ab131461ae8
|
[
"Medicine"
] | 0.899503
|
Public health emergency operation centres: status, gaps and areas for improvement in the Eastern Mediterranean Region
|
003dc0e124674827546850aff0a44ab131461ae8
|
BMJ Global Health
|
[
{
"authorId": "1791955545",
"name": "Osman Elmahal"
},
{
"authorId": "2045685388",
"name": "A. Abdullah"
},
{
"authorId": "1604258048",
"name": "Manal Elzalabany"
},
{
"authorId": "40221023",
"name": "H. Anan"
},
{
"authorId": "2312209476",
"name": "Dalia Samhouri"
},
{
"authorId": "2451702",
"name": "R. Brennan"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"BMJ Glob Health"
],
"alternate_urls": [
"https://gh.bmj.com/content/about-us"
],
"id": "654756cb-fd15-4e15-b2ee-f1639ff9b959",
"issn": "2059-7908",
"name": "BMJ Global Health",
"type": "journal",
"url": "https://gh.bmj.com/"
}
|
The functionality of Public Health Emergency Operations Centres (PHEOCs) in countries is vital to their response capacity. The article assesses the status of National PHEOCs in the 22 countries of the Eastern Mediterranean Region. We designed and administered an online survey between May and June 2021. Meetings and Key Informant Interviews were also conducted with the emergency focal points in the WHO country offices and with other select partners. We also collected data on PHEOCs from the Joint External Evaluations conducted in the Region between 2016 and 2018 in 18 countries, from intra-action review mission reports conducted in 11 countries to review the response to COVID-19 during May 2020–June 2021, and from other relevant mission reports. Only 12 countries reported having a PHEOC with varying levels of functionality, and 10 of them reported using the PHEOC for their response operations. This review formed the baseline of capacity requirements for the National PHEOC in each country and will facilitate identifying benchmarks and areas of improvement for future national, WHO and partner support.
|
# Public health emergency operation centres: status, gaps and areas for improvement in the Eastern Mediterranean Region
## Osman M Elmahal,[1] Ali Abdullah,[1] Manal K Elzalabany,[1] Huda Haidar Anan,[1] Dalia Samhouri,[1] Richard John Brennan[2]
**To cite: Elmahal OM, Abdullah A,**
Elzalabany MK, et al. Public
health emergency operation
centres: status, gaps and
areas for improvement in
the Eastern Mediterranean
Region. BMJ Global Health
2022;7:e008573. doi:10.1136/
bmjgh-2022-008573
**Handling editor Seye Abimbola**
Received 18 January 2022
Accepted 8 May 2022
© Author(s) (or their
employer(s)) 2022. Re-use
permitted under CC BY-NC. No
commercial re-use. See rights
and permissions. Published by
BMJ.
1Country Health Emergency
Preparedness and International
Health Regulations (CPI),
WHO Health Emergency
Programme (WHE), World Health
Organisation Regional Office
for the Eastern Mediterranean,
Cairo, Egypt
2WHO Health Emergency
Programme (WHE), World Health
Organisation Regional Office
for the Eastern Mediterranean,
Cairo, Egypt
**Correspondence to**
Dr Osman M Elmahal;
elmahalo@who.int
**ABSTRACT**
The functionality of Public Health Emergency Operations Centres (PHEOCs) in countries is vital to their response capacity. The article assesses the status of National PHEOCs in the 22 countries of the Eastern Mediterranean Region. We designed and administered an online survey between May and June 2021. Meetings and Key Informant Interviews were also conducted with the emergency focal points in the WHO country offices and with other select partners. We also collected data on PHEOCs from the Joint External Evaluations conducted in the Region between 2016 and 2018 in 18 countries, from intra-action review mission reports conducted in 11 countries to review the response to COVID-19 during May 2020–June 2021, and from other relevant mission reports. Only 12 countries reported having a PHEOC with varying levels of functionality, and 10 of them reported using the PHEOC for their response operations. This review formed the baseline of capacity requirements for the National PHEOC in each country and will facilitate identifying benchmarks and areas of improvement for future national, WHO and partner support.
**INTRODUCTION**
The Eastern Mediterranean Region (EMR) is composed of 22 countries. The region has a long history of public health crises and has suffered a myriad of diverse major emergencies; natural and ecological disasters and human-induced catastrophes, for example, have had a high and adverse impact on public health.[1] Many countries within the EMR have dedicated departments to manage disease outbreaks, catastrophic disasters and other types of emergencies.[2] Similarly, other countries have specialised departments that manage single hazards or specific diseases. Because these risks are usually managed in a siloed fashion, this approach can lead to unintended consequences and complications for the incident management system (IMS). In particular, a siloed approach to single or categorical health risks is poorly suited to supporting informed decision-making, which occurs when there is an open and free flow of data and critical information to the IMS housed within the PHEOC, informing the setting of the objectives needed to mitigate risks.
Consequently, most countries should adopt an integrated and holistic approach that considers their health emergency and disaster risk management profile and capabilities.[3 4] A transparent and holistic approach would be better suited to advance prevention, preparedness, readiness, response and recovery for these risks, in line with the intent of the International Health Regulations (IHR 2005) requirements.
The IHR (2005) serves as a legal framework for all States Parties to strengthen their public health capabilities.[5] Countries' capacities to manage health risks should therefore span the whole emergency cycle, from prevention, preparedness, readiness and response to recovery.[6] Health emergency management
-----
programmes within the health sector should be able to lead and coordinate related interventions. They must ensure that their programmes are streamlined and address identified priority health risks. This is normally achieved once a comprehensive risk assessment is completed and its results implemented.
In recent years, WHO has advocated for the adoption of PHEOCs and published several guiding documents promoting the establishment of PHEOCs and elaborating the requirements to establish and operate a PHEOC at the national level.[7–11] A PHEOC, as defined in the WHO PHEOC framework (2015), is ‘_a physical location for the coordination of information and resources to support incident management activities. Such a centre may be a temporary facility or may be established in a permanent location_’.[7] A PHEOC is thus a place where information and resources can be managed for all kinds of health risks. It facilitates the engagement of various stakeholders and ensures better management of information and resources during response operations to health emergencies and disasters.[7]
Understanding the status of the PHEOCs in the EMR is crucial to identify gaps and areas of support, and to better prioritise regional interventions. Such situational analysis at the regional level will help to craft priority regional interventions to support countries in the region. Countries should have functional PHEOCs able to manage all types of emergencies, from small-scale events such as localised foodborne outbreaks or road traffic accidents to large-scale events such as complex emergencies and the COVID-19 pandemic.
In this review, we assessed the current structure and
functionality of PHEOCs in the Region, and identified
gaps and potential areas for improvement, to build an
enhanced network of PHEOCs as an integral part of
national emergency management systems.
**EVALUATION OF PHEOCs IN THE EMR**
We adopted a mixed-methods research design to assess the National PHEOC in each of the 22 countries of the EMR. Firstly, we used the PHEOC-related data from the regional Joint External Evaluation (JEE), which was conducted in 18 countries of the region between 2016 and 2018.[12 13] The four main indicators in the Emergency Response Operations section (R.2.1–R.2.4) of the JEE were used as a proxy to examine the overall national PHEOC status in the region. These evaluations are valid for up to five years, as per the recommendations of the JEE framework.[13 14]
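To make the proxy concrete, the sketch below classifies countries by their JEE Emergency Response Operations scores. This is a hypothetical illustration rather than the authors' analysis code: the country names and scores are invented, and the only assumption carried over is the standard five-level JEE scale (1 = no capacity to 5 = sustainable capacity, with level 4 denoting demonstrated capacity).

```python
# Hypothetical sketch: using JEE Emergency Response Operations indicators
# (R.2.1-R.2.4) as a proxy for national PHEOC status. Scores follow the
# standard JEE scale (1 no capacity, 2 limited, 3 developed, 4 demonstrated,
# 5 sustainable); the country names and scores are invented placeholders.
jee_scores = {
    "Country A": {"R.2.1": 4, "R.2.2": 3, "R.2.3": 4, "R.2.4": 2},
    "Country B": {"R.2.1": 2, "R.2.2": 2, "R.2.3": 1, "R.2.4": 2},
    "Country C": {"R.2.1": 5, "R.2.2": 4, "R.2.3": 4, "R.2.4": 4},
}

def demonstrated(scores: dict, indicator: str) -> bool:
    """A capacity counts as 'demonstrated' at JEE score 4 or above."""
    return scores[indicator] >= 4

# Countries with demonstrated capacity to activate emergency operations (R.2.1)
activators = [c for c, s in jee_scores.items() if demonstrated(s, "R.2.1")]
print(activators)  # -> ['Country A', 'Country C']
```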
Secondly, we developed an online survey adapted from Annex 9 of the PHEOC framework,[7] which was completed in 2021 by official PHEOC focal points in 15 countries. The survey addressed the minimum PHEOC requirements, such as legal authority, a policy group and steering committee, plans and procedures, suitable physical space and information and telecommunication infrastructure, sufficient and trained human resources, and relevant information bodies.
Further, we used the results of the intra-action review reports conducted in 11 countries to review the response to COVID-19,[15] together with other relevant mission reports. Moreover, the data were further informed by national PHEOC status presentations during the PHEOC bi-regional meeting (EMR and AFR) held in April–May 2021, with participation from all 22 countries.
Finally, we conducted key informant interviews (KIIs) with emergency focal points in the WHO country offices and with other relevant partners about their PHEOC capacities. Informed consent was obtained, and we ensured that our results are regional rather than country-specific.
Descriptive quantitative analysis was used to analyse the survey data, mainly calculating frequencies and percentages of agreement with survey domains related to PHEOC status at the country level. Thematic analysis was used to analyse the KIIs and meetings with key stakeholders, identifying the main areas of agreement, gaps, challenges and opportunities for improvement.
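As a minimal sketch of the descriptive analysis just described (not the authors' actual code), the following assumes the survey export takes the form of one row per country and one yes/no column per survey domain; the column names and hard-coded responses are illustrative, chosen only to reproduce some of the figures reported below.

```python
# Minimal sketch: frequencies and percentages of agreement per survey domain,
# expressed against all 22 EMR countries. Column names and the hard-coded
# responses are illustrative assumptions, not the real survey data.
import pandas as pd

N_COUNTRIES = 22  # denominator used for the percentages reported in the text

# Stand-in for the survey export (in practice, e.g. pd.read_csv(...)):
survey = pd.DataFrame({
    "has_national_pheoc": [1] * 12 + [0] * 10,  # -> 12 (54.5%)
    "legal_instrument":   [1] * 9 + [0] * 13,   # -> 9 (40.9%)
    "steering_committee": [1] * 8 + [0] * 14,   # -> 8 (36.4%)
})

summary = pd.DataFrame({
    "n_agree": survey.sum(),
    "pct_of_22": (survey.sum() / N_COUNTRIES * 100).round(1),
})
print(summary)
```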
Even though not all countries have a functioning PHEOC, all 22 countries have some form of response mechanism in place. Only 12 (54.5%) reported an established national PHEOC, with varying levels of functionality.
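Note that the percentages reported here and throughout the results are expressed against all 22 EMR countries rather than against the 15 survey respondents, as the reported figures confirm; for example:

```latex
\[
  \frac{12}{22}\times 100 \approx 54.5\%, \qquad
  \frac{10}{22}\times 100 \approx 45.5\%, \qquad
  \frac{8}{22}\times 100 \approx 36.4\%.
\]
```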
Partner organisations have proved instrumental in facilitating and augmenting the functional capacities of the PHEOC in many countries. These partner organisations vary in category and type. A wide range of partner categories interacts with PHEOCs at the national level, for example, relevant departments within ministries of health, line ministries, UN agencies, non-governmental organisations, international non-governmental organisations and donors.
Ten of the National PHEOCs (45.5%) reported multiple uses of their PHEOC in response operations during the last year, mostly for infectious disease outbreaks (11 times) and for natural emergencies (6 times).
Political support and understanding were reported in the 12 countries with a National PHEOC. However, only 6 (27.3%) of the National PHEOCs have sufficient human and financial resources to run their response operations. The minimum requirements for routine staff are met in only 8 (36.4%) countries. Eleven of the PHEOCs can identify and contact a roster of trained personnel, while only 6 PHEOCs have a dedicated training programme and a comprehensive, progressive exercise programme. Only 5 (22.7%) countries reported that training and exercise programmes are primary components of a performance monitoring and evaluation system and that their staff are routinely trained. Eight (36.4%) countries reported that their staff can activate and mount a response within 120 minutes of detecting an event and are available to fulfil key PHEOC roles 24/7. Half of the 12 National PHEOCs reported that their staff did not receive formal training in public health emergency management. Just over one-third of the countries (n=8) have an established training programme with follow-up documentation supporting training activities.
-----
Nine countries (40.9%) report having approved and enacted legal instruments for their PHEOC. The PHEOC is reported to sit within the health sector organogram in 10 (45.5%) countries. PHEOCs are supported by some form of legal instrument in 11 and 9 countries at the national and sub-national levels, respectively. Only 8 countries (36.4%) reported using a government-approved legal instrument to define the governance structure, core functions and scope of PHEOC authority and operations. Eleven of the national PHEOCs did not conduct legal framework mapping of existing laws and regulations, which helps to avoid conflicts with other relevant authorities, including any laws implicated for repeal, amendment or transfer of prior authorities. Nine countries had agreed upon the relationship between the Ministry of Health (MoH), the PHEOC, and the National Disaster Management Organization and/or other ministries, agencies and sectors before, during and after public health emergencies.
A policy group to provide strategic/policy guidance to the PHEOC was established in 10 PHEOCs (45.5%). Furthermore, a steering committee of PHEOC stakeholders to supervise the planning and development of the PHEOC was established in 8 countries (36.4%), with membership comprising key PHEOC stakeholders and users.
An all-hazards national public health emergency response plan, including the concept of operations and addressing priority risks, has been developed and approved in 7 countries (31.8%). A plan defining the roles and engagement of various stakeholders from outside the MoH is reported in 9 countries (40.9%). Only five (22.7%) of the PHEOCs reported the presence of business continuity plans. Seven (31.8%) PHEOCs have existing notification, reporting, engagement and coordination requirements, and coordinate with law enforcement and national security agencies when needed. PHEOC manuals or handbooks for management and operations were developed in 8 countries (36.4%), with integrated procedures and protocols that align with those of the existing MoH or overarching agency. Half of the countries (n=11) reported having a clear operational structure comprising management, operations, planning, logistics, finance and administration, or a similar organisation chart in place.
Nine (40.9%) of the established PHEOCs rely on electronic solutions to support at least one aspect of PHEOC information management, and in 5 (22.7%) of those national PHEOCs the solutions are government-owned. Eleven countries have a dedicated PHEOC facility with adequate space for management, operations, planning, logistics and finance to support routine and response activities. In terms of information and communication technology (ICT), 10 countries (45.5%) have appropriate teleconferencing, 11 countries (50%) have sufficient computer workstations, 7 countries (31.8%) have anti-virus and cyber security protocols, 8 countries (36.4%) have audiovisual functionality, 9 countries (40.9%) have sufficient electricity, and 7 countries (31.8%) have sufficiently tested telephonic and/or interoperable radio communications. Sufficient internet access and capacity were reported in 11 PHEOCs, but only 5 PHEOCs had interoperability of their communication means, e.g. radio, telephone and fax. A hotline for receiving emergency calls and alerts is also present in 11 countries (50.0%). Not all PHEOCs have sufficient office equipment, such as printers, copiers, fax machines, and scanners or digital senders, that is maintained and functional; only 9 PHEOCs reported having sufficient office equipment. Appropriate security and identification protocols were implemented in only 9 PHEOCs.
Half of the countries (n=11) do not have a direct link to the national surveillance systems through which essential data flow systematically to the PHEOC from relevant sectors, while the other 11 countries can collect and manage operational information. Access to essential contextual information, such as road networks and demography (GIS data), is available in 6 (27.3%) countries. Only 7 countries (31.8%) reported the availability of visual data dashboards to convey a concise picture of the situation or response activities.
JEE reports indicate that three countries have developed or demonstrated capacities to activate an emergency response as described in the JEE tool. Only two countries have the required plans and procedures to run a fully functioning PHEOC. Similarly, three countries reported ‘demonstrated capacities’ for emergency operations programmes, as well as for case management procedures and their implementation for IHR-relevant hazards, as stated in the JEE scores.
PHEOCs are still at an infancy stage in this region. However, they seem to be slowly gaining traction, as almost half of the countries now have an active PHEOC. Moreover, the ease of activating a PHEOC for response operations for various types of emergencies is also gaining recognition. The PHEOC needs to be positioned at the heart of response operations.[7 11] As multisectoral coordination platforms, PHEOCs have expanded their stakeholder base to include all major response players at the national level.
A legal framework is a prerequisite to establishing a PHEOC and ensuring its functionality, as stated in the WHO PHEOC framework.[7 8] Developing such a legal framework is a demanding process and requires strong political support. It should start with defining the purpose, scope, concept of operations, and roles and responsibilities of the PHEOC.[7 8] Mapping the existing public health-related legal instruments within and outside the health sector is mandatory to avoid any conflict with other authorities.[7 8] Our analysis shows that these requirements were not fully met in the current PHEOCs and may represent a challenge for establishing a new PHEOC.
Many of the PHEOCs do have some form of overarching body that provides strategic direction for PHEOC response operations.[7 11] However, the members of such bodies need a sound understanding of the PHEOC's legal framework and its concept of operations to provide better PHEOC guidance. Also, there is a big gap in overseeing
-----
PHEOC functions during peacetime, as almost half of the PHEOCs do not have active steering committees. The steering committee ensures that PHEOC capacity matches the health risks on the ground and facilitates resource mobilisation to build PHEOC capacity. The absence of a steering committee could be due to the lack of involvement of MoH leaders in establishing the PHEOC and to its positioning as a siloed programme within the MoH.[3 7 11] A PHEOC may be seen as a threat by many departments working in response, which could lead to power struggles and competition over resources. Therefore, a steering committee involving all relevant stakeholders will ensure the right positioning of the PHEOC and increase its acceptance within the MoH and the health sector.
It is apparent from the analysis that there is a big gap regarding plans and operational documents for the PHEOCs. The added value of a PHEOC is a more structured, organised and predictable response.[3 7 11] This will only be achieved if the PHEOC has sufficient strategic and operational documents to lead its operations. PHEOC plans and procedures should include a clear concept of operations and detailed operational documents, such as response plans, standard operating procedures (SOPs) and protocols, that are regularly tested, reviewed, updated and well communicated to all stakeholders.[7 9 11] Further, developing such documents requires substantial technical experience and is time-consuming.[11] Many of the PHEOC staff reported a lack of technical expertise to develop such documents, a lack of time to do so, or both. In addition, these documents should reflect the engagement of all stakeholders, and their participation in the approval process is crucial.[11] Their approval will facilitate engagement and ensure the PHEOC is the right platform to coordinate the efforts of all stakeholders.
Although PHEOC infrastructure is expensive, it is the most common investment made to establish a national PHEOC. Dedicated PHEOC buildings with elaborate ICT infrastructure have misled policy-makers and even technical staff into believing that the building alone represents a functioning PHEOC. Such misconceptions need to be rectified to ensure that the physical structure does not overshadow the importance of the rest of the PHEOC.[7] The massive one-off investment of building or renting a dedicated building and infrastructure prevents many countries from establishing a functioning PHEOC.[3] The use of existing multipurpose rooms, or even the adoption of a virtual PHEOC, could help countries overcome such investment challenges.[3 7] In the era of IT advancement, many solutions are emerging to cut the cost of physical and infrastructure investment. COVID-19 also played a catalytic role in accelerating such IT advancement and its acceptance by users as the new norm. Countries should consider such solutions to help them overcome the relatively high investment cost of a PHEOC's physical infrastructure.
Information management is one of the main gaps facing PHEOCs in the region. Access to surveillance and contextual data is severely limited, diminishing the PHEOC's ability to portray an accurate response picture and produce the right recommendations for decision-makers.[3 7 11] This could be linked to poor PHEOC positioning within the health sector, as mentioned above, and/or weak governance (legal framework and steering committee).[7 8] On the other hand, the vast influx of data during a response makes it extremely difficult to analyse and produce meaningful information in a timely fashion. This increases the need for automated information systems that can collect, analyse and report dynamic real-time information in a timely manner.[7] Such investments will make it easier for decision-makers within the PHEOC to make timely, informed decisions. Further, an automated information system will facilitate documentation and provide quality data for system intra-action/after-action reviews and staff accountability.[7]
Generally, human resources are among the most precious and scarce resources in the region, in terms of both numbers and skill mix.[16] The situation is even worse for staff working in emergencies, owing to the increasing demand for such cadres in the region and the poor remuneration and working conditions at the national level resulting from the economic hardship of those countries.[16] A PHEOC is a complex unit of work and requires staff with a wide range of competencies, given the dynamic nature of emergencies.[3 7 10 11] Staff are required to have a combination of competencies to address multiple functions and tasks.[7 10 11] Moreover, it is a very stressful working environment, which is physically and mentally demanding on staff. Staff working in a PHEOC need well-defined terms of reference (ToRs), clear working SOPs, and a regular training programme that equips them with the right competencies to perform their duties.[7 10 11] This should be complemented by a transparent accountability mechanism that creates and maintains a conducive environment.[7 10 11]
**CONCLUSION**
The establishment and operationalisation of a PHEOC have prerequisites.[7] A PHEOC needs strong governance in place, in terms of a legal framework and governing bodies (a steering committee and a policy group).[7 8 11] Weak governance was found to be one of the biggest challenges for countries that want to develop or operate a PHEOC.[7 8] Countries need to invest more in advocating for PHEOCs and in constructing effective governance and a sound legal framework. Positioning the PHEOC within the health sector should involve all relevant stakeholders from the inception phase, to guarantee a better understanding of its benefits and use and to ensure acceptability and involvement.[7 11] Investment priorities should also be reviewed, as most are skewed towards physical infrastructure at the expense of the other key elements.
-----
In summary, the PHEOC has been proven globally as a smart solution for managing emergencies in regions like the EMR.[3 7 11] PHEOCs have helped many countries achieve a robust response mechanism for all types of hazards. EMR countries need support to ensure they have enough enablers to establish and operate PHEOCs. At the same time, this support must be balanced across all PHEOC elements. WHO has invested in its capacities to have the required technical expertise to support countries in establishing and operating their PHEOCs. It is high time for countries to tap into such support and leverage the momentum to establish and operate their PHEOCs.
**Acknowledgements** The authors would like to acknowledge all efforts of the
national PHEOC coordinators and staff, directors of emergency departments,
national officers managing planning, operations, administration, finance, logistics,
communication, coordination, security, alert and surveillance, information
technology in countries of the Region, emergency focal points in the WHO country
offices and local partners for completing the survey and being part of the key
informant interviews as key sources for data collection. Specific thanks go to all colleagues who contributed to the different joint external evaluation reports and intra-action review reports that have been used as additional sources for data collection. We also extend our acknowledgement to WHO AFRO, alongside WHO HQ, Africa CDC, US CDC, the UK Health Security Agency, the Robert Koch Institute, the West African Health Organisation and the European CDC, for supporting the organisation of the bi-regional PHEOC meeting and the development of the country-level PHEOC assessment tool used in our data collection.
**Contributors** OME, AA and DS conceptualised the study. OME and AA developed
the study design. AA and MKE collected the data. MKE analysed the data. OME
wrote the first draft. OME, AA and MKE reviewed all results. OME, AA, MKE, HHA,
DS and RJB edited the draft and approved the final manuscript for submission. The
author(s) read and approved the final manuscript.
**Funding** The authors have not declared a specific grant for this research from any
funding agency in the public, commercial or not-for-profit sectors.
**Competing interests** None declared.
**REFERENCES**
1 WHO Regional Office for Eastern Mediterranean. The work of WHO in the Eastern Mediterranean Region: Annual report of the Regional Director 2019. Cairo; 2019. https://applications.emro.who.int/docs/9789290223467-eng.pdf?ua=1
2 WHO Regional Office for Eastern Mediterranean. The work of WHO in the Eastern Mediterranean Region: Annual report of the Regional Director 2018. Cairo; 2018. https://applications.emro.who.int/docs/9789290222781-2019-en.pdf?ua=1
3 World Health Organization. A systematic review of public health emergency operations centres (EOCs). Geneva; 2013. https://apps.who.int/iris/bitstream/handle/10665/99043/WHO_HSE_GCR_2014.1_eng.pdf
4 Freedman AM, Mindlin M, Morley C, et al. Addressing the gap between public health emergency planning and incident response. Disaster Health 2013;1:13–20. doi:10.4161/dish.21580
5 World Health Organization. International Health Regulations (2005), third edition. Geneva; 2005. https://www.who.int/publications/i/item/9789241580496
6 US Centers for Disease Control & Prevention. Public health emergency preparedness and response capabilities: national standards for state, local, tribal, and territorial public health, 2018. https://www.cdc.gov/cpr/readiness/00_docs/CDC_PreparednesResponseCapabilities_October2018_Final_508.pdf
7 World Health Organization. Framework for a public health emergency operations centre. Geneva; 2015. https://www.who.int/publications/i/item/framework-for-a-public-health-emergency-operations-centre
8 WHO Regional Office for Africa. Public Health Emergency Operations Center (PHEOC) legal framework guide: a guide for the development of a legal framework to authorize the establishment and operationalization of a PHEOC. Brazzaville; 2021. https://www.afro.who.int/publications/public-health-emergency-operations-center-pheoc-legal-framework-guide-guide
9 World Health Organization. Handbook for developing a public health emergency operations centre: Part A. Geneva; 2018. https://www.who.int/publications/i/item/handbook-for-developing-a-public-health-emergency-operations-centre-part-a
10 World Health Organization. Handbook for developing a public health emergency operations centre: Part C. Geneva; 2018. https://www.who.int/publications/i/item/handbook-for-developing-a-public-health-emergency-operations-centre-part-c
11 WHO Regional Office for Africa. Handbook for public health emergency operations center operations and management. Brazzaville; 2021. https://www.afro.who.int/sites/default/files/2021-03/AFRO_PHEOC-Handbook_.pdf
12 WHO Regional Office for Eastern Mediterranean. Progress report on emergencies and the International Health Regulations (2005) in the Eastern Mediterranean Region. Cairo; 2019. https://applications.emro.who.int/docs/RC_Technical_Papers_2019_Inf_Doc_8_en.pdf?ua=1
13 World Health Organization. Joint external evaluation tool: International Health Regulations (2005), second edition. Geneva; 2018. https://extranet.who.int/sph/sites/default/files/document-library/document/9789241550222-eng.pdf
14 Samhouri D, Ijaz K, Rashidian A, et al. Analysis of joint external
[evaluations in the who eastern Mediterranean region. East Mediterr](http://dx.doi.org/10.26719/2018.24.5.477)
_[Health J 2018;24:477–87.](http://dx.doi.org/10.26719/2018.24.5.477)_
15 World Health Organization. After Action Reviews and Simulation
Exercises under the International Health Regulations 2005 M&E
[Framework (IHR MEF). Geneva; 2018. https://apps.who.int/iris/](https://apps.who.int/iris/bitstream/handle/10665/276175/WHO-WHE-CPI-2018.48-eng.pdf)
[bitstream/handle/10665/276175/WHO-WHE-CPI-2018.48-eng.pdf](https://apps.who.int/iris/bitstream/handle/10665/276175/WHO-WHE-CPI-2018.48-eng.pdf)
16 WHO Regional Office for Eastern Mediterranean. Framework
for action for health workforce development in the eastern
[Mediterranean region 2017-2030. Cairo; 2018. https://applications.](https://applications.emro.who.int/docs/EMROPub_2018_EN_20314.pdf?ua=1)
[emro.who.int/docs/EMROPub_2018_EN_20314.pdf?ua=1](https://applications.emro.who.int/docs/EMROPub_2018_EN_20314.pdf?ua=1)
**Patient consent for publication** Not applicable.
**Ethics approval** Not applicable.
**Provenance and peer review** Not commissioned; externally peer reviewed.
**Data availability statement** Data are available on reasonable request.
**Open access** This is an open access article distributed in accordance with the
Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which
permits others to distribute, remix, adapt, build upon this work non-commercially,
and license their derivative works on different terms, provided the original work is
properly cited, appropriate credit is given, any changes made indicated, and the
[use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.](http://creativecommons.org/licenses/by-nc/4.0/)
-----
| 8,038
|
{
"disclaimer": "Notice: Paper or abstract available at https://pmc.ncbi.nlm.nih.gov/articles/PMC9240820, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "CCBYNC",
"status": "GOLD",
"url": "https://gh.bmj.com/content/bmjgh/7/Suppl_4/e008573.full.pdf"
}
| 2,022
|
[
"JournalArticle",
"Review"
] | true
| 2022-06-01T00:00:00
|
[
{
"paperId": "2851a47dd889749b1249fa898598b72391cdf04a",
"title": "Analysis of Joint External Evaluations in the WHO Eastern Mediterranean Region."
},
{
"paperId": "d9f8541c3574fce5cdb20b038de0ac8ed55f893c",
"title": "Addressing the gap between public health emergency planning and incident response"
},
{
"paperId": "5ac1a8b24748619e6d96fe5d2705e93d931a2f40",
"title": "WHO Regional Office for the Eastern Mediterranean"
},
{
"paperId": "8943625f88f49ba563900a21915f40476206dba9",
"title": "THE WORLD HEALTH ORGANIZATION"
},
{
"paperId": null,
"title": "Legal Framework Guide: A Guide for the Development of a Legal Framework to Authorize the Establishment and Operationalization of a PHEOC. Brazzaville"
},
{
"paperId": null,
"title": "Cairo; 2019"
},
{
"paperId": null,
"title": ". Progress report on emergencies and the International health regulations (2005) in the eastern Mediterranean region. Cairo"
},
{
"paperId": null,
"title": "Director 2019. Cairo"
},
{
"paperId": null,
"title": "The work of WHO in the Eastern Mediterranean Region: Annual report of the Regional Director"
},
{
"paperId": null,
"title": "After Action Reviews and Simulation Exercises under the International Health Regulations 2005 M&E Framework (IHR MEF)"
},
{
"paperId": null,
"title": "16 WHO Regional Office for Eastern Mediterranean. Framework for action for health workforce development in the eastern Mediterranean region 2017-2030."
},
{
"paperId": null,
"title": "Public health emergency preparedness and response capabilities: national standards for state, local, tribal, and territorial public health"
},
{
"paperId": null,
"title": "Handbook for developing a public health emergency operations centre: Part a"
},
{
"paperId": null,
"title": "Joint external evaluation tool: international health regulations (2005), second edition"
},
{
"paperId": null,
"title": "International Health Regulations (2005) Third Edition. Geneva"
},
{
"paperId": null,
"title": "Many countries have established PHEOCs, and some were successful in utilizing them in their COVID-19 response alongside other emergencies"
}
] | 8,038
|
en
|
[
{
"category": "Engineering",
"source": "external"
},
{
"category": "Engineering",
"source": "s2-fos-model"
},
{
"category": "Environmental Science",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/003e6ceefff5c8a6799167229999c33a0c666349
|
[
"Engineering"
] | 0.865556
|
Modeling, Control & Fault Management of Microgrids
|
003e6ceefff5c8a6799167229999c33a0c666349
|
[
{
"authorId": "70545068",
"name": "M. Moradian"
},
{
"authorId": "97977850",
"name": "Faramarz Mahdavi Tabatabaei"
},
{
"authorId": "34412697",
"name": "S. Moradian"
}
] |
{
"alternate_issns": null,
"alternate_names": null,
"alternate_urls": null,
"id": null,
"issn": null,
"name": null,
"type": null,
"url": null
}
|
In this paper, modeling and decentralize control principles of a MicroGrid (MG) whom equipped with three Distributed Generation (DG) systems (consist of: Solar Cell System (SCS), MicroTurbine System (MTS) and Wind Energy Conversion System (WECS)) is simulated. Three arrangement of load changing have investigated for the system. In first one the system doesn’t have transfer of power between MG and grid. In other two arrangements system have transfer of power between MG and utility grid. Of course in third case transfer of power between DG resources is considerable. Case study system is equipped by energy storage devices (battery bank) for each DG’s separately by means of increasing the MG reliability. For WECS and SCS, MPPT control and for MTS, voltage and frequency (V&F) controller has designed. The purpose of this paper is load respond in MG and storage process of surplus energy by consider of load changing. MATLAB/Simulink and its libraries (mainly the Sim Power Systems toolbox) were employed in order to develop a simulation platform suitable for identifying MG control requirements. This paper reported a control and op- eration of MG in network tension by applying a three phase fault.
|
**_Smart Grid and Renewable Energy, 2013, 4, 99-112_**
http://dx.doi.org/10.4236/sgre.2013.41013 Published Online February 2013 (http://www.scirp.org/journal/sgre)
# Modeling, Control & Fault Management of Microgrids
### Mehdi Moradian¹, Faramarz Mahdavi Tabatabaei², Sajad Moradian³
¹Department of Electrical Engineering, Sahand University of Technology, Tabriz, Iran; ²Saman Gostar Company (Distributor of
SANTERNO, Italy), Tehran, Iran; ³Department of Electrical Engineering, Iran University of Science and Technology, Tehran, Iran.
Email: [email protected], [email protected], [email protected]
Received September 11th, 2012; revised November 13th, 2012; accepted November 20th, 2012
## ABSTRACT
In this paper, the modeling and decentralized control principles of a MicroGrid (MG) equipped with three Distributed
Generation (DG) systems (a Solar Cell System (SCS), a MicroTurbine System (MTS) and a Wind Energy Conversion System (WECS)) are simulated. Three load-change arrangements are investigated for the system. In the first, there is no power transfer between the MG and the grid; in the other two, power is transferred between the MG and the utility grid, and in the third case the power transferred between the DG resources is also considerable. The case-study system is equipped with energy storage devices (battery banks) for each DG separately, in order to increase the reliability of the MG. MPPT control is designed for the WECS and SCS, and a voltage and frequency (V&F) controller for the MTS. The purpose of this paper is to study the load response in the MG and the storage of surplus energy under load changes. MATLAB/Simulink and its libraries (mainly the SimPowerSystems toolbox) were employed in order to develop a simulation platform suitable for identifying MG control requirements. The paper also reports the control and operation of the MG under a network disturbance, by applying a three-phase fault.
**Keywords: Microgrid; Decentralize Control; Wind Energy Conversion System; Microturbine; Solar Cell**
## 1. Introduction
The microgrid concept has been widely developed, investigated, and implemented in countries such as the USA, Canada, Japan and the UK [1,2]. The growth in research is due to the benefits of this type of network, including the reliability and security of the network and its loads, high efficiency, environmental friendliness and self-healing [3]. Today's power systems face very large problems, including the cost of electricity production and the depletion of fossil fuels; in addition, the increasing pollution created by burning oil and gas and the dramatic growth of demand have increased greenhouse gases in the air, which is considered a big threat to the ozone layer. Because electricity from distributed resources costs much less, and in some cases the primary energy is free (e.g., wind and sun), today's power systems use distributed energy resources. Another advantage of the smart grid that should be mentioned is the reduction of transmission-line losses: one of the goals in smart grids is to produce power with distributed energy resources and to remove central power plants as much as possible, so that the line power flow and its corresponding losses can be reduced to an acceptable level. In recent years, much research on the structure, control and implementation of smart grids has been carried out in laboratories. The control of an MG and the performance of its energy storage are closely related, which is reviewed in this paper: with energy storage, MG control becomes easier and system reliability can be increased.
A MG is a connection of distributed energy resources such as wind energy conversion systems, microturbines, fuel cells, PV arrays and Combined Heat and Power (CHP), together with energy storage elements such as flywheels, batteries or Uninterruptible Power Supplies (UPS) and power capacitors, in low voltage power systems [4].
Basic structure of a typical MG is shown and discussed in [5].
It is assumed here that the distributed generation sources have the ability to supply the loads. After the network fault disappears, a synchronization operation is performed and the islanded network is connected to the utility power source again [6].
In reference [7] a very simple scheme of a MG with three DG resources was studied. That article, published in 2010, did not mention or analyze the structure of the DG resources, the controllers of each micro-source, the occurrence of faults in the grid and the reaction of the MG to such sudden events, or the power transferred between DGs.
Note that decentralized control means that each DG resource has its own independent controller and performs its control operations independently; the applied controllers may be of different or even similar types. Fault events that may lead to the islanding of a distribution system are discussed in [8].
The work described in this paper concerns the simulation and control of a MG equipped with three distributed energy storage devices and local loads. The distributed generators used in this paper are a WECS and an SCS (with MPPT control strategies) and an MTS (with a voltage and frequency control strategy). The robustness of the tested control strategies is studied for disturbances taking place in the utility network, followed by a forced islanding of the MG. Experimental tests for islanding and synchronization were presented in [9]. Islanding of the MG can take place through unplanned events, such as faults in the utility network, or through planned actions, such as maintenance requirements.
## 2. Structure of Distributed Generation Systems
Figure 1 shows the general structure of a micro-distribution network. The input power produced by the distributed generation resources is converted into the electrical energy required by the network and the loads. The control tasks are divided into two parts:
1) Input-side controller: it should extract the maximum power from the input source; naturally, the protection of the input-side converter must also be considered.
2) Grid-side controller: it performs the following tasks: a) control of the active power delivered to the network; b) control of the reactive power transferred between the network and the microgrid; c) DC-link voltage control; d) synchronization with the network; e) assurance of the power quality injected into the network. Generally the network-side converter is a VSI, in which both the amplitude and the phase of the output voltage are controlled. All the items listed above are basic features that these grid-side converters should have.
3) Microgrid internal structure: the system studied in this article includes PV and MT distributed generation resources, whose structures and relevant controllers are shown below. It is noteworthy that PV and Fuel Cell (FC) systems have similar hardware structures [10,11].
3.1) Photovoltaic and FC systems: as previously noted, the PV and FC hardware structures are similar. Although the voltage and current of a single FC or PV cell are low, binding a set of them together increases the production level, and the voltage level can be raised or lowered with DC-DC converters, such as a boost converter for increasing the voltage.
The nonlinear V–I relationship is obtained from the following equation [12]:

V_pv = N·λ·ln[(I_SC − i_pv + M·I_o)/(M·I_o)] − (N·R_S/M)·i_pv (1)

where I_SC is the short-circuit current, I_o is the reverse saturation current, R_S is the series resistance, and λ is a constant factor that depends on the cell material. In this paper a silicon solar panel (M = 1, N = 36) has been used; the sample module is produced by the Iranian Optical Fiber Fabrication Company (OFFC), whose coefficients and parameters are listed in Table A1 [13].
With these values, Equation (1) becomes:

V_pv = 1.767·ln[(I_SC − i_pv + 0.00005)/0.00005] − i_pv (2)

**Figure 1. Topology of smart systems control.**
The nonlinear V–I and P–I characteristics are shown in Figure 2, where P_MP and V_MP denote the power and voltage at the maximum power point of the PV cell.
As the temperature changes, the coefficients change as well [12]. Two samples of these changes are estimated in Equations (3-a) (70˚C) and (3-b) (−20˚C):

V_pv = 1.69·ln[(3.005 − i_pv + 0.00024)/0.00024] − i_pv (3-a)

V_pv = 1.82·ln[(2.83 − i_pv + 0.00001)/0.00001] − i_pv (3-b)
To apply the MPPT technique to the PV we follow [14]: the cell voltage corresponding to maximum power production shows a dependency on the open-circuit voltage over a range of temperatures:

V_MP = M_V·V_OC (4)

This equation expresses the voltage-based MPPT technique, where M_V is called the voltage factor; for the OFFC module its value is taken as 0.71. This method of estimating the maximum power point is simple and fast.
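To make the voltage-based method concrete, the following minimal Python sketch (not part of the original paper) evaluates the panel model of Equation (2) and the MPPT estimate of Equation (4); the short-circuit current used in the example run is an assumed operating point.

```python
# Minimal sketch of the OFFC panel model (Equation (2)) and the
# voltage-based MPPT rule V_MP = M_V * V_OC (Equation (4)).
import math

N_LAMBDA = 1.767   # N*lambda coefficient appearing in Equation (2)
M_I0 = 0.00005     # M * I_o, reverse saturation term [A]
M_V = 0.71         # voltage factor for the OFFC module

def v_pv(i_pv, i_sc):
    """Panel voltage at panel current i_pv, per Equation (2)."""
    return N_LAMBDA * math.log((i_sc - i_pv + M_I0) / M_I0) - i_pv

def v_mp_estimate(i_sc):
    """Voltage-based MPPT: maximum-power voltage from the open-circuit voltage."""
    v_oc = v_pv(0.0, i_sc)   # open-circuit voltage (i_pv = 0)
    return M_V * v_oc        # Equation (4)

if __name__ == "__main__":
    i_sc = 3.0               # assumed short-circuit current [A]
    print(f"V_OC  = {v_pv(0.0, i_sc):.2f} V")
    print(f"V_MP ~= {v_mp_estimate(i_sc):.2f} V")
```

Because Equation (4) needs only V_OC and a single multiplication, the tracker can update its set point as fast as the open-circuit voltage can be sampled.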
The equivalent circuit of the PV cell block is shown in Figure 3(a), in which the equation of the nonlinear V–I relationship is embedded. A delay function is also used to limit the rapid current response of the voltage-controlled source and to improve the convergence of the solution.
The equivalent circuit of the VMPPT block is shown in Figure 3(b). This block calculates the open-circuit voltage (using I_SC and Equation (2)), compares it with the PV output voltage, and produces the firing command for the PWM block. The delay shown here is used for the same reason as in Figure 3(a).
**Figure 2. Non-linear characteristics of V-I and P-I.**
We now examine the performance of this system in grid-connected mode, with a controller applied on the AC side, as shown in Figure 4. After increasing the voltage level with the boost converter, it is paralleled with energy storage devices (which can be regarded as a UPS) in order to increase the reliability of the system. The DC voltage produced must then be converted to AC by the inverter. The purpose of the grid-side controller is to maintain the DC-link voltage at a constant value regardless of the production power range.
Vector control in a reference frame rotating with the line voltage vector is used. The aim of this controller is the regulation of the DC voltage and the control of the reactive power. Using the Park transformation, the voltage equations can be expressed in the d-q reference frame. The idea of the controller is taken from [10].
Figure 5 shows the simulated model of the grid-side controller. Standard PI controllers are used to regulate the line current in the rotating synchronous frame in the internal control loop and the DC voltage in the external loop. Here i_d is the active component of the current and i_q the reactive component. In order to transfer only active power, the current reference i_q (reactive part) is set to zero. The PLL in the figure synchronizes the converter frequency with the main grid. It is assumed that the harmonics produced by switching are negligible.
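The cascaded loop structure described above can be sketched in a few lines of Python. This is a simplified discrete-time illustration, not the paper's Simulink model; it reuses the PI gains listed in the Appendix, and the 0.1 ms sampling period is an assumed value.

```python
# Sketch of the grid-side control idea: an outer PI loop regulates the
# DC-link voltage and sets i_d_ref, while inner PI loops track i_d and i_q
# (i_q_ref = 0 for unity power factor).

class PI:
    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def step(self, error):
        self.integral += error * self.dt
        return self.kp * error + self.ki * self.integral

dt = 1e-4                                 # assumed sampling period [s]
vdc_loop = PI(kp=0.05, ki=3.0, dt=dt)     # outer DC-link voltage loop
id_loop = PI(kp=2.5, ki=700.0, dt=dt)     # inner d-axis current loop
iq_loop = PI(kp=2.5, ki=700.0, dt=dt)     # inner q-axis current loop

def control_step(vdc_ref, vdc, i_d, i_q):
    """One sampling period of the cascaded controller."""
    id_ref = vdc_loop.step(vdc_ref - vdc)  # active-current reference
    iq_ref = 0.0                           # zero reactive current
    v_d = id_loop.step(id_ref - i_d)       # d-axis voltage command
    v_q = iq_loop.step(iq_ref - i_q)       # q-axis voltage command
    return v_d, v_q
```

In a full model the v_d, v_q commands would be transformed back to the abc frame (inverse Park) and fed to the PWM stage; here they are simply returned.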
## 3. Structure and MTS Control Model
Recently, microturbines have received much attention because of their small size, relatively low cost, cheap repair and maintenance, and relatively simple control. Different dynamic models for combustion gas turbines have been discussed by Rowen, Hannet, Saha and Nern [15-17]. Rowen's simplified mathematical model of the gas turbine was developed in 1983 [15], and in 1993 the Prime Mover Working Group widened this model by considering the control of speed, acceleration, fuel and temperature [16]. The MT used in this article is a small combustion turbine with an installed capacity of 25 to 500 kW and a high rotation speed (between 50,000 and 120,000 rpm). The model includes the speed governor, the acceleration control block, the fuel system control and the temperature control. A single-shaft turbine model is considered.
**Figure 3. Equivalent circuit of (a) PV cell; (b) VMPPT.**
**Figure 4. SCS in grid-connected mode and applying grid-side controller.**
**Figure 5. Equivalent circuit of grid-side controller.**
The power producer is a Permanent Magnet Synchronous Generator (PMSG) with two poles and a smooth (non-salient) rotor. Because of the high shaft speed, the generated AC voltage has a high frequency (angular frequency higher than 100,000 rad/sec) [17]. Since the turbine runs at high speed, the AC generator is a high-frequency generator that cannot be directly coupled to the AC network [18]. One way to model an MT distributed generation system is to classify it into the following three separate parts [19,20]:
1) Module 1: the mechanical system of the turbine and the fuel system.
2) Module 2: the PMSG, the AC/DC rectifier and the energy storage devices.
3) Module 3: the DC/AC voltage source inverter and the PWM controller.
Mechanical model and MT control functions: We examine the MT model based on the Rowen and Hannet models; the dynamic equations of the MTS are investigated in [15]. According to the principle of energy conversion, and ignoring the inverter losses, the total instantaneous power at the AC output terminals must equal the instantaneous power at the DC terminal:

V_dc·I_dc = v_a·i_a + v_b·i_b + v_c·i_c (5)

where V_dc and I_dc are the DC-link voltage and current.
The simplified VSI model is shown in Figure 6(a); the inverter used in this paper is a hysteresis model. The block diagram of the V&F control model is presented in Figure 6(b), where V_dref and V_qref are the reference values. In order to obtain unity power factor, V_qref is set to zero and V_dref to 1 p.u. The voltage and frequency (V&F) control has to regulate the voltage value at the Point of Common Coupling (PCC) and also the frequency of the whole grid.
The MT distributed generation model in grid-connected mode is shown in Figure 7; the frequency produced by the inverter is 50 Hz, matching the network.
The LCL filters in this paper are designed following the ideas in [21,22].
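Since the inverter is a hysteresis model, its switching decision can be illustrated with the small Python sketch below. This is a generic hysteresis current-control rule, not the paper's Simulink block, and the band half-width is an assumed value.

```python
# Illustrative hysteresis switching rule for one inverter leg: the leg
# switches whenever the current error leaves a fixed tolerance band.

BAND = 0.05  # hysteresis band half-width [A], assumed value

def hysteresis_leg(i_ref, i_meas, prev_state):
    """Return the switch state (1 = upper device on) for one inverter leg."""
    error = i_ref - i_meas
    if error > BAND:
        return 1       # current too low: connect the phase to +Vdc
    if error < -BAND:
        return 0       # current too high: connect the phase to -Vdc
    return prev_state  # inside the band: keep the previous state
```

The narrower the band, the closer the phase current tracks its reference, at the cost of a higher (and variable) switching frequency.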
**Figure 6. Block diagram of MTS components; (a) VSI simplified model; (b) Block diagram of the V&F control model.**
**Figure 7. The general model of MTS in grid-connected mode.**
The conceptual and technical solution of the MG is presented in [23,24].
## 4. Structure and WECS Control Model
Electrical wind generators are the equipment that converts wind energy into electrical energy. Different types of generators are used in wind turbines; for example, small wind turbines are equipped with DC generators with capacities from 10 to 90 kW.
In modern wind turbine systems, three-phase AC generators are customary [25]. The common kinds of AC generators used in modern wind turbines are:
1) Squirrel Cage Induction Generator (SCIG)
2) Wound Rotor Induction Generator (WRIG)
3) Doubly Fed Induction Generator (DFIG)
4) Synchronous Generator with external excitation (SG)
5) Permanent Magnet Synchronous Generator (PMSG)
Synchronous generators are the kind used in several studies [26,27]. These generators can be connected to the wind turbine without any gearbox, which is attractive from the standpoint of maintenance and of the limited service life of gearboxes. Synchronous generators can be excited electrically or with a permanent magnet rotor. For the above reasons, the generator used in this paper is a PMSG.
## 5. Simulation of WECS
The system is modeled with the wind turbine equations given below. In this paper a variable speed wind turbine is used, and a wind speed of 12 m/sec is considered. The parameter values of the PMSG are shown in Table A2. If the wind speed were variable, the WECS would have to use a Buck/Boost converter; in that case trigger signals would have to be produced for two switches, which makes the system more complicated.
The wind turbine equations are as follows [28]:

P_m = 0.5·ρ·C_p(λ, β)·A·v_w³ (6)

C_p(λ, β) = C_1·(C_2/λ_i − C_3·β − C_4)·e^(−C_5/λ_i) + C_6·λ (7)

λ = R·ω / v_w (8)

1/λ_i = 1/(λ + 0.08·β) − 0.035/(β³ + 1) (9)

Equation (6) gives the output mechanical power in watts; in this equation ρ is the air density (kg/m³), C_p the performance coefficient, v_w the wind speed (m/sec), λ the tip-speed ratio, β the pitch angle, and A the turbine swept area. In Equation (7), the coefficients are C_1 = 0.5176, C_2 = 116, C_3 = 0.4, C_4 = 5, C_5 = 21 and C_6 = 0.0068 [28]. In Equation (8), R is the rotor radius in meters and ω the angular speed in rad/sec. The output torque of the wind turbine is the input of the PMSG used.
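The following Python sketch (not part of the original paper) evaluates Equations (6)-(9) with the C_1-C_6 coefficients given above; the sweep at the end simply locates the optimum tip-speed ratio of this generic C_p model.

```python
# Sketch of Equations (6)-(9) with the C1..C6 coefficients from the text.
import math

C1, C2, C3, C4, C5, C6 = 0.5176, 116.0, 0.4, 5.0, 21.0, 0.0068

def c_p(lam, beta):
    """Power coefficient, Equations (7) and (9)."""
    inv_lam_i = 1.0 / (lam + 0.08 * beta) - 0.035 / (beta ** 3 + 1.0)
    lam_i = 1.0 / inv_lam_i
    return C1 * (C2 / lam_i - C3 * beta - C4) * math.exp(-C5 / lam_i) + C6 * lam

def p_mech(rho, area, v_w, omega, radius, beta=0.0):
    """Mechanical power, Equations (6) and (8); lam = R*omega/v_w."""
    lam = omega * radius / v_w
    return 0.5 * rho * c_p(lam, beta) * area * v_w ** 3

if __name__ == "__main__":
    # sweep the tip-speed ratio at zero pitch to locate the optimum
    best = max((c_p(l / 100.0, 0.0), l / 100.0) for l in range(100, 1500))
    print(f"max C_p = {best[0]:.3f} at lambda = {best[1]:.2f} (beta = 0)")
```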
In order to extract the maximum output power of the WECS, we use the MPPT algorithm of Figure 8. The inverters of each DG are modeled based on SANTERNO products [29].
In this algorithm, an initial value is set for the DC reference voltage, and the voltage and current are measured correspondingly. After the measurement, the DC output power (P_o) is calculated. In the next step, the reference voltage is altered by the DC variation ΔV_dc:

V_ref(k) = V_ref(k−1) ± ΔV_dc (10)

The DC power is then calculated as P(k) = V_dc(k)·I_dc(k). If P(k) > P_o, the system output is not yet at the maximum point, so the reference voltage is increased by a further ΔV_dc and the power is compared with the previous value P(k−1). This process continues until the maximum point is reached. If P(k) < P(k−1), the reference voltage is decreased.
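The loop below is a minimal Python rendering of this perturb-and-observe rule (Equation (10)); `measure_vdc_idc` is a hypothetical measurement stub, and the step size and iteration count are assumed values for illustration.

```python
# Minimal perturb-and-observe loop implementing Equation (10).

def perturb_and_observe(measure_vdc_idc, v_ref, dv=1.0, steps=100):
    """Track the maximum power point by perturbing the DC reference voltage."""
    v_prev, i_prev = measure_vdc_idc(v_ref)
    p_prev = v_prev * i_prev
    direction = +1.0
    for _ in range(steps):
        v_ref += direction * dv          # V_ref(k) = V_ref(k-1) +/- dV_dc
        v, i = measure_vdc_idc(v_ref)
        p = v * i                        # P(k) = V_dc(k) * I_dc(k)
        if p < p_prev:                   # power fell: reverse the perturbation
            direction = -direction
        p_prev = p
    return v_ref
```

Each pass perturbs V_ref by ΔV_dc, and the perturbation direction is reversed whenever the measured power falls, so the operating point oscillates around the maximum power point.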
The values and parameters of the boost model are shown in Table A3.
Figure 9 shows the MPPT block of the WECS. This block produces the trigger signal of the DC-link switches for tracking the maximum power.
Figure 10(a) shows the torque-speed characteristics of the WT; note that all curves are for a wind speed of 12 m/sec. As seen in Figure 10(b), the maximum power coefficient of the turbine used is 0.41, reached at λ = 7.71. As stated, the output power of the WT is 7100 W, as shown in Figure 10(d). The torque produced by the WT during the system run time is 184 N·m. The DC-link voltage at the battery bank terminal is shown in Figure 10(e); the voltage ripple is small and it regulates at 243 V. The battery bank capacity should be proportional to the produced power and the load connected to the system.
**Figure 8. WECS MPPT flowchart.**
**Figure 9. MPPT simulated block for WECS.**
**Figure 10. Output of WECS; (a) Torque-speed characteristics of WT; (b) C_p curve of WT; (c) Changing of power coefficient in time; (d) Produced mechanical power; (e) DC link voltage at battery bank terminal.**
This signal is the input of the three-phase inverter. In Figure 11 the grid-connected WECS is simulated: at t = 0.25 sec a three-phase (phase-phase) fault is applied and the system goes into islanding mode. Figure 12 shows the output voltages before and after the LCL filter. Some of the values required in the above system are given in the Appendix.
## 6. Simulation
As seen in Figure 13, each DG source supplies a local (three-phase, balanced) load; the considered load is 375 kVA for the MTS, 7100 W for the WECS and 110 W for the SCS. Each DG source is equipped with an energy storage device, so that if the producers are cut off under specific circumstances, the storage resources can continue to support the loads. To review the islanding mode and the fault management in this network, a three-phase (phase-phase) fault is applied to the system, and the breaker at the Point of Common Coupling then isolates the fault, after which the loads and DG sources go into islanding mode. The outputs for this case show that the DG resources remain responsive to the loads, and when the network fault disappears, the autonomous part can be reconnected to the utility grid.
The simulation covers three cases: 1) the case where the load capacity and the produced power are equal; here there is no power transfer between the MG and the utility grid. 2) The case where the load capacity is less than the produced power; here the system transfers power between the grid and the MG. 3) The case where the SCS and WECS local loads are bigger than their production capacity while the MTS load is less than its production; here the system transfers power between the MG and the grid, and the power transferred between the DG resources is also considerable.
**Figure 11. WECS in grid-connected mode.**
**Figure 12. Output voltage of WECS (a) Before; (b) After LCL filter.**
**Figure 13. Case study system.**
The output curve of the PV system used is shown in Figure 2; the points corresponding to maximum power are marked in the figure.
## 7. Simulation Results
In this section the simulated microgrid results are displayed and investigated. First case: the amount of power produced by each resource is equal to the load attached to that DG. In this case the sensitive loads (SLs) are supplied by each micro-source and there is no power transfer between the MG and the grid. At t = 0.25 sec a three-phase fault is applied to the system, and the breaker at the PCC takes the system into islanding-mode operation. In the curves below, the time before 0.25 sec corresponds to grid-connected operation and the time after 0.25 sec to islanding mode. In this first case the NSL is not connected to the grid (it can be supported by the utility grid), and the load capacities are equal to the production capacities. The waveforms of the system in this case are shown in the figures below.
In the simulated system, the SLs are the sensitive loads and the NSL is the non-sensitive load.
The power waveform produced by the PV, obtained from the simulation, is shown in Figure 14(a). The three-phase voltage of the SCS is 32 V (Figure 14(b)), which means that the MPPT process is carried out by the applied controller.
Figure 15 shows the voltage and current curves. In this case the system continues stable operation, and in islanding mode the loads are supplied by the DG sources.
Figure 16(b) shows the total harmonic distortion of the system output. At the moment t = 0.25 sec the three-phase fault is applied, and the THD rises to 2.035 percent (during the fault).
As can be seen in Figure 17, the network can supply the loads and there is no power transferred between the grid and the MG.
**Figure 14. Output of SCS; (a) Pulled output power from the PV in DC link; (b) Three phase voltages of SCS.**
**Figure 15. Voltage and current of line and loads (in p.u.); (a) Three-phase voltage of line (in p.u.); (b) Line and loads three-phase current (in p.u.).**
**Figure 16. Frequency and THD of system; (a) System frequency changing curve; (b) Total harmonic distortion in line voltage.**
**Figure 17. Terminal voltage and current of the MT; (a) Output voltage of MT system before LCL filter; (b) Output of MT system after LCL filter; (c) Terminal voltage at the battery bank; (d) Current of shunt capacitor in DC link.**
Second case: a non-sensitive load with a capacity of 7130 W is connected to the utility system, and the sensitive loads are decreased by 1130 W in total (the SCS load by 30 W, the MTS load by 1 kW and the WECS load by 100 W). We investigate the effect of these changes on the system; the outputs are as follows.
In Figure 18, the stored energy level increases after the operation of the breaker at the PCC at the moment the fault occurs. The currents injected into the network by each DG are shown in Figure 19.
In the third case, the PV load rises to 130 W (30 W more than its nominal load), the WECS load rises to 7200 W (100 W more than its nominal load), and the MTS load decreases by 1130 W. In this case the NSL rises to 7 kW (1 kW more than in the first case). The results show that the additional part of the network load is supplied by the MTS. The output of this case is shown in Figure 20.
**Figure 18. Storage operation in load changing condition; (a) Battery bank current of WECS in first case; (b) Battery bank current of WECS in second case; (c) Battery bank current of MTS in first case; (d) Battery bank current of MTS in second case; (e) Battery bank current of SCS in first case; (f) Battery bank current of SCS in second case.**
**Figure 19. Second case injected currents of the DGs in grid-connected operation; (a) Transferred current between MG and grid in grid-connected mode; (b) Injected current from MTS to grid; (c) Injected current from WECS to grid; (d) Injected current from SCS to grid.**
**Figure 20. Third case injected currents of the DGs in grid-connected operation; (a) Injected current from MTS to WECS; (b) Injected current from MTS to SCS; (c) Injected current from MTS to grid; (d) Injected current from MTS to WECS, SCS and grid.**
## 8. Conclusion
In this paper a microgrid with three DG resources, equipped with energy storage devices and grid-side controllers, has been simulated, and the control principles and modeling of the system have been investigated. The output of the system is displayed for three load arrangements, showing the load management and support of loads in the developed system, the fault management and control approach, and the operation of the energy storage at the moments of load response and load change. We use different controllers, namely MPPT controllers (for the SCS and WECS) and a V&F controller (for the MTS), in order to study decentralized control operation and to show the effect of this kind of control. In grid-connected and islanding modes, the surplus production of the DG resources is stored in the battery banks. In all conditions the system maintains stable operation and the loads are well supported.
## REFERENCES
[1] B. Lasseter, “Microgrids (Distributed Power Generation),”
_Proceedings of the IEEE PES Winter Meeting, Vol. 1,_
2001, pp. 146-149.
[2] N. Hatziargyriou, H. Asano, R. Iravani and C. Marnay,
“Microgrids: An Overview of Ongoing Research, Development, and Demonstration Projects,” _IEEE Power En-_
_ergy Magazine, Vol. 5, No. 4, 2007, pp. 78-94._
[doi:10.1109/MPAE.2007.376583](http://dx.doi.org/10.1109/MPAE.2007.376583)
[3] M. Pipattanasomporn, H. Feroze and S. Rahman, “Multi-Agent Systems in a Distributed Smart Grid: Design and Implementation,” _Power Systems Conference and Exposition_, Seattle, 15-18 March 2009, pp. 1-8.
[4] Public Power Corporation, “Microgrids—Large Scale Integration of Micro-Generation to Low Voltage Grids,”
Technical Annex, 2002.
[5] P. Piagi and R. H. Lasseter, “Autonomous Control of
Microgrids,” IEEE PES Meeting, Montreal, June 2006.
[6] C. L. Moreira, F. O. Resende and J. A. P. Lopes, “Using
Low Voltage MicroGrids for Service Restoration,” IEEE
_Transactions on Power Systems, Vol. 22, No. 1, 2007, pp._
395-403.
[7] R. Zamora and A. K. Srivastava, “Controls for Microgrids with Storage: Review, Challenges, and Research
Needs,” Elsevier, Vol. 14, No. 7, 2010, pp. 2009-2018.
[8] F. Katiraei, M. R. Iravani and P. W. Lehn, “Microgrid
Autonomous Operation during and Subsequent to Islanding Process,” IEEE Transactions on Power Delivery, Vol.
20, No. 1, 2005, pp. 248-257.
[9] D. Georgakis, S. Papathanassiou, N. Hatziargyriou, A.
Engler and C. Hardt, “Operation of a Prototype Microgrid
System Based on Micro-Sources Equipped with Fast-Acting Power Electronics Interfaces,” _Proceedings of IEEE_
35th PESC, Aachen, Vol. 4, 2004, pp. 2521-2526.
[10] F. Blaabjerg, R. Teodorescu, M. Liserre and A. V. Timbus, “Overview of Control and Grid Synchronization for
Distributed Power Generation Systems,” _IEEE Transac-_
_tions on Industrial Electronics, Vol. 53, No. 5, 2006, pp._
1398-1409.
[11] M. Uzunoglu, O. C. Onar and M. S. Alam, “Modeling,
Control and Simulation of a PV/FC/UC Based Hybrid
Power Generation System for Stand-Alone Applications,”
_Renewable Energy, Vol. 34, No. 3, 2009, pp. 509-520._
[doi:10.1016/j.renene.2008.06.009](http://dx.doi.org/10.1016/j.renene.2008.06.009)
[12] Z. M. Salameh, B. S. Borowy and A. R. A. Amin, “Photovoltaic Module-Site Matching Based on the Capacity
Factors,” IEEE Transactions on Energy Conversion, Vol.
[10, No. 2, 1995, pp. 326-332. doi:10.1109/60.391899](http://dx.doi.org/10.1109/60.391899)
[13] http://www.solarserver.com/yellow-pages/companies/com
pany-search/optical-fiber-solar-cell-fabrication-company.
html
[14] M. A. Masoum, H. Dehbonei and E. F. Fuchs, “Theoretical and Experimental Analyses of Photovoltaic Systems
with Voltage- and Current-Based Maximum Power-Point
Tracking,” _IEEE Transactions on Energy Conversion,_
Vol. 22, No. 8, 2002, p. 62.
[15] W. I. Rowen, “Simplified Mathematical Representations
of Heavy Duty Gas Turbines,” Journal of Engineering for
_Power, Vol. 105, No. 4, 1983, pp. 865-869._
[doi:10.1115/1.3227494](http://dx.doi.org/10.1115/1.3227494)
[16] L. N. Hannet and A. Khan, “Combustion Turbine Dynamic Model Validation from Tests,” IEEE Transactions on
_Power Systems, Vol. 8, No. 1, 1993, pp. 152-158._
[17] A. K. Saha, S. Chowdhury, S. P. Chowdhury and P. A. Crossley, “Modeling and Performance Analysis of a Microturbine as a Distributed Energy Resource,” _IEEE Trans-_
_actions on Energy Conversion, Vol. 24, No. 2, 2009, pp._
529-538.
[18] Working Group on Prime Mover and Energy Supply
Models for System Dynamic Performance Studies, “Dynamic Models for Combined Cycle Plants in Power System Studies,” IEEE Transactions on Power Systems, Vol.
[9, No. 3, 1994, pp. 1698-1708. doi:10.1109/59.336085](http://dx.doi.org/10.1109/59.336085)
[19] I. Zamora, J. S. Martin, A. Mazon, J. S. Martin and V.
Aperribay, “Emergent Technologies in Electrical MicroGeneration,” _International Journal of Emerging Electric_
_Power Systems, Vol. 3, No. 2, 2005, pp. 1553-1779._
[20] C.-M. Ong, “Dynamic Simulation of Electric Machinery
Using Matlab/Simulink,” Prentice Hall, Upper Saddle River, 1998.
[21] M. Malinowski, S. Stynski, W. Kolomyjski and M. P. Kazmierkowski, “Control of Three-Level PWM Converter Applied to Variable-Speed-Type Turbines,” _IEEE Transactions on Industrial Electronics_, Vol. 56, No. 1, 2009, pp. 69-77.
[22] M. Liserre, F. Blaabjerg and S. Hansen, “Design and Control of an LCL Filter-Based Three-Phase Active Rectifier,” _IEEE Transactions on Industry Applications_, Vol. 41, No. 5, 2005, pp. 1281-1291.
[23] J. A. P. Lopes, C. L. Moreira and A. G. Madureira, “Defining Control Strategies for MicroGrids Islanded Operation,” IEEE Transactions on Power Systems, Vol. 21, No.
2, 2006, pp. 916-924.
[24] R. H. Lasseter and P. Piagi, “Microgrid: A Conceptual
Solution,” PESC’04, Aachen, 20-25 June 2004.
[25] T. Ackermann, “Wind Power in Power Systems,” John
Wiley & Sons, Chichester, 2005.
[doi:10.1002/0470012684](http://dx.doi.org/10.1002/0470012684)
[26] A. J. G. Westlake, J. R. Bumby and E. Spooner, “Damping the Power-Angle Oscillations of a Permanent-Magnet
Synchronous Generator with Particular Reference to
Wind Turbine Applications,” IEE Proceedings of Electric
_Power Applications, Vol. 143, No. 3, 1996, pp. 269-280._
[27] L. Dambrosio and B. Fortunato, “One Step Ahead Adaptive Control Technique for a Wind Turbine-Synchronous
Generator System,” Proceedings of the 32nd Intersociety
C i h © 2013 S iR **_SGRE_**
-----
112 Modeling, Control & Fault Management of Microgrids
_Energy Conversion Engineering Conference, Honolulu,_
27 July-1 August 1997, pp. 1970-1975.
[28] A. H. M. A. Rahim, M. A. Alam and M. F. Kandlawala, “Dynamic Performance Improvement of an Isolated Wind Turbine Induction Generator,” _Computers and Electrical Engineering_, Vol. 35, No. 4, 2009, pp. 594-607. [doi:10.1016/j.compeleceng.2008.08.008](http://dx.doi.org/10.1016/j.compeleceng.2008.08.008)
[29] Carraro Group. www.santerno.com/company/company-profile/
## Appendix
Carrier frequency in the VMPPT PWM generator: 3000 Hz; in the grid-side controller: 5000 Hz. Boost converter parameters: L = 0.0034 H, C = 0.00561 F. PI coefficients in the grid-side controller: K_p,Vdc = 0.05, K_i,Vdc = 3, K_p,Id = 2.5, K_i,Id = 700, K_p,Iq = 2.5, K_i,Iq = 700.
**Table A1. Values and coefficients used in the PV cell.**
| Parameter | Value | Unit |
|---|---|---|
| Current temp. coefficient | α = 0.002086 | A/˚C |
| Voltage temp. coefficient | β = 0.0779 | V/˚C |
| Reverse saturation current | I_0 = 0.5×10⁻⁴ | A |
| Short circuit cell current | I_ph = I_SC | A |
| Cell resistance | R_S = 0.0277 | Ω |
| Cell material coefficient | λ = 0.049 | 1/V |
**Table A2. Synchronous generator parameter values.**
| Parameter | Value | Unit |
|---|---|---|
| Stator phase resistance R_s | 0.0485 | Ω |
| Stator inductances L_d, L_q | 0.395 | mH |
| Flux induced by permanent magnets | 0.1194 | Wb |
| Moment of inertia (J) | 0.0027 | kg·m² |
| Nominal power | 14 | kW |
| Pole pairs (P) | 4 | — |
**Table A3. Boost converter coefficient values.**
| Parameter | Value | Unit |
|---|---|---|
| Low voltage capacitor C1 | 500 | μF |
| High voltage capacitor Co | 4700 | μF |
| Inductance | 800 | μH |
| Switching frequency | 20 | kHz |
-----
| 10,038
|
{
"disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.4236/SGRE.2013.41013?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.4236/SGRE.2013.41013, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "CCBY",
"status": "GOLD",
"url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=28141"
}
| 2,013
|
[] | true
| 2013-02-26T00:00:00
|
[
{
"paperId": "0e60bb75f6f2b2099b79098dc03351c78181d8fe",
"title": "Controls for microgrids with storage: Review, challenges, and research needs"
},
{
"paperId": "bf7020e6a2f9efd3978bf5a060ce4e66d79a2d07",
"title": "Dynamic performance improvement of an isolated wind turbine induction generator"
},
{
"paperId": "5c20e9c5df7eea296de92fe85a0424a31e2b39c3",
"title": "Modeling and Performance Analysis of a Microturbine as a Distributed Energy Resource"
},
{
"paperId": "6c6aab2c6c272ec1e3ca5d8b3a93bccac2539d73",
"title": "Multi-agent systems in a distributed smart grid: Design and implementation"
},
{
"paperId": "834184d508d2c7e66719f22eeb05c97ecfe7fe87",
"title": "Modeling, control and simulation of a PV/FC/UC based hybrid power generation system for stand-alone applications"
},
{
"paperId": "b0f392320bdb3d5595ffb5de3bcaa115987c1ca6",
"title": "Microgrids: an overview of ongoing research, development, anddemonstration projects"
},
{
"paperId": "25d00b9c25321240d39ddf6bc35b82682efa9ea3",
"title": "Using Low Voltage MicroGrids for Service Restoration"
},
{
"paperId": "df43eb24c4d5bf073d91f07ae5b6810af295a508",
"title": "Autonomous control of microgrids"
},
{
"paperId": "7278b8287c5ccd3334c4fa67e82d368c94a5af21",
"title": "Overview of Control and Grid Synchronization for Distributed Power Generation Systems"
},
{
"paperId": "9a216e91da907a44dbde0906f17ac2e409deb15e",
"title": "Microgrids - Large Scale Integration of Microgeneration to Low Voltage Grids"
},
{
"paperId": "56a50da4b876ffc92a9cdb27d9938cae15d7a45d",
"title": "Defining control strategies for MicroGrids islanded operation"
},
{
"paperId": "63e761db1ef0d5bdc61dc478b373d33ec8106f43",
"title": "Emergent Technologies In Electrical Microgeneration"
},
{
"paperId": "7b3a886eadd37a322b103a0bb58de1101ad6cac2",
"title": "Micro-grid autonomous operation during and subsequent to islanding process"
},
{
"paperId": "5e5cec10cdf2a9072ed6f35805fa46673a64f78b",
"title": "Microgrid: a conceptual solution"
},
{
"paperId": "800e62df549ced2d29e854c437a27e7907b4e5df",
"title": "Operation of a prototype microgrid system based on micro-sources quipped with fast-acting power electronics interfaces"
},
{
"paperId": "adf58457ec0a7ab55beb471edaa3a910672d2cd9",
"title": "Microgrid autonomous operation during and subsequent to islanding process"
},
{
"paperId": "730376d31e44ec89d425f6883859c76ddebecbef",
"title": "Closure on \"Theoretical and experimental analyses of photovoltaic systems with voltage and current-based maximum power point tracking\""
},
{
"paperId": "ea6038e64708d6abb5f03f5398ff1b71f9b775b2",
"title": "Design and control of an LCL-filter based three-phase active rectifier"
},
{
"paperId": "a0be615cf4a60ec3cd853a6523de9a8926fc0e54",
"title": "Microgrids [distributed power generation]"
},
{
"paperId": "31b37bea58ad6d8f50037d12c7b3f0f2a773c8e4",
"title": "One step ahead adaptive control technique for a wind turbine-synchronous generator system"
},
{
"paperId": "6b1ad90e4b90751ccac42fb6ad25b608a8d883fd",
"title": "Damping the power-angle oscillations of a permanent-magnet synchronous generator with particular reference to wind turbine applications"
},
{
"paperId": "6a83ea014643463ae63f940fa23e668d11a618ec",
"title": "Photovoltaic module-site matching based on the capacity factors"
},
{
"paperId": "a260e3042514cec9fdf5ad9d49ef828fb29738bc",
"title": "Dynamics models for combines cycle plants in power system studies"
},
{
"paperId": "37af76a2637c4e5eb4bc8b9ecf1f65439697ddbb",
"title": "Combustion turbine dynamic model validation from tests"
},
{
"paperId": "e71b738bf0d06f490b30a4704156459b84c56050",
"title": "Simplified Mathematical Representations of Heavy-Duty Gas Turbines"
},
{
"paperId": "58a5463b7b7ef6f9f1fe4f13c7892fc120553d17",
"title": "Control of Three-Level PWM Converter Applied to Variable-Speed-Type Turbines"
},
{
"paperId": "2397223cc0386b2df3edd31e30f2d642f72940a0",
"title": "Wind Power in Power Systems"
},
{
"paperId": "caf7ece3afd49b7c407fb58ce3e8132ea6b2ad78",
"title": "Theoretical and Experimental Analyses of Photovoltaic Systems with Voltage and Current-Based Maximum Power Point Tracking"
},
{
"paperId": "119b28d9b33e9de899a78e319ae684dda7797a5d",
"title": "Dynamic simulation of electric machinery : using MATLAB/SIMULINK"
},
{
"paperId": null,
"title": "www.santerno.com/company/company-profile/ Appendix Carrier frequency in VMPPT PWM generator, 3000 Hz and in grid-side controller, 5000 Hz, boost converter"
},
{
"paperId": null,
"title": "Dynamic Models for Combined Cycle Plants in Power System Studies"
},
{
"paperId": null,
"title": "Control of Tree-Level PWM Converter Applied to Variable Speed-Type Turbine"
}
] | 10,038
|
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/003e9214bb370dd53852ea7bc51052086331dae0
|
[
"Computer Science"
] | 0.84023
|
OptSmart: A Space Efficient Optimistic Concurrent Execution of Smart Contracts
|
003e9214bb370dd53852ea7bc51052086331dae0
|
Distributed Parallel Databases
|
[
{
"authorId": "26905752",
"name": "Parwat Singh Anjana"
},
{
"authorId": "2185443548",
"name": "S. Kumari"
},
{
"authorId": "145506228",
"name": "Sathya Peri"
},
{
"authorId": "51437421",
"name": "Sachin Rathor"
},
{
"authorId": "26402290",
"name": "Archit Somani"
}
] |
{
"alternate_issns": null,
"alternate_names": null,
"alternate_urls": null,
"id": null,
"issn": null,
"name": null,
"type": null,
"url": null
}
|
Popular blockchains such as Ethereum and several others execute complex transactions in blocks through user-defined scripts known as smart contracts. Serial execution of smart contract transactions/atomic-units (AUs) fails to harness the multiprocessing power offered by the prevalence of multi-core processors. By adding concurrency to the execution of AUs, we can achieve better efficiency and higher throughput.
In this paper, we develop a concurrent miner that proposes a block by executing the AUs concurrently using optimistic Software Transactional Memory systems (STMs). It captures the independent AUs in a concurrent bin and dependent AUs in the block graph (BG) efficiently. Later, we propose a concurrent validator that re-executes the same AUs concurrently and deterministically using a concurrent bin followed by a BG given by the miner to verify the proposed block. We rigorously prove the correctness of concurrent execution of AUs and achieve significant performance gain over the state-of-the-art.
|
## OptSmart: A Space Efficient Optimistic Concurrent Execution of Smart Contracts[⋆]
Parwat Singh Anjana[†], Sweta Kumari[‡], Sathya Peri[†], Sachin Rathor[†],
and Archit Somani[‡]
_†Department of CSE, Indian Institute of Technology Hyderabad, Telangana, India_
_‡Department of Computer Science, Technion, Israel_
**Abstract**
Popular blockchains such as Ethereum and several others execute complex transac
tions in blocks through user-defined scripts known as smart contracts. Serial execution
of smart contract transactions/atomic-units (AUs) fails to harness the multiprocessing
power offered by the prevalence of multi-core processors. By adding concurrency to
the execution of AUs, we can achieve better efficiency and higher throughput.
In this paper, we develop a concurrent miner that proposes a block by executing the
AUs concurrently using optimistic Software Transactional Memory systems (STMs).
It captures the independent AUs in a concurrent bin and dependent AUs in the block
_graph (BG) efficiently. Later, we propose a concurrent validator that re-executes the_
same AUs concurrently and deterministically using a concurrent bin followed by the BG
given by the miner to verify the block. We rigorously prove the correctness of the concurrent execution of AUs and show a significant performance gain over the state-of-the-art.
_Keywords: Blockchain, Smart Contracts, Software Transactional Memory System,_
Multi-version, Concurrency Control, Opacity
_⋆A preliminary version of this paper appeared in 27th Euromicro International Conference On Parallel,_
Distributed, and Network-Based Processing (PDP[1]) 2019, Pavia, Italy. A poster version of this work
received Best Poster Award in ICDCN 2019 [2].
_⋆⋆This manuscript covers the exhaustive related work, detailed proposed mechanism with algorithms, opti-_
mizations on the size of the block graph, rigorous correctness proof, and additional experimental evaluations
with state-of-the-art.
_∗∗∗Author sequence follows lexical order of last names._
_Email address: [email protected], [email protected],_
[email protected], [email protected],
[email protected] (Parwat Singh Anjana[†], Sweta Kumari[‡], Sathya Peri[†], Sachin Rathor[†],
and Archit Somani[‡])
_Preprint submitted to Journal of Parallel and Distributed Computing_ _February 18, 2021_
**1. Introduction**
It is commonly believed that blockchain is a revolutionary technology for doing
business over the Internet. Blockchain is a decentralized, distributed database or ledger
of records that stores information in cryptographically linked blocks. Cryptocurren
cies such as Bitcoin [3] and Ethereum [4] were the first to popularize the blockchain
technology. Blockchains are now considered for automating and securely storing user
records such as healthcare, financial services, real estate, etc. A blockchain network consists of multiple peers (or nodes), where the peers do not necessarily trust each other. Each node maintains a copy of the distributed ledger. Clients, the users of the blockchain, send requests or transactions to the nodes of the blockchain, called miners. The miners
collect multiple transactions from the clients and form a block. Miners then propose
these blocks to be added to the blockchain.
The transactions sent by clients to miners are part of a larger piece of code called a smart contract, which provides several complex services such as managing the system state, enforcing rules, or checking the credentials of the parties involved [5]. Smart contracts
are like a ‘class’ in programming languages that encapsulate data and methods which
operate on the data. The data represents the state of the smart contract (as well as
the blockchain) and the methods (or functions) are the transactions that possibly can
change contract state. Ethereum uses Solidity [6] while Hyperledger supports language
such as Java, Golang, Node.js, etc.
**Motivation for Concurrent Execution of Smart Contracts:** Dickerson et al. [5]
observed that smart contract transactions are executed in two different contexts in
Ethereum blockchain. First, executed by miners while forming a block– a miner se
lects a sequence of client requests (transactions), executes the smart contract code of
these transactions in sequence, transforming the state of the associated contract in this
process. The miner then stores the sequence of transactions, the resulting final state of
the contracts, and the previous block hash in the block. After creating the block, the
miner proposes it to be added to the blockchain through the consensus protocol. The
other peers in the system, referred to as validators in this context, validate the block
proposed by the miner. They re-execute the smart contract transactions in the block
_serially to verify the block’s final states. If the final states match, then the block is_
accepted as valid, and the miner who appended this block is rewarded. Otherwise, the
block is discarded. Thus the transactions are executed by every peer in the system. It
has been observed that the validation code runs several times more often than the miner code [5].
This design of smart contract execution is not efficient as it does not allow any
concurrency. In today’s world of multi-core systems, the serial execution does not uti
lize all the cores, resulting in lower throughput. This limitation is not specific only
to Ethereum blockchain but also applies to other popular blockchains as well. Higher
throughput means more transaction execution per unit time, which clearly will be de
sired by both miners and validators.
However, the concurrent execution of smart contract transactions is not straightforward, because various transactions could contain conflicting accesses to shared data objects. Two contract transactions are said to be in conflict if both of them access
a shared data object, and at least one of them performs a write operation. Arbitrary execution of these smart contract transactions by the miners might result in data races, leading to an inconsistent final state of the blockchain. Unfortunately, it is impossible
to statically identify conflicting contract transactions since contracts are developed in
Turing-complete languages. The common solution for correct execution of concurrent
transactions is to ensure that the execution is serializable [7]. A usual correctness
criterion in databases, serializability ensures that the concurrent execution is equivalent
to some serial execution of the same transactions. Thus miners must ensure that their
execution is serializable [5] or one of its variants as described later.
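As an illustration of this conflict notion (our sketch, not code from the paper), the following Python helper applies the stated rule to the read and write sets of two transactions:

```python
# Two transactions conflict iff they access a common data object and
# at least one of the accesses is a write.

def conflicts(t1_reads, t1_writes, t2_reads, t2_writes):
    """Return True if the two transactions conflict on some shared object."""
    return bool(
        (t1_writes & t2_writes)      # write-write conflict
        or (t1_writes & t2_reads)    # write-read conflict
        or (t1_reads & t2_writes)    # read-write conflict
    )

# Example: both transactions write the shared object 'x', so they conflict.
assert conflicts({"y"}, {"x"}, set(), {"x"})
assert not conflicts({"x"}, set(), {"x"}, set())  # two reads never conflict
```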
The concurrent execution of the smart contract transactions of a block by the validators,
although highly desirable, can further complicate the situation. Suppose a miner
ensures that the concurrent execution of the transactions in a block is serializable. Later,
a validator re-executes the same transactions concurrently. However, during the concurrent
execution, the validator may execute two conflicting transactions in an order
different from the miner. Thus the serialization order of the miner differs from that of the
validator. This can result in the validator obtaining a final state different from what
was obtained by the miner. Consequently, the validator may incorrectly reject the block
[Figure 1: (a) Concurrent transactions; (b) Equivalent execution by miner; (c) Equivalent execution by validator.]
Figure 1: (a) shows two concurrent conflicting transactions T1 and T2 working on the same
shared data-object x, which are part of a block. (b) represents the miner's concurrent execution
with an equivalent serial schedule T1, T2 and final state (or FS) 20 from the initial state
(or IS) 0, whereas (c) shows the concurrent execution by a validator with an equivalent serial
schedule T2, T1 and final state 10 from IS 0, which differs from the final state
proposed by the miner. Such a situation leads to the rejection of a valid block by the validator,
which is undesirable.
although it is valid as depicted in Figure 1.
Dickerson et al. [5] identified these issues and proposed a solution for concurrent
execution by both miners and validators. The miner concurrently executes block transactions
using abstract locks and inverse logs to generate a serializable execution. Then,
to enable correct concurrent execution by the validators, the miners provide a happen-before
graph in the block. The happen-before graph is a directed acyclic graph over all
the transactions of the block. If there is a path from a transaction Ti to Tj, then the
validator has to execute Ti before Tj. Transactions with no path between them can execute
concurrently. Using the happen-before graph in the block, the validator executes all the
transactions concurrently using the fork-join approach. This methodology ensures that
the final state of the blockchain generated by the miners and the validators is the same
for a valid block, so the block is not rejected by the validators. The presence of aids such
as a happen-before graph in the block makes such blocks more attractive to validators:
it helps them execute quickly through parallelization, compared with a block
that does not carry any parallelization aids. This incentivizes the miners to provide
such aids in the block for concurrent execution by the validators.
**Proposed Solution Approach - Optimistic Concurrent Execution and Lock-Free**
**Graph:** Dickerson et al. [5] developed a solution to the problem of the concurrent miner
and validators using locks and inverse logs. It is well known that locks are pessimistic
in nature. So, in this paper, we propose a novel and efficient framework for a concurrent
miner using optimistic Software Transactional Memory systems (STMs). STMs are
well suited for the concurrent execution of transactions without worrying about consistency
issues.
The requirement of the miner is to execute the smart contract transactions concurrently
and correctly, and to output a graph capturing the dependencies among the transactions
of the block, such as the happen-before graph. We denote this graph as the block graph (or
BG). The miner uses an optimistic STM system to execute the smart contract transactions
concurrently in the proposed solution. Since STMs also work with transactions,
we differentiate between smart contract transactions and STM transactions. An STM
transaction invoked by an STM system is a piece of code that the system tries to execute
atomically even in the presence of other concurrent STM transactions. If the STM system
is not able to execute it atomically, then the STM transaction is aborted.
The expectation of a smart contract transaction is that it will be executed serially.
Thus, when it is executed in a concurrent setting, it is expected to execute atomically
(or serialized). To differentiate smart contract transactions from STM transactions,
we denote a smart contract transaction as an atomic-unit (AU) and an STM transaction as a
_transaction in the rest of the document. Thus the miner uses the STM system to invoke_
a transaction for each AU. In case the transaction gets aborted, the STM repeatedly
invokes new transactions for the same AU until a transaction invocation eventually
commits.
A popular correctness guarantee provided by STM systems is opacity [8], which
is stronger than serializability. Opacity, like serializability, requires that the concurrent
execution, including the aborted transactions, be equivalent to some serial execution.
This ensures that even an aborted transaction reads consistent values until the point of abort.
As a result, a miner using an STM does not encounter undesirable side-effects such
as crash failures, infinite loops, division by zero, etc. STMs provide this guarantee by
executing optimistically, and they support atomic (opaque) reads and writes on transactional
_objects (or t-objects)._
For simplicity, we have chosen two timestamp-based STMs in our design: (1)
_Basic Timestamp Ordering or BTO STM [9, Chap 4], which maintains only one version for_
each t-object; and (2) Multi-Version Timestamp Ordering or MVTO STM [10], which maintains
multiple versions corresponding to each t-object, further reducing the number of
aborts and improving the throughput.
The advantage of using a timestamp-based STM is that the equivalent serial history
is ordered based on the transactions' timestamps. Thus, using the timestamps, the miner
can generate the BG of the AUs. We call this the STM approach. Dickerson et al. [5]
developed the BG in a serial manner. Saraph and Herlihy [11] proposed a simple bin-based
two-phase speculative approach to execute AUs concurrently in the Ethereum
blockchain without storing the BG in the block. We observed that the bin-based approach
reduces the size of the block but fails to exploit the available concurrency. We refer to
this approach as the Speculative Bin (Spec Bin) approach. So, in our proposed approach,
we combine the Spec Bin approach [11] with the STM approach [1] to store the
BG in a block optimally while exploiting the concurrency. The concurrent miner generates an
efficient BG in a concurrent and lock-free [12] manner.
The concurrent miner applies the STM approach to generate two bins while executing
AUs concurrently: a concurrent bin and a sequential bin. AUs which can be
executed concurrently (without any conflicts) are stored in the concurrent bin, while
the AUs having conflicts are stored in the sequential bin in the form of the BG, to record the
conflicts. This combined technique reduces the size of the BG compared to [1] by storing
the graph of only the sequential bin AUs instead of all AUs.
We propose a concurrent validator that creates multiple threads. Each of these
threads parses the concurrent bin followed by the efficient BG provided by the concurrent
miner and re-executes the AUs for validation. The BG consists of only the dependent AUs.
Each validator thread claims a node that does not have any dependency, i.e., a node
without any incoming edges, by marking it. After that, it executes the corresponding
AU deterministically. Since the threads execute only those nodes with no incoming
edges, the concurrently executing AUs will not have any conflicts. Hence the validator
threads need not worry about synchronization issues. We denote this approach
adopted by the validator as the decentralized approach, as multiple threads work
on the BG concurrently in the absence of a master thread.
The approach adopted by Dickerson et al. [5] works on fork-join, in which a master
thread allocates different tasks to slave threads. The master thread identifies AUs that
do not have any incoming dependencies in the BG and allocates them to different slave
threads. In this paper, we compare the performance of both these approaches with the
serial validator.
**The significant contributions of the paper are as follows:**
- We introduce a novel way to execute the AUs by the concurrent miner using optimistic
STMs (Section 4). We implement the concurrent miner using BTO and MVTO
STMs, but the design is generic to any STM protocol.
- We propose a lock-free and concurrent graph library to generate the efficient BG,
which contains only the dependent atomic-units and reduces the size of the block
compared to [1] (see Section 4).
- We propose a concurrent validator that re-executes the AUs deterministically and
efficiently with the help of the concurrent bin followed by the efficient BG given by
the concurrent miner (see Section 4).
- To make our proposed approach storage-optimal and efficient, we have optimized
the BG size (see Section 4).
- We rigorously prove that the concurrent miner and validator satisfy opacity as the
correctness criterion (see Section 5).
- We achieve 4.49× and 5.21× average speedups for the optimized concurrent miner
using the BTO and MVTO STM protocols, respectively. The optimized concurrent BTO
and MVTO decentralized validators outperform the serial validator by an average of
7.68× and 8.60×, respectively (Section 6).
Section 2 presents the related work on the concurrent execution of smart contract transactions,
while Section 3 introduces the notions related to STMs and the execution model
used in the paper. The conclusion, with several future directions, is presented in Section 7.
**2. Related Work**
This section presents the related work on concurrent execution in blockchains in
line with the proposed approach.
Table 1: Related Work Summary

| | Miner Approach | Locks | Require Block Graph | Validator Approach | Blockchain Type |
|---|---|---|---|---|---|
| Dickerson et al. [5] | Pessimistic ScalaSTM | Yes | Yes | Fork-join | Permissionless |
| Zhang and Zhang [17] | - | - | Read, Write Set | MVTO Approach | Permissionless |
| Anjana et al. [1] | Optimistic RWSTM | No | Yes | Decentralized | Permissionless |
| Amiri et al. [18] | Static Analysis | - | Yes | - | Permissioned |
| Saraph and Herlihy [11] | Bin-based Approach | Yes | No | Bin-based | Permissionless |
| Anjana et al. [19] | Optimistic ObjectSTM | No | Yes | Decentralized | Permissionless |
| **Proposed Approach** | **Bin+Optimistic RWSTM** | **No** | **No (if no dependencies) / Yes** | **Decentralized** | **Permissionless** |
Blockchain was introduced by Satoshi Nakamoto in 2009 in the form of
Bitcoin [3], to perform electronic transactions without third-party interference. Nick
Szabo [13] introduced smart contracts in 1997; they were adopted by the Ethereum blockchain in
2015 to expand blockchain functionality beyond financial transactions (cryptocurrencies).
A smart contract is an interface that reduces the computational transaction cost
and provides secure relationships on distributed networks. Several papers
[14, 15, 16] in the literature address the safety and security concerns of smart
contracts, which are out of the scope of this paper. We mainly focus on the concurrent
execution of AUs. A concise summary of closely related works is given in Table 1.
Dickerson et al. [5] introduced the concurrent execution of AUs in the blockchain.
They observed that miners and validators could execute AUs simultaneously to exploit the
concurrency offered by ubiquitous multi-core processors. The approach of this work is
described in Section 1.
Zhang and Zhang [17] proposed a concurrent miner using a pessimistic concurrency
control protocol, which delays a read until the corresponding write commits
and ensures a conflict-serializable schedule. Their concurrent validator uses the
MVTO protocol to execute transactions concurrently using the write sets provided by
the concurrent miner in the block.
Anjana et al. [1] proposed an optimistic Read-Write STM (RWSTM) using BTO and
MVTO based protocols. The timestamp-based protocols are used to identify the conflicts
between AUs. The miner executes the AUs using the RWSTM and constructs the BG
dynamically at runtime using the timestamps. Later, a concurrent Decentralized
_Validator (Dec-Validator) executes the AUs in the block in a decentralized manner._
The Decentralized Validator is more efficient than the Fork-Join Validator since there is no
master validator thread to allocate the AUs to the slave validator threads. Instead,
all the validator threads independently identify a source vertex (a vertex with indegree 0) in
the BG and claim the source node to execute the corresponding AU.
Amiri et al. [18] proposed ParBlockchain, an approach for the concurrent execution
of transactions in a block for permissioned blockchains. They developed the OXII
paradigm (a paradigm in which transactions are first ordered for concurrent execution and then
executed by both miners and validators [18]) to support distributed applications. The OXII
paradigm orders the block transactions based on the agreement between the orderer nodes, using static analysis
or speculative execution to obtain the read-set and write-set of each transaction, then
generates the BG and constructs the block. The executors from respective applications
(similar to the executors in fabric channels) execute the transactions concurrently and
then validate them by re-executing the transaction. So, the nodes of the ParBlockchain
execute the transactions in two phases using the OXII paradigm. A block with BG
based on the transaction conflicts is generated in the first phase, known as the ordering
_phase. The second phase, known as the execution phase, executes the block transac-_
tions concurrently using the BG appended with block.
Saraph and Herlihy [11] proposed a simple bin-based two-phase speculative approach
to execute AUs concurrently in the Ethereum blockchain. They empirically validated
the possible benefit of their approach by evaluating it on historical transactions
from Ethereum. In the first phase, the miner uses locks and executes the AUs of a block
concurrently, rolling back those AUs that lead to conflict(s). All the aborted AUs
are then kept in a sequential bin and executed in the second phase sequentially. The
miner gives concurrent and sequential bin hints in the block so that the validator can execute
the same schedule as executed by the miner. The validator executes the concurrent bin
AUs concurrently while executing the sequential bin AUs sequentially. Compared to a BG,
giving hints about bins takes less space. However, it does not harness the maximum
concurrency available within the block.
Later, Anjana et al. [19] proposed an approach that uses optimistic single-version
and multi-version Object-based STMs (OSTMs) for the concurrent execution of AUs by
the miner. The OSTMs operate at a higher (object) level rather than the page (read-write)
level and construct the BG. However, the BG is still significantly large in the
existing approaches and needs higher bandwidth to broadcast such a large block for
validation.
In contrast, we propose an efficient framework for the concurrent execution of AUs
using optimistic STMs. We combine the benefits of both the Spec Bin-based and STM-based
approaches to optimize the storage aspects (an efficient, storage-optimal BG), which
further improves the performance. Due to its optimistic nature, the updates made by
a transaction become visible in shared memory only on commit; hence, rollback is
not required. Our approach ensures opacity [8] as the correctness criterion. The proposed
approach gives better speedup over the state-of-the-art and over serial execution of AUs.
**3. System Model**
In this section, we will present the notions related to STMs and the execution model
used in the proposed approach.
Following [20, 21], we assume a system of n processes/threads, p1, . . ., pn that
access a collection of transactional objects or t-objects via atomic transactions. Each
transaction has a unique identifier. Within a transaction, processes can perform trans
_actional operations or methods:_
- STM.begin() – begins a transaction.
- STM.write(x, v) (or w(x, v)) – updates a t-object x with value v in its local memory.
- STM.read(x, v) (or r(x, v)) – tries to read x and returns its value as v.
- STM.tryC() – tries to commit the transaction and returns commit (or C) if it succeeds.
- STM.tryA() – aborts the transaction and returns A.

Operations STM.read() and STM.tryC() may return A. Transaction Ti starts with
the first operation and completes when any of its operations returns A or C. For a
transaction Tk, we denote all the t-objects accessed by its read operations and write op
erations as rsetk and wsetk, respectively. We denote all the operations of a transaction
_Tk as evts(Tk) or evtsk._
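For concreteness, the following is a minimal, single-threaded C++ sketch of this transactional interface. The names (Stm, Tx, Status) and the map-based store are our illustrative assumptions, and the timestamp validation performed inside STM.read() and STM.tryC() by protocols such as BTO and MVTO is elided here.

```cpp
#include <cstdint>
#include <cstdio>
#include <map>
#include <optional>

enum class Status { Commit, Abort };

struct Tx {
    uint64_t ts;                     // unique timestamp / transaction id
    std::map<int, int> rset, wset;   // read-set and write-set over t-object ids
};

class Stm {
    std::map<int, int> mem;          // shared t-objects; T0 initializes all to 0
    uint64_t clock = 0;
public:
    Tx begin() { return Tx{++clock, {}, {}}; }
    std::optional<int> read(Tx& t, int x) {             // nullopt signals abort (A)
        int v = t.wset.count(x) ? t.wset[x] : mem[x];   // read own write first
        t.rset[x] = v;
        return v;
    }
    void write(Tx& t, int x, int v) { t.wset[x] = v; }  // buffered locally
    Status tryC(Tx& t) {             // validation elided; on success, publish wset
        for (const auto& kv : t.wset) mem[kv.first] = kv.second;
        return Status::Commit;
    }
    void tryA(Tx&) {}                // abort: discard local buffers, return A
};

int main() {
    Stm stm;
    Tx t1 = stm.begin();
    stm.write(t1, /*x=*/0, 10);      // w(x, 10)
    if (stm.tryC(t1) == Status::Commit)
        std::printf("T%llu committed\n", (unsigned long long)t1.ts);
    return 0;
}
```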
**History: A history is a sequence of events, i.e., a sequence of invocations and responses**
of transactional operations. The collection of events is denoted as evts(H). For sim
plicity, we consider sequential histories, i.e., the invocation of each transactional oper
ation is immediately followed by a matching response. Therefore, we treat each trans
actional operation as one atomic event and let <H denote the total order on the trans
actional operations incurred by H. We identify a history H as tuple ⟨evts(H), <H _⟩._
Further, we consider well-formed histories, i.e., no transaction of a process begins
before the previous transaction invocation has completed (either commits or aborts).
We also assume that every history has an initial committed transaction T0 that initializes
all the t-objects with value 0. The set of transactions that appear in H is denoted
by txns(H). The set of committed (resp., aborted) transactions in H is denoted by
_committed(H) (resp., aborted(H)). The set of incomplete or live transactions in H is_
denoted by H.incomp = H.live = txns(H) − committed(H) − aborted(H).
We construct the complete history of H, denoted as H, by inserting STM.tryAk(A)
immediately after the last event of every transaction Tk ∈ H.live. However, for STM.tryCi
of a transaction Ti, if Ti has successfully released the lock on its first t-object, the updates
made by Ti are consistent, so Ti will immediately return commit.
**_Transaction Real-Time and Conflict order:_** For two transactions Tk, Tm ∈ txns(H),
we say that Tk precedes Tm in the real-time order of H, denoted as Tk ≺_H^{RT} Tm, if Tk
is complete in H and the last event of Tk precedes the first event of Tm in H. If neither
Tk ≺_H^{RT} Tm nor Tm ≺_H^{RT} Tk, then Tk and Tm overlap in H. We say that a history
is serial (or t-sequential) if all the transactions are ordered by real-time order. We say
that Tk, Tm are in conflict, denoted as Tk ≺_H^{Conf} Tm, if
(1) STM.tryCk() <H STM.tryCm() and wset(Tk) ∩ wset(Tm) ̸= ∅;
(2) STM.tryCk() <H rm(x, v), x ∈ wset(Tk) and v ̸= A;
(3) rk(x, v) <H STM.tryCm(), x ∈ wset(Tm) and v ̸= A.
Thus, it can be seen that the conflict order is defined only on operations that have
successfully executed. We denote the corresponding operations as conflicting.
**Valid and Legal histories:** A successful read rk(x, v) (i.e., v ̸= A) in a history H is
said to be valid if there exists a transaction Tj that wrote v to x and committed before
rk(x, v). History H is valid if all its successful read operations are valid.
We define rk(x, v)’s lastWrite as the latest commit event Ci preceding rk(x, v) in
_H such that x ∈_ _wseti (Ti can also be T0). A successful read operation rk(x, v) (i.e.,_
_v ̸= A), is said to be legal if the transaction containing rk’s lastWrite also writes v_
onto x. The history H is legal if all its successful read operations are legal. From the
definitions we get that if H is legal then it is also valid.
**Notions of Equivalence: Two histories H and H** _[′]_ are equivalent if they have the same
set of events. We say two histories H, H _[′]_ are multi-version view equivalent [9, Chap.
5] or MVVE if
(1) H, H _[′]_ are valid histories and
(2) H is equivalent to H _[′]._
Two histories H, H _[′]_ are view equivalent [9, Chap. 3] or VE if
(1) H, H _[′]_ are legal histories and
(2) H is equivalent to H _[′]. By restricting to legal histories, view equivalence does_
not use multi-versions.
Two histories H, H _[′]_ are conflict equivalent [9, Chap. 3] or CE if
(1) H, H _[′]_ are legal histories and
(2) conflict in H, H _[′]_ are the same, i.e., conf (H) = conf (H _[′])._
Conflict equivalence like view equivalence does not use multi-versions and restricts
itself to legal histories.
**VSR, MVSR, and CSR:** A history H is said to be VSR (or View Serializable) [9, Chap.
3] if there exists a serial history S such that S is view equivalent to H. This notion
considers only a single version corresponding to each t-object.
MVSR (or Multi-Version View Serializability) maintains multiple versions corresponding
to each t-object. A history H is said to be MVSR [9, Chap. 5] if there exists a serial
history S such that S is multi-version view equivalent to H. It can be proved that verifying
the membership of VSR as well as MVSR in databases is NP-Complete [7]. To
circumvent this issue, researchers in databases have identified an efficient sub-class of
VSR, called CSR, based on the notion of conflicts. The membership of CSR can be
verified in polynomial time using a conflict graph characterization.
A history H is said to be CSR (or Conflict Serializable) [9, Chap. 3] if there exists a
serial history S such that S is conflict equivalent to H.
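Since CSR membership reduces to checking that the conflict graph is acyclic, a minimal C++ sketch of this polynomial-time test follows (our illustration; the adjacency-list encoding over transaction indices is an assumption):

```cpp
#include <vector>

// DFS 3-coloring: returns false if a back edge (i.e., a cycle) is reachable from u.
static bool dfs(int u, std::vector<int>& color,
                const std::vector<std::vector<int>>& adj) {
    color[u] = 1;                                // grey: on the current DFS path
    for (int v : adj[u]) {
        if (color[v] == 1) return false;         // back edge => cycle
        if (color[v] == 0 && !dfs(v, color, adj)) return false;
    }
    color[u] = 2;                                // black: fully explored
    return true;
}

// A history is CSR iff its conflict graph is acyclic.
bool isCSR(const std::vector<std::vector<int>>& conflictGraph) {
    std::vector<int> color(conflictGraph.size(), 0);
    for (int s = 0; s < (int)conflictGraph.size(); ++s)
        if (color[s] == 0 && !dfs(s, color, conflictGraph)) return false;
    return true;
}
```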
**Serializability and Opacity:** Serializability [7] is a commonly used criterion in databases,
but it is not suitable for STMs as it does not consider the correctness of aborted transactions,
as shown by Guerraoui and Kapalka [8]. Opacity, on the other hand, considers
the correctness of aborted transactions as well.
A history H is said to be opaque [8, 20] if it is valid and there exists a t-sequential legal
history S such that
(1) S is equivalent to the complete history H, and
(2) S respects ≺_H^{RT}, i.e., ≺_H^{RT} ⊆ ≺_S^{RT}.
By requiring S to be equivalent to H, opacity treats all the incomplete transactions
as aborted. Similar to view-serializability, verifying the membership of opacity
is NP-Complete [7]. To address this issue, researchers have proposed another popular
correctness criterion, co-opacity, whose membership is verifiable in polynomial time.
**Co-opacity:** A history H is said to be co-opaque [21] if it is valid and there exists a
t-sequential legal history S such that
(1) S is equivalent to the complete history H,
(2) S respects ≺_H^{RT}, i.e., ≺_H^{RT} ⊆ ≺_S^{RT}, and
(3) S preserves conflicts, i.e., ≺_H^{Conf} ⊆ ≺_S^{Conf}.
**Linearizability: A history H is linearizable [22] if**
(1) The invocation and response events can be reordered to get a valid sequential
history.
(2) The generated sequential history satisfies the object’s sequential specification.
(3) If a response event precedes an invocation event in the original history, then this
should be preserved in the sequential reordering.
**Lock Freedom:** An algorithm is said to be lock-free [12] if, whenever the program threads
run for a sufficiently long time, at least one of the threads makes progress. It allows
individual threads to starve but guarantees system-wide throughput.

[Figure 2: Pictorial representation of the Block Graph. (a) Underlying representation of the Block Graph: a vertex list (vList) of vNodes, each with an edge list (eList) of eNodes; (b) an example Block Graph.]
**4. Proposed Mechanism**
This section presents the methods of the lock-free concurrent block graph library,
followed by the concurrent execution of AUs by the miner and the validator.
_4.1. Lock-free Concurrent Block Graph_
**Data Structure of Lock-free Concurrent Block Graph:** We use an adjacency list
to maintain the block graph BG(V, E), as shown in Figure 2 (a), where V is a set of
vertices (or vNodes) which are stored in the vertex list (or vList) in increasing order
of timestamp between two sentinel nodes vHead (−∞) and vTail (+∞). Each vertex
node (or vNode) contains ⟨ts = i, AUid = id, inCnt = 0, vNext = nil, eNext = nil⟩,
where i is a unique timestamp (or ts) of transaction Ti and AUid is the id of the atomic-unit
executed by transaction Ti. To maintain the indegree count of each vNode, we initialize
inCnt to 0; vNext and eNext are initialized to nil.
E is a set of edges which maintains all conflicts of a vNode in the edge list
(or eList), as shown in Figure 2 (a). The eList stores eNodes (for conflicting transactions,
say Tj) in increasing order of timestamp between two sentinel nodes eHead
(−∞) and eTail (+∞). An edge node (or eNode) contains ⟨ts = j, vref, eNext = nil⟩, where
j is a unique timestamp (or ts) of a committed transaction Tj that conflicts with
Ti, with ts(Ti) less than ts(Tj). We add conflict edges from lower-timestamp to
higher-timestamp transactions to maintain the acyclicity of the BG, i.e., the conflict edge
goes from Ti to Tj in the BG. Figure 2 (b) illustrates this using three transactions with
timestamps 0, 5, and 10, which maintain acyclicity by adding edges from lower
to higher timestamps. To make the graph search efficient, the vertex node reference (or vref)
keeps a reference to the eNode's own vertex in the vList, and eNext is initialized to nil.
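A C++ sketch of these node layouts follows. It is a minimal rendering of the description above; the atomic field types and pointer representation are our assumptions.

```cpp
#include <atomic>
#include <cstdint>

struct VNode;                            // forward declaration

struct ENode {                           // edge node in a vertex's eList
    uint64_t ts;                         // timestamp j of the conflicting Tj
    VNode* vref;                         // reference to Tj's vertex in vList
    std::atomic<ENode*> eNext{nullptr};  // next eNode, in increasing ts order
};

struct VNode {                           // vertex node in vList, sorted by ts
    uint64_t ts;                         // timestamp i of transaction Ti
    int auId;                            // id of the AU executed by Ti
    std::atomic<int> inCnt{0};           // indegree; a validator claims by CAS(0, -1)
    std::atomic<VNode*> vNext{nullptr};  // next vertex in vList
    std::atomic<ENode*> eNext{nullptr};  // head of this vertex's eList
};
```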
The block graph (BG) generated by the concurrent miner helps the validator execute
concurrently and deterministically through the lock-free graph library methods.
The lock-free graph library consists of five methods: addVert(), addEdge(),
searchLocal(), searchGlobal() and decInCount().
**Lock-free Graph Library Methods Accessed by Concurrent Miner:** The concurrent
miner uses the addVert() and addEdge() methods of the lock-free graph library to
build the BG. When the concurrent miner wants to add a node to the BG, it first calls the
addVert() method. The addVert() method identifies the correct location of that
node (or vNode) in the vList at Line 16. If vNode is not part of the vList, it creates the node
and adds it to the vList at Line 19 in a lock-free manner using an atomic compare and swap
(CAS) operation. Otherwise, vNode is already present in the vList (Line 24).
**Algorithm 1 BG(vNode, STM): It generates a BG for all the atomic-unit nodes.**
1: procedure BG(vNode, STM)
2: /*Get the confList of transaction Ti from STM*/
3: clist ← STM.getConfList (vNode.tsi);
4: /*Ti conflicts with Tj and Tj exists in the conflict list
of Ti*/
5: **for all (tsj ∈** clist) do
6: addVert (tsj );
7: addVert (vNode.tsi);
8: **if (tsj < vNode.tsi) then**
9: addEdge (tsj, vNode.tsi);
10: **else**
11: addEdge (vNode.tsi, tsj );
12: **end if**
13: **end for**
14: end procedure
**Algorithm 2 addVert(tsi): It adds the vertex in the BG for Ti.**
15: procedure addVert(tsi)
16: Identify ⟨vPred, vCurr⟩ of vNode of tsi in vList;
17: **if (vCurr.tsi ̸= vNode.tsi) then**
18: Create new Graph Node (vNode) of tsi in vList;
19: **if (vPred.vNext.CAS(vCurr, vNode)) then**
20: return⟨Vertex added⟩;
21: **end if**
22: goto Line 16; /*Start with the vPred to identify
the new ⟨vPred, vCurr⟩*/
23: **else**
24: return⟨Vertex already present⟩;
25: **end if**
26: end procedure
**Algorithm 3 addEdge(fromNode, toNode): It adds an edge from fromNode to toNode.**
27: procedure addEdge(fromNode, toNode)
28: Identify the ⟨ePred, eCurr⟩ of toNode in eList of
the fromNode vertex in BG;
29: **if (eCurr.tsi ̸= toNode.tsi) then**
30: Create new Graph Node (or eNode) in eList;
31: **if (ePred.eNext.CAS(eCurr, eNode)) then**
32: Increment the _inCnt_ atomically of
_eNode.vref in vList;_
33: return⟨Edge added⟩;
34: **end if**
35: goto Line 28; /*Start with the ePred to identify
the new ⟨ePred, eCurr⟩*/
36: **else**
37: return⟨Edge already present⟩;
38: **end if**
39: end procedure
**Algorithm 4 searchLocal(cacheVer, AUid): Thread searches for a source node in its local cache-**
_List._
40: procedure searchLocal(cacheVer)
41: **if (cacheVer.inCnt.CAS(0, -1)) then**
42: _nCount ←_ _nCount.get&Inc();_
43: _AUid ←_ cacheVer.AUid;
44: return⟨cacheVer⟩;
45: **else**
46: return⟨nil⟩;
47: **end if**
48: end procedure
**Algorithm 5 searchGlobal(BG, AUid): Thread searches the source node in BG.**
49: procedure searchGlobal(BG, AUid)
50: _vNode ←_ BG.vHead;
51: **while (vNode.vNext ̸= BG.vTail) do**
52: **if (vNode.inCnt.CAS(0, -1)) then**
53: _nCount ←_ _nCount.get&Inc();_
54: _AUid ←_ _vNode.AUid;_
55: return⟨vNode⟩;
56: **end if**
57: _vNode ←_ _vNode.vNext;_
58: **end while**
59: return⟨nil⟩;
60: end procedure
**Algorithm 6 decInCount(remNode): Decrement the inCnt of each conflicting node.**
61: procedure decInCount(remNode)
62: **while (remNode.eNext ̸= remNode.eTail) do**
63: Decrement the _inCnt_ atomically of
remNode.vref in the vList;
64: **if (remNode.vref.inCnt == 0) then**
65: Add the remNode.vref node into the cacheList of
the thread-local log, thLog;
66: **end if**
67: remNode ← remNode.eNext.vref ;
68: return⟨remNode⟩;
69: **end while**
70: return⟨nil⟩;
71: end procedure
**Algorithm 7 executeCode(curAU): Execute the current atomic-units.**
72: procedure executeCode(curAU )
73: **while (curAU.steps.hasNext()) do /*Assume that**
curAU is a list of steps*/
74: curStep = currAU.steps.next();
75: **switch (curStep) do**
76: **case read(x):**
77: Read data-object x from a shared memory;
78: **case write(x, v):**
79: Write x in shared memory with value v;
80: **case default:**
81: /*Neither read or write in shared memory*/;
82: execute curStep;
83: **end while**
84: return ⟨void⟩
85: end procedure
After successfully adding vNode to the BG, the concurrent miner calls the addEdge()
method to add the conflicting node (or eNode) corresponding to vNode to the eList.
First, the addEdge() method identifies the correct location of eNode in the eList of the
corresponding vNode at Line 28. If eNode is not part of the eList, it creates and adds
it to the eList of vNode at Line 31 in a lock-free manner using an atomic CAS operation.
After the successful addition of eNode to the eList of vNode, it increments the inCnt of the
eNode.vref node (to maintain the indegree count), which is present in the vList, at Line 32.
**Lock-free Graph Library Methods Accessed by Concurrent Validator:** The concurrent
validator uses the searchLocal(), searchGlobal() and decInCount()
methods of the lock-free graph library. First, a concurrent validator thread calls the searchLocal()
method to identify a source node (having indegree (or inCnt) 0) in its local cacheList
(or thread-local memory). If a source node exists in the local cacheList with inCnt
0, then to claim that node, it atomically sets the inCnt field to -1 at Line 41.
If no source node exists in the local cacheList, then the concurrent validator
thread calls the searchGlobal() method to identify a source node in the BG
at Line 52. If a source node exists in the BG, the thread sets its inCnt to -1 atomically to claim
**Algorithm 8 Concurrent Miner(auList[], STM): Concurrently m threads are executing**
atomic-units from auList[] (or list of atomic-units) with the help of STM.
86: procedure Concurrent Miner(auList[], STM)
87: /*Add all AUs in the Concurrent Bin (concBin[])*/
88: _concBin[] ←_ _auList[];_
89: /*curAU is the current AU taken from auList[] */
90: curAU ← _curInd.get&Inc(auList[]);_
91: /*Execute until all AUs successfully completed*/
92: **while (curAU < size of(auList[])) do**
93: _Ti ←_ STM.begin();
94: **while (curAU.steps.hasNext()) do**
95: curStep = currAU.steps.next();
96: **switch (curStep) do**
97: **case read(x):**
98: _v ←_ STM.readi(x);
99: **if (v == abort) then**
100: goto Line 93;
101: **end if**
102: **case write(x, v):**
103: STM.writei(x, v);
104: **case default:**
105: /*Neither read nor write in memory*/
106: execute curStep;
107: **end while**
108: /*Try to commit the current transaction Ti and
update the confList[i]*/
109: _v ←_ STM.tryCi();
110: **if (v == abort) then**
111: goto Line 93;
112: **end if**
113: **if (confList[i] == nil) then**
114: curAU doesn't have dependencies with other
AUs, so there is no need to create a node in the BG.
115: **else**
116: create nodes with the respective dependencies
from curAU to all AUs ∈ _confList[i] in the BG_
and remove curAU and those AUs from concBin[]
117: Create vNode with ⟨i, AUid, 0, nil, nil⟩ as
a vertex of Block Graph;
118: BG(vNode, STM);
119: **end if**
120: curAU ← _curInd.get&Inc(auList[]);_
121: **end while**
122: end procedure
that node, and calls the decInCount() method to atomically decrease the inCnt of all the
conflicting nodes present in the eList of the corresponding source node
(Line 63). While decrementing the inCnt values, it checks whether any conflicting node has become a
source node; if so, it adds that node to its local cacheList to optimize the search time
for identifying the next source node (Line 65).
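A combined C++ sketch of the validator-side claiming and decrement steps follows, again assuming the VNode/ENode layout sketched earlier; for brevity, this sketch terminates the eList with nullptr rather than an eTail sentinel.

```cpp
#include <vector>

// Sketch of searchGlobal(): scan vList and claim the first source vertex
// by atomically changing its inCnt from 0 to -1.
VNode* searchGlobalSketch(VNode* vHead, VNode* vTail) {
    for (VNode* v = vHead->vNext.load(); v != vTail; v = v->vNext.load()) {
        int expected = 0;
        if (v->inCnt.compare_exchange_strong(expected, -1))
            return v;                            // claimed source vertex
    }
    return nullptr;                              // nothing claimable right now
}

// Sketch of decInCount(): after executing a claimed vertex, decrement the
// indegree of every conflicting vertex; vertices reaching 0 are cached
// locally so the thread can claim them next without rescanning the BG.
void decInCountSketch(VNode* done, std::vector<VNode*>& cacheList) {
    for (ENode* e = done->eNext.load(); e != nullptr; e = e->eNext.load()) {
        if (e->vref->inCnt.fetch_sub(1) == 1)    // previous value 1 => now 0
            cacheList.push_back(e->vref);        // became a source vertex
    }
}
```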
_4.2. Concurrent Miner_
Smart contracts in a blockchain are executed in two different contexts. First, the
miner proposes a new block. Second, multiple validators re-execute the contracts to verify and
validate the block proposed by the miner. In this subsection, we describe how the miner
executes the smart contracts concurrently.
A concurrent miner gets a set of transactions from the blockchain network. Each
transaction is associated with a method (atomic-unit) of a smart contract. To run the
smart contracts concurrently, we face the challenge of identifying the conflicting
transactions at run-time, because smart contract languages are Turing-complete. Two
transactions conflict if they access a shared data-object and at least one of them performs
a write operation. In the concurrent miner, conflicts are identified at run-time using
the efficient framework provided by optimistic software transactional memory systems
(STMs). STMs access shared data-objects called t-objects. Each shared
_t-object is initialized to an initial state (or IS). The atomic-units may modify the IS_
to some other valid state. Eventually, it reaches the final state (or FS) at the end of
block-creation. As shown in Algorithm 8, the concurrent miner first copies all the AUs
into the concurrent bin at Line 88. Each transaction Ti gets a unique timestamp i from
STM.begin() at Line 93. Then transaction Ti executes the atomic-unit of the smart
contract. An atomic-unit consists of multiple steps, such as reads and writes on shared
_t-objects such as x. Internally, these read and write steps are handled by STM.read()_
and STM.write(), respectively. At Line 97, if the current atomic-unit step (or curStep)
is read(x), then it calls STM.read(x). Internally, STM.read() identifies the
shared t-object x in transactional memory (or TM) and validates it. If validation is
successful, it gets the value as v at Line 98 and executes the next step of the atomic-unit;
otherwise, the atomic-unit is re-executed after the abort at Line 99.
If curStep is write(x) at Line 102, then it calls STM.write(x). Internally,
STM.write() stores the information of the shared t-object x in a local log (or txlog),
in the write-set (or wseti) of transaction Ti. We use an optimistic approach in which
the transaction's effects are reflected in the TM only after a successful STM.tryC(). If
validation succeeds for the entire wseti of transaction Ti in STM.tryC(), i.e., all the
changes made by Ti are consistent, then it updates the TM; otherwise, the atomic-unit
is re-executed after the abort at Line 110. After the successful validation in STM.tryC(),
it also records the transactions conflicting with Ti in the conflict list in the TM.
If the conflict list is nil (Line 113), there is no need to create a node in the BG.
Otherwise, the miner creates the node with the respective dependencies in the BG and removes those
AUs from the concurrent bin (Line 116). To maintain the BG, it calls the addVert()
and addEdge() methods of the lock-free graph library. The details of the addVert()
and addEdge() methods are explained in SubSection 4.1. Once the transactions have
successfully executed the atomic-units and the BG construction is done, the concurrent
**Algorithm 9 Concurrent Validator(auList[], BG): Concurrently V threads are execut-**
ing AUs with the help of concurrent bin followed by the BG given by the miner.
123: procedure Concurrent Validator(auList[], BG)
124: /*Execute until all AUs successfully completed*/
125: /*Phase-1: Concurrent Bin AUs execution.*/
126: **while (concCount < size of(concBin[])) do**
127: count ← concCount.get&Inc(auList[]);
128: _AUid ←_ _concBin[count];_
129: _executeCode(AUid);_
130: **end while**
131: /*Phase-2: Block Graph AUs execution.*/
132: **while (nCount < size of(auList[])) do**
133: **while (cacheList.hasNext()) do**
134: cacheVer ← _cacheList.next();_
135: cacheVertex ← searchLocal(cacheVer,
AUid);
136: _executeCode(AUid);_
137: **while (cacheVertex) do**
138: cacheVertex ← decInCount(cacheVertex);
139: **end while**
140: Remove the current node (or cacheVertex)
from local cacheList;
141: **end while**
142: verNode ← searchGlobal(BG, AUid);
143: _executeCode(AUid);_
144: **while (verNode) do**
145: verNode ← decInCount(verNode);
146: **end while**
147: **end while**
148: end procedure
miner computes the hash of the previous block. Eventually, the concurrent miner proposes
a block consisting of the set of transactions, the BG, the final state of each shared t-object,
and the previous block hash, and sends it to all other network peers for validation.
_4.3. Concurrent Validator_
The concurrent validator validates the block proposed by the concurrent miner. It
executes the block transactions concurrently and deterministically in two phases, using
the concurrent bin and the BG given by the concurrent miner. In the first phase, validator
threads execute the independent AUs of the concurrent bin concurrently (Line 126
to Line 130). Then, in the second phase, it uses the BG to execute the dependent AUs
via the executeCode() method at Line 136 and Line 143, using the searchLocal(),
searchGlobal() and decInCount() methods of the lock-free graph library at Line 135,
Line 142 and (Line 138, Line 145), respectively. The BG captures the dependencies among
the conflicting transactions, which restrict them to execute serially. The functionality of the
lock-free graph library methods is explained earlier in SubSection 4.1.
After the successful execution of all the atomic-units, the concurrent validator
compares its computed final state with the final state given by the concurrent miner.
If the final states match for all the shared data-objects, then the block proposed by
the concurrent miner is valid. Finally, based on consensus between network peers, the
block is appended to the blockchain, and the respective concurrent miner is rewarded.
_4.4. Optimizations_
To make the proposed approach storage-optimal and efficient, this subsection explains
the key changes performed on top of the solution proposed by Anjana et al. [1].
In Anjana et al. [1], there is a corresponding vertex node in the block graph (BG) for
every AU in the block. We observed that not all the AUs in a block have dependencies,
and adding a vertex node for such AUs takes additional space in the block. This
is the first optimization our approach provides: in our approach, only the dependent
AUs have a vertex in the BG, while the independent AUs are stored in the concurrent
bin, which does not need any additional space. During the execution, a concurrent
miner thread does not add a vertex to the BG if it identifies that the currently executed
AU does not depend on the AUs already executed. If any other miner
thread detects a dependence during the execution of the remaining AUs, that thread will
add the dependent AUs' vertices to the BG.
For example, say we have n AUs in a block and a vertex node takes ≈ m KB
to store in the BG; then Anjana et al. [1] need a total of n ∗ m KB of vertex node space.
Suppose that out of the n AUs only n/2 have dependencies; then only (n/2) ∗ m KB of
vertex space is needed in the BG. In the proposed approach, the space optimization can
be 100% in the best case, when all the AUs are independent, while in the worst case
it can be 0%, when all the AUs are dependent. In practice, however, only a few AUs in a block
have dependencies. The space-optimized BG helps to improve the network bandwidth and
reduces network congestion.
Further, our approach combines the benefits of both the Speculative Bin-based approach
[11] and the STM-based approach [1] to yield the maximum speedup that can be achieved by
validators executing AUs. So, another optimization is on the validator side: due to
the concurrent bin in the block, the time taken to traverse the BG decreases; hence,
the speedup increases. The concurrent validator's execution is modified and divided into
two phases. First, it concurrently executes the AUs of the concurrent bin using multiple
threads, since the AUs in the concurrent bin are independent, while in the second
phase, the dependent AUs stored in the BG are executed concurrently using the BG to
preserve the transaction execution order as executed by the miner.
**5. Correctness**
The correctness of the concurrent BG, miner, and validator is described in this section.
We first list the linearization points (LPs) of the block graph library methods:
1. addVert(vNode): (vPred.vNext.CAS(vCurr, vNode)) in Line 19 is the LP
of the addVert() method if vNode does not exist in the BG. If vNode exists in the
BG, then (vCurr.tsi ̸= vNode.tsi) in Line 17 is the LP.
2. addEdge(fromNode, toNode): (ePred.eNext.CAS(eCurr, eNode)) in Line 31 is
the LP of the addEdge() method if eNode does not exist in the BG. If eNode
exists in the BG, then (eCurr.tsi ̸= toNode.tsi) in Line 29 is the LP.
3. searchLocal(cacheVer, AUid): (cacheVer.inCnt.CAS(0, -1)) in Line 41 is
the LP of the searchLocal() method.
4. searchGlobal(BG, AUid): (vNode.inCnt.CAS(0, -1)) in Line 52 is the LP
of the searchGlobal() method.
5. decInCount(remNode): Line 63 is the LP of the decInCount() method.
**Theorem 1. Any history Hm generated by the concurrent miner using the BTO proto-**
_col satisfies co-opacity._
**Proof:** The concurrent miner executes AUs concurrently using the BTO protocol and generates
a concurrent history Hm. The underlying BTO protocol ensures the correctness of the
concurrent execution of Hm. The BTO protocol [9, Chap 4] guarantees that any history
generated by it satisfies co-opacity [23]. Hence, the history Hm
generated by the concurrent miner using BTO satisfies co-opacity.
**Theorem 2. Any history Hm generated by the concurrent miner using the MVTO pro-**
_tocol satisfies opacity._
**Proof:** The concurrent miner executes AUs concurrently using the MVTO protocol and generates
a concurrent history Hm. The underlying MVTO protocol ensures the correctness
of the concurrent execution of Hm. The MVTO protocol [10] guarantees that any history
generated by it satisfies opacity [8]. Hence, the history Hm
generated by the concurrent miner using MVTO satisfies opacity.
**Theorem 3. All the dependencies between the conflicting nodes are captured in BG.**
**Proof:** Dependencies between the conflicting nodes are captured in the BG using the LPs
of the lock-free graph library methods defined above. The concurrent miner constructs
the lock-free BG using the BTO and MVTO protocols (SubSection 4.1). The BG consists of
vertices and edges, where each committed AU acts as a vertex and the edges (or dependencies)
represent the conflicts of the respective STM protocol (BTO and MVTO).
As we know, the STM protocols BTO [9, Chap 4] and MVTO [10] used in this paper for
the concurrent execution are correct, i.e., these protocols capture all the dependencies
between the conflicting nodes correctly. Hence, all the dependencies between the
conflicting nodes are captured in the BG.
**Theorem 4. A history Hm generated by the concurrent miner using BTO protocol and**
_a history Hv generated by a concurrent validator are view equivalent._
**Proof:** A concurrent miner executes the AUs of Hm concurrently using the BTO protocol,
captures the dependencies of Hm in the BG, and proposes a block B. It then broadcasts
the block B along with the BG to the concurrent validators to verify the block B. A
concurrent validator applies a topological sort on the BG and obtains an equivalent
serial schedule Hv. Since the BG constructed from Hm considers all the conflicts and
Hv is obtained from a topological sort on the BG, Hv is equivalent to Hm. Similarly,
Hv also follows the reads-from relation of Hm. Hence, Hv is legal. Since Hv and
Hm are equivalent to each other and Hv is legal, Hm and Hv are view equivalent.
**Theorem 5. A history Hm generated by the concurrent miner using MVTO protocol**
_and a history Hv generated by a concurrent validator are multi-version view equiva-_
_lent._
**Proof:** Similar to the proof of Theorem 4, the concurrent miner executes the AUs of
Hm concurrently using the MVTO protocol, captures the dependencies in the BG, proposes
a block B, and broadcasts it to the concurrent validators to verify it. MVTO
maintains multiple versions corresponding to each shared object. Later, a concurrent
validator obtains Hv by applying a topological sort on the BG provided by the concurrent
miner. Since Hv is obtained from a topological sort on the BG, Hv is equivalent to Hm.
Similarly, the BG maintains the reads-from relations of Hm. So, from the MVTO protocol,
if Tj reads a value of a shared object k, say rj(k), from Ti in Hm, then Ti committed
before rj(k) in Hv. Therefore, Hv is valid. Since Hv and Hm are equivalent to each
other and Hv is valid, Hm and Hv are multi-version view equivalent.
**6. Experimental Evaluation**
We aim to increase the efficiency of the miners and validators by employing the concurrent
execution of AUs while optimizing the size of the BG appended by the miner
to the block. To assess the efficiency of the proposed approach, we performed a series
of benchmark experiments with Ethereum [4] smart contracts from the
Solidity documentation [6]. Since multi-threading is not supported by the Ethereum
Virtual Machine (EVM) [4, 5], we converted the Ethereum smart contracts into C++.
We evaluated the proposed approach against the state-of-the-art approaches [1, 5, 11] over
baseline serial execution on three different workloads, varying the number of AUs,
the number of threads, and the number of shared objects. The benchmark experiments
are conservative and consist of AUs from one or a few smart contracts per block, which
leads to a higher degree of conflicts than occurs in practice, where a block consists
of AUs from different contracts (≈ 1.5 million deployed smart contracts [24]).
Due to the lower conflict rate in an actual blockchain, the proposed approach is expected to
provide greater concurrency. We structure our experimental evaluation to answer the
following questions:
1. How much speedup is achieved with varying AUs by concurrent miners and
validators when the number of threads and shared objects is fixed? As conflicts
increase with increasing AUs, we expect a decrease in speedup.
2. How does the speedup change when increasing the number of threads, with a fixed
number of AUs and shared objects? We expect the speedup to increase with
increasing threads, bounded by the logical threads available in the system.
3. How does the speedup shift over different numbers of shared objects, with fixed AUs and threads?
We expect an increase in speedup, since conflicts decrease as the number of objects increases.
So, we anticipate that concurrent miners and validators outperform serial miners
and validators when there are fewer conflicts.
_6.1. Contract Selection and Benchmarking_
This section provides a comprehensive overview of the benchmark contracts coin, ballot,
and simple auction from the Solidity documentation [6], selected as real-world examples
for evaluating the proposed approach. The AUs in a block for the coin, ballot,
and auction benchmarks operate on the same contract, i.e., they consist of transaction
calls to one or more methods of the same contract. In practice, a block consists of
AUs from different contracts; hence we designed another benchmark contract called
_mix, consisting of contract transactions from coin, ballot, and auction in equal_
proportion in a block. The benchmark contracts and their respective methods are as follows:
**Coin Contract:** The coin contract is the simplest form of sub-currency. The users
involved in the contract have accounts, and the accounts are shared objects. It implements
methods such as mint(), transfer()/send(), and getbalance(), which
represent the AUs in a block. The contract deployer uses the mint() method to
give initial coins/balances to each account, with the same fixed amount. We initialized
the coin contract's initial state with a fixed number of accounts in all benchmarks and
workloads. Using transfer(), users can transfer coins from one account to another.
The getbalance() method is used to check the coins in a user account. For the
experiments, a block consists of 75% getbalance() and 25% transfer() calls.
A conflict between AUs occurs if they access a common object (account) and at least
one of them performs a transfer() operation. A minimal sketch of these AUs in C++ is given below.
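The following is our illustrative C++ rendering of the coin contract's AUs; the paper's benchmark code is a translation of the Solidity example, and this sketch only captures its shape.

```cpp
#include <unordered_map>

struct Coin {
    std::unordered_map<int, int> balances;       // shared objects: accounts

    void mint(int acc, int amount) { balances[acc] += amount; }

    bool transfer(int from, int to, int amt) {   // write AU: the conflict point
        if (balances[from] < amt) return false;
        balances[from] -= amt;
        balances[to] += amt;
        return true;
    }

    int getbalance(int acc) { return balances[acc]; } // read-only AU
};
```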
**Ballot Contract:** The ballot contract is an electronic voting contract in which the voters
and proposals are shared objects. The vote(), delegate(), and winningproposal()
methods make up the ballot contract. The voters use the vote() method to cast their vote
for a specific proposal. Alternatively, a voter can delegate their vote to another voter using the
delegate() method. A voter can cast or delegate their vote only once. At the end
of the ballot, winningproposal() is used to compute the winner. We initialized
the ballot contract's initial state with a fixed number of proposals and voters for
benchmarking on different workloads. The proposal-to-voter ratio is
fixed at 5% to 95% of the total shared objects. For the experiments, a block consists of 90% vote()
and 10% delegate() method calls, followed by a winningproposal() call.
The AUs conflict if they operate on the same object. So, if two voters
vote() for the same proposal simultaneously, then they conflict.
**Simple Auction Contract:** This is an online auction contract in which bidders bid for
a commodity online. In the end, the amount from the maximum bidder is granted to
the owner of the commodity. The bidders, the maximum bid, and the maximum bidder are the
shared objects. In our experiments, the initial contract state is a fixed number of bidders
with a fixed initial account balance and a fixed period for the auction to end. In the
beginning, the maximum bidder and bid are set to null (the base price and the owner
can be set accordingly). A bidder uses the contract method bid() to bid for the
commodity with their bid amount; the maximum bid amount and bidder change when
a bid is higher than the current maximum. A bidder uses the withdraw() method to
move the balance of their previous bid into their account. A bidder uses the bidEnd()
method to find out whether the auction is over. Finally, when the auction has ended, the maximum
bidder's (winner's) amount is transferred to the commodity owner, and commodity ownership
is transferred to the maximum bidder. For benchmarking, in our experiments a block
consists of 8% bid(), 90% withdraw(), and 2% bidEnd() method calls. The
maximum bidder and maximum bid are the conflict points whenever a new bid with the current
highest amount occurs.
**Mix Contract:** In this contract, we combine the AUs in equal proportion from the
above three contracts (coin, ballot, and auction). Therefore, our experimental block
consists of an equal number of the corresponding contract transactions, with the same initial
state as initialized in the above contracts.
_6.2. Experimental Setup and Workloads_
We ran our experiments on a large-scale 2-socket Intel(R) Xeon(R) CPU E5-2690
V4 @ 2.60 GHz with a total of 56 hyper-threads (14 cores per socket and two threads
per core) with 32 GB of RAM running Ubuntu 18.04.
In our experiments, we noticed that the speedup varies from contract to contract
on different workloads. The speedups on the various contracts are not meant for comparison
between contracts; instead, they demonstrate the efficiency of the proposed approach on several
use-cases in the blockchain. We considered the following three workloads for
performance evaluation:
1. In workload 1 (W1), a block consists of a number of AUs varying from 50 to 400, with
the number of threads fixed at 50 and the shared objects at 2K. The number of AUs per
block in the Ethereum blockchain is on average 100, though the actual number can exceed
200 [5], with a theoretical maximum of ≈ 400 [25] after a recent increase in the gas limit. Over
time, the number of AUs per block is increasing. In practice, a block can have
fewer AUs than the theoretical cap, depending on the gas limit of the block
and the gas price of the transactions. We will see that, in a block, the percentage
of data conflicts increases with increasing AUs. A conflict within a block arises
when different AUs access a common shared object and at least one
of them performs an update. We found that the data conflicts vary from
contract to contract and have a varied effect on speedup.
2. In workload 2 (W2), we varied the number of threads from 10 to 60 while fixing
the AUs at 300 and the shared objects at 2K. Our experimental system provides a
maximum of 56 hardware threads, so we experimented with a maximum of 60
threads. We observed that the speedup of the proposed approach increases with
an increasing number of threads, limited by the available logical threads.
3. The numbers of threads and AUs in workload 3 (W3) are fixed at 50 and 300, respectively,
while the shared objects range from 1K to 6K. This workload is used with
each contract to measure the impact of the number of participants involved. Data
conflicts are expected to decrease with an increasing number of shared objects;
however, the search time may increase. The speedup depends on the execution
of the contract, but it increases with an increasing number of shared objects.
_6.3. Analysis_
In our experiments, blocks of AUs were generated for each benchmark contract on
the three workloads: W1 (varying AUs), W2 (varying threads), and W3 (varying shared
objects). Then, the concurrent miners and validators execute the blocks concurrently.
The serial execution of the corresponding blocks is taken as the baseline to compute the
speedup of the proposed concurrent miners and validators. The running time is collected
over 15 iterations with 10 blocks per iteration, and 10 validators validate each
block. The first block of each iteration is treated as a warm-up run, and a total of 150
blocks are created for each reading. So, each block execution time is averaged over the 9
remaining blocks per iteration. Further, the total time taken by all iterations is averaged
over the number of iterations for each reading; Eqn (1) is used to compute a reading's time:
_αt =_
_n_ _m−1_
� �
_βt_
_i=1_ _b=1_ (1)
_n_ (m 1)
_∗_ _−_
Where αt is an average time for a reading, n is the number of iterations, m is the
number of blocks, and βt is block execution time.
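A short C++ sketch of this averaging follows (our illustration; beta[i][b] is assumed to hold the execution time of block b in iteration i):

```cpp
#include <cstddef>
#include <vector>

// Eqn (1): the first (warm-up) block of each iteration is skipped, so a
// reading averages over n * (m - 1) block execution times.
double readingTime(const std::vector<std::vector<double>>& beta) {
    double sum = 0.0;
    std::size_t n = beta.size();
    std::size_t m = beta.empty() ? 0 : beta[0].size();
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t b = 1; b < m; ++b)      // b = 0 is the warm-up block
            sum += beta[i][b];
    return sum / static_cast<double>(n * (m - 1));
}
```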
In all plots, figures (a), (b), and (c) correspond to workloads W1, W2, and W3, respectively.
Figure 3 to Figure 6 show the speedup achieved by the proposed and state-of-the-art
concurrent miners over serial miners for all benchmarks and workloads. Figure 7
to Figure 10 show the speedup achieved by the proposed and state-of-the-art concurrent
decentralized validators over serial validators for all benchmarks and workloads.
Figure 11 to Figure 14 show the speedup achieved by the proposed and state-of-the-art concurrent
fork-join validators over serial validators. Figure 15 to Figure 18 show the average
number of edges (dependencies) and vertices (AUs) in the block graph for the respective
contracts on all workloads, while Figure 19 to Figure 22 show the percentage of additional
space required to store the block graph in an Ethereum block. Similar observations
were found [26] for the fork-join validator, the average number of dependencies,
and the space requirements on the other contracts.
We observed that the speedup for all benchmark contracts follows roughly the same
pattern. In the read-intensive benchmarks (coin and mix), the speedup tends to increase on
all the workloads, while in the write-intensive benchmarks (ballot and auction), the speedup
drops under high contention. We also observed that there may not be much speedup
for concurrent miners with fewer AUs (less than 100) in the block, conceivably due to
multi-threading overhead. However, the speedup for concurrent validators generally
increases across all the benchmarks and workloads. Fork-join concurrent validators on
W2 are an exception: their speedup drops with an increasing number of
threads, since fork-join follows a master-slave approach in which the master thread becomes
a performance bottleneck. We also observed that the concurrent validators achieve a
higher speedup than the concurrent miners, because the concurrent miner executes
the AUs non-deterministically, finds the conflicting AUs, and creates the concurrent bin and an
efficient BG that let the validators execute the AUs deterministically.
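The deterministic two-phase schedule followed by the validators can be sketched as follows. This is a simplified serial simulation of the dependency handling, assuming the BG is given as an edge list and that execute_au stands in for re-executing an AU; the real validators run the ready AUs on multiple threads.

```python
# Simplified sketch of the validator's two-phase schedule: first the
# concurrent bin (mutually non-conflicting AUs, any order), then the
# block graph (BG), where an AU runs only after all its predecessors.
from collections import deque

def validate_block(bin_aus, bg_edges, execute_au):
    # Phase 1: bin AUs have no conflicts, so order is irrelevant and
    # they could all run in parallel.
    for au in bin_aus:
        execute_au(au)

    # Phase 2: topological execution of the BG (Kahn's algorithm);
    # every batch of zero in-degree vertices is parallelizable.
    indeg = {v: 0 for e in bg_edges for v in e}
    succ = {v: [] for v in indeg}
    for u, v in bg_edges:          # edge u -> v: u must run before v
        indeg[v] += 1
        succ[u].append(v)
    ready = deque(v for v, d in indeg.items() if d == 0)
    while ready:
        u = ready.popleft()
        execute_au(u)
        for v in succ[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                ready.append(v)

# Example: AUs 1 and 2 are non-conflicting; AU 4 depends on AU 3.
validate_block([1, 2], [(3, 4)], execute_au=print)
```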
Our experimental results also show the BG statistics and the additional space required to
store the BG in a block of the Ethereum blockchain, i.e., the space overhead. We
compare our proposed approach with the existing speculative-bin-based approach
(Spec Bin) [11], the fork-join approach (FJ-Validator) [5], and the approach proposed
in [1] (which we call the default/Def approach). The proposed approach combines the benefits
of both the bin-based and the STM approaches to get the maximum benefit for concurrent
miners and validators. The proposed approach[2] produces an optimal BG, reduces the
space overhead, and outperforms the state-of-the-art approaches.
Figure 3(a) to Figure 6(a) show the speedup for the concurrent miner on W1. As shown
in Figure 3(a) and Figure 6(a), for read-intensive contracts such as the coin and mix
contracts, the speedup increases with an increase in AUs. In write-intensive
contracts such as the ballot and auction contracts, the speedup does not increase
with an increase in AUs; instead, it may drop as AUs increase, as shown in
Figure 4(a) and Figure 5(a), respectively. This is because contention increases with an
increase in AUs.
2 In the figures, the proposed approaches are the legend items shown in bold.
[Figure 3: Concurrent miner speedup over serial miner for the coin contract. Panels (a)-(c): W1 (x-axis: number of AUs), W2 (x-axis: number of threads), W3 (x-axis: number of shared objects); y-axis: speedup. Legend: Def-BTO, Def-MVTO, Spec Bin, Opt-BTO, and Opt-MVTO miners vs. the serial miner.]
[Figure 4: Concurrent miner speedup over serial miner for the ballot contract. Panels (a)-(c) as in Figure 3.]
[Figure 5: Concurrent miner speedup over serial miner for the auction contract. Panels (a)-(c) as in Figure 3.]
[Figure 6: Concurrent miner speedup over serial miner for the mix contract. Panels (a)-(c) as in Figure 3.]
[Figure 7: Concurrent decentralized validator speedup over serial validator for the coin contract. Panels (a)-(c): W1 (number of AUs), W2 (number of threads), W3 (number of shared objects). Legend: Def-BTO, Def-MVTO, Spec Bin, Opt-BTO, and Opt-MVTO Dec-Validators vs. the serial validator.]
[Figure 8: Concurrent decentralized validator speedup over serial validator for the ballot contract. Panels (a)-(c) as in Figure 7.]
[Figure 9: Concurrent decentralized validator speedup over serial validator for the auction contract. Panels (a)-(c) as in Figure 7.]
[Figure 10: Concurrent decentralized validator speedup over serial validator for the mix contract. Panels (a)-(c) as in Figure 7.]
[Figure 11: Concurrent fork-join validator speedup over serial validator for the coin contract. Panels (a)-(c): W1 (number of AUs), W2 (number of threads), W3 (number of shared objects). Legend: Def-BTO, Def-MVTO, Opt-BTO, and Opt-MVTO FJ-Validators vs. the serial validator.]
[Figure 12: Concurrent fork-join validator speedup over serial validator for the ballot contract. Panels (a)-(c) as in Figure 11.]
[Figure 13: Concurrent fork-join validator speedup over serial validator for the auction contract. Panels (a)-(c) as in Figure 11.]
[Figure 14: Concurrent fork-join validator speedup over serial validator for the mix contract. Panels (a)-(c) as in Figure 11.]
Figure 7(a) through Figure 14(a) show the speedup for concurrent validators over
serial validators on W1. The speedup for concurrent validators (decentralized and
fork-join) increases with an increase in AUs. Figure 7(a) to Figure 10(a) demonstrate the
speedup achieved by the decentralized validator. It can be observed that for read-intensive
benchmarks, the optimized MVTO decentralized validator (Opt-MVTO Dec-Validator)
outperforms the other validators. In contrast, in write-intensive benchmarks, the default
MVTO decentralized validator (Def-MVTO Dec-Validator) achieves a better speedup
than the other validators, due to the multithreading overhead of the concurrent bin
when it holds very few AUs. We observed that with increasing AUs in the blocks, the conflicts
also increase; as a result, the number of transactions in the concurrent bin decreases.
The speedup of the speculative bin decentralized validator (Spec Bin Dec-Validator) is
considerably lower than that of the concurrent STM Dec-Validators, because the STM miner
precisely determines the dependencies between the AUs of the block and harnesses more
concurrency than the bin-based miner. However, if a block consists of AUs with very
few dependencies, the Spec Bin Dec-Validator is expected to outperform the other
validators, as shown in Figure 7(a).
Figure 11(a) to Figure 14(a) show the speedup for fork-join validators on W1 for
all the benchmarks. We can observe that the proposed optimized MVTO fork-join
validator (Opt-MVTO FJ-Validator) outperforms the other validators due to the lower
overhead at the fork-join master validator thread when allocating independent AUs to
slave validator threads. We noticed that the speedup of the decentralized concurrent
validators is considerably higher than that of the fork-join concurrent validators, because
the decentralized approach has no bottleneck for allocating the AUs; all its threads work
independently. It can also be observed that with fewer AUs, in several benchmarks the
speedup of the fork-join validators drops below that of the serial validator, because the
overhead of thread creation dominates the speedup achieved, as shown in Figure 12(a),
Figure 13(a), and Figure 14(a).
In W1, concurrent miners achieve a minimum of ≈2× and a maximum of up to 10×
speedup over serial miners across the contracts. The concurrent STM decentralized
validators achieve a minimum speedup of ≈4× and a maximum of up to ≈14×, while the
Spec Bin Dec-Validator ranges from ≈3× to ≈9× over the serial validator across the
contracts. The fork-join concurrent validators achieve a maximum speedup of ≈5× over
the serial validator.
Figure 3(b) to Figure 14(b) show the speedup on W2. The speedup increases with
an increase in the number of threads; however, it is limited by the maximum number
of logical threads in the experimental system. Thus, a slight drop in the speedup can be
seen from 50 to 60 threads, because the experimental system has a maximum of 56
logical threads. The rest of the concurrent miner observations are similar to those on
workload W1 with respect to the read-intensive and write-intensive benchmarks.
As shown in Figure 7(b) to Figure 10(b), the speedup of the concurrent decentralized
validators increases with an increase in threads, while, as shown in Figure 11(b) to
Figure 14(b), the speedup of the concurrent fork-join validators drops with an increase
in threads. The reason for this drop is that the master validator thread in the fork-join
approach becomes a bottleneck. The decentralized validator observations show that for
the read-intensive benchmarks the Opt-MVTO Dec-Validator outperforms the other
validators, while in the write-intensive benchmarks the Def-MVTO Dec-Validator
outperforms the other validators, as shown in Figure 8(b). However, in the fork-join
validator approach, the proposed Opt-MVTO FJ-Validator outperforms all other
validators due to the optimization gained from including the bin-based approach.
In W2, concurrent miners achieve a minimum of ≈1.5× and a maximum of up to ≈8×
speedup over serial miners across the contracts. The concurrent STM decentralized
validators achieve a minimum speedup of ≈4× and a maximum of up to ≈10×, while the
Spec Bin Dec-Validator ranges from ≈3× to ≈7× over the serial validator across the
contracts. The fork-join concurrent validators achieve a maximum speedup of ≈4.5×
over the serial validator.
The plots in Figure 3(c) to Figure 14(c) show the concurrent miner and validator
speedup on W3. As the number of shared objects increases, the concurrent miner speedup
increases, because conflicts decrease due to lower contention. Additionally, when
contention is very low, more AUs are added to the concurrent bin. However, this also
depends on the contract: if the contract is write-intensive, fewer AUs are added to the
concurrent bin, while more AUs are added to the concurrent bin for read-intensive contracts.
As shown in Figure 3(c) and Figure 6(c), the speculative bin miners surpass the STM
miners on the read-intensive contracts. In Figure 4(c) and Figure 5(c), the Def-MVTO
Miner outperforms the other miners as shared objects increase; in contrast, the Def-BTO
Miner performs better than the other miners when AUs are fewer, because the search
time needed in write-intensive contracts to determine the respective versions is much
higher in the MVTO miner than in the BTO miner. Nevertheless, all concurrent miners
perform better than the serial miner. In W3, concurrent miners start at around 1.3× and
achieve a maximum of up to 14× speedup over serial miners across all the contracts.
The speedup by validators (decentralized and fork-join) increases with shared objects.
In Figure 7(c), Figure 9(c), and Figure 10(c), the proposed Opt-STM Dec-Validator
performs better than the other validators. However, for write-intensive contracts, the
number of AUs in the concurrent bin is smaller; therefore, the speedup of the Def-STM
Dec-Validators is greater than that of the Opt-STM Dec-Validators, as shown in
Figure 8(c). The Spec Bin Dec-Validator's speedup is considerably lower than that of the
concurrent STM Dec-Validators because the STM miner determines the dependencies
between the AUs more precisely than the bin-based miner.
In the fork-join validators, the proposed Opt-STM FJ-Validators outperform all other
FJ-Validators, as shown in Figure 11(c) to Figure 14(c), because of the lower contention
at the master validator thread when allocating independent AUs to slave validator
threads. We noticed that the speedup of the decentralized concurrent validators is
considerably higher than that of the fork-join concurrent validators, for the reasons
explained above. In W3, the concurrent STM decentralized validators start at around 4×
and achieve a maximum of up to 14× speedup, while the Spec Bin Dec-Validator ranges
from 1× to 14× over the serial validator across the contracts. The fork-join concurrent
validators achieve a maximum speedup of 7× over the serial validator. The concurrent
validators benefit from the work of the concurrent miners and outperform the serial validators.
Figure 15 to Figure 18 show the average number of edges (dependencies, as
histograms) and vertices (AUs, as a line chart) in the BG for the respective contracts on
all the workloads[3]. The average number of edges (dependencies) in the BG for both the Default and
3 We used histograms and a line chart to differentiate vertices from edges, to avoid confusion when comparing them.
[Figure 15: Average number of edges (dependencies) and vertices (AUs) in the block graph for the coin contract. Panels (a)-(c): STM miner on W1 (number of AUs), W2 (number of threads), W3 (number of shared objects). Legend: (Opt-)BTO and (Opt-)MVTO edges; BTO, Opt-BTO, MVTO, and Opt-MVTO vertices.]
[Figure 16: Average number of edges (dependencies) and vertices (AUs) in the block graph for the ballot contract. Panels (a)-(c) as in Figure 15.]
[Figure 17: Average number of edges (dependencies) and vertices (AUs) in the block graph for the auction contract. Panels (a)-(c) as in Figure 15.]
[Figure 18: Average number of edges (dependencies) and vertices (AUs) in the block graph for the mix contract. Panels (a)-(c) as in Figure 15.]
Optimized approaches of the respective STM protocol remains the same; hence only two
histograms are plotted for simplicity. As shown in Figure 15(a) to Figure 18(a),
with increasing AUs in W1 the BG edges and vertices also increase, showing that the
contention increases with increasing AUs in the blocks. As shown in Figure 15(b)
to Figure 18(b), in W2 the number of vertices and edges does not change much.
However, in W3 the number of vertices and edges decreases, as shown in Figure 15(c)
to Figure 18(c).
In our proposed approach, the BG consists of vertices corresponding only to the
conflicting AUs, while the non-conflicting AUs are stored in the concurrent bin. In the
approach of Anjana et al. [1], every AU has a corresponding vertex node in the BG, as
shown in Figure 15 to Figure 18. So, in W1, the BG contains 100 vertices if the block
consists of 100 AUs and 200 vertices if it consists of 200 AUs; in W2 and W3 it contains
300 vertices. Keeping vertices only for the conflicting AUs in the BG saves considerable
space, because each vertex node takes 28 bytes of storage.
The average block size in the Bitcoin and Ethereum blockchains is ≈1200 KB [27]
and ≈20.98 KB [28], respectively, measured over the interval from Jan 1st, 2019 to
Dec 31st, 2020. Further, the block size keeps increasing, and with it the number of
transactions in the block. The average number of transactions in an Ethereum block is
≈100 [28]; therefore, in the Ethereum blockchain, each transaction is on average
≈0.2 KB (≈200 bytes). We computed the block size from these simple calculations as
the number of AUs in the block varies for W1. Eqn (2) is used to compute the block
size (B) for the experiments:

\[ B = 200 \cdot N_{AUs} \tag{2} \]

where B is the block size in bytes, NAUs is the number of AUs in the block, and 200 is
the average size of an AU in bytes.
To store the block graph BG(V, E) in the block, we use an adjacency list. In the
BG, a vertex node takes Vs = 28 bytes of storage, consisting of 3 integer variables and
2 pointers, while an edge node needs Es = 20 bytes. Eqn (3) is used to compute the
size of the BG (β, in bytes), while Eqn (4) is used to compute the additional space
(βp, as a percentage of the block size) needed to store the BG in the block.
[Figure 19: Percentage of additional space to store the block graph in an Ethereum block for the coin contract. Panels (a)-(c): STM miner on W1 (number of AUs), W2 (number of threads), W3 (number of shared objects). Legend: Def-BTO BG, Def-MVTO BG, Opt-BTO BG, Opt-MVTO BG.]
[Figure 20: Percentage of additional space to store the block graph in an Ethereum block for the ballot contract. Panels (a)-(c) as in Figure 19.]
[Figure 21: Percentage of additional space to store the block graph in an Ethereum block for the auction contract. Panels (a)-(c) as in Figure 19.]
[Figure 22: Percentage of additional space to store the block graph in an Ethereum block for the mix contract. Panels (a)-(c) as in Figure 19.]
\[ \beta = (V_s \cdot N_{AUs}) + (E_s \cdot M_e) \tag{3} \]

where β is the size of the BG in bytes, Vs is the size of a vertex node of the BG in bytes,
Es is the size (in bytes) of an edge node of the BG, and Me is the number of edges in the BG.

\[ \beta_p = \frac{\beta \cdot 100}{B} \tag{4} \]
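Eqns (2)-(4) amount to simple arithmetic, worked through in the short sketch below. The 200-byte AU, 28-byte vertex, and 20-byte edge sizes come from the text above; the vertex and edge counts in the example are made-up values chosen only for illustration.

```python
# Worked example of Eqns (2)-(4): block size, BG size, and the BG's
# storage overhead as a percentage of the block size.
V_S, E_S, AU_S = 28, 20, 200       # vertex, edge, and AU sizes in bytes

def block_size(n_aus):             # Eqn (2): B = 200 * N_AUs
    return AU_S * n_aus

def bg_size(n_vertices, n_edges):  # Eqn (3): beta = Vs*N + Es*Me
    return V_S * n_vertices + E_S * n_edges

def bg_overhead_pct(beta, b):      # Eqn (4): beta_p = beta * 100 / B
    return beta * 100 / b

# Example: a 300-AU block in which only 80 AUs conflict (BG vertices)
# with 150 dependencies between them (BG edges).
B = block_size(300)                # 60000 bytes
beta = bg_size(80, 150)            # 2240 + 3000 = 5240 bytes
print(f"{bg_overhead_pct(beta, B):.2f}%")  # -> 8.73%
```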
The plots in Figure 19 to Figure 22 demonstrate the average percentage of additional
storage space required to store the BG in the Ethereum block on all benchmarks and
workloads. We can observe that the space requirement increases with the number of
dependencies and vertices in the BG. However, the space requirement of our proposed
approach is smaller than that of the existing default approach. As shown in Figure 16,
the dependencies and vertices are highest in the ballot contract compared to the other
contracts, so its space requirement is also the highest, as shown in Figure 20; this is
because ballot is a write-intensive benchmark. It can also be seen that the space
requirements of the BGs produced by the Opt-BTO and Opt-MVTO miners are smaller
than those of the Def-BTO and Def-MVTO miners, respectively.
The proposed approach significantly reduces the BG size for the mix contract across
all the workloads, as shown in Figure 22, which clearly demonstrates its storage
efficiency. The storage advantage comes from combining the bin-based approach with
the STM approach: only the concurrent bin information needs to be added to the block,
which requires less space than having a corresponding vertex in the BG for every AU of
the block. So, we combine the advantages of both approaches (STM and bin) to get the
maximum speedup with a storage-optimal BG. The average space required for the BG,
as a percentage of the block size, is 34.55%, 31.69%, 17.24%, and 13.79% for the
Def-BTO, Def-MVTO, Opt-BTO, and Opt-MVTO approaches, respectively. The proposed
Opt-BTO and Opt-MVTO BGs are thus 2× (or 200.47%) and 2.30× (or 229.80%) more
space-efficient than the Def-BTO and Def-MVTO BGs, respectively, with average speedups
of 4.49× and 5.21× for the Opt-BTO and Opt-MVTO concurrent miners over the serial
miner. The Opt-BTO and Opt-MVTO decentralized concurrent validators outperform the
serial validator by an average of 7.68× and 8.60×, respectively.
**7. Conclusion**
To exploit multi-core processors, we have proposed the concurrent execution
of smart contracts by miners and validators, which improves throughput. Initially,
the miner executes the smart contracts concurrently using an optimistic STM protocol
such as BTO. To reduce the number of aborts and further improve efficiency, the
concurrent miner uses the MVTO protocol, which maintains multiple versions of each
data object. The concurrent miner proposes a block that consists of a set of transactions,
the concurrent bin, the BG, the previous block hash, and the final state of each shared
data object. Later, the validators re-execute the same smart contract transactions
concurrently and deterministically in two phases, using the concurrent bin followed by
the BG given by the miner, which captures the conflicting relations among the
transactions, in order to verify the final state. Overall, the proposed Opt-BTO and
Opt-MVTO BGs are 2× (or 200.47%) and 2.30× (or 229.80%) more space-efficient than
the Def-BTO and Def-MVTO BGs, respectively, with average speedups of 4.49× and
5.21× for the Opt-BTO and Opt-MVTO concurrent miners over serial, respectively. The
Opt-BTO and Opt-MVTO decentralized concurrent validators outperform the serial
validator by an average of 7.68× and 8.60×, respectively.
_×_ _×_
**Acknowledgements. This project was partially supported by a research grant from**
Thynkblynk Technologies Pvt. Ltd, and MEITY project number 4(20)/2019-ITEA.
**References**
[1] P. S. Anjana, S. Kumari, S. Peri, S. Rathor, A. Somani, An efficient framework
for optimistic concurrent execution of smart contracts, in: 2019 27th Euromicro
International Conference on Parallel, Distributed and Network-Based Processing
(PDP), IEEE, 2019, pp. 83–92.
[2] P. S. Anjana, S. Kumari, S. Peri, S. Rathor, A. Somani, Entitling concurrency
to smart contracts using optimistic transactional memory, in: Proceedings of
the 20th International Conference on Distributed Computing and Networking,
ICDCN ’19, Association for Computing Machinery, New York, NY, USA, 2019,
p. 508.
[3] S. Nakamoto, Bitcoin: A peer-to-peer electronic cash system, https://bitcoin.org/bitcoin.pdf (2009).
[4] Ethereum, http://github.com/ethereum, [Accessed 26-3-2019].
[5] T. Dickerson, P. Gazzillo, M. Herlihy, E. Koskinen, Adding Concurrency to Smart
Contracts, in: Proceedings of the ACM Symposium on Principles of Distributed
Computing, PODC ’17, ACM, New York, NY, USA, 2017, pp. 303–312.
[6] Solidity Documentation, https://solidity.readthedocs.io/, [Accessed 15-09-2020].
[7] C. H. Papadimitriou, The serializability of concurrent database updates, J. ACM
26 (4) (1979) 631–653.
[8] R. Guerraoui, M. Kapalka, On the correctness of transactional memory (2008)
175–184.
[9] G. Weikum, G. Vossen, Transactional Info Systems: Theory, Algorithms, and the
Practice of Concurrency Control and Recovery, 2002.
[10] P. Kumar, S. Peri, K. Vidyasankar, A TimeStamp Based Multi-version STM Al
gorithm, in: ICDCN, Springer, 2014, pp. 212–226.
[11] V. Saraph, M. Herlihy, An Empirical Study of Speculative Concurrency in
Ethereum Smart Contracts, in: International Conference on Blockchain Eco
nomics, Security and Protocols (Tokenomics 2019), Vol. 71 of OASIcs, Dagstuhl,
Germany, 2020, pp. 4:1–4:15.
[12] M. Herlihy, N. Shavit, On the nature of progress, OPODIS 2011, Springer.
[13] N. Szabo, Formalizing and securing relationships on public networks, First Mon
day 2 (9).
[14] L. Luu, J. Teutsch, R. Kulkarni, P. Saxena, Demystifying incentives in the con
sensus computer, in: Proceedings of the 22Nd ACM SIGSAC Conference on
Computer and Communications Security, CCS ’15, ACM, New York, NY, USA,
2015, pp. 706–719.
[15] K. Delmolino, M. Arnett, A. E. Kosba, A. Miller, E. Shi, Step by step towards
creating a safe smart contract: Lessons and insights from a cryptocurrency lab, in:
Financial Cryptography and Data Security - FC 2016 International Workshops,
BITCOIN, VOTING, and WAHC, Christ Church, Barbados, February 26, 2016.,
Springer.
[16] L. Luu, D.-H. Chu, H. Olickel, P. Saxena, A. Hobor, Making smart contracts
smarter, in: CCS ’16, ACM.
[17] A. Zhang, K. Zhang, Enabling concurrency on smart contracts using multiversion
ordering, in: Web and Big Data, Springer, Cham, 2018, pp. 425–439.
[18] M. J. Amiri, D. Agrawal, A. El Abbadi, Parblockchain: Leveraging transaction
parallelism in permissioned blockchain systems, in: 2019 IEEE 39th International
Conference on Distributed Computing Systems (ICDCS), IEEE, 2019, pp. 1337–
1347.
[19] P. S. Anjana, H. Attiya, S. Kumari, S. Peri, A. Somani, Efficient concurrent exe
cution of smart contracts in blockchains using object-based transactional memory,
in: Networked Systems, Springer, Cham, 2021, pp. 77–93.
[20] R. Guerraoui, M. Kapalka, Principles of Transactional Memory, Synthesis Lec
tures on Distributed Computing Theory, 2010.
[21] P. Kuznetsov, S. Peri, Non-interference and local correctness in transactional
memory, Theor. Comput. Sci. 688 (2017) 103–116.
[22] M. P. Herlihy, J. M. Wing, Linearizability: A correctness condition for concurrent
objects, ACM Trans. Program. Lang. Syst., 1990.
[23] S. Peri, A. Singh, A. Somani, Efficient means of achieving composability using
object based semantics in transactional memory systems, in: Networked Systems,
Springer, Cham, 2019, pp. 157–174.
[24] E. Muzzy, S. Rizvi, D. Sui, Ethereum by the numbers, https://media.consensys.net/ethereum-by-the-numbers-3520f44565a9, [Accessed 15-09-2020] (2018).
[25] N. Shukla, Ethereum's increased gas limit enables network to hit 44 tps, https://eng.ambcrypto.com/ethereums-increased-gas-limit-enables-network-to-hit-44-tps/amp/, [Accessed 15-09-2020].
[26] P. S. Anjana, S. Kumari, S. Peri, S. Rathor, A. Somani, An efficient framework for concurrent execution of smart contracts, CoRR abs/1809.01326, http://arxiv.org/abs/1809.01326.
[27] Bitcoin Block Size, https://www.blockchain.com/en/charts, [Accessed 15-09-2020].
[28] Ethereum Stats, https://etherscan.io/charts, [Accessed 15-09-2020].
# Optimizing Data Management in Grid Environments
Antonis Zissimos, Katerina Doka, Antony Chazapis, Dimitrios Tsoumakos,
and Nectarios Koziris
National Technical University of Athens
School of Electrical and Computer Engineering
Computing Systems Laboratory
_{azisi,katerina,chazapis,dtsouma,nkoziris}@cslab.ece.ntua.gr_
**Abstract. Grids currently serve as platforms for numerous scientific**
as well as business applications that generate and access vast amounts
of data. In this paper, we address the need for efficient, scalable and
robust data management in Grid environments. We propose a fully decentralized and adaptive mechanism comprising of two components: A
Distributed Replica Location Service (DRLS ) and a data transfer mechanism called GridTorrent. They both adopt Peer-to-Peer techniques in
order to overcome performance bottlenecks and single points of failure.
On one hand, DRLS ensures resilience by relying on a Byzantine-tolerant
protocol and is able to handle massive concurrent requests even during
node churn. On the other hand, GridTorrent allows for maximum bandwidth utilization through collaborative sharing among the various data
providers and consumers. The proposed integrated architecture is completely backwards-compatible with already deployed Grids. To demonstrate these points, experiments have been conducted in LAN as well
as WAN environments under various workloads. The evaluation shows
that our scheme vastly outperforms the conventional mechanisms in both
efficiency (up to 10 times faster) and robustness in case of failures and
flash crowd instances.
## 1 Introduction
One of the most critical components in Grid systems is the data management
layer. Grid computing has attracted several data-intensive applications in the
scientific field, such as bioinformatics, physics or astronomy. To a great extent,
these applications rely on analysis of data produced by geographically disperse
scientific devices such as sensors or satellites etc. For example, the Large Hadron
Collider (LHC) project at CERN [1] is expected to generate tens of terabytes of
raw data per day that have to be transferred to academic institutions around
the world, in seek of the Higgs boson. Apart from that, business applications
manipulating vast amounts of data have lately started to invade Grid environments. Gredia [2] is an EU-funded project which proposes a Grid infrastructure
for sharing of rich multimedia content. To motivate this approach, let us consider
R. Meersman, T. Dillon, P. Herrero (Eds.): OTM 2009, Part I, LNCS 5870, pp. 497–512, 2009.
_⃝c_ Springer-Verlag Berlin Heidelberg 2009
-----
498 A. Zissimos et al.
the following scenario: News agencies have created a joint data repository in the
Grid, where journalists, photographers, editors, etc can store, search and download various news content. Assume that just minutes after a breaking news-flash
(e.g., the riots in Athens), a journalist on scene captures a video of the protests
and uploads it on the Grid. Hundreds of journalists and editors around the world
need to be able to quickly locate and efficiently download the video in order to
include it in their news reports. Thus, it is imperative that, apart from optimized
data transfer, such a system should be able to cope with high request rates – to
the point of a flash crowd.
Faced with the problem of managing extremely large scale datasets, the Grid
community has proposed the Data Grid architecture [13], defining a set of basic
services. The most fundamental of them are the Data Transfer service, responsible for moving files among grid nodes (e.g., GridFTP [7]), the Replica Location
service (RLS ), which keeps track of the physical locations of files and the Optimization service, which selects the best data source for each transfer in terms of
completion time and manages the dynamic replica creation/deletion according
to file usage statistics.
However, all of the aforementioned services heavily rely on centralized mech
anisms, which constitute performance bottlenecks and single points of failure:
The so far centralized RLS can neither scale to large numbers of concurrent
requests nor keep pace with frequent updates performed in highly dynamic environments. GridFTP fails to make optimal use of all bandwidth resources in
cases where the same data must be transferred to multiple sites and does not automatically maximize bandwidth utilization. Even when using multiple parallel
TCP channels, a manual configuration is required. Most importantly, GridFTP
servers face the danger of collapsing under heavy workload conditions, making
critical data unavailable.
In this paper, we introduce a novel data management architecture which in
tegrates the location service with data transfer under a fully distributed and
adaptive philosophy. Our scheme comprises two parts that cooperate to efficiently handle multiple concurrent requests and data transfers: the Distributed
Replica Location Service (DRLS), which handles the locating of files, and GridTorrent, which manages the file transfer and related optimizations. This is pictorially
shown in Figure 1.
DRLS utilizes a set of nodes that, organized in a DHT, equally share the
replica location information. The unique characteristic of the DRLS is that, besides the decentralization and scalability that it offers, it fully supports updates
on the multiple sites of a file that exist in the system. Since in many dynamic
applications data locations change rapidly with time, our Byzantine-tolerant
protocol guarantees consistency and efficiently handles updates on the various
data locations stored, unlike conventional DHT implementations. GridTorrent is
a protocol that, inspired by BitTorrent, focuses on real-time optimization of data
transfers on the Grid, fully supporting the induced security mechanisms. Based
on collaborative sharing, GridTorrent allows for low latency and maximum bandwidth utilization, even under extreme load and flash crowd conditions. It allows
transfers from multiple sites to multiple clients and maximizes performance by
piece exchange among the participants. A very important characteristic of the
proposed architecture is that it is designed to interface with and exploit well-defined
and deployed Data Grid components and protocols, thus being completely backwards compatible and readily deployable. This work includes an experimental
section that presents a real implementation of the system and results over both
LAN and WAN environments under highly dynamic and adverse workloads.
**Fig. 1. Pictorial description of the proposed architecture and component interaction.**
Although DRLS nodes and Storage Servers appear as separate physical entities, they
can coexist in order to exploit all the available resources.
## 2 Current Status
In this section we overview the related work in the area of data management. We
first go through existing practices for the Replica Location and Data Transfer
services. Next, we present a brief description of the BitTorrent protocol, which is
the basis of the proposed GridTorrent mechanism and we finally mention other
relevant data transfer mechanisms.
**2.1** **Locating Files**
**Centralized Catalog and Giggle. In Grid environments, it is common to**
maintain local copies of remote files, called replicas [23] to guarantee availability
and reduce latencies. To work with a file, a Grid application first asks the RLS to
locate corresponding replicas of the requested item. This translates to a query
towards the Replica Catalogue, which contains mappings between Logical File
_Names (LFNs) and Physical File Names (PFNs). If a local replica already exists_
the application can directly use it, otherwise it must be transferred to the local
node. This initial architecture posed limitations to the scalability and resilience
of the system. Efforts on distributing the catalog resulted in the most widespread
**Fig. 2. Replica Location Service deployment scenario with Giggle**
**Fig. 3. Replica Location Service deployment scenario with P-RLS**
solution currently deployed on the Grid, the Giggle Framework [14]. To achieve
distribution, Giggle proposes a two-tier architecture comprising the Local
Replica Catalogs (LRCs), which map LFNs to PFNs across a site, and the Replica
Location Indices (RLIs), which map LFNs to LRCs (Figure 2).
**Distributed Replica Location Service (DRLS). Still, the centralized na-**
ture of the catalogs remains the bottleneck of the system, when the number of
performed searches increases. Furthermore, the updates in the LRCs induce a
complex and bandwidth-consuming communication scheme between LRCs and
RLIs. To this end, in [12] we proposed a RLS based on a Distributed Hash Table
(DHT). The underlying DHT is a modified Kademlia peer-to-peer network that
enables mutable storage. In this work, we enhance our solution by exploiting
_XOROS [11], a DHT that provides a Byzantine-tolerant protocol for serializable_
data updates directly at the peer-to-peer level. In this way, we can store static
information such as file properties with the traditional distributed hash table
put/get mechanism, as well as dynamic information such as the actual LFN to
PFN mappings with an update mechanism that ensures consistency.
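The division of labor between put/get and update can be pictured with the sketch below. The ToyDHT class is a deliberately simplified, single-process stand-in for the DHT client: a real XOROS update is a distributed, Byzantine-tolerant operation, whereas here it is modeled as a local serialized read-modify-write.

```python
# Sketch of DRLS usage: immutable file properties go in with put/get,
# while the mutable LFN -> PFN mapping changes through a serialized
# update primitive (standing in for the XOROS update operation).
class ToyDHT:
    def __init__(self):
        self.store = {}

    def put(self, key, value):       # write-once static metadata
        self.store.setdefault(key, value)

    def get(self, key):
        return self.store.get(key)

    def update(self, key, fn):       # serialized read-modify-write
        self.store[key] = fn(self.store.get(key))

dht = ToyDHT()
dht.put("props:lfn://video.riots", {"size": 7340032, "uid": "ab12"})
dht.update("pfns:lfn://video.riots",
           lambda cur: (cur or []) + ["gsiftp://b.siteB/v1"])
print(dht.get("pfns:lfn://video.riots"))
```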
**Related Work. Peer-to-peer overlay networks and corresponding protocols**
have already been incorporated in other RLS designs. In [10], Min Cai et al. have
replaced the global indices of Giggle with a Chord network, producing a variant
of Giggle called P-RLS. A Chord topology can tolerate random node joins and
leaves, but does not provide data fault-tolerance by default. The authors choose
to replicate the distributed RLI index in the successor set of each root node (the
node responsible for storage of a particular mapping), effectively reproducing
Kademlia’s behavior of replicating data according to the replication parameter
_κ. In order to update a specific key-value pair, the new value is inserted as_
usual, by finding the root node and replacing the corresponding value stored
there and at all nodes in its successor set. While there is a great resemblance
to this design and the one we propose, there is no support for updating keyvalue pairs directly in the peer-to-peer protocol layer. It is an open question
how the P-RLS design would cope with highly transient nodes. Frequent joins
and departures in the Chord layer would require nodes continuously exchanging
key-value pairs in order to keep the network balanced and the replicas of a
particular mapping in the correct successors. Our design deals with this problem,
as the routing tables inside the nodes are immune to participants that stay in
the network for a very short amount of time. Moreover, our protocol additions to
support mutable data storage are not dependent on node behavior; the integrity
of updated data is established only by relevant data operations. Finally, the PRLS approach retains the two-tier Giggle architecture, since the actual LFN to
PFN mappings are still kept in Local Replica Catalogs imposing a bottleneck for
the whole system with no support for load-balancing and failover mechanisms.
In another variant of an RLS implementation using a peer-to-peer network [21],
all replica location information is organized in an unstructured overlay and all
nodes gradually store all mappings in a compressed form. This way each node
can locally serve a query without forwarding requests. Nevertheless, the amount
of data (compressed or not) that has to be updated throughout the network
each time, can grow to such a large extent, that the scalability properties of the
peer-to-peer overlay are lost. In contrast to other peer-to-peer RLS designs, we
envision a service that does not require the use of specialized servers for locating
replicas. According to our design, a lightweight DHT-enabled RLS peer can even
run at every node connected to the Grid.
**2.2** **Transferring Files**
**The GridFTP Protocol. The established method for data transfer in the Grid**
is GridFTP [7], a protocol defined by the Global Grid Forum and adopted by
the majority of the existing middleware. GridFTP extends the standard FTP,
including features like the Grid Security Infrastructure (GSI) [17] and third-party
control and data channels. A more distributed approach of the GridFTP service
has led to the Globus Stripped GridFTP protocol [8], included in the current
release of the Globus Toolkit 4 [3]. Transfers of data striped or interleaved across
multiple servers, partial file transfers and parallel data transfers using multiple
TCP streams are some of the newly added features.
**The GridTorrent Approach. Yet, the GridFTP protocol is still based on the**
client-server model, inducing all the undesirable characteristics of centralized
techniques, such as server overload, single points of failure and the inability to
cope with flash crowds. We argue that large numbers of potential downloaders
together with the well-documented increase in the volume of data by orders of
magnitude stress the applicability of this approach. We propose a replica-aware
algorithm based on the P2P paradigm, through which data movement services
can take advantage of multiple replicas to boost aggregate transfer throughput.
In our previous work [27], some preliminary steps were made in this
direction. A first GridTorrent prototype was implemented, and one could use
the Globus RLS and various GridFTP storage servers to download a file, as
well as exploit other simultaneous downloaders, thus making a first step towards
cooperation. Nevertheless, a core component of every Grid Service, the Globus
Security Infrastructure (GSI), was not integrated with our previous prototype.
Furthermore, torrent-like architectures such as GridTorrent have the inherent
problem of not being able to upload a file unless there are downloaders interested
in that file. To tackle this problem we introduce GridTorrent's control
channel, a separate communication path that can be used to issue commands to
remote GridTorrent servers. Thus, in order to upload a file, several GridTorrent
servers are automatically notified and, after the necessary authentication and
authorization phases, the file is uploaded to multiple servers simultaneously and
more efficiently. There is no need for the user to issue another set of commands for
replication, because this is handled by GridTorrent. Finally, in order to scale to
larger deployments, our prototype is integrated with the aforementioned DRLS.
In the present work, we extend GridTorrent and propose a complete architecture
which can be directly deployed in a real-life Grid environment and integrated with
existing Grid services.
**The BitTorrent Protocol. Our work, as well as other related work in the area,**
relies on the BitTorrent protocol [15]. BitTorrent is a peer-to-peer protocol that
allows clients to download files from multiple sources while uploading them to
other users at the same time, rather than obtaining them from a central server.
Its goal is to reduce the download time for large, popular files and the load on
the servers that serve these files. BitTorrent divides every file into pieces and each piece
into blocks. Clients find each other through a centralized service called the tracker
and can exploit this fragmentation by simultaneously downloading blocks from
many sources. Useful file information is stored in a metainfo file, identified by
the extension .torrent. Peers are categorized as seeds, when they already have
the whole file, and leechers, when they are still downloading pieces. The latest
version of the BitTorrent client [4] uses a Distributed Hash Table (DHT) for dynamically locating the tracker responsible for each file transaction. Note that, in
contrast to the Data Management architecture presented here, BitTorrent does
not yet use a DHT for storing and distributing file information and metadata;
the corresponding .torrent files still have to be downloaded from a central repository or
manually exchanged between users. The data transfer component of our
architecture, GridTorrent, enhances the BitTorrent protocol with new features
in order to make it compatible with existing Grid architectures. Moreover, new
functionality is added so that downloads can be instructed on remote peers.
Finally, the tracker, which constitutes a centralized component of the BitTorrent
architecture, is replaced by the DRLS, eliminating possible performance bottlenecks
and single points of failure.
**Related Work Using BitTorrent. Related work based on a torrent-like**
architecture for data transfers in Grid environments can be found in the GridTorrent Framework [18], which cites our previous work and therefore should not
be confused with our proposed architecture. The authors of the GridTorrent Framework rely on
a centralized tracker to provide information about the available replicas, and
also use the tracker to impose security policies for data access. Their work also
extends to the exploitation of parallel TCP streams between two single peers in
order to surpass the limitations of the TCP window algorithm and saturate high-
bandwidth links. Nevertheless, the Framework’s centralized design suffers of all
the undesirable characteristics of centralized techniques, while the lack of integration with standardized Grid components remains a substantial disadvantage. A
similar work is presented in [25], where the authors compare BitTorrent to FTP
for data delivery in Computational Desktop Grids, demonstrating that the former
is efficient for large file transfers and scalable when the number of nodes increases.
Their work concentrates on application environments like SETI@Home [16], distributed.net [5] and BOINC [9], where methods like CPU scavenging are used to obtain
temporary resources from desktop computers. In contrast to GridTorrent, their
prototype uses a centralized data catalog and repository, cannot communicate with
standard Grid components like GridFTP and RLS, lacks support for the Globus
Security Infrastructure, and does not tackle the problem of efficient file upload to
multiple repositories.
**Other Data Transfer Mechanisms. The efficient movement of distributed**
volumes of data is a subject of constant research in the area of distributed
systems. Various techniques have been proposed, apart from the ones mentioned
above, either centralized or in the context of the peer-to-peer paradigm. Kangaroo [24]
is a data transfer system that aims at better overall performance by making
opportunistic use of a chain of servers. The Composite Endpoint Protocol [26]
collects high-level transfer data provided by the user and generates a schedule
which optimizes the transfer performance by producing a balanced weighting of
a directed graph. Nevertheless, the aforementioned models remain centralized.
Slurpie [22] follows a similar approach to BitTorrent, as it targets bulk data
transfer and makes analogous assumptions. Nonetheless, unlike BitTorrent, it
does not encourage cooperation.
## 3 GridTorrent
GridTorrent, a peer-to-peer data transfer approach for Grid environments, was
initially introduced in [27]. Based on BitTorrent, GridTorrent allows clients to
download files from multiple sources while uploading them to other users at the
same time, rather than obtaining them from a central server. Using BitTorrent
terminology, GridTorrent creates a swarm where leechers are users of the Grid
downloading data and seeds are storage elements or users sharing their data in
the Grid. The cooperative nature of the algorithm ensures maximum bandwidth
utilization and its tit-for-tat mechanism provides scalability in heavy load conditions or flash crowd situations. More specifically, GridTorrent exploits existing
infrastructure since GridFTP repositories can be used as seeds with other peers
downloading from them using the GridFTP partial file transfer capability. The
_.torrent file used in BitTorrent is replaced by the already existing RLS. In order_
to start a file download only the file’s unique identifier (UID) is required, which is
actually the content’s digest. The rest of the information can be extracted from
the RLS using this UID. Finally, GridTorrent makes BitTorrent’s tracker
service obsolete and integrates its functionality into the RLS. Therefore, all the
peers that participate in a GridTorrent swarm are also registered in the RLS, so
that they are able to locate each other. In the following paragraphs we analyze
the further enhancements we have developed in GridTorrent.
**3.1** **Security**
In a Grid environment, only authenticated users are considered trustworthy
of serving or downloading file fragments. Moreover, encryption is provided for
the transfer of sensitive information. In order to guarantee security, our data
transfer mechanism implements the Globus Grid Security Infrastructure (GSI).
Currently, GridTorrent deploys the standard GSI mechanisms in terms of authentication, integrity and encryption. A Java TCP socket is created and
wrapped, along with the host credentials, as a grid-enabled socket. This is performed when the plain socket passes through the createSocket method of the
GssSocketFactory of the Globus GSI API. Thus, an appropriate socket is created,
with respect to the input parameters that enable encryption, message integrity,
peer authentication or none of the above, according to the user’s preferences.
**3.2** **Control Channel**
In GridTorrent, peers communicate with each other and exchange information
regarding the current file download according to the protocol. A novel feature of
GridTorrent, not found in BitTorrent protocol, is the ability of a peer to issue
commands to remote peers. We call this feature control channel, because it is
similar to GridFTP’s control channel. This feature overcomes the BitTorrent
disadvantage of not being able to upload data before another peer is interested
in downloading it, which is common practice in a peer-to-peer network, but
not applicable to Grid environments. In detail, the GridTorrent control channel
supports the following commands (a minimal dispatcher sketch follows the list):
Start. [UID] [RLS] Starts downloading the file with the given UID, getting
publishing information from the given RLS.
Start. [filename] [RLS] Starts sharing the existing file determined by the given
local filename. RLS will be used for publishing information regarding the
download.
Stop. [UID] Stops an active file download. Takes as a parameter the UID of
the file to stop downloading.
Delete. [filename] Deletes a local file.
List. Lists all active file downloads of the node.
Get. [UID] Gets statistics about an active file download regarding messages
exchanged and data transfer throughput. Takes the UID of the file as a
parameter.
Shutdown. Shuts down the GridTorrent peer.
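A minimal Java sketch of how a peer could dispatch these control-channel commands is shown below. The command names come from the list above; the class, the parsing, and the UID-versus-filename heuristic are our own hypothetical choices, not the GridTorrent implementation.

```java
// Hypothetical dispatcher for the GridTorrent control channel commands above.
import java.util.Arrays;

public final class ControlChannelDispatcher {
    public void dispatch(String line) {
        String[] tokens = line.trim().split("\\s+");
        String[] args = Arrays.copyOfRange(tokens, 1, tokens.length);
        switch (tokens[0].toLowerCase()) {
            case "start":    start(args); break;                                  // download or share
            case "stop":     System.out.println("stop download " + args[0]); break;
            case "delete":   System.out.println("delete local file " + args[0]); break;
            case "list":     System.out.println("list active downloads"); break;
            case "get":      System.out.println("statistics for " + args[0]); break;
            case "shutdown": System.out.println("shutting down peer"); break;
            default: throw new IllegalArgumentException("unknown command: " + tokens[0]);
        }
    }

    private void start(String[] args) {
        // Start is overloaded: a UID triggers a download, a local filename starts
        // sharing. Treating a 40-hex-character token as a digest-style UID is our
        // assumption; the paper does not specify how the two forms are told apart.
        boolean looksLikeUid = args[0].matches("[0-9a-fA-F]{40}");
        System.out.println((looksLikeUid ? "download " : "share ") + args[0]
                + " using RLS " + args[1]);
    }

    public static void main(String[] a) {
        new ControlChannelDispatcher()
                .dispatch("Start 0123456789abcdef0123456789abcdef01234567 rls://host:39281");
    }
}
```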
## 4 Replica Location Service
The RLS used in GridTorrent stores two types of metadata: static information
(file properties) and dynamic information (peers that have the file or part of it).
In our design, we select a set of attributes required to initiate a torrent-like data
transfer. Therefore, the file properties stored in the RLS are the following (a sketch of the piece-hashing step appears after the list):
**Logical filename (LFN): This is the name of the stored file. This name is**
supplied by the user to identify his file.
**File size: The total size of the file in bytes.**
**File hash type: The type of hashing used to identify the whole file data.**
Hashing is enabled at this level to ensure data consistency.
**File hash: The actual file data hash. It is also used as a UID for each file.**
**Piece length: The size of each piece in which the file is segmented. The piece**
is the smallest fraction of data that is used for validating and publishing
purposes. Upon a complete piece download and integrity check, other peers
are informed of the acquisition.
**Piece hash type: The type of hashing used to identify each piece of the**
file. Hashing is enabled at this level to facilitate partial download and resume
download operations.
**Piece hash: The actual piece data hash. All the hashes of all the pieces are**
concatenated starting from the first piece.
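The piece-hash attribute can be illustrated with a short sketch (ours, assuming SHA-1 as the piece hash type; the paper keeps the hash type configurable): each piece is hashed and the 20-byte digests are concatenated in piece order.

```java
// Sketch (our assumption, not the GridTorrent source): producing the
// concatenated per-piece hashes stored in the RLS, first piece first.
import java.security.MessageDigest;
import java.util.Arrays;

public final class PieceHasher {
    public static byte[] concatenatedPieceHashes(byte[] file, int pieceLength) throws Exception {
        MessageDigest sha1 = MessageDigest.getInstance("SHA-1"); // the "piece hash type"
        byte[] out = new byte[0];
        for (int off = 0; off < file.length; off += pieceLength) {
            int end = Math.min(off + pieceLength, file.length);
            byte[] digest = sha1.digest(Arrays.copyOfRange(file, off, end)); // 20 bytes
            byte[] grown = Arrays.copyOf(out, out.length + digest.length);
            System.arraycopy(digest, 0, grown, out.length, digest.length);
            out = grown;
        }
        return out;
    }

    public static void main(String[] args) throws Exception {
        byte[] file = new byte[3 * 1024 + 17];               // a toy "file"
        byte[] hashes = concatenatedPieceHashes(file, 1024); // 1KB pieces
        System.out.println(hashes.length / 20 + " piece hashes"); // prints 4
    }
}
```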
Besides the file properties, the RLS also stores a list of all the physical locations
where the file is actually stored. This is described by a physical filename (PFN).
A physical filename has the following form:
protocol://fqdn:port/path/to/file
where protocol is the one that is used for the data transfer. Currently the
supported protocols are gsiftp (GridFTP) and gtp (GridTorrent). The fully
qualified domain name fqdn is the DNS registered name of the peer and it is
followed by the peer’s local path and the local filename.
**4.1** **Distributed RLS**
RLS as a core Grid service must use distribution algorithms with unique scalability and fault-tolerance properties, assets already provided by peer-to-peer
architectures. To this end, in [12] we proposed a Replica Location Service based
on a Distributed Hash Table (DHT). The underlying DHT is a modified Kademlia peer-to-peer network that enables mutable storage. We enhance this work by
exploiting the XOR Object Store (XOROS) [11], a DHT that provides serializable data updates to the primary replicas of any key in the network. XOROS
uses a Kademlia [19] routing scheme, along with a modified protocol for inserting and looking up values, that accounts for dynamic or Byzantine behavior of
overlay participants. The put operation allows either an in-place update, or a
read-modify-write via a single, unified transaction, that consists of a mutual exclusion mechanism and an accompanying value propagation step. GridTorrent
has a modular architecture that enables the use of different types of Replica Location Service per swarm. More specifically, when a user initiates a file transfer,
he must also supply the RLS URL, which has the following form:
protocol://fqdn:port
**Table 1. Security overhead in the overall file transfer**
configuration mean time (sec) overhead
authentication 43.3 0%
authentication + integrity check 44.3 2%
authentication + encryption 55.3 27%
Currently the supported protocols are rls (Globus RLS) and drls (Distributed
RLS based on XOROS), so GridTorrent parses the URL to load the corresponding RLS implementation. One advantage of the above modification is the use
of already implemented features to model our solution, preserving backwards compatibility with the existing Grid Architecture. Therefore, the proposed
changes in the current Grid Architecture not only enhance the performance of
data transfers, but also seamlessly integrate with the current state-of-the-art in
Grid Data Management.
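A possible shape for this modular RLS selection is sketched below, assuming a simple factory keyed on the URL scheme. The interface and class names are hypothetical stand-ins; the prototype's actual classes are not shown in the paper.

```java
// Hypothetical sketch of GridTorrent's modular RLS loading: the URL scheme
// (rls or drls) selects the Replica Location Service implementation.
import java.net.URI;

interface ReplicaLocationService { /* lookup, publish, ... */ }
class GlobusRls implements ReplicaLocationService { }      // Globus RLS backend
class DistributedRls implements ReplicaLocationService { } // XOROS-backed DRLS

public final class RlsFactory {
    public static ReplicaLocationService fromUrl(String url) {
        URI u = URI.create(url); // expected form: protocol://fqdn:port
        switch (u.getScheme()) {
            case "rls":  return new GlobusRls();
            case "drls": return new DistributedRls();
            default: throw new IllegalArgumentException("unsupported RLS protocol: " + u.getScheme());
        }
    }

    public static void main(String[] args) {
        // prints DistributedRls
        System.out.println(fromUrl("drls://node1.example.org:5000").getClass().getSimpleName());
    }
}
```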
## 5 Implementation and Experimental Results
Our GridTorrent prototype implementation is entirely written in Java. The GridTorrent client has bindings with Globus Toolkit 4 libraries [3] and exploits the
GridFTP client API, the Replica Location Service API and the Grid Security
Infrastructure API. These bindings enrich our prototype with the abilities to use
existing grid infrastructure, such as data stored in GridFTP servers, metadata
stored in Globus RLS and x509 certificates that are already issued to users and
services for authentication, authorization, integrity protection and confidentiality. For the experiments, we started GridTorrent on a number of physical nodes
and issued remote requests through the control channel, to initiate and monitor
the overall file transfer.
**5.1** **GridTorrent Security and Fault-Tolerance Performance**
We first test the effect of Grid Security on the overall data transfer process
by monitoring the time needed for the transfer of a 128MB file. We distinguish
three different configurations for Globus GSI (a configuration sketch follows the list):
**Authentication only: This is a simple configuration where both sides need**
to present a valid x509 certificate signed from a Certificate Authority that is
mutually trusted.
**Integrity check: In this configuration besides the mutual authentication, the**
receiver verifies all messages to prevent man-in-the-middle attacks.
**Encryption: This is the most secure configuration, where apart from mutual**
authentication and integrity check, every message is also encrypted.
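For illustration only, the three configurations can be modeled as a client-side option, as in the hypothetical sketch below; the prototype itself selects them via parameters when wrapping sockets through the Globus GssSocketFactory, whose exact API is not reproduced here.

```java
// Hypothetical model (ours) of the three Globus GSI configurations tested below.
public final class GsiConfig {
    enum Mode { AUTH_ONLY, AUTH_INTEGRITY, AUTH_ENCRYPTION }

    static String describe(Mode m) {
        switch (m) {
            case AUTH_ONLY:       return "mutual x509 authentication only";
            case AUTH_INTEGRITY:  return "authentication + per-message integrity check";
            case AUTH_ENCRYPTION: return "authentication + integrity + encryption of every message";
            default: throw new AssertionError();
        }
    }

    public static void main(String[] args) {
        for (Mode m : Mode.values()) System.out.println(m + ": " + describe(m));
    }
}
```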
**Fig. 4. Average time of completion over various failure rates and block sizes**
**Fig. 5. Average size of uploaded data from leechers only over various failure rates and block sizes**
The test is executed 100 times between a pair of peers (different each time) inside the same LAN. As shown in Table 1, only the Globus GSI configuration that
enables encryption has considerable (about 30%) cost on the file transfer latency.
This overhead is natural, because when encryption is enabled every message is
duplicated in memory and processed by a CPU-intensive cryptographic algorithm.
We continue our experiments by testing GridTorrent’s tolerance in an error
prone network. For this purpose we use a single server acting as seed for a file
of 128MB and 16 clients that simultaneously download the file. After extensive
testing we have tuned GridTorrent to use a piece size of 1024KB. In GridTorrent,
just like BitTorrent, hashes are kept on a per-piece basis, and peers exchange a
smaller fraction of data called a block. To simulate the failure rate, every peer
(leecher or seed) decides whether to send altered blocks based on a uniformly
distributed random function, without enabling any Globus security option. The
results are presented in Figures 4 and 5. First of all, in all cases the download
completes with an acceptable overhead, in contrast to GridFTP which has no
mechanism of protection against these kinds of errors. Furthermore, we notice
that as the failure rate increases transfers with smaller block sizes are more
heavily affected, because one bad block causes the retransmission of all the blocks
in a certain piece. So in cases where the block size is 1/8 of the piece size, the slowdown is 3 to 4 times in comparison with the case of a block the size of a piece, at failure rates up to 16%.
**5.2** **GridTorrent vs. GridFTP Performance**
In this experiment we compare the performance of the GridTorrent prototype
against the current GridFTP implementation in both Local and Wide Area Network environments. Specifically, we increase the number of concurrent requests
over a single 128MB file from different physical nodes. Results for different file
sizes (up to 512MB) are qualitatively similar. We measure the minimum, maximum and average completion time of this operation on all requesters. Our setup
assumes a single server that seeds this file and up to 32 physical machines that
issue simultaneous download requests.
**Fig. 6. Min, max and average time of completion for both GFTP and GTP over various numbers of downloaders in the LAN setting**
**Fig. 7. Min, max and average size of uploaded data from leechers only in the LAN setting**
**Fig. 8. Min, max and average time of completion for both GFTP and GTP over various numbers of downloaders in the WAN setting**
**Fig. 9. Min, max and average size of uploaded data from leechers only in the WAN setting**
For the LAN experiments, we use our
laboratory cluster infrastructure with Gigabit Ethernet interconnect. For the
WAN experiments, we allocate the same number of nodes on PlanetLab [20,6].
In this environment, there exist several heavily loaded nodes, geographically distributed, with various network latencies and bandwidth constraints. Obviously,
PlanetLab offers an environment more similar to a real world Grid environment,
where requests may occur from different places over the globe using personal
computers. Location information on the file, the list of peers that obtain or are
currently downloading the file, as well as other file metadata are stored in DRLS,
located in a single machine which simulates 30 nodes in a XOROS DHT.
In Figure 6, we present the completion times for the LAN setup. We notice
that GridTorrent can be over 10 times faster than GridFTP in all measured
times. This occurs for the largest number of leechers. GridFTP cannot enforce
cooperation among nodes; thus a single server must accommodate all clients in
a serialized manner. One would expect that GridFTP would not be affected by
the flash crowd effect in a LAN, especially with Gigabit Ethernet connectivity, but this is not the case. GridTorrent shows remarkable performance in all
**Table 2. The effect of the κ parameter in DRLS**
_κ α ϵ Average Messages Mean latency (sec)_
20 3 2 44 1.37
15 3 2 42 1.06
10 3 2 30 0.83
5 3 2 22 0.61
three metrics, as they remain unaltered by the increase in requests. Our method
can be readily employed to sustain flash crowd effects as, due to the increasing
cooperation among peers, it effectively reduces the load of the single server and
provides adaptive portions of the file to the rest of the nodes. In Figure 7, we
present this cooperation in terms of bytes sent exclusively among the leechers.
We notice that, as the number of leechers increases, this traffic increases, showing the clients’ active part in this process. On average, each leecher seems to be
responsible for sending almost one file’s worth of data to the other leechers, and
at most about twice the file size.
Figure 8 summarizes our results from the WAN setup. It is evident that
GridFTP cannot cope with increasing transfer loads in a real world environment.
GridFTP’s minimum times remain constantly low and close to GridTorrent’s due
to the fact that there always exists at least one leecher close to the single server
that downloads the file faster. Furthermore, we register a major difference in
GridFTP’s maximum, minimum and average times (e.g., for 32 leechers the last
one receives the file 30 times slower than the fastest one and about 2 times slower
on average). This large variance is due to the protocol’s inability to cope with
heterogeneity: a small number of close nodes finish early, while the rest of the
clients, which are not close to the server, are drastically affected.
In GridTorrent, the closest nodes that finish faster are exploited and upload
data to the remaining ones, decreasing the overall completion time that gracefully scales with the number of simultaneous leechers. Our method is 3 to 10
times faster both on average and in the worst case, while it exhibits very small
variation between the three reported metrics. In Figure 9, we can see the level of
cooperation between the leechers as they increase in numbers. We clearly notice
a greater variance in the bytes sent by each peer compared to the LAN setting.
This shows how adaptive GridTorrent is: close nodes that finish early contribute
to the other peers more than average, while there are a few nodes that finish late and
cannot share interesting data with the rest of the peers. The WAN experiment
best demonstrates why our protocol is a robust, bandwidth-efficient means
of file transfer that vastly outperforms current practices.
**5.3** **DRLS Performance**
To evaluate the DRLS implementation, we created a scenario where 64 peers,
storing about 1000 items, perform random lookups and updates at increasing
rates. Measuring the mean number of messages and the time required for each
operation reveals that, in the absence of node churn, results remain almost constant, even when repeatedly doubling the request rate from 1 operation every
6 seconds up to 10 operations/sec. This suggests that the underlying XOROS protocol scales to flash-crowd usage patterns as expected. When participants start
to leave and new ones enter some messages get lost, so nodes have to wait for
timeouts to expire before proceeding with a command. Nevertheless, as the node
population settles and routing tables are updated, the performance characteristics return to the expected levels. An interesting finding is that, during periods of
churn, a higher request rate may result in more messages, but this helps nodes
react more quickly to overlay changes and refresh their routing tables faster.
During this series of experiments we have also investigated the impact of the
various DHT parameters: κ, which controls the number of replicas kept for each
data item and sets the quorum size for the mutex protocol, α, which defines
how many parallel messages can be in-flight during an operation and ϵ, which
marks the number of peers that may exhibit arbitrary behavior or fail before
a request is completed. As expected, the replication factor κ plays the most
important role in shaping both message count and latency. Table 2 summarizes
the results for multiple runs of the aforementioned scenario, with different values
of κ. Lowering κ reduces the number of nodes that should be contacted in each
operation, thus causing the overall latency to drop. However, mean latency is not
directly proportional to the number of messages, as a lot of communication is
done in parallel. Dividing the latency numbers by the average messaging cost
of 80 ms yields the mean number of messages that have to be sent in serial order,
either due to the protocol or the α parameter. When the network is small, like
the case of 64 nodes, we believe that a replication factor of 5 should be enough.
On the other hand, when deploying DRLS to a massive number of participants
(i.e. a “Desktop Grid”), keeping κ to the default value of 20 can help avoid data
loss in case of sudden network blackouts or other unplanned and unadvertised
peer problems, even if the messaging cost is higher.
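The serial-messages arithmetic above can be reproduced directly; the following is our worked check of the numbers in Table 2 against the 80 ms average messaging cost.

```java
// Worked check (ours) of Table 2: latency divided by the 80 ms per-message
// cost approximates how many messages are effectively sent in serial order.
public final class SerialRounds {
    public static void main(String[] args) {
        int[]    kappa   = {20, 15, 10, 5};
        double[] latency = {1.37, 1.06, 0.83, 0.61}; // mean latency in seconds
        for (int i = 0; i < kappa.length; i++) {
            System.out.printf("kappa=%d -> ~%.0f serial messages%n",
                    kappa[i], latency[i] / 0.080);
        }
    }
}
```

For κ = 20 this gives about 17 serial messages out of the 44 in total, confirming that much of the communication happens in parallel.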
## 6 Conclusion
In this paper, we describe a P2P-based data management architecture that comprises GridTorrent and DRLS. GridTorrent is a cooperative data transfer
mechanism that maximizes performance by adaptively choosing where each node
retrieves file segments from. DRLS is a distributed replica service which is based
on a modified Kademlia DHT, allowing efficient processing even during node
churn. Our proposed solution is compatible with the current Data Grid architecture and can be utilized without any changes by already deployed middleware.
Experiments conducted both in LAN and WAN environments (the PlanetLab
infrastructure), show that GridTorrent vastly outperforms GridFTP, being up
to 10 times faster. Moreover, experiments on DRLS in a dynamic environment
show that the benefits of a peer-to-peer network can be readily exploited to
provide a scalable Grid service without significant loss in performance. DRLS is
able to provide reliable location services even when the load rates multiply.
## References
[1. The Large Hadron Collider, http://lhc.web.cern.ch/lhc/](http://lhc.web.cern.ch/lhc/)
[2. The GREDIA Project, http://www.gredia.eu/](http://www.gredia.eu/)
[3. The official site of Globus Toolkit, http://globus.org/toolkit](http://globus.org/toolkit)
[4. The official BitTorrent client, http://www.bittorrent.org](http://www.bittorrent.org)
5. Distributed.net, RSA Labs 64bit RC5 Encryption Challenge,
[http://www.distributed.net](http://www.distributed.net)
6. PlanetLab: An open platform for developing, deploying, and accessing planetary
[scale services, http://www.planet-lab.org/](http://www.planet-lab.org/)
7. Allcock, B., Bester, J., Bresnahan, J., Chervenak, A.L., Foster, I., Kesselman, C.,
Meder, S., Nefedova, V., Quesnel, D., Tuecke, S.: Data management and transfer
in high-performance computational grid environments. Parallel Computing 28(5),
749–771 (2002)
8. Allcock, W., Bresnahan, J., Kettimuthu, R., Link, M., Dumitrescu, C., Raicu, I.,
Foster, I.: The Globus striped GridFTP framework and server. In: Proceedings of the
ACM/IEEE Conference on Supercomputing, SC 2005 (2005)
9. Anderson, D.: Boinc: A system for public-resource computing and storage. In:
Proceedings of the 5th IEEE/ACM International Workshop on Grid Computing
(2004)
10. Cai, M., Chervenak, A., Frank, M.: A peer-to-peer replica location service based
on a distributed hash table. In: Proceedings of the 2004 ACM/IEEE conference on
Supercomputing, Pittsburgh, PA (November 2004)
11. Chazapis, A., Koziris, N.: Xoros: A mutable distributed hash table. In: Proceedings
of the 5th International Workshop on Databases, Information Systems and Peerto-Peer Computing (DBISP2P 2007), Vienna, Austria (2007)
12. Chazapis, A., Zissimos, A., Koziris, N.: A peer-to-peer replica management service
for high-throughput grids. In: Proceedings of the 2005 International Conference on
Parallel Processing (ICPP 2005), Oslo, Norway (2005)
13. Chervenak, A., Foster, I., Kesselman, C., Salisbury, C., Tuecke, S.: The data grid:
Towards an architecture for the distributed management and analysis of large
scientific datasets. Journal of Network and Computer Applications (2000)
14. Chervenak, A., Palavalli, N., Bharathi, S., Kesselman, C., Schwartzkopf, R.,
Stockinger, H., Tierney, B.: Performance and scalability of a replica location service. In: Proc. of the 13th IEEE International Symposium on High Performance
Distributed Computing Conference (HPDC), Honolulu (June 2004)
15. Cohen, B.: Incentives build robustness in bittorrent. In: Workshop on Economics
of Peer-to-Peer Systems, Berkeley, CA, USA (June 2003)
16. Sullivan III, W.T., Werthimer, D., Bowyer, S., Cobb, J., Gedye, D., Anderson,
D.: New major SETI project based on Project Serendip data and 100,000 personal
computers. In: Astronomical and Biochemical Origins and the Search for Life in
the Universe, Proc. of the Fifth Intl. Conf. on Bioastronomy (1997)
17. Foster, I., Kesselman, C., Tsudik, G., Tuecke, S.: A security architecture for com
putational grids. In: Proceedings of the 5th ACM conference on Computer and
communications security, pp. 83–92. ACM Press, New York (1998)
18. Kaplan, A., Fox, G., von Laszewski, G.: Gridtorrent framework: A high
performance data transfer and data sharing framework for scientific computing.
In: Proceedings of GCE 2007, Reno, Nevada (2007)
19. Maymounkov, P., Mazières, D.: Kademlia: A peer-to-peer information system based
on the XOR metric. In: Druschel, P., Kaashoek, M.F., Rowstron, A. (eds.) IPTPS
2002. LNCS, vol. 2429, p. 53. Springer, Heidelberg (2002)
20. Peterson, L., Anderson, T., Culler, D., Roscoe, T.: A blueprint for introducing
disruptive technology into the internet. In: Proceedings of HotNets–I, Princeton,
NJ (October 2002)
21. Ripeanu, M., Foster, I.: A decentralized, adaptive, replica location service. In: Pro
ceedings of the 11th IEEE International Symposium on High Performance Distributed Computing (HPDC-11 2002), Edinburgh, UK (July 2002)
22. Sherwood, R., Braud, R., Bhattacharjee, B.: Slurpie: A cooperative bulk data trans
fer protocol. In: Proceedings of IEEE INFOCOM (March 2004)
23. Stockinger, H., Samar, A., Holtman, K., Allcock, B., Foster, I., Tierney, B.: File
and object replication in data grids. Cluster Computing 5(3), 305–314 (2002)
24. Thain, D., Basney, J., Son, S.-C., Livny, M.: The kangaroo approach to data move
ment on the grid. In: Proceedings of the Tenth IEEE Symposium on High Performance Distributed Computing, HPDC10 (2001)
25. Wei, B., Fedak, G., Cappello, F.: Collaborative data distribution with bittorrent for
computational desktop grids. In: Proceedings of the 4th International Symposium
on Parallel and Distributed Computing, ISPDC 2005 (2005)
26. Weigle, E., Chien, A.A.: The composite endpoint protocol (cep): Scalable endpoints
for terabit flows. In: Proceedings of the IEEE International Symposium on Cluster
Computing and the Grid, CCGrid 2005 (2005)
27. Zissimos, A., Doka, K., Chazapis, A., Koziris, N.: Gridtorrent: Optimizing data
transfers in the grid with collaborative sharing. In: Proceedings of the 11th Panhellenic Conference on Informatics, Patras, Greece (2007)
-----
| 10,345
|
{
"disclaimer": "Notice: The following paper fields have been elided by the publisher: {'abstract', 'references'}. Paper or abstract available at https://api.unpaywall.org/v2/10.1007/978-3-642-05148-7_38?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1007/978-3-642-05148-7_38, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": null,
"status": "CLOSED",
"url": ""
}
| 2009
|
[
"JournalArticle",
"Conference"
] | false
| 2009-11-07T00:00:00
|
[] | 10,345
|
en
|
[
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Business",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/0040ca8c0407abee38ceeb32f326f45c2382bc67
|
[] | 0.838162
|
A Novel Auction Blockchain System with Price Recommendation and Trusted Execution Environment
|
0040ca8c0407abee38ceeb32f326f45c2382bc67
|
Mathematics
|
[
{
"authorId": "2694431",
"name": "Dong-Her Shih"
},
{
"authorId": "48865347",
"name": "Ting-Wei Wu"
},
{
"authorId": "38159019",
"name": "Ming-Hung Shih"
},
{
"authorId": "9000436",
"name": "Wei-Cheng Tsai"
},
{
"authorId": "2054342617",
"name": "David C. Yen"
}
] |
{
"alternate_issns": null,
"alternate_names": null,
"alternate_urls": [
"http://nbn-resolving.de/urn/resolver.pl?urn=urn:nbn:ch:bel-283014",
"https://www.mdpi.com/journal/mathematics"
],
"id": "6175efe8-6f8e-4cbe-8cee-d154f4e78627",
"issn": "2227-7390",
"name": "Mathematics",
"type": null,
"url": "http://www.e-helvetica.nb.admin.ch/directAccess?callnumber=bel-283014"
}
|
Online auctions are now widely used, with all the convenience and efficiency brought by internet technology. Despite the advantages over traditional auction methods, some challenges still remain in online auctions. According to the World Business Environment Survey (WBES) conducted by the World Bank, about 60% of companies have admitted to bribery and manipulation of the auction results. In addition, buyers are prone to the winner’s curse in an online auction environment. Since the increase in information availability can reduce uncertainty, easy access to relevant auction information is essential for buyers to avoid the winner’s curse. In this study, we propose an Online Auction Price Suggestion System (OAPSS) to protect the data from being interfered with by third-party programs based on Intel’s Software Guard Extensions (SGX) technology and the characteristics of the blockchain. Our proposed system provides a smart contract by using α-Sutte indicator in the final transaction price prediction as a bidding price recommendation, which helps buyers to reduce the information uncertainty on the value of the product. The amount spent on the smart contract in this study, excluding deployed contracts, plus the rest of the fees is less than US$1. Experimental results of the simulation show that there is a significant difference (p < 0.05) between the recommended price group and the actual price group in the highest bid. Therefore, we may conclude that our proposed bidder’s price recommendation function in the smart contract may mitigate the loss of buyers caused by the winner’s curse.
|
# mathematics
_Article_
## A Novel Auction Blockchain System with Price Recommendation and Trusted Execution Environment
**Dong-Her Shih** **[1]** **, Ting-Wei Wu** **[1], Ming-Hung Shih** **[2,]*** **, Wei-Cheng Tsai** **[1]** **and David C. Yen** **[3]**
1 Department of Information Management, National Yunlin University of Science and Technology,
Douliu 64002, Taiwan; [email protected] (D.-H.S.); [email protected] (T.-W.W.);
[email protected] (W.-C.T.)
2 Department of Electrical and Computer Engineering, Iowa State University, 2520 Osborn Drive,
Ames, IA 50011, USA
3 Jesse H. Jones School of Business, Texas Southern University, 3100 Cleburne Street, Houston, TX 77004, USA;
[email protected]
***** Correspondence: [email protected]
**Abstract: Online auctions are now widely used, with all the convenience and efficiency brought by**
internet technology. Despite the advantages over traditional auction methods, some challenges still
remain in online auctions. According to the World Business Environment Survey (WBES) conducted
by the World Bank, about 60% of companies have admitted to bribery and manipulation of the
auction results. In addition, buyers are prone to the winner’s curse in an online auction environment.
Since the increase in information availability can reduce uncertainty, easy access to relevant auction
information is essential for buyers to avoid the winner’s curse. In this study, we propose an Online
Auction Price Suggestion System (OAPSS) to protect the data from being interfered with by thirdparty programs based on Intel’s Software Guard Extensions (SGX) technology and the characteristics
of the blockchain. Our proposed system provides a smart contract by using α-Sutte indicator in
the final transaction price prediction as a bidding price recommendation, which helps buyers to
reduce the information uncertainty on the value of the product. The amount spent on the smart
contract in this study, excluding deployed contracts, plus the rest of the fees is less than US$1.
Experimental results of the simulation show that there is a significant difference (p < 0.05) between
the recommended price group and the actual price group in the highest bid. Therefore, we may
conclude that our proposed bidder’s price recommendation function in the smart contract may
mitigate the loss of buyers caused by the winner’s curse.
**Keywords: online auction; winner’s curse; blockchain; price recommendation; SGX technology**
**1. Introduction**
With advancing modern technology, E-commerce has become a part of daily life
and has made considerable progress in recent years. In 2020, it is estimated that online
transactions accounted for 25% of all business transactions. While more and more users
have explored the business opportunities on the internet, the online auctions market has
become an important business entity among them. Unlike traditional auctions, online
auctions generate a lot of data during the transactions, including information about the
products, participants, and related behaviors. If we studied and analyzed it properly, the
data could bring huge benefits to buyers, sellers, and auction platforms.
The World Business Environment Survey (WBES) conducted by the World Bank has
shown that approximately 60% of companies have admitted to bribery and to manipulating the online auction process, which ultimately affects the final results. A previous study
has suggested that transparency of the overall process is a critical part of auctions [1].
Despite many benefits over traditional auctions, online auctions still cannot overcome
some existing challenges. The winner’s curse, for example, is a phenomenon where the
winner overpays to win the auction. It has been a challenge to bidders in traditional
auctions and now in online auctions as well [2]. Past studies have indicated that when
participants overestimate the value of an item in a competitive bidding environment, they
will pay more than the market value and suffer a loss [3].
Providing relevant information, such as the price of the merchandise, can reduce the
uncertainty and suppress the winner’s curse. While past research has studied the causes
and impacts of the winner’s curse, few have suggested how to reduce or avoid it. In
addition, how to leverage the data collected from online auctions to provide the bidders
with more accurate information and recommendations remains in question.
In this study, we design an online auction system whose general auction procedures
are implemented as smart contracts. Our system aims to avoid loss from the winner’s
curse using the characteristics of a blockchain and price recommendation on the auction
items, which reduces the information uncertainty for bidders. In addition, we protect the
system environment with Intel’s Software Guard Extensions (SGX) technology to ensure
all the information can be safely processed without manipulations from third parties or
malicious intruders.
**2. Background and Related Work**
_2.1. Online Auctions_
Online auctions have grown substantially since the late 1990s. As an alternative
form of retail with a dynamic pricing mechanism, electronic auctions have attracted many
businesses and individual users who can buy and sell almost anything on the internet. In
the second quarter of 2020, eBay (www.ebay.com, accessed on 1 June 2020), the current
leader in electronic auctions, had 157 million active buyers worldwide, with 800 million
products listed and 25 million sellers daily. The tremendous growth of electronic auctions
has undoubtedly aroused great research interest.
Auctions have existed in human history for thousands of years. Klein and O’Keefe [4]
regard an auction as a standardized transaction procedure. Within the restrictions of the
auction rules, participants bid and set the item price interactively. Traditionally, there are four main
types of auctions [5] (a toy sketch of the sealed-bid price rules follows the list):
1. First Price Sealed Bidding Auction (FPSBA): The buyer seals the bid in an envelope
and delivers it to the auctioneer. Subsequently, the auctioneer opens the envelope to
determine the winner with the highest bid.
2. Second Price Sealed Bidding Auction (Vickrey auction): It is similar to FPSBA except
that the winner will pay the second-highest bid.
3. Open Ascending Auction (English auction): Bidders make increasingly higher bids
and stop bidding if they are unwilling to pay higher than the current highest bid.
4. Open Descending Auction (Dutch auction): The auctioneer initially sets a high price
and then gradually reduces it until any buyer decides to pay at the current price.
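To make the difference between the two sealed-bid formats concrete, here is a toy Java sketch (ours; the bid values are hypothetical) of the price the winner pays under each rule:

```java
// Toy illustration (ours) of the sealed-bid price rules: the FPSBA winner
// pays the highest bid, while the Vickrey winner pays the second-highest bid.
import java.util.Arrays;

public final class SealedBid {
    public static void main(String[] args) {
        double[] bids = {120, 150, 135};   // hypothetical sealed bids
        double[] sorted = bids.clone();
        Arrays.sort(sorted);               // ascending order
        double highest = sorted[sorted.length - 1]; // 150
        double second  = sorted[sorted.length - 2]; // 135
        System.out.println("FPSBA winner pays " + highest);
        System.out.println("Vickrey winner pays " + second);
    }
}
```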
_2.2. Winner’s Curse_
The first study on the winner’s curse discussed the rights to oil drilling [6].
Without enough information about the auction item, the buyer may give up benefits
or even suffer losses when the winner’s curse occurs. Bazerman and Samuelson [7]
conducted an experiment to prove the existence of the winner’s curse. In a bidding auction,
the person with the highest bid wins, and the reason behind the high bidding price is
that the person expects the item to be of higher value.
According to a past study [8], during an auction, the participating buyers usually have
insufficient information about the values of the auction items. The information obtained by
each buyer is imbalanced, and the most optimistic buyer tends to win the bid. Therefore, it
is common to overestimate the products, where the winner pays more than the actual value.
_2.3. Price Recommendation and Prediction_
The α-Sutte Indicator prediction method was originally proposed in 2017 as a new
method to predict stock trends. Subsequent research has shown that this method can also be
used to predict any time series data [9]. During the prediction process, the α-Sutte Indicator
only needs the four most recent data points and does not require any hypothesis, providing the
flexibility to analyze any type of data. We chose the α-Sutte Indicator as our price prediction
method in consideration of the auctioning environment and compatibility with blockchains.
This method provides better accuracy and is not limited to predicting stock price trends
but can also be used to predict various time series data. Compared with ARIMA and other
methods, its formula is easier to express as conditions in smart contracts, and
its implementation cost is also lower. The α-Sutte Indicator is described by the following
Equation (1) [10]:
$$\alpha_t = \frac{\alpha\left(\frac{\Delta x}{\frac{\delta+\alpha}{2}}\right) + \beta\left(\frac{\Delta y}{\frac{\alpha+\beta}{2}}\right) + \gamma\left(\frac{\Delta z}{\frac{\beta+\gamma}{2}}\right)}{3} \qquad (1)$$
where
δ = a(t − 4)
α = a(t − 3)
β = a(t − 2)
γ = a(t − 1)
∆x = α − δ = a(t − 3) − a(t − 4)
∆y = β − α = a(t − 2) − a(t − 3)
∆z = γ − β = a(t − 1) − a(t − 2)
a(t) = the observation at time t
a(t − k) = the observation at time t − k
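For reference, here is a direct Java transcription of Equation (1) as reconstructed above. This is a sketch of ours, not code from the paper; how α_t is turned into the recommended bid inside the smart contract is not shown here.

```java
// Direct transcription (ours) of Equation (1), the alpha-Sutte indicator.
public final class AlphaSutte {
    /** Returns alpha_t given the last four observations a(t-4)..a(t-1). */
    public static double alphaT(double delta, double alpha, double beta, double gamma) {
        double dx = alpha - delta; // a(t-3) - a(t-4)
        double dy = beta - alpha;  // a(t-2) - a(t-3)
        double dz = gamma - beta;  // a(t-1) - a(t-2)
        return (alpha * dx / ((delta + alpha) / 2)
              + beta  * dy / ((alpha + beta) / 2)
              + gamma * dz / ((beta  + gamma) / 2)) / 3;
    }

    public static void main(String[] args) {
        // Hypothetical example: the four most recent observed prices of an item.
        System.out.println(alphaT(100, 104, 109, 115));
    }
}
```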
In studies of machine learning and blockchains, most of the related algorithms are
executed outside the blockchain [11] or experiment with external data sets [12]. However,
the algorithm of the α-Sutte Indicator can be integrated into the blockchain and invoked
like an internal function; compared with external execution, it is simpler, which is also
the key reason for adopting this method in this study.
_2.4. Blockchain_
A blockchain with a distributed consensus protocol is a distributed ledger technology
(DLT) that combines peer-to-peer networking, cryptography, and game theory, but the data
structure of the blockchain itself is older than DLT [13]. It originated from Nakamoto’s
white paper [14]. When there is no verification or auditing mechanism, the trust issue to
the information system will be extremely complex, especially with sensitive information,
such as economic transactions using virtual currency. Nakamoto proposed two radical
concepts in his research. The first is Bitcoin, a virtual cryptocurrency that maintains its
value without the support of any centralized agency or financial entity. Instead, tokens
are collectively and safely held by a decentralized network of P2P participants, which
constitutes an auditable network. A blockchain is the second concept, and it has been more
popular than cryptocurrency. Blockchain technology consists of six elements [15]:
1. Decentralized: The most basic feature of a blockchain is that the data do not rely on a
centralized node but can be recorded and stored in a decentralized fashion.
2. Transparent: The data can be updated on any nodes in the blockchain system, which
is the main contributor to the blockchain’s trustworthiness.
3. Open source: Most of the blockchain systems are open to the public for inspection,
verification, and usage to create other applications.
4. Autonomous: Based on the consensus algorithm, all nodes in the blockchain system
can safely transmit and update data without intervention.
5. Unchangeable: All records will be stored forever and cannot be changed unless one
party occupies at least 51% of the nodes at the same time.
6. Anonymous: A blockchain resolves the problem of trust between nodes, so data
can be transmitted and traded in an anonymous manner, with only the blockchain
address being known to each other.
_2.5. Ethereum_
In 1997, Szabo [16] defined a smart contract as a “computerized transaction agreement
that enforces the terms of the contract.” One key feature of a smart contract is having a way
to execute contract terms on its own, which was not technically feasible until the blockchain
was proposed. In fact, a blockchain is an ideal technology to support smart contracts, where
smart contracts also contributed to the development of the blockchain, commonly known
as blockchain 2.0. In the absence of centralized control, automated contract execution in a
trusted environment could potentially change the traditional ways of business.
In summary, Ethereum technology has the ability to remove third parties from the
environment while executing developers’ applications on the blockchain. Smart contracts
can execute different conditions according to different roles; the online auction situation
also requires multiple roles and exclusive behaviors. In this study, we use Ethereum’s
smart contract with α-Sutte Indicator as the core of our system.
_2.6. Blockchain-Based E-Auction_
In the research of blockchain-based online auctions, Foti and Vavalis [17] proposed
the design of the decentralized, real-time, unified-price double-auction energy market.
Desai et al. [18] proposed a novel hybrid framework that combines private and public
blockchains to help protect the privacy of participants. Wang and Mu [19] proposed a
system framework that uses blockchain technology and smart contract to solve the privacy
and security problems of E-bidding systems. Jiao et al. [20] proposed an auction-based
market model for efficient computing resource allocation. Braghin et al. [21] developed an
online auction system based on Ethereum smart contracts. Smart contracts are executable
codes that run on the blockchain to facilitate and execute agreements between untrusted
parties without the participation of trusted third parties.
Most of the studies on blockchain and online auctions are aimed mainly at the protection of privacy: the bidding process can be integrated into the blockchain without a third
party, but decision-making has not yet been integrated into the blockchain. This study
attempts to implement a time series forecasting method in smart contracts and to provide
a bidder price recommendation function.
Most of the research on online auctions with blockchains tends to be decentralized,
real-time, and smart-contract driven. However, it is quite rare to find price-related recommendations in auction research with a blockchain. Ethereum provides different roles
and smart contracts that help integrate the online auction process into the blockchain.
The α-Sutte Indicator, a time series forecasting method, is easier to write into smart contracts
than other methods. This study adds a price recommendation function to smart
contracts, which may mitigate the loss of buyers caused by the winner’s curse.
_2.7. Trusted Execution Environment_
Trusted execution environments (TEE), such as Intel’s Software Guard Extensions
(SGX) and ARM TrustZone [22], are widely used in personal computers, servers, and
mobile platforms. A TEE provides an isolated execution environment that runs
in parallel with the host operating system and standard cryptographic functions. In this
study, we use Intel SGX as the TEE for our system.
Intel SGX is the technology developed by Intel with the main purpose of enhancing the
security of executing programs. While it cannot identify or isolate all malicious programs
on the platform, it packages the safe operations of legitimate programs in an enclave to
protect them from malicious programs. Neither privileged nor unprivileged programs can
access this area. In other words, once the programs and data enter this security zone, they
will not be affected even by the operating system. The security zone created by SGX can be
considered a trusted execution environment.
In the past research on online auctions and blockchains, the bidding process was
generally transplanted to the blockchain. This study mainly integrates the time series
method α-Sutte Indicator on the Ethereum platform, provides price recommendations
through smart contracts during the bidding process, and helps bidders reduce the chance
of falling into the winner’s curse.
**3. System Framework**
_3.1. System Environment_
This study uses the online IDE Remix to write and test smart contracts;
the α-Sutte Indicator formula is decomposed and integrated into the smart contract to
provide price recommendations. After the bidding process has been tested without errors, the
final step is to conduct cost and security analysis. Table 1 presents the system environment of
this study.
**Table 1. System environment.**
**Parameter** **Value**
OS Windows 10
CPU 8-Core Intel(R) i7
RAM 32 GB
TEE Intel SGX
Language Solidity
IDE Remix IDE
_3.2. Roles_
In this section, we define the roles in the environment of online auction bidding.
1. Buyer/Bidder: The role with the capability of bidding on items in an online auction.
2. Bidder of Decision Support: The role with bidding capability and price recommendations suggested by the system. The price predictions are based on data collected from
past auctions.
3. Seller: The role of publishing product auctions and collecting payments with the
capability of specifying item auction price, auction time, and other information
in detail.
4. Auction Manager: The role of verifying information of the auctioned products or
identities of all the other participating roles.
_3.3. Auction Scenario_
Figure 1 shows the complete process of online auctions, from the listing of the auctioned item to the receipt of the payment by sellers, and the description is as follows:
1. The seller sends the system a request to list the auctioned item.
2. Upon receipt of the seller’s listing request, the auction manager verifies the product
information and checks if there is any missing information.
3. Each buyer can pay for the registration to access the price recommendations before
the auction starts.
4. Buyers who paid for the registration are converted to the role of “bidders of
price suggestions.”
5. Bidders of price suggestions can request a price recommendation.
6. The price recommendation request is sent to SGX for secure processing.
7. The price recommendation is calculated and returned to the bidder of
price suggestions.
8. The auction manager starts the auction.
9. The system takes bids from all buyers.
10. The auction manager verifies the auction time and bid counts during the auction. The auction is closed at the end of auction time or if there exists a winner.
11. The auction manager verifies the winner’s information.
12. The winning buyer submits the payment.
13. The seller verifies the payment.
**Figure 1. Online auction process scenario.**
_3.4. Ethereum Smart Contract_
In this study, the smart contract is a program deployed on the Ethereum blockchain
network, which contains pre-defined states, transition rules, execution conditions, and
execution logic. When the conditions are met, the execution logic is automatically
executed [23]. We designed a smart contract system called the Online Auction Price Suggestion
System (OAPSS). Figure 2 shows the process of calling events for the entire auction. In
the beginning, the auction manager deploys the smart contract and the auction platform
using Deploy(). The seller calls SellerRegister() and the buyer calls BidderRegister() to
register as seller and buyer, respectively. The seller calls the ApplyProduct() function when
listing an item for auction, which subsequently executes VerifiedProductInformation() to
verify the seller’s information about the item. Before the auction starts, the buyer can call
ChangeToSuggestBidder() to convert into the role of the bidder of price suggestions.
Buyers with successful conversions can then call RequestToPriceSuggest() to get price
recommendations. To start the auction, the auction manager calls ActiveAuction(). During
the active auction, any buyer can call RequestToBid() to bid on the items. At the end of the
auction, the auction manager calls AnnouncementWinner() to announce the winner of the
auction and notify the buyer and the seller of the final price. The buyer calls WinnerPayment() to submit the payment to the seller. Based on smart contracts, the characteristics of
the OAPSS framework are unchangeable, unalterable, and truthful. Table 2 lists the
OAPSS smart contract functions used in this study, and Algorithms 1–8 give their detailed
smart contract implementations.
**Figure 2. Sequence diagram of the OAPSS system.**
**Table 2. Overall OAPSS smart contract functions.**
**Function** **Smart Contract Algorithm**
Deploy the contract. Deploy()
Register the seller. SellerRegister()
Register the buyer. BidderRegister()
List an auctioned item. ApplyProduct()
Verify the auctioned item. VerifiedProductInformation()
Register the buyer for a price suggestion. ChangeToSuggestBidder()
The buyer requests a price suggestion. RequestToPriceSuggest()
Start the auction. ActiveAuction()
Bid on an item. RequestToBid()
Announce the winner. AnnouncementWinner()
The winner pays the seller. WinnerPayment()
ApplyProduct(): Sellers use this function to put the product information of the auctioned item on the system and wait for the system and auction managers to review it.
**Algorithm 1 ApplyProduct**
**Input: Ethereumaddress(EA) of SellerAddr**
ProductName, AuctionLowPrice, AuctionStartTime, AuctionEndTime
**1. if** _SellerAddr = Seller Address then_
**2.** Add product information to ProductHashtable
**3.** Setting Auction Time
**4.** str = Identity verification success
**5. else**
**6.** str = Identity verification failed
**7. end**
VerifiedProductInformation(): The auction manager verifies the product information
and rejects the request with incomplete or incorrect information.
**Algorithm 2 VerifiedProductInformation**
**Input: Ethereumaddress(EA) of AuctionManagerAddr**
ProductName, AuctionLowPrice, AuctionTime
**1. if** _AuctionManagerAddr = AuctionManager Address then_
**2.** **if Product Information <> null then**
**3.** _AuctionReadyState = true;_
**4.** str = Product apply success
**5.** **else**
**6.** _AuctionReadyState = false;_
**7.** str = Product apply fail
**8. else**
**9.** str = Identity verification failed
**10. end**
ChangeToSuggestionBidder(): Buyers use this function to apply for conversion to a
bidder of price suggestions. The system checks if the related fee has been collected, and it
either approves or declines the conversion request.
**Algorithm 3 ChangeToSuggestionBidder**
**Input: Ethereumaddress(EA) of BidderAddr**
Fee
**1. if** _BidderAddr = Bidder Address then_
**2.** **if Fee = true**
**3.** Add Bidder to SuggestionBidderArrayList
**4.** str = change to Suggestion Bidder is success
**5.** **else**
**6.** str = change to Suggestion Bidder is fail
**7.** **end**
**8. else**
**9.** str = Identity verification failed
**10. end**
RequestToPriceSuggest(): Buyers who have converted can use this function to request
a price recommendation for specific auctioned products. The system makes a prediction of
the final price using α-Sutte Indicator, and the result is returned to the buyer.
**Algorithm 4 RequestToPriceSuggest**
**Input: Ethereumaddress(EA) of BidderPriceSuggestionAddr**
ProductName
**1. if BidderPriceSuggestionAddr = Bidder of price suggestion Address then**
**2.** EnterSGXenclave
**3.** collect product information
**4.** use α-Sutte indicator to predict price
**5.** return suggest price;
**6. else**
**7.** str = Identity verification failed
**8. end**
ActiveAuction(): The auction manager uses this function to start the auction when
ready. Buyers can then start to place bids on the items.
**Algorithm 5 ActiveAuction**
**Input: Ethereumaddress(EA) of AuctionManagerAddr**
ProductName, AuctionTime, AuctionReadyState
**1. if AuctionManagerAddr = AuctionManager Address then**
**2.** **if AuctionReadyState = true**
**3.** **while AuctionActive = false**
**4.** **if AuctionStartTime = now then**
**5.** _AuctionActive = true_
**6.** str = Auction Start
**7.** **else**
**8.** _AuctionActive = false_
**9.** str = Auction time is not up yet
**10.** **end**
**11.** **else**
**12.** str = Auction not ready, please check product information
**13.** **end**
**14. else**
**15.** str = Identity verification failed
**16. end**
RequestToBid(): Both types of buyers can use this function to place a bid on the auctioned item. This function checks whether the bid amount is higher than the current highest
price, updates the current highest price if the bid exceeds it, and keeps the bidder’s information.
**Algorithm 6 RequestToBid**
**Input: Ethereumaddress(EA) of BidderAddr**
ProductName, BidPrice, AuctionTime
**1. if** _BidderAddr = Bidder Address then_
**2.** **if BidPrice ≥** _Base standard && BidPrice > CurrentHighestPrice then_
**3.** **if now < AuctionEndTime then**
**4.** CurrentHighestPrice = BidPrice
**5.** CurrentHighestBidder = Bidder
**6.** str = Bid success
**7.** **else**
**8.** str = Bid fail
**9.** **end**
**10.** **else**
**11.** str = Bid fail
**12.** **end**
**13. else**
**14.** str = Identity verification failed
**15. end**
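The bid-acceptance logic of Algorithm 6 could be sketched in Solidity as below. This is our illustration under assumed names, not the authors' deployed code.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Minimal sketch of Algorithm 6 (RequestToBid); identifiers are illustrative.
contract BidSketch {
    mapping(address => bool) public isRegisteredBidder;
    uint256 public immutable baseStandard;   // seller's minimum price
    uint256 public immutable auctionEndTime;
    uint256 public currentHighestPrice;
    address public currentHighestBidder;

    constructor(uint256 _baseStandard, uint256 _auctionEndTime) {
        baseStandard = _baseStandard;
        auctionEndTime = _auctionEndTime;
    }

    // Only a bid above the base standard and the current highest price,
    // placed before the deadline, is accepted.
    function requestToBid(uint256 bidPrice) external {
        require(isRegisteredBidder[msg.sender], "Identity verification failed");
        require(
            bidPrice >= baseStandard && bidPrice > currentHighestPrice,
            "Bid fail"
        );
        require(block.timestamp < auctionEndTime, "Bid fail");
        currentHighestPrice = bidPrice;
        currentHighestBidder = msg.sender;
    }
}
```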
AnnouncementWinner(): The auction manager can use this function to conclude the
auction with the highest bidding price and the winner. This function first checks whether
the auction time exceeds the originally scheduled time. If the time has been exceeded, it
stops the buyer from bidding and announces the current highest bidder and the final price.
**Algorithm 7 AnnouncementWinner**
**Input: Ethereumaddress(EA) of AuctionManagerAddr**
**1. if** _AuctionManagerAddr = AuctionManager Address then_
**2.** **if** _now ≥ AuctionEndTime_ **then**
**3.** Get HighestBidder
**4.** Winner = HighestBidder
**5.** Add Winner to WinnerNoPayArrayList
**6.** Notify Winner to make payment
**7.** str = Auction End
**8.** **else**
**9.** str = Auction time is not up yet
**10.** **end**
**11. else**
**12.** str = Identity verification failed
**13. end**
WinnerPayment(): The winner can use this function to make the payment to the seller
after a successful bid.
**Algorithm 8 WinnerPayment**
**Input: Ethereumaddress(EA) of BidderAddr**
PaymentAmount
**1. if** _BidderAddr in WinnerNoPayArrayList then_
**2.** **if PaymentAmount = CurrentHighestPrice** **then**
**3.** transfer winner money to smart contract
**4.** str = wait for seller to receive payment
**5.** **else if PaymentAmount > CurrentHighestPrice then**
**6.** return PaymentAmount − CurrentHighestPrice
**7.** str = wait for seller to receive payment
**8.** **else**
**9.** str = Amount is not enough
**10.** **end**
**11. else**
**12.** str = Identity verification failed
**13. end**
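The escrow-and-refund behavior of Algorithm 8 could be realized in Solidity roughly as follows. This sketch is ours; identifiers and the exact refund mechanics are illustrative assumptions.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Minimal sketch of Algorithm 8 (WinnerPayment); identifiers are illustrative.
contract WinnerPaymentSketch {
    address public winner;
    uint256 public currentHighestPrice;
    bool public paid; // the seller later withdraws once payment is confirmed

    // The winner deposits at least the final price; the contract escrows the
    // funds and returns any overpayment immediately.
    function winnerPayment() external payable {
        require(msg.sender == winner, "Identity verification failed");
        require(msg.value >= currentHighestPrice, "Amount is not enough");
        paid = true; // update state before the external transfer
        uint256 excess = msg.value - currentHighestPrice;
        if (excess > 0) {
            payable(msg.sender).transfer(excess); // refund the difference
        }
    }
}
```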
**4. Testing and Security Analysis**
_4.1. Deployment Results_
We present the deployment results of our system, OAPSS. First, we set the account
addresses for each role in the auction scenario, as shown in Table 3: buyers (B), buyers
with price prediction (BP), the seller (S), and the auction manager (AM). Then we use the
accounts to test the smart contracts through Remix IDE.
**Table 3. Role account address.**

| Account | Address |
| --- | --- |
| B | 0xAb8483F64d9C6d1EcF9b849Ae677dD3315835cb2 |
| BP | 0x4B20993Bc481177ec7E8f571ceCaE8A9e22C02db |
| S | 0x78731D3Ca6b7E34aC0F824c42a7cC18A495cabaB |
| AM | 0x5B38Da6a701c568545dCfcB03FcB875f56beddC4 |
4.1.1. Deploy Contracts
When creating an OAPSS smart contract, the creator is set as the auction manager and the smart contract is deployed. An example of the resulting creation screen is shown in Figure 3.
**Figure 3. Deploy contract.**
4.1.2. Winner Announcement
The auction manager enters the product name and seller address to end the auction, settle the winning bid amount, and announce the winner. The result of the winner announcement is shown in Figure 4.
**Figure 4. Winning bidder announcement.**
_4.2. Experiment on Winner’s Curse_
To understand whether the final transaction price prediction function of the OAPSS
system can help the bidder avoid the winner’s curse in a practical auction environment, we
simulate the online auction platform of the eBay environment. The flow chart of the experiment is shown in Figure 5. The purposes of this quasi-experimental evaluation are as follows:
1. Confirm the existence of the winner’s curse.
2. Compare the difference between two scenarios, with and without the final transaction
price prediction.
**Figure 5. Flowchart of experiment.**
4.2.1. Framework and Methods
• Buyer: The buyers are divided into the experimental group (with price prediction) and the control group (without price prediction) for comparison before the auction stage. Detailed group descriptions are shown in Table 4.
• Auction Stage: Fifteen items of various types are introduced to each buyer group for the auctions. The auction procedure includes price recommendation, bidding, product acquisition, winner announcement, and final payment.
• Data Aggregation: The two sets of 15 final transaction prices and the highest bids obtained from both buyer groups are collected. There are three data groups: the with-prediction-price (PdP) group, the without-prediction-price (NPdP) group, and the past-price (PP) group. Each price in PP is the average of the past four final prices collected from eBay. Table 4 summarizes the data groups and their definitions.
• Data Analysis: We use analysis of variance (ANOVA) and Tukey’s post hoc analysis [24] to evaluate the differences between groups. The goal of this analysis is to verify our price predictions of the final transaction price and how they impact the winner’s curse. The descriptive statistics of this experiment are shown in Tables 5 and 6.
**Table 4. Definitions of groups.**

| Groups | Abbreviation | Definitions |
| --- | --- | --- |
| Average of the past four final prices (past price) | PP | The past four final transaction prices for each of the 15 auctioned items from eBay |
| Experimental group (with prediction price help) | PdP | The final price and highest biddings with buyers using price recommendations for each of the 15 auctioned items |
| Control group (without prediction price help) | NPdP | The final price and highest biddings without price recommendations for each of the 15 auctioned items |

**Table 5. Descriptive statistics of the transaction prices.**

| | N | Avg | Std | Min | Max |
| --- | --- | --- | --- | --- | --- |
| PP | 15 | 4699.85 | 5165.55 | 634.25 | 21,301.5 |
| PdP | 15 | 6220.67 | 3942.58 | 1690 | 15,060 |
| NPdP | 15 | 6942 | 4867.39 | 1010 | 15,510 |

**Table 6. Descriptive statistics of the highest bid price.**

| | N | Avg | Std | Min | Max |
| --- | --- | --- | --- | --- | --- |
| PP | 15 | 4699.85 | 5165.56 | 634.25 | 21,301.5 |
| PdP | 15 | 7468 | 5266.40 | 1800 | 20,000 |
| NPdP | 15 | 10,434.67 | 5866.46 | 2700 | 20,000 |
4.2.2. Experimental Results
To verify the differences between the groups, ANOVA with the clusters as a covariate was performed to test for significance [25]; Table 7 summarizes the ANOVA results. The probability of a type I error was set to 0.05. Table 7 shows no significance value below 0.05 for the transaction prices, so there is no significant difference in the final transaction prices between the groups, and Tukey’s post hoc analysis on the transaction price in Table 8 shows similar results. In Tables 7–9, an asterisk marks a significant difference (p < 0.05).
However, comparing the highest bids among the groups, Table 9 shows a significant difference between the PP and the NPdP groups (p = 0.017 < 0.05). This indicates that, without a final price prediction (or recommendation), buyers may overbid in an auction. In addition, Table 9 shows no significant difference between the PP and PdP groups in the highest bid (p = 0.354 > 0.05). In other words, if the final transaction price prediction (or recommendation) keeps the highest bid close to the final transaction price, the buyer may reduce the loss or escape the winner’s curse.
**Table 7. ANOVA.**

| | Sum of Squares | F | Sig. |
| --- | --- | --- | --- |
| Comparison between the transaction prices | 39,302,208.67 | 0.894 | 0.417 |
| Comparison between highest bids | 246,759,438.67 | 4.167 | 0.022 * |

**Table 8. Tukey’s test on the transaction price.**

| (I) Group | (J) Group | Mean Difference (I–J) | Sig. |
| --- | --- | --- | --- |
| PP | PdP | −1520.82 | 0.650 |
| PP | NPdP | −2242.15 | 0.398 |
| PdP | PP | 1520.82 | 0.650 |
| PdP | NPdP | −721.33 | 0.907 |
| NPdP | PP | 2242.15 | 0.398 |
| NPdP | PdP | 721.33 | 0.907 |

**Table 9. Tukey’s test on the highest bid.**

| (I) Group | (J) Group | Mean Difference (I–J) | Sig. |
| --- | --- | --- | --- |
| PP | PdP | −2768.15 | 0.354 |
| PP | NPdP | −5734.81 * | 0.017 * |
| PdP | PP | 2768.15 | 0.354 |
| PdP | NPdP | −2966.67 | 0.304 |
| NPdP | PP | 5734.82 * | 0.017 * |
| NPdP | PdP | 2966.67 | 0.304 |
_4.3. Cost Analysis_
Executing a smart contract and calling its functions in the Ethereum environment consumes gas. The gas consumption depends on the complexity of each function and can be regarded as a handling fee. The cost is calculated as the amount of gas consumed multiplied by the unit gas price. During the execution of a transaction, gas consumption can be restricted by the gas limit parameter, which prevents malicious users from attacking the smart contract by executing functions arbitrarily and avoids the extra consumption caused by executing a wrong process. In this study, we set the gas limit to 6,000,000 units when testing OAPSS. Table 10 summarizes the costs of each function call in the proposed OAPSS. The conversions between gas costs and US dollars are based on data from the CoinGecko website in February 2021, when 1 ETH was equivalent to US$1545.82. The amount spent on the smart contract, excluding contract deployment, plus the rest of the fees is less than $1.
**Table 10. The cost of function gas of the proposed OAPSS system.**

| Function Name | Transaction Cost | Execution Cost | USD |
| --- | --- | --- | --- |
| Deploy Contract | 2,982,237 | 2,220,393 | 4.61 |
| SellerRegister() | 63,787 | 42,515 | 0.10 |
| BidderRegister() | 45,034 | 23,762 | 0.07 |
| ApplyProduct() | 195,003 | 170,787 | 0.30 |
| VerifiedInformation() | 33,802 | 11,122 | 0.05 |
| ChangeToSuggestionBidder() | 99,075 | 76,203 | 0.15 |
| RequestToPriceSuggest() | 44,350 | 21,926 | 0.07 |
| ActiveAuction() | 33,078 | 40,654 | 0.05 |
| RequestToBid() | 102,525 | 79,909 | 0.16 |
| AnnouncementWinner() | 104,576 | 110,744 | 0.16 |
| WinnerPayment() | 38,580 | 14,556 | 0.06 |
_4.4. Security Analysis_
The research by Luu et al. [26] addressed the security concerns of smart contracts and
proposed solutions for specific attacks. The common vulnerabilities of a smart contract
are reentrancy vulnerability, replay attack, access restriction, and timestamp dependency.
Many viewpoints, such as confidentiality, data integrity, availability, authorization, and
non-repudiation, have been put forward in the security analysis [27–31].
• Reentrancy Vulnerability
When a user makes function calls in a smart contract that involve transferring remittances, a reentrancy vulnerability can arise from the sequence of the calls. In other words, if the remittance is transferred before the state changes, an attacker can create a new contract that exploits the loophole to steal the Ether held by the victimized contract. In this study, our smart contract verifies the identity of each role and the related data using the require() function; only verified users can make function calls. Identity verification prevents the reentrancy vulnerability from causing damage and financial loss to the smart contracts.
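For reference, the standard general defence against reentrancy is the checks-effects-interactions pattern, complementary to the require()-based identity verification above. The following sketch uses our own illustrative naming:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Checks-effects-interactions: state is updated before the external call,
// so a re-entrant call finds no balance left to withdraw.
contract WithdrawSketch {
    mapping(address => uint256) public balances;

    function deposit() external payable {
        balances[msg.sender] += msg.value;
    }

    function withdraw() external {
        uint256 amount = balances[msg.sender];
        require(amount > 0, "nothing to withdraw");      // check
        balances[msg.sender] = 0;                        // effect
        (bool ok, ) = msg.sender.call{value: amount}(""); // interaction
        require(ok, "transfer failed");
    }
}
```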
• Replay Attack
A replay attack is a malicious action that repeats or delays legitimate data transmissions on the network. It can be performed by the initiator or by a man in the middle who intercepts and retransmits the data as part of a spoofing attack through IP packet substitution. For Ethereum, this attack was resolved by the subsequent Geth 1.5.3 update to smart contracts, and thus we do not consider it a threat to our system in this study.
• Access Restriction
Access restriction, or access control (AC), manages and restricts access to certain spaces or resources. In this study, we implement Solidity modifiers to stop identities from calling functions unless they have sufficient rights.
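A minimal sketch of such a modifier follows; the contract and function names are illustrative assumptions of ours, not the authors' source.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Sketch of access restriction via a Solidity modifier.
contract AccessControlSketch {
    address public auctionManager = msg.sender; // deployer acts as manager

    // Rejects any caller other than the auction manager's Ethereum address.
    modifier onlyAuctionManager() {
        require(msg.sender == auctionManager, "Identity verification failed");
        _;
    }

    function announcementWinner() external onlyAuctionManager {
        // ... conclude the auction ...
    }
}
```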
• Timestamp Dependency
In smart contract design, block.timestamp (or now) is often used to obtain the timestamp of a block in the blockchain. A malicious miner can manipulate this timestamp to a certain degree, so any use of timestamps in calculations should be carefully reviewed. In this study, we did not use block.timestamp or now in any calculation of money or ordering, and hence our system is not vulnerable to this attack.
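The distinction can be made concrete with a short sketch of ours (identifiers illustrative): using the timestamp only as a coarse gate is acceptable, while deriving amounts from it is what must be avoided.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Sketch of the timestamp-dependency distinction.
contract TimestampSketch {
    uint256 public auctionEndTime;
    uint256 public currentHighestPrice;

    // Acceptable: block.timestamp only gates whether bidding is still open;
    // a miner shifting the timestamp by seconds cannot alter any amount.
    function biddingOpen() public view returns (bool) {
        return block.timestamp < auctionEndTime;
    }

    // Risky (avoided here): deriving an amount from the timestamp would let
    // a miner influence the payout by nudging the block time.
    // function discount() external view returns (uint256) {
    //     return currentHighestPrice * (block.timestamp % 100) / 100;
    // }
}
```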
• Confidentiality
The auction participants in this study can register and change their roles through smart contracts without entering other private information, and they can view the corresponding information during the bidding process. Each stakeholder is authenticated by their Ethereum address to protect their identity.
• Data Integrity
Blockchain technology maintains the integrity and immutability of data because the
distributed ledger does not allow modification, addition, and deletion of data [32]. Any
data modifications required are re-entered into the ledger as a new transaction. Therefore,
all participants can view the data history at any point in time.
• Availability
Availability refers to the ability of the technology to provide data even in the presence of malicious code or denial-of-service attacks, so that data remain accessible to authorized users. The system in this study can only be accessed by registered roles, but it has not been tested against malicious code.
• Authorization
Authorization concerns the access rights granted to different participants in the network. In the OAPSS system of this study, each role can invoke only the smart contracts that correspond to its character; therefore, every action must be authorized.
• Non-repudiation
Stakeholders in the blockchain network cannot deny the actions or transactions they perform. The roles involved in this study conduct transactions through Ethereum addresses, and the relevant results are recorded in the blockchain network. Therefore, members of OAPSS cannot deny that a specific payment has been made or received.
• Sybil Attack
A Sybil attack is a type of attack seen in peer-to-peer networks in which a node in
the network operates multiple identities actively at the same time and undermines the
authority/power in reputation systems.
• Double-Spend Attack
The idea of a double-spend attack is to use the same money for two (or more) different payments, creating conflicting transactions. Double-spending can be thought of as
fraudulently spending the same cryptocurrency, or units of value, more than once.
Integrating the relevant results into an IDE environment for presentation is common in many blockchain studies. For example, [27] built automated healthcare contracts on the blockchain network and implemented them through the Remix IDE. This study is compared with the security analyses proposed in [27,28,31], as shown in Table 11. Refs. [28,31] explain the importance of scalability in security analysis, and [31] covers additional attack methods on the blockchain.
**Table 11. Security comparison of different schemes.**

| | [27] | [28] | [31] | Our Study |
| --- | --- | --- | --- | --- |
| Confidentiality | ✓ | ✓ | ✓ | ✓ |
| Data integrity | ✓ | ✓ | - | ✓ |
| Availability | ✓ | - | ✓ | |
| Authorization | ✓ | ✓ | ✓ | ✓ |
| Non-repudiation | ✓ | ✓ | ✓ | ✓ |
| Scalability | × | ✓ | ✓ | × |
| Sybil attack | × | × | ✓ | ✓ |
| Double-spend attack | ✓ | ✓ | ✓ | ✓ |
| Man in the middle attack | ✓ | × | ✓ | ✓ |

(✓: stands for done. ×: represents not provided or done. -: stands for uncertainty).
**5. Conclusions and Future Work**
In the current auction environment, it is possible that specific persons or internal
personnel may manipulate the auction process, thereby affecting the final price of the
transaction and the winner. This study aims to provide the transparency of the auction
process and prevent manipulation of the auction by establishing a transparent online
auction system using blockchain technology to store records and auction data in a trusted
execution environment.
In addition, the buyers are prone to the winner’s curse in an auctioning environment.
To mitigate the loss caused by the winner’s curse, this study uses the α-Sutte Indicator
prediction method to provide a system-recommended price on the auctioned item for
registered buyers. We have proposed a systematic framework to provide a better online
auction infrastructure. To the best of our knowledge, this is the first study to provide price
recommendations in the blockchain environment for online auctions. The amount spent on the smart contract in this study, excluding contract deployment, plus the rest of the fees is less than $1.
Table 12 compares this study with other studies that combine the blockchain with online auctions. Although this study does not conduct a follow-up analysis of scalability, the highest-bid experiment shows a significant difference between the past-price group and the group bidding without price recommendations (p < 0.05). This study provides a price recommendation in a smart contract that may mitigate the losses buyers suffer from the winner’s curse.
**Table 12. Comparison with different studies.**

| | [17] | [18] | [21] | Our Study |
| --- | --- | --- | --- | --- |
| Environment setup | Ethereum blockchain | Hybrid architecture | Ethereum | Ethereum |
| Privacy protection | ✓ | ✓ | ✓ | ✓ |
| Decentralization | ✓ | ✓ | ✓ | ✓ |
| Scalability | ✓ | ✓ | × | × |
| Cost analysis | ✓ | ✓ | ✓ | ✓ |
| Final price recommendation | × | × | × | ✓ |
| Trusted execution environment | × | × | × | ✓ |

(✓: stands for done. ×: represents not provided or done).
Due to the limitations of the Solidity language and the Remix IDE compiler, we were not able to apply deep learning methods for price prediction in our blockchain system. In addition, transactions in our system are based on Ethereum, whose exchange rate against the US dollar fluctuates widely, so it may not be a good and stable candidate for the trading currency of online auctions. In the future, we plan to study and implement other prediction methods in the Solidity language and compare the performance of the different methods.
The advantage of this study is that online auctions are integrated into the blockchain environment to provide price recommendations, with the time series forecasting method written directly into the smart contract instead of making predictions outside the blockchain. However, the α-Sutte Indicator requires at least four pieces of historical data to make a prediction; if an item to be auctioned has few historical records, the quality of the prediction will be limited.
This study chooses to integrate online auctions and time series forecasting into the blockchain. In the future, more time series research can also be conducted in other fields, such as renewable energy forecasting [9], COVID-19 confirmed cases, and stock market prices [33]. With different roles and smart contracts, it is possible to establish a stock price-related investment platform and an early warning platform for the number of infections.
This study is one of the few that incorporate time series forecasting methods into the blockchain and provide price recommendations to bidders, helping them reduce the occurrence of the winner’s curse. In the future, in addition to the α-Sutte Indicator, grey prediction theory can be integrated into the blockchain, providing appropriate decisions for different situations.
[The source code of our system is shared on Github: https://github.com/kk3329188](https://github.com/kk3329188/lib/blob/main/OAPSS)
[/lib/blob/main/OAPSS.](https://github.com/kk3329188/lib/blob/main/OAPSS)
**Author Contributions: Conceptualization, D.-H.S.; data curation, W.-C.T.; formal analysis, T.-W.W.**
and W.-C.T.; investigation, M.-H.S. and W.-C.T.; methodology, D.-H.S. and T.-W.W.; project administration, D.-H.S. and D.C.Y.; resources, M.-H.S.; software, W.-C.T. and D.C.Y.; supervision, D.-H.S.;
validation, T.-W.W.; visualization, D.C.Y.; writing—original draft, T.-W.W.; writing—review and
editing, M.-H.S. All authors have read and agreed to the published version of the manuscript.
**Funding: This work was partially supported by the Taiwan Ministry of Science and Technology**
(grants MOST 109-2410-H-224-022 and MOST 110-2410-H-224-010). The funder has no role in study
design, data collection and analysis, decision to publish, or preparation of the manuscript.
**Institutional Review Board Statement: Not applicable.**
**Informed Consent Statement: Not applicable.**
**Data Availability Statement: Data sharing is not applicable.**
**Conflicts of Interest: The authors declare no conflict of interest.**
**References**
1. Olaya, J.; Boehm, F. Corruption in Public Contracting Auctions: The Role of Transparency in Bidding Process. Ann. Public Coop.
_Econ. 2006, 77, 431–452._
2. Amyx, D.A.; Luehlfing, M.S. Winner’s curse and parallel sales channels—Online auctions linked within e-tail websites. Inf.
_[Manag. 2006, 43, 919–927. [CrossRef]](http://doi.org/10.1016/j.im.2006.08.010)_
3. [Milgrom, P.R.; Weber, R.J. A Theory of Auctions and Competitive Bidding. Econometrica 1982, 50, 1089. [CrossRef]](http://doi.org/10.2307/1911865)
4. Klein, S.; O’Keefe, M. The Impact of the Web on Auctions: Some Empirical Evidence and Theoretical Considerations. Int. J.
_[Electron. Commer. 1999, 3, 7–20. [CrossRef]](http://doi.org/10.1080/10864415.1999.11518338)_
5. Krishna, V. Auction Theory; Academic Press: Cambridge, MA, USA, 2009.
6. Capen, E.C.; Clapp, R.V.; Campbell, W.M. Competitive Bidding in High-Risk Situations. JPT J. Pet. Technol. 1971, 23, 641–653.
[[CrossRef]](http://doi.org/10.2118/2993-PA)
7. [Bazerman, M.H.; Samuelson, W.F. I Won the auction but don’t want the prize. J. Confl. Resolut. 1983, 27, 618–634. [CrossRef]](http://doi.org/10.1177/0022002783027004003)
8. [Goeree, J.; Offerman, T. Winner’s curse without overbidding. Eur. Econ. Rev. 2003, 47, 625–644. [CrossRef]](http://doi.org/10.1016/S0014-2921(02)00290-8)
9. Ahmar, A.S. A Comparison of α-Sutte Indicator and ARIMA methods in renewable energy forecasting in Indonesia. Int. J. Eng.
_[Technol. 2018, 7, 20–22. [CrossRef]](http://doi.org/10.14419/ijet.v7i1.6.12319)_
10. Ahmar, A.S. Sutte indicator: An approach to predict the direction of stock market movements. Songklanakarin J. Sci. Technol. 2018,
_[40, 1228–1231. [CrossRef]](http://doi.org/10.14456/sjst-psu.2018.150)_
11. Bhowmik, M.; Chandana, T.S.S.; Rudra, B. Comparative Study of Machine Learning Algorithms for Fraud Detection in Blockchain.
In Proceedings of the 2021 5th International Conference on Computing Methodologies and Communication (ICCMC), Erode,
India, 8–10 April 2021; pp. 539–541.
12. Cheema, M.A.; Ashraf, N.; Aftab, A.; Qureshi, H.K.; Kazim, M.; Azar, A.T. Machine Learning with Blockchain for Secure E-voting
System. In Proceedings of the 2020 First International Conference of Smart Systems and Emerging Technologies (SMARTTECH),
Riyadh, Saudi Arabia, 3–5 November 2020; pp. 177–182.
13. [Haber, S.; Stornetta, W.S. How to time-stamp a digital document. J. Cryptol. 1991, 3, 99–111. [CrossRef]](http://doi.org/10.1007/BF00196791)
14. [Nakamoto, S. Bitcoin: A peer-to-peer electronic cash system. Decentralized Bus. Rev. 2008, 21260. Available online: https:](https://www.debr.io/article/21260.pdf)
[//www.debr.io/article/21260.pdf (accessed on 1 December 2021).](https://www.debr.io/article/21260.pdf)
15. [Lin, I.-C.; Liao, T.-C. A survey of blockchain security issues and challenges. Int. J. Netw. Secur. 2017, 19, 653–659. [CrossRef]](http://doi.org/10.6633/IJNS.201709.19(5).01)
16. [Szabo, N. Formalizing and securing relationships on public networks. First Monday 1997, 2. [CrossRef]](http://doi.org/10.5210/fm.v2i9.548)
17. Foti, M.; Vavalis, M. Blockchain based uniform price double auctions for energy markets. Appl. Energy 2019, 254, 113604.
[[CrossRef]](http://doi.org/10.1016/j.apenergy.2019.113604)
18. Desai, H.; Kantarcioglu, M.; Kagal, L. A Hybrid blockchain architecture for privacy-enabled and accountable auctions. In
Proceedings of the 2019 IEEE International Conference on Blockchain (Blockchain), Atlanta, GA, USA, 14–17 July 2019; pp. 34–43.
19. [Wang, D.; Zhao, J.; Mu, C. Research on Blockchain-Based E-Bidding System. Appl. Sci. 2021, 11, 4011. [CrossRef]](http://doi.org/10.3390/app11094011)
20. Jiao, Y.; Wang, P.; Niyato, D.; Suankaewmanee, K. Auction mechanisms in cloud/fog computing resource allocation for public
[blockchain networks. IEEE Trans. Parallel Distrib. Syst. 2019, 30, 1975–1989. [CrossRef]](http://doi.org/10.1109/TPDS.2019.2900238)
21. Braghin, C.; Cimato, S.; Damiani, E.; Baronchelli, M. Designing smart-contract based auctions. In International Conference on
_Security with Intelligent Computing and Big-Data Services; Springer: Cham, Switzerland, 2019; pp. 54–64._
22. McKeen, F.; Alexandrovich, I.; Berenzon, A.; Rozas, C.V.; Shafi, H.; Shanbhogue, V.; Savagaonkar, U.R. Innovative instructions
and software model for isolated execution. In Proceedings of the HASP’13: The Second Workshop on Hardware and Architectural
[Support for Security and Privacy, Tel-Aviv, Israel, 23–24 June 2013; p. 1. [CrossRef]](http://doi.org/10.1145/2487726.2488368)
23. Sun, J.; Huang, S.; Zheng, C.; Wang, T.; Zong, C.; Hui, Z. Mutation testing for integer overflow in ethereum smart contracts.
_[Tsinghua Sci. Technol. 2022, 27, 27–40. [CrossRef]](http://doi.org/10.26599/TST.2020.9010036)_
24. Magalhães, F.A.; Souza, T.R.; Araújo, V.L.; Oliveira, L.M.; de Paula Silveira, L.; de Melo Ocarino, J.; Fonseca, S.T. Comparison of
the rigidity and forefoot—Rearfoot kinematics from three forefoot tracking marker clusters during walking and weight-bearing
[foot pronation-supination. J. Biomech. 2020, 98, 109381. [CrossRef]](http://doi.org/10.1016/j.jbiomech.2019.109381)
25. Harb, H.; Makhoul, A.; Couturier, R. An enhanced K-means and ANOVA-based clustering approach for similarity aggregation in
[underwater wireless sensor networks. IEEE Sens. J. 2015, 15, 5483–5493. [CrossRef]](http://doi.org/10.1109/JSEN.2015.2443380)
26. Luu, L.; Narayanan, V.; Zheng, C.; Baweja, K.; Gilbert, S.; Saxena, P. A Secure sharding protocol for open blockchains. In
Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, Vienna, Austria, 24–28 October
2016; pp. 17–30.
27. Omar, I.A.; Jayaraman, R.; Debe, M.S.; Salah, K.; Yaqoob, I.; Omar, M. Automating procurement contracts in the healthcare supply
[chain using blockchain smart contracts. IEEE Access 2021, 9, 37397–37409. [CrossRef]](http://doi.org/10.1109/ACCESS.2021.3062471)
28. Xiong, W.; Xiong, L. Data Trading Certification Based on Consortium Blockchain and Smart Contracts. IEEE Access 2021, 9,
[3482–3496. [CrossRef]](http://doi.org/10.1109/ACCESS.2020.3047398)
29. Karpinski, M.; Kovalchuk, L.; Kochan, R.; Oliynykov, R.; Rodinko, M.; Wieclaw, L. Blockchain Technologies: Probability of
[Double-Spend Attack on a Proof-of-Stake Consensus. Sensors 2021, 21, 6408. [CrossRef]](http://doi.org/10.3390/s21196408)
30. Longo, R.; Podda, A.S.; Saia, R. Analysis of a Consensus Protocol for Extending Consistent Subchains on the Bitcoin Blockchain.
_[Computation 2020, 8, 67. [CrossRef]](http://doi.org/10.3390/computation8030067)_
31. Cui, Z.; Fei XU, E.; Zhang, S.; Cai, X.; Cao, Y.; Zhang, W.; Chen, J. A hybrid blockchain-based identity authentication scheme for
[multi-WSN. IEEE Trans. Serv. Comput. 2020, 13, 241–251. [CrossRef]](http://doi.org/10.1109/TSC.2020.2964537)
32. Abu-Elezz, I.; Hassan, A.; Nazeemudeen, A.; Househ, M.; Abd-Alrazaq, A. The benefits and threats of blockchain technology in
[healthcare: A scoping review. Int. J. Med. Inform. 2020, 142, 104246. [CrossRef] [PubMed]](http://doi.org/10.1016/j.ijmedinf.2020.104246)
33. Ahmar, A.S.; del Val, E.B. SutteARIMA: Short-term forecasting method, a case: Covid-19 and the stock market in Spain. Sci. Total
_[Environ. 2020, 729, 138883. [CrossRef]](http://doi.org/10.1016/j.scitotenv.2020.138883)_
-----
| 15,324
|
{
"disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.3390/math9243214?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.3390/math9243214, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "CCBY",
"status": "GOLD",
"url": "https://www.mdpi.com/2227-7390/9/24/3214/pdf?version=1639385273"
}
| 2,021
|
[
"Review"
] | true
| 2021-12-13T00:00:00
|
[
{
"paperId": "394b5a6037932672db06eeb05a23196e7eec676c",
"title": "Blockchain Technologies: Probability of Double-Spend Attack on a Proof-of-Stake Consensus"
},
{
"paperId": "2f01ecf487cc62e171f94816cf7eab87880ccc53",
"title": "Research on Blockchain-Based E-Bidding System"
},
{
"paperId": "5a34a323f94c4fd331d5ed556ca0dfd908640efd",
"title": "The benefits and threats of blockchain technology in healthcare: A scoping review"
},
{
"paperId": "bbfb8eb2000314c2b86a874a605fafc6720a7c1b",
"title": "Analysis of a Consensus Protocol for Extending Consistent Subchains on the Bitcoin Blockchain"
},
{
"paperId": "f998e62606445f109b02b11ed20e563bd8ef3e6b",
"title": "SutteARIMA: Short-term forecasting method, a case: Covid-19 and stock market in Spain"
},
{
"paperId": "00c101d45aa76a0fe65eb6aade9416baf1b103e4",
"title": "Comparison of the rigidity and forefoot - Rearfoot kinematics from three forefoot tracking marker clusters during walking and weight-bearing foot pronation-supination."
},
{
"paperId": "26da370edcdc698971e519358bdce50fba80fa31",
"title": "Blockchain based uniform price double auctions for energy markets"
},
{
"paperId": "929165d1b54d099cda62ee82208de57cc5a2c724",
"title": "Designing Smart-Contract Based Auctions"
},
{
"paperId": "5e551b2b04b05987953176562fa922a62598e43b",
"title": "A Comparison of α-Sutte Indicator and ARIMA Methods in Renewable Energy Forecasting in Indonesia"
},
{
"paperId": "ec79422e0bfdb61d8b6d2a6ec5b2dfbcab970852",
"title": "Innovative instructions and software model for isolated execution"
},
{
"paperId": "a1498c8892331ac1a89cbd316a2abac172930fc2",
"title": "Winner's curse and parallel sales channels - Online auctions linked within e-tail websites"
},
{
"paperId": "27e6189b272914046c9acb2e19c28e4f73d7918a",
"title": "Corruption in Public Contracting Auctions: The Role of Transparency in Bidding Processes"
},
{
"paperId": "6a35c6f2245420fc410a5185de53fcaebf67505b",
"title": "The Impact of the Web on Auctions: Some Empirical Evidence and Theoretical Considerations"
},
{
"paperId": "5b4cf1e37954ccd1ca6b315986d45904f9d2f636",
"title": "Formalizing and Securing Relationships on Public Networks"
},
{
"paperId": "e666475b79db212c768459eebbf6cfae30804c7e",
"title": "I Won the Auction But Don't Want the Prize"
},
{
"paperId": "4e76f762110578ffa16a37f4bb8e3ea527fecc39",
"title": "A theory of auctions and competitive bidding"
},
{
"paperId": "4e9ec92a90c5d571d2f1d496f8df01f0a8f38596",
"title": "Bitcoin: A Peer-to-Peer Electronic Cash System"
}
] | 15,324
|
en
|
[
{
"category": "Medicine",
"source": "external"
},
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Engineering",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/00451acbf15f0c110b4cdfcaa5c31d29bb09f5b8
|
[
"Medicine",
"Computer Science"
] | 0.886705
|
Hyperledger Fabric Blockchain for Securing the Edge Internet of Things
|
00451acbf15f0c110b4cdfcaa5c31d29bb09f5b8
|
Italian National Conference on Sensors
|
[
{
"authorId": "1399719589",
"name": "Houshyar Honar Pajooh"
},
{
"authorId": "144818046",
"name": "M. A. Rashid"
},
{
"authorId": "30507786",
"name": "F. Alam"
},
{
"authorId": "97772132",
"name": "Serge N. Demidenko"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"SENSORS",
"IEEE Sens",
"Ital National Conf Sens",
"IEEE Sensors",
"Sensors"
],
"alternate_urls": [
"http://nbn-resolving.de/urn/resolver.pl?urn=urn:nbn:ch:bel-142001",
"http://www.mdpi.com/journal/sensors",
"https://www.mdpi.com/journal/sensors"
],
"id": "3dbf084c-ef47-4b74-9919-047b40704538",
"issn": "1424-8220",
"name": "Italian National Conference on Sensors",
"type": "conference",
"url": "http://www.e-helvetica.nb.admin.ch/directAccess?callnumber=bel-142001"
}
|
Providing security and privacy to the Internet of Things (IoT) networks while achieving it with minimum performance requirements is an open research challenge. Blockchain technology, as a distributed and decentralized ledger, is a potential solution to tackle the limitations of the current peer-to-peer IoT networks. This paper presents the development of an integrated IoT system implementing the permissioned blockchain Hyperledger Fabric (HLF) to secure the edge computing devices by employing a local authentication process. In addition, the proposed model provides traceability for the data generated by the IoT devices. The presented solution also addresses the IoT systems’ scalability challenges, the processing power and storage issues of the IoT edge devices in the blockchain network. A set of built-in queries is leveraged by smart-contracts technology to define the rules and conditions. The paper validates the performance of the proposed model with practical implementation by measuring performance metrics such as transaction throughput and latency, resource consumption, and network use. The results show that the proposed platform with the HLF implementation is promising for the security of resource-constrained IoT devices and is scalable for deployment in various IoT scenarios.
|
## Hyperledger Fabric Blockchain for Securing the Edge Internet of Things
**Houshyar Honar Pajooh 1,*, Mohammad Rashid 1, Fakhrul Alam 1 and Serge Demidenko 1,2**
1 Department of Mechanical and Electrical Engineering, Massey University, Auckland 0632, New Zealand;
[email protected] (M.R.); [email protected] (F.A.)
2 School of Science and Technology, Sunway University, Subang Jaya 47500, Malaysia;
[email protected]
* Correspondence: [email protected]; Tel.: +64-21440684
**Abstract: Providing security and privacy to the Internet of Things (IoT) networks while achieving**
it with minimum performance requirements is an open research challenge. Blockchain technology,
as a distributed and decentralized ledger, is a potential solution to tackle the limitations of the
current peer-to-peer IoT networks. This paper presents the development of an integrated IoT system
implementing the permissioned blockchain Hyperledger Fabric (HLF) to secure the edge computing
devices by employing a local authentication process. In addition, the proposed model provides
traceability for the data generated by the IoT devices. The presented solution also addresses the
IoT systems’ scalability challenges, the processing power and storage issues of the IoT edge devices
in the blockchain network. A set of built-in queries is leveraged by smart-contracts technology to
define the rules and conditions. The paper validates the performance of the proposed model with
practical implementation by measuring performance metrics such as transaction throughput and
latency, resource consumption, and network use. The results show that the proposed platform with
the HLF implementation is promising for the security of resource-constrained IoT devices and is
scalable for deployment in various IoT scenarios.
**Keywords: Internet of Things; hyperledger fabric; smart contract; security and privacy; data prove-**
nance; edge computing
**1. Introduction**
Internet of Things (IoT) [1] technologies are associated with the significant growth
of generated, collected and used data. At the same time, with the rapid involvement of
distributed heterogeneous devices, various aspects of traditional IoT applications and
platforms face challenges in security, privacy, data integrity, and robustness [2]. The
blockchain has emerged as an innovative engine that can facilitate reliable and transparent
data transactions. It has been widely applied to traditional sectors, including finance,
commerce, industry, and logistics.
Most IoT platforms and applications depend on centralized architecture by connecting
to cloud servers via gateways. Unfortunately, this leads to severe security and privacy
risks. Wireless communication between sensor nodes and IoT gateways might also be very
susceptible to attack. Cloud servers are potential targets for Distributed Denial-of-Service
(DDoS) attacks resulting in significant infrastructure collapse [3]. Moreover, the centralized
server solution introduces a single point of failure risk to the entire system.
Networked devices in an IoT system are heterogeneous in terms of their security
requirements and resource availability. Resource-constrained devices operate in an open
environment that increases the risks of physical and wireless accessibility by adversaries.
RSA (Rivest–Shamir–Adleman) [4] and ECC (Elliptic Curve Cryptography) [5] are the two
most popular public-key cryptosystems. However, computing RSA is time-consuming due to
the modular exponentiation involved. Similarly, point multiplication in ECC relies on
modular multiplication, which is computation-intensive thus resulting in a prolonged
operation. The computational complexity of conventional security techniques such as SSL
(Secure Sockets Layer) [6] and its successor, TLS (Transport Layer Security), make them not
suitable for IoT devices. The SSL/TLS approach supported by CRL (Certificate Revocation
List) creates scalability challenges for IoT applications. Homomorphic encryption [7] is
very useful in protecting the privacy of users. However, the homomorphic encryption
may be slow thus requiring special implementation techniques to speed up the execution.
The ideal solution must provide data security and integrity while handling vast traffic
and being attack-resistant. Furthermore, lightweight, scalable, transparent access control
are to be associated with such a model. Blockchain is regarded as a promising solution
to provide decentralized accountability and an immutable approach that can be used to
overcome the aforementioned problems in heterogeneous scenarios [1]. It offers great
security features while providing high transparency and enhancing efficiency. Meanwhile,
it can also improve data traceability and eliminate third-party intervention at a lower cost.
Thanks to the development of edge computing platforms, data generated by the IoT
devices can be transferred to the edge gateways for further process and analysis. At the
same time, cloud-centric services are not suitable for the edge computing applications
due to the limited network bandwidth, security, and data privacy. When applied to
the edge computing systems, the blockchain provides a feasible solution to protect IoT
data from being tampered [8]. It is a general distributed, decentralized, and peer-to-peer
system that guarantees data integrity and consistency within existing industrial domains.
Ethereum [9] is a common blockchain service showing intrinsic characteristics of distributed
applications (dApps) over the blockchain network such as decentralization, anonymity, and
auditability. However, common blockchain platforms (e.g., Ethereum) require tremendous
computational power, making the integration of IoT nodes challenging.
The blockchain is an emerging technology playing a vital role in storing information
and securing IoT systems and devices [10]. Although the blockchain is a promising application to solve IoT privacy and security challenges of current centralized systems, lots
of IoT devices are constrained to perform complex operations due to their limited power
of CPU, restricted data storage, and constrained battery resources. Furthermore, existing
consensus algorithms in blockchain-based networks such as the Proof of Work (PoW) [11]
cannot be implemented on devices with limited computing resources. The mining process
described as taking decisions by all the nodes in peer-to-peer networks, requires considerable computational capabilities. Smart contracts present another promising application of
blockchain technology that can distributively enforce various access control policies in IoT
applications in the real-world scenarios. The data provenance plays a decisive role in the
security and privacy of IoT systems. Additionally, the integrity of all generated data by IoT
devices can be ensured by private blockchain technology.
In this paper, a blockchain-enabled edge computing approach is proposed and implemented for the IoT network with an open-source Hyperledger Fabric (HLF) blockchain
platform. HLF is the best fit for this study because of its lower processing complexity
(fewer number of transactions). Moreover, the transactions there can be performed in
parallel while using various validators. Additionally, the processing is made more efficient by employing the fast RAFT [12] consensus algorithm. Finally, it provides a channel
mechanism for private communication and private data exchange between members of a
consortium. Moreover, all the HLF programs run in the docker [13] containers providing a
sandbox environment that separates the application program from the physical resources
and isolates the containers from each other to ensure the application’s security. A layerwise security architecture is designed according to the capabilities of different nodes and
functionality to fit the scalable IoT applications. The infrastructure includes Base Stations
(BS), Cluster Heads (CH), and IoT devices facilitating access control policies and management. Mutual authentication and authorization schemes for IoT devices are proposed
and implemented with the aim to ensure the security of the interconnected devices in the
scalable IoT platform. The local authentication is used for ordinary IoT devices connected
to CHs (edge IoT gateways), while the blockchain service provides the authentication
of the IoT edge gateways i.e., the edge IoTs. The practical end-to-end lightweight HLF
prototype for IoT applications is deployed on the embedded edge IoT hardware built upon
the ARM64 CPU-based Raspberry Pi to validate the feasibility of the proposed design. HLF
docker images are customized to fit with the IoT gateways. The Fabric client facilitates
the request and query of transactions through invoking ChainCodes (CC) in IoT gateways.
Off-chain data storage and blockchain distributed data storage are employed to support
the architecture data traceability. HLF is implemented to act as a medium for multiple
device interactions while exchanging information. Moreover, the blockchain maintains a
global computation state. The distributed data storage is secure, and it has a large capacity.
The data processing confidentiality and efficiency are guaranteed by implementing external off-chain computations. An HLF blockchain middle-ware module embedded in the
IoT gateways ensures secure data transactions for the IoT distributed applications. The
performance metrics such as throughput, latency, resource consumption and network use
of the proposed model are evaluated using the edge IoT devices and x86-64 commodity
virtual hardware.
The following distinct contributions are made in this work:
1. A novel architecture for the security and privacy of IoT edge computing using a permissioned blockchain is proposed. The proposed architecture considers 5G-enabled
IoT technologies for node communications. The architecture is suitable for real-world
IoT systems due to the developed ChainCodes that facilitate storage and retrieval of
data in a tamper-proof blockchain system. Moreover, blockchain-based data traceability for 5G-enabled edge computing using the HLF is designed to provide auditability
of the IoT metadata through a developed NodeJS client library.
2. The adaptability of the Hyperledger Fabric for ARM architecture of the edge IoT
devices is improved by modifying official docker images from the source as there are
no official or public images of HLF to support the 64-bit ARMv8 architecture.
3. A lightweight mutual authentication and authorization model is designed to facilitate
a secure and privacy-preserving framework for IoT edge that protects the sensor nodes’
sensitive data through a permissioned fabric platform. Furthermore, it provides trust
for the IoT sensors, edge nodes, and base stations by the private blockchain. This
is achieved by using the edge nodes to record the IoT data in an immutable and
verifiable ledger to guarantee metadata traceability and auditability.
4. Performance characteristics of the proposed architecture blockchain in terms of
throughput, transaction latency, computational resources, network use, and communication costs are experimentally evaluated in two network setups.
The rest of the paper is organized as follows. In Section 2, a review of the related
works is presented. Section 3 presents the main characteristics of blockchain technology.
Section 4 describes the proposed HLF model implementation and elaborates on the details
of the system design. In Section 5 the profiling and analysis are presented, including results
from real-life IoT applications. Finally, Section 6 presents the conclusion and directions for
future work.
**2. Related Work**
_2.1. IoT Overview_
In general terms, IoT is a collection of physical devices, computers, servers, and
small objects embedded within a network system [14]. Some of the most prominent IoT
application areas are smart homes [15] and smart cities [16], vehicular systems [17], and
smart healthcare networks [18]. All these systems are highly distributed. The evolution
from the conventional cloud-centric architecture has been accelerated by the emergence of
the edge computing technologies [19,20]. A unified standard classification is defined to
ensure the consistency of the development and structures of IoT. It includes four layers:
service layer, platform layer, network layer, and device layer [21]. A comprehensive
review of security attacks towards Wireless Sensor Networks (WSNs) and IoT is presented
-----
_Sensors 2021, 21, 359_ 4 of 29
in [22]. The study also provides the techniques for prevention, detection, and mitigation of
those attacks.
IoT systems normally include many interconnected IoT devices generating a massive
amount of data. Meanwhile, IoT devices normally have limited capabilities in terms of the
CPU processing performance, memory capacity, and battery energy volume. Therefore,
they can be characterized as having restricted ability to resist various cyber-attacks. This
leads to issues associated with insufficient security and potential compromising of privacy.
New technologies have been developed to address the IoT’s decentralization challenges
with the blockchain being among the most promising of them.
_2.2. IoT Blockchain_
Most IoT applications are prone to problems such as system failure and data leakage.
Blockchain technology can mitigate these problems by providing better security and
scalability for IoT applications. However, there are many challenges associated with the
actual implementation of the approach. They are associated with tasks distribution between
IoT devices as well as with the limited capabilities of the IoT devices such as computational
performance, memory capacity, power resources. Numerous research works on blockchain
technology focus on coping with these challenges to adopt blockchain in IoT [23–25].
Many distributed and decentralized IoT systems have adopted blockchain technology
to provide trust [26], security [27], data management [28], fault-tolerance [29], as well
as peer-to-peer and interoperable transactions [30]. The application scope of blockchain
platforms can be divided into three main types depending on the way they manage user
credentials: (i) public or permissionless blockchain, (ii) private or permissioned blockchain,
and (iii) consortium blockchain. Blockchains that anonymous nodes can join, read data,
and participate in transactions with equivalent status are public blockchains. In contrast,
private or consortium blockchains are based on permissions and different types of nodes.
Some nodes need to be authenticated to perform specific actions [31].
Scalability is the major challenge in the integration of blockchain and IoT systems.
Many research works have addressed the scalability issues within Bitcoin’s architecture [32].
Smart contracts are promising solutions to facilitate the integration of distributed IoT systems and blockchain technology. However, their performance and scalability are directly
linked to overall blockchain system performance [33]. Multiple IoT applications recently
adopted blockchain for digital payment, smart contract services [34], and data storage [35].
Nonetheless, continuous developments have shown that new technologies can bring significantly higher scalability and degree of performance to next-generation blockchain systems.
The layer-based IoT blockchain frameworks are proposed in the literature to cope
with the scalability challenges in IoT systems while providing higher performance and
security. The layer-wised structure is a promising solution to smart cities’ security by
integrating smart devices and blockchain technology [36]. A hybrid-network architecture is
seen to leverage the strength of emerging Software Defined Network (SDN) and blockchain
technologies in a multi-layer platform [37]. Layer-based blockchain can potentially address
the IoT systems’ challenges such as response time and resource consumption [38]. This
approach can further facilitate the integration of blockchain technology in IoT systems by
tackling the complexity of blockchain implementation in the layer-based model [39].
Security challenges associated with the cyber-physical systems (CPSs) of smart cities
are reviewed in [40] and adoption of distributed anomaly detection systems by CPSs of
smart cities is proposed. A permissioned private blockchain-based solution in the context
of the Industrial IoT (IIoT) is proposed in [41] to secure the encrypted image. This approach
stores the cryptographic pixel values of an image on the blockchain, ensuring the image
data privacy and security. The state of the art in industrial automation is presented in [42]
to provide a better understanding of the enabling technologies, potential advantages and
challenges of Industry 4.0 and IIoT. Also, it covers the cyber-security related needs of IIoT
users and services.
_2.3. Blockchain for Mobile Edge Computing_
Several pieces of research have considered the integration of blockchain technology
and edge computing layer over the past few years. Multiple works have focused on enabling secure and efficient distributed edge computing [43,44]. Such integration targets
security enhancement. It also uses blockchain technology to develop access control policies
for various applications at the edge [45–47]. Other works [48,49] investigated the edge
resource management by implementing the blockchain. Distributed robotic system automation was also considered [50]. The integration of blockchain significantly benefits the
security of edge computing [51]. Permission blockchain and Distributed Ledger Technology
(DLT) embedded with identity management bring benefits to address many challenges by
adding a resilience layer while network traffic integrity is guaranteed against malicious
diversion and traffic manipulation. Network resource manipulation and fraudulent use
of shared resources are avoidable through the blockchain-enabled resource management.
Moreover, the blockchain provides a higher degree of security for the automotive sector [48]
and the healthcare sector at the edge [52]. Blockchain is applied to provide a decentralized
authentication model in edge and IoT environments [53]. The blockchain application is
further explored to enhance the privacy, integrity, and authentication between IoT, mobile
edge computing, and cloud in telehealth systems connected with 5G and IoT [54]. An
HLF-based blockchain architecture is proposed in [55] for healthcare monitoring applications. The authors in [56] highlighted the importance and benefits of fog computing for IoT
networks. The study also provides a comprehensive investigation of hardware security to
fog devices through an enriched literature review. A model based on HLF blockchain is
proposed in [57] as a service to answer IoT systems’ specific requirements, including low
hardware, storage, and networking capabilities.
_2.4. Blockchain for Data Sharing and Traceability_
Digital signatures and Message Authentication Code (MAC) are two standard methods to identify data lineage and origin. However, these cryptographic techniques are not
able to provide comprehensive data provenance [58]. Furthermore, the key management
in a heterogeneous IoT network with data sourced from different nodes is complicated.
Although logging-based methods can facilitate data transmission and system events monitoring, they cannot efficiently track data in distributed IoT systems [59]. Blockchain
technology has been widely considered for data provenance within a distributed system
such as IoT. Data operations are embedded in the blockchain transactions to provide the
data provenance [60]. ProvChain [61] is a distributed and decentralized blockchain-based
data provenance architecture to provide verifiability and data integrity in cloud environment. A blockchain network records the data operations as the provenance of data in the
blockchain transactions while the system stores the data record in a local ledger. Smart
contracts can automate the blockchain-enabled provenance systems without the off-chain
verification [62]. A function for tracing the data deviation is designed into smart contracts
with built-in access rules to protect data privacy in a distributed ledger [63]. SmartProvenance [64] is the blockchain-based distributed data provenance system that facilitates
the verification of provenance records and provides trustworthy data and provenance
collection using smart contracts and the Open Provenance Model (OPM). The blockchain is
proposed to ensure secure and trustworthy industrial operations [65]. The complexity of
blockchain implementation causes various limitations in deploying the aforementioned
provenance techniques in IoT systems. Existing works on data provenance are computationally complex and pose a hardware cost. Therefore, these methods are not feasible
for resource-constrained IoT systems with limited CPU performance, memory size, and
power capacity.
Despite the benefits that blockchain brings to IoT applications, there are resource
constraints and scalability challenges associated with the integration [2,66,67]. Generally,
the blockchain demands substantial computational power for the mining process in Proof
of Work (PoW), low latency, and high bandwidth. IoT devices with low processing power
are not capable of performing the blockchain mining process. Data encryption happens frequently in blockchain systems, and this computationally intensive processing drains the low power capacity of IoT devices. The size of the blockchain
ledger increases continuously while the storage capacity of most IoT devices is low. Storing
a copy of the full blockchain ledger for IoT devices is not feasible as it requires a large
memory capacity. For Bitcoin, the blockchain storage size stands at over 200 GByte, while for Ethereum it is around 1.5 TByte. Generating new blocks and reaching agreement in
the blockchain require the nodes to exchange information through the consensus process
frequently. The consensus process and information exchange need high bandwidth and
low latency. However, the bandwidth of IoT devices is normally strictly limited.
One common concern about the blockchain system is associated with the need for
achieving high scalability in a blockchain network [68]. A large blockchain size also creates a centralization risk, since fewer nodes can afford to store the full ledger. Most IoT systems have a very high number of
interconnected devices. In addition, IoT networks frequently change to suit different
applications by adding or removing IoT devices. Therefore, a solution is required to
address the IoT system scalability challenges. Moreover, the limitations in the processing
power and storage capacity of IoT devices in the blockchain network are also to be resolved.
Addressing these challenges is the main focus of this paper.
**3. Blockchain Overview**
Satoshi Nakamoto first implemented a decentralized digital currency in 2009 [69]. The
blockchain can be described as a distributed ledger consisting of immutable and verifiable
transactions. All network participants share a replica of the ledger in the network. Integrity,
immutability, transparency, non-repudiation and equal rights are the main properties of
the blockchain systems.
Bitcoin [70] is known as the most popular blockchain platform. Bitcoin combines public-key cryptography for managing and tracking coin ownership with PoW as its consensus algorithm. The consensus algorithm is executed whenever a new block is appended to the previous block, to guarantee the reliability and validity of all transactions. The nodes reach consensus when at least 51% of the nodes are truthful.
IOTA [71] is a distributed ledger designed for IoT to facilitate value and data exchange. Machine-to-machine communication is enabled by the Tangle protocol, which can support micro-payment systems. Additionally, it establishes the IOTA network, a set of Tangle graphs. This set constitutes the ledger that stores transactions submitted by the network nodes. The block validation process leads to a decision and the addition of a new block to the ledger.
_3.1. Consensus Algorithm_
Li et al. [72] reviewed the most common consensus algorithms in the existing
blockchain systems. These consensus mechanisms are PoW, Proof of Stake (PoS), Practical
Byzantine Fault Tolerance (PBFT), Delegated Proof of Stake (DPoS), Proof of Authority
(PoA), Proof of Elapsed Time (PoET), and Proof of Bandwidth (PoB).
PoW is the most widely deployed consensus algorithm [73] and was first introduced by Bitcoin. The nodes use computational power to compete in finding a nonce value; this process is called mining. The PoW difficulty level is adjusted as the number of participants increases, in order to keep the average block processing time stable. Higher difficulty results in fewer blocks being produced in a given time. No single user should hold more than 50% of the processing power; otherwise that user could control the system. A minimal sketch of the nonce search is given below.
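The nonce search can be made concrete with a short, self-contained Go sketch; the header content and difficulty value below are illustrative and not tied to any particular blockchain:

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
	"math/bits"
)

// countLeadingZeroBits returns the number of leading zero bits in a hash.
func countLeadingZeroBits(h [32]byte) int {
	n := 0
	for i := 0; i < 32; i += 8 {
		word := binary.BigEndian.Uint64(h[i : i+8])
		z := bits.LeadingZeros64(word)
		n += z
		if z < 64 {
			break
		}
	}
	return n
}

// mine searches for a nonce such that SHA-256(header || nonce)
// has at least `difficulty` leading zero bits.
func mine(header []byte, difficulty int) (uint64, [32]byte) {
	var buf [8]byte
	for nonce := uint64(0); ; nonce++ {
		binary.BigEndian.PutUint64(buf[:], nonce)
		h := sha256.Sum256(append(header, buf[:]...))
		if countLeadingZeroBits(h) >= difficulty {
			return nonce, h
		}
	}
}

func main() {
	nonce, h := mine([]byte("block header: prev-hash, merkle root, timestamp"), 20)
	fmt.Printf("found nonce %d, hash %x\n", nonce, h)
}
```

Raising the difficulty by one bit roughly doubles the expected number of hashes, which is how the average block processing time is managed.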
PoS [74] was introduced to address the vast energy consumption issues associated
with the competing process in PoW. No competition is employed in the PoS algorithm.
The network selects a node as a validator (a so-called transaction validator node). The node is chosen in advance and undergoes a difficulty-adjustment process similar to that of PoW. If the validator does not validate the transaction, the network
sets the next node as a validator, and the process continues until any node validates the
transaction. PoS deploys the CASPER protocol to perform the consensus process.
The PoA [74] algorithm is based on a chosen set of trusted nodes (known as Authorities). This consensus algorithm is a Byzantine Fault Tolerant (BFT) variation. The chain becomes part of the permanent record when a majority of authority nodes (for example, at least N/2 + 1) sign off on it. This procedure facilitates the creation of a permissioned chain and involves a lighter exchange of messages.
Hyperledger [75], introduced in 2016 by the Linux Foundation, is the most successful
and the most popular permissioned blockchain in the industrial and IoT domains. Permissioned blockchains designed for enterprise ecosystems deploy the RAFT consensus protocol [12], which is a better fit because it is more straightforward and less resource-consuming. Figure 1 shows the process of the RAFT consensus protocol and block creation
considered in this study.
Kafka [76] and RAFT are the same type of consensus, using Crash Fault Tolerance (CFT) for the ordering service implementation; they can tolerate the failure of a minority of nodes (fewer than N/2). RAFT follows a “leader and follower” approach, in which a leader node is dynamically elected among the ordering nodes in a channel (this collection of nodes is known as the “consenter set”) and the followers replicate its decisions. However, a RAFT ordering service is easier to deploy and more manageable than a Kafka-based ordering service, from configuration to processing speed. Additionally, the RAFT configuration originates directly from the orderer (unlike the Kafka case, which cannot be configured directly from the orderer services and requires a Zookeeper cluster to enable state machine replication). This design allows different organizations to contribute nodes to a more distributed ordering service.
**Figure 1. Overview of the RAFT consensus protocol and block creation.**
The process is initiated by sending the transaction proposals to the blockchain peers.
A transaction proposal consists of various values and IoT metadata as well as other blockchain-related content. The client application is responsible for starting the process and then broadcasting the transaction to the peers of each blockchain member organization. Once the peers receive the transactions, they activate the endorsement process by executing the ChainCode that implements the authentication and authorization mechanism. The transaction is then endorsed and returned as a signed transaction. When all peers have endorsed the transaction according to the endorsement policy, it is sent to the ordering service, where consensus is reached (RAFT in our case). The last step encompasses creating the final block and committing it to the ledger. A minimal simulation of this flow is sketched below.
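The following toy Go program simulates this endorse–order–commit flow; it deliberately does not use the real Fabric APIs, and the peer names, payload, and "all peers must endorse" policy are illustrative assumptions:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// Transaction carries a payload and the endorsements collected for it.
type Transaction struct {
	Payload      string
	Endorsements []string // names of peers that signed the proposal
}

type Peer struct{ Name string }

// Endorse simulates ChainCode execution and signing of the result.
func (p Peer) Endorse(tx *Transaction) {
	tx.Endorsements = append(tx.Endorsements, p.Name)
}

// satisfiesPolicy checks a simple "all peers must endorse" policy.
func satisfiesPolicy(tx *Transaction, peers []Peer) bool {
	return len(tx.Endorsements) == len(peers)
}

func main() {
	peers := []Peer{{"peer0.org1"}, {"peer0.org2"}}
	txs := []*Transaction{{Payload: "set deviceID=42 checksum=abc"}}

	// Endorsement phase: every peer simulates and signs the proposal.
	for _, tx := range txs {
		for _, p := range peers {
			p.Endorse(tx)
		}
	}

	// Ordering phase: endorsed transactions are batched into a block.
	var block []string
	for _, tx := range txs {
		if satisfiesPolicy(tx, peers) {
			block = append(block, tx.Payload)
		}
	}
	blockHash := sha256.Sum256([]byte(fmt.Sprint(block)))

	// Commit phase: each peer appends the block to its ledger copy.
	for _, p := range peers {
		fmt.Printf("%s commits block %x (%d tx)\n", p.Name, blockHash[:4], len(block))
	}
}
```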
_3.2. Smart Contracts_
Smart contracts are executable distributed programs that facilitate, execute, and enforce the terms of an agreement; they run on a decentralized, tamper-proof consensus platform and are typically self-enforcing through automated execution [77]. Smart contracts are simply executable scripts that are stored on the blockchain at a specific address.
Smart contracts are triggered by transactions to execute and perform operations based
on recorded instructions. They are installed and instantiated on blockchain participants.
HLF is programmable by a construct called ChainCode (CC). Conceptually, CC is the same
as the smart contract on other distributed ledger technologies. CC sits next to the ledger.
Participants of the network can execute CC in the context of a transaction that is recorded
in the ledger. Automation of business processes through CC leads to higher efficiency,
transparency, and greater trust among the participants. Smart contracts allow decision automation, making them suitable for IoT applications. A minimal ChainCode sketch follows.
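As an illustration, below is a minimal set/get ChainCode sketch written against the Go shim API of HLF 1.4 (the version used in this study); the type and key names are our own:

```go
package main

import (
	"fmt"

	"github.com/hyperledger/fabric/core/chaincode/shim"
	pb "github.com/hyperledger/fabric/protos/peer"
)

// KVChaincode is a minimal set/get smart contract (ChainCode).
type KVChaincode struct{}

func (c *KVChaincode) Init(stub shim.ChaincodeStubInterface) pb.Response {
	return shim.Success(nil)
}

func (c *KVChaincode) Invoke(stub shim.ChaincodeStubInterface) pb.Response {
	fn, args := stub.GetFunctionAndParameters()
	switch fn {
	case "set": // args: key, value
		if len(args) != 2 {
			return shim.Error("set expects key and value")
		}
		if err := stub.PutState(args[0], []byte(args[1])); err != nil {
			return shim.Error(err.Error())
		}
		return shim.Success(nil)
	case "get": // args: key
		if len(args) != 1 {
			return shim.Error("get expects key")
		}
		val, err := stub.GetState(args[0])
		if err != nil {
			return shim.Error(err.Error())
		}
		return shim.Success(val)
	}
	return shim.Error(fmt.Sprintf("unknown function %q", fn))
}

func main() {
	if err := shim.Start(new(KVChaincode)); err != nil {
		fmt.Printf("error starting chaincode: %s\n", err)
	}
}
```

A client application would then invoke, for example, the `set` function with a key and a value as transaction arguments.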
**4. Hyperledger Fabric IoT System Model**
_4.1. Overall Design_
The network model proposed in this work is based on blockchain technology as
an individual application integrated with edge computing to provide security, identity
management, and authentication. This study builds on the model introduced in our
previous work [78] using a multi-layer platform approach and the Lightweight Hyperledger Blockchain technology along with smart contracts to enhance the performance
of the blockchain-IoT combination. The whole network is divided into several layers
and sub-networks. The devices in each layer have different computational capabilities
and energy storage capacity. As a result, different security approaches are proposed for
individual layers based on the blockchain. However, the blockchain implementation is
modified to suit the devices of each particular layer. These layers are Base Station (BS)
nodes, Cluster Head (CH) nodes (edge layer), and IoT devices. In the current work, we
propose an additional layer—Off-Chain Storage servers—to enhance the data storage of IoT
devices. Moreover, it improves system performance, since growth in the shared ledger size otherwise degrades performance. The Hyperledger Blockchain platform is considered a potential solution to cope with scalability challenges, while distributed programs are defined to facilitate various tasks and transactions [79]. The blockchain implemented in the embedded edge gateways provides reliable connectivity, given their sufficient power and computational resources.
the conceptual framework of the proposed IoT Blockchain platform. The presented model
encompasses interconnected IoT devices, Edge IoT nodes (CHs), client application nodes,
external data storage, and IoT servers orchestrated in the peer-to-peer blockchain-based
network to form a multi-layer blockchain model.
_4.2. Multi-Layer IoT Blockchain Network_
4.2.1. Layer-1
A cluster of IoT devices is collected under each CH, a service agent for that cluster. This
layer is the external service interface, in which IoT devices collect sensing data, perform
local computing, and send results for storage and further analysis. CH nodes register the
identity of each connected IoT device by implementing a smart contract. Each IoT device
has a unique address within the IoT system. Each IoT node exists only in one cluster. The
nodes in this layer have limited power, computational performance, and storage resources.
4.2.2. Layer-2
Cluster heads at Layer-2 are responsible for data routing, security management (such
as local authentication and authorization procedures), and network management. Beyond
the aforementioned responsibilities, the IoT blockchain service is running in this layer to
provide blockchain technology services and form a distributed system. The IoT devices’
identity management, communications, and consensus processes are run in this layer
within the peer-to-peer blockchain network. The blockchain also handles the shared
distributed ledger across all participants. Furthermore, this layer handles consensus
algorithms and smart contract services to form data consistency and traceability.
**Figure 2. Conceptual framework of the integrated IoT blockchain platform.**
A client application node across the network can be granted access to invoke various
blockchain behaviors. Various ledger modifications are enabled by running smart contracts
installed and instantiated in all peer nodes or selected peer nodes. The CH nodes running
local authentication and authorization mechanism are directly connected to BS nodes.
ChainCodes provide deployment, query, and invocation services. The REST API server can act as an interface for the client application to modify network-related operations and behaviors. Furthermore, the application client performs transaction submission to
the blockchain. Therefore, various services can be defined within the blockchain network,
including user enrollment and authentication and IoT device registrations. The IoT device
authentication and authorization need to be carried out before transaction submission. The
local authentication and authorization process manages this procedure. Consequently, a
registered participant can sign a transaction using its private keys.
Data queries are enabled through CC, which is executable logic hosted by peer nodes. Additionally, it facilitates appending to and reading from data stored in the ledger. CC and related functionalities are mirrored across all peer nodes. CC can be deployed to a specific number of peers to address scalability issues; parallel execution can thus be supported, which results in an overall increase in system performance. The
client application performs several operations, including storing the data checksum, data
pointers, and data ownership in the blockchain. The actual data is stored in an external
data storage, which is off-chain.
4.2.3. Layer-3
In general, this layer is consistent with the current centralized cellular network, encompassing Base Station nodes, while the cloud server manages the processing requests and
data generated from various devices. Powerful devices in this layer can choose to use an asymmetric encryption algorithm for data transmission. Layer-3 provides connectivity and wide-area networking capabilities for the edge nodes. The network in Layer-3 is decentralized, and BS units are distributed. The nodes trust the BSs in the system while they can access public networks.
4.2.4. Layer-4
This layer is designed for storing sensed data by the IoT devices as well as enabling
big data analytic applications for further analysis. It is generally done off-chain. It stores
the actual data, while the blockchain ledger data includes data checksum, pointers, and
data ownership. The blockchain world state is stored in a database such as LevelDB or
CouchDB. The stored data can be queried and traced by a file ID in the blockchain. This
method provides data provenance and data consistency between the edge nodes.
_4.3. Local Authentication and Authorization of IoT Devices in Layer-1_
The identity of IoT devices is registered and stored in the shared ledger. Each IoT device
can join only one cluster. The registration request is sent to CH. It includes the required information such as IoT node ID, cluster identity, and timestamp. CH runs the smart contract
in the local blockchain to perform the IoT device registration. The mutual authentication
model is designed to provide the security of IoT devices with limited resources. The role of
CH is to register the IoT devices as well as locally authenticate and authorize IoT entities.
It also interacts with other cluster heads to form secure communication between entities through the implemented blockchain network. A sketch of such a registration function is given below.
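Below is a hypothetical `registerDevice` function, extending the ChainCode skeleton of Section 3.2 (it additionally needs the `encoding/json` import); the key prefix and field names are illustrative assumptions:

```go
// Hypothetical additions to the KVChaincode file from Section 3.2;
// requires the extra import "encoding/json".

// DeviceRecord is the identity record a CH stores for an IoT device.
type DeviceRecord struct {
	NodeID    string `json:"nodeId"`
	ClusterID string `json:"clusterId"`
	Timestamp int64  `json:"timestamp"`
}

// registerDevice stores the record under an illustrative "device~" key.
func registerDevice(stub shim.ChaincodeStubInterface, args []string) pb.Response {
	if len(args) != 2 {
		return shim.Error("registerDevice expects nodeID and clusterID")
	}
	ts, err := stub.GetTxTimestamp() // transaction time, not wall-clock time
	if err != nil {
		return shim.Error(err.Error())
	}
	rec := DeviceRecord{NodeID: args[0], ClusterID: args[1], Timestamp: ts.Seconds}
	data, err := json.Marshal(rec)
	if err != nil {
		return shim.Error(err.Error())
	}
	if err := stub.PutState("device~"+rec.NodeID, data); err != nil {
		return shim.Error(err.Error())
	}
	return shim.Success(data)
}
```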
The entire process is orchestrated in a smart contract to form an Authentication and
Authorization ChainCode. The CC is installed and instantiated by the blockchain peer to
perform the IoT blockchain local authentication procedure. This process is illustrated in
Figure 3.
Authentication of the IoT devices consists of a few steps: the discovery of devices, key
exchange, authentication, and data encryption. These procedures consider two network
entities: the CLIENT (IoT sensor nodes) and SERVER (an edge computing gateway or
intermediary node). It is noteworthy that the authentication of the IoT devices implements a key exchange using Diffie-Hellman Ephemeral (DHE) to derive the session (secret) keys. The following six steps describe the local authentication of IoT devices.
Step 1 The first step starts with the CLIENT sending a package to the SERVER to establish
a “connection”. For visualization purposes, this package contains the “HELLO
CLIENT” character string.
Step 2 The SERVER answers the CLIENT with the “HELLO SERVER” string.
With that, the connection is established. For better performance, it is suggested to
use chain bits for establishing the connection.
Step 3 The CLIENT generates a pair of asymmetric keys consisting of the public key (K_C^pub) and the private key (K_C^priv). For the key generation, an Initialization Vector (IV) with random values is required, guaranteeing the distinction between the generated keys. Then, a packet is sent to the SERVER containing: the CLIENT’s public key (K_C^pub); a “challenge-response” value generated by the CLIENT; and a character string Fdr defining the “challenge-response”.
Step 4 The SERVER generates a pair of asymmetric keys: the public key (K_S^pub) and the private key (K_S^priv). The SERVER then receives the CLIENT’s package and responds with another package containing its public key (K_S^pub) and the response to the “challenge-response” calculated with the Fdr function. Fdr is a predefined mathematical function, such as addition, subtraction, or multiplication, applied to the received IV value.
Step 5 The CLIENT calculates the Diffie-Hellman values. A new package consisting of the obtained DH value (DH_C), the parameters g and p used in the calculation, a new IV value (iv_C), and the IV value obtained from the SERVER applied
to the function Fdr (F(iv_S)) will be sent to the SERVER. Moreover, a digest (hash) of all these data is computed, encrypted with the CLIENT’s private key (K_C^priv), and included in the package. The whole package is then encrypted with the public key of the SERVER (K_S^pub). The encryption guarantees data confidentiality.
Step 6 The SERVER performs the calculation of the Diffie-Hellman values from the
information coming from the CLIENT. The SERVER then performs the same
actions as done by the CLIENT in step 5. It sends the resulting package to the
CLIENT at the end of the process. With that, both the parties have a common key:
the session key (DHK).
After exchanging the keys, the client and the server can exchange data encrypted with the symmetric key (DHK), which can last for the session. A minimal sketch of the key exchange is given below.
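The key-agreement algebra behind steps 3–6 can be sketched in a few lines of Go; note that the prime generated here is a demo-sized stand-in, and a real deployment must use standardized DHE parameters (e.g., an RFC 3526 MODP group):

```go
package main

import (
	"crypto/rand"
	"crypto/sha256"
	"fmt"
	"math/big"
)

func main() {
	// Toy group parameters for illustration only: a demo-sized prime,
	// not a standardized, security-reviewed DH group.
	p, err := rand.Prime(rand.Reader, 512)
	if err != nil {
		panic(err)
	}
	g := big.NewInt(2)

	// Each side draws an ephemeral private exponent.
	a, _ := rand.Int(rand.Reader, p) // CLIENT secret
	b, _ := rand.Int(rand.Reader, p) // SERVER secret

	// Public values DH_C = g^a mod p and DH_S = g^b mod p are
	// exchanged together with g and p (step 5).
	dhC := new(big.Int).Exp(g, a, p)
	dhS := new(big.Int).Exp(g, b, p)

	// Both sides arrive at the same shared value (step 6):
	// (g^b)^a mod p == (g^a)^b mod p.
	sClient := new(big.Int).Exp(dhS, a, p)
	sServer := new(big.Int).Exp(dhC, b, p)

	// The session key DHK is derived by hashing the shared value.
	dhk := sha256.Sum256(sClient.Bytes())
	fmt.Printf("shared secrets equal: %v, DHK = %x...\n",
		sClient.Cmp(sServer) == 0, dhk[:8])
}
```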
**Figure 3. Local authentication flow.**
_4.4. Secured IoT Blockchain for Edge Computing Nodes in Layer-2_
The proposed model as illustrated in Figure 4 encompasses the blockchain as part of
the individual applications of the edge computing layer to provide security, data traceability,
identity management, and privacy. A blockchain orchestrates a decentralized database that
allows applications to trace the history of appended transactions to a shared ledger.
**Figure 4. Blockchain-based edge services.**
The main component of the proposed model in this layer is HLF blockchain framework
running on the docker containers and integrated client library. The storage component is
designed in a separate layer to store the actual collected data off-chain. The client library
initiates the operations and communicates with other elements. Seamless provenance of stored metadata is enabled, with data checksums recorded in a tamper-proof blockchain ledger.
4.4.1. Nodes in IoT Edge Hyperledger
There are three distinct types of nodes in HLF: Peer, Orderer, and Client. The client is
the node that applications use for initiating transactions. Client nodes issue transactions to the peers, collect proposal responses, and send blocks for ordering.
Peers are the nodes that interact with the blockchain ledger and endorse transactions
through running CC. Peers are the nodes that keep the ledger in-sync across the network.
Orderers are the communication backbone for the blockchain network. They are
responsible for the distribution of transactions. Furthermore, the orderer nodes are accountable for the validity and verification of responses. Moreover, the orderer nodes form new blocks from grouped transactions when consensus is achieved.
Peer nodes update the ledger after the blocks are generated. Members can participate in multiple Hyperledger Blockchain networks. Transactions in each network are isolated, which is made possible by way of what is referred to as a channel. Peers connect to channels and receive all the transactions that are broadcast on those channels.
The transaction flow is presented in Figure 5.
There are two particular types of peer nodes: Anchor and Endorser. These peers need
to be configured with appropriate cryptographic materials, such as certificates. Peers in the
member’s organization receive transaction invocation requests from the clients within the
organization. Once transactions are created in the network and new blocks get generated,
they are sent out to the peers by the ordering service. Peers receiving these blocks need
to validate and update the ledger. This is managed on the peer node. Inherently, this
architectural approach is highly scalable as there is no need for a centralized effort to scale
the network or scale the infrastructure.
**Figure 5. Proposed HLF network transaction flow.**
Each member organization can look at their needs and set up the needed infrastructure
based on their requirements. Member organizations can have multiple peers. However,
not all peers receive the block information from the Orderer—only the relevant anchor peer
receives them. To avoid a single point of failure, an organization can create a cluster of the
anchor peers. The anchor peers are set up and defined as part of the channel configuration.
The anchor peers are by default discoverable. Peers may be marked as the endorsers or
take up the endorser’s role (known as the endorsing peers). A client sends the invocation
requests to the endorsing peer. On receiving the request for the invocation, the endorsing
peer validates the transaction. For example, it checks whether the end-user has used a
valid certificate. If the validation succeeds, it then simulates the CC execution.
A set of IoT edge nodes is configured to run HLF processes through Docker. Network
participants run the peer process and maintain the blockchain ledger by receiving various
transaction proposals. The peer process is the main component of the HLF network
while hosting the CC and the ledger. The network’s efficiency can be enhanced by increasing the number of running peers; however, one peer node per organization is normally sufficient. The ordering service handles block-ordering tasks and validates the blocks proposed by peers with a deterministic consensus algorithm. The proposed model can be
enhanced through the multiple Orderers approach for fault tolerance using RAFT [12] or
Kafka [76] methods.
4.4.2. ChainCode in IoT Edge
Each peer participating in HLF networks keeps a copy of the ledger. The ledger
consists of the blockchain and world state. Each block contains packed transactions,
ordered and broadcasted by ordering service based on peer proposals. The world state
database keeps the latest state in key-value form. CC is a program (smart contract) that
is written to read and update the ledger state. Its operation is the process of deploying
a well-developed CC onto a fabric network (channel) such that client applications can
invoke CC functions. CC deployment (lifecycle ChainCode) includes: (i) install CC to
selected peers, (ii) instantiate CC to a channel and specify an endorsement policy as well as
initial function arguments when needed. After deployment, the ChainCode functions can be invoked.
One enhancement in HLF is that the CC governance becomes decentralized. The CC
package does not need to be identical across channel members. This means that organizations can extend the CC to include additional validation. Lifecycle CC includes steps in
which member organizations can explicitly participate in the ChainCode deployment. The
current design implements ChainCodes to manage the identity of IoT devices connected to edge gateways and to store and retrieve data from the blockchain ledger. The checksum of all collected data objects is stored in the ledger. Moreover, the data location and the data ownership (authenticated ID) are also recorded. This approach enables the system to track the data location and verify the integrity of the data. Using the certificate that invoked the transaction, the system records who edited or stored an item and when. The data
lineage traceability is enabled by recording the references of the items used to generate it.
The client library facilitates the ledger’s interaction to perform various functions, storing
and querying the provenance information. The proposed model implements multiple
endorsing nodes to ensure running the CC in a lightweight environment.
Part of the ChainCode design includes running the authentication and authorization processes for security, privacy, and identity management. Furthermore, CC tracks
the owner of operations performed on data. The Client Identity (CID) CC library [58], introduced in HLF v1.1, is used in this research to save a userID issued by the Certificate Authority (CA). A sketch of such a provenance-recording function is given below.
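Below is a hypothetical provenance-recording function in the spirit of this design, again extending the skeleton of Section 3.2; it assumes the HLF 1.4 CID package path `github.com/hyperledger/fabric/core/chaincode/lib/cid`, and the record fields and key prefix are illustrative:

```go
// Hypothetical additions to the ChainCode from Section 3.2; requires the
// extra imports "encoding/json" and the HLF 1.4 CID package
// "github.com/hyperledger/fabric/core/chaincode/lib/cid".

// ProvenanceRecord keeps only metadata on-chain; the data live off-chain.
type ProvenanceRecord struct {
	Checksum string `json:"checksum"` // SHA-256 of the off-chain data object
	Location string `json:"location"` // pointer into the off-chain storage
	OwnerID  string `json:"ownerId"`  // identity issued by the CA
	TxTime   int64  `json:"txTime"`
}

func storeProvenance(stub shim.ChaincodeStubInterface, args []string) pb.Response {
	if len(args) != 3 {
		return shim.Error("expects fileID, checksum, location")
	}
	owner, err := cid.GetID(stub) // who invoked the transaction
	if err != nil {
		return shim.Error(err.Error())
	}
	ts, err := stub.GetTxTimestamp() // when it was invoked
	if err != nil {
		return shim.Error(err.Error())
	}
	rec := ProvenanceRecord{Checksum: args[1], Location: args[2],
		OwnerID: owner, TxTime: ts.Seconds}
	data, err := json.Marshal(rec)
	if err != nil {
		return shim.Error(err.Error())
	}
	if err := stub.PutState("file~"+args[0], data); err != nil {
		return shim.Error(err.Error())
	}
	return shim.Success(data)
}
```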
4.4.3. Certificate Authority
Membership Services Provider (MSP) is an abstract component of the HLF system that
provides clients’ and peers’ credentials to participate in the Hyperledger Fabric network.
The default MSP implementation is based on the Public-Key Infrastructure (PKI). There
are two primary services provided by MSP: authentication and authorization. In PKI-based implementations, identities need to be managed by way of certificates. The
certificates are issued, validated, and revoked by the CA.
Each component needs to be authenticated and identified before accessing the fabric
network. In a typical case, a user is issued with a digital certificate that includes proper
information associated with that user. Fabric CA is the Certificate Authority developed by
HLF serving a CA role. Once the Fabric CA is up and running, it can issue new certificates
according to the specific requirements of the request. Fabric CA can be accessed using Fabric-CA Client
or Fabric SDK, both from HLF. A digital certificate is issued by a CA that is trusted by the fabric network; the user’s operations are then accepted and processed by the fabric network.
The digital certificate can be issued when crypto material is generated with Cryptogen and
Configtxgen binaries, or more commonly, generated through registration and enrollment
on CA. The current design implements Hyperledger’s CA docker image, customized to
provide persistent certificate database storage. The fabric-CA implementation has two
parts: fabric-CA server and fabric-CA client. Members are issued a root certificate that they
can use for issuing their own identities within their organizations. Thus, the Hyperledger
fabric network can have one or more certificate authorities to manage the certificates.
4.4.4. Ledger Implementation
HLF is a distributed ledger technology. All peers in the network have a copy replica of
the ledger. The ledger has two parts: a transaction log and state database. The transaction
log keeps track of all the transactions invoked against the assets. The state data are a
representation of the current state of the asset at any point in time. The transaction log is implemented using LevelDB, which is a lightweight library for building a key-value data
store. It is embedded and used as part of the fabric peer implementation. Unfortunately,
the LevelDB does not provide a capability for creating and executing complex queries.
However, one can replace the state database (which is implemented in the LevelDB) with
CouchDB that supports the creation of complex queries. Therefore, the state database is
pluggable at the peer level. The transaction log is immutable. At the same time, the state
data are not. Records can be created in the transaction log, and existing transaction records can be retrieved from it. However, since the log is immutable, it is not possible to update or delete a transaction record once it has been added to the log. From the state data perspective, create,
retrieve, update, and delete operations can be carried out on the state data for an asset. The
ledger implementation in the proposed model is shown in Figure 6. A sketch of a CouchDB rich query follows the figure.
**Figure 6. Ledger implementation flow.**
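When CouchDB is the state database, rich queries can be issued from ChainCode via `GetQueryResult`. The following hedged sketch (the function name, selector field, and JSON assembly are our own, and it needs the `fmt` and `strings` imports) retrieves all records for a given owner:

```go
// Hypothetical additions to the ChainCode from Section 3.2; requires the
// extra import "strings". Rich queries only work when the state
// database is CouchDB.
func queryByOwner(stub shim.ChaincodeStubInterface, owner string) pb.Response {
	// CouchDB "Mango" selector matching the ownerId field stored above.
	query := fmt.Sprintf(`{"selector":{"ownerId":"%s"}}`, owner)
	iter, err := stub.GetQueryResult(query)
	if err != nil {
		return shim.Error(err.Error())
	}
	defer iter.Close()

	var results []string
	for iter.HasNext() {
		kv, err := iter.Next()
		if err != nil {
			return shim.Error(err.Error())
		}
		results = append(results, string(kv.Value))
	}
	// Assemble a JSON array of the matching records.
	return shim.Success([]byte("[" + strings.Join(results, ",") + "]"))
}
```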
_4.5. Base Station Nodes with High Computational Power in Layer-3_
The BS node’s main functionality includes several tasks, such as managing the nodes under each base station and collecting, aggregating, processing, analyzing, and storing the data received from sensing nodes. As an organization manager, the BS is trusted
by other network participants. CH nodes (edge IoT devices) first need to be initialized
and authenticated by BS before joining the network. Base stations can connect to public
networks or clouds as they have robust computing and storage resources. In a public
blockchain, nodes build trust in a decentralized manner through a consensus algorithm.
Running a public blockchain within resource-constrained IoT nodes is not feasible due to the massive capacity and time needed for the frequent authentication process. The
unified authentication scheme is presented in Layer-2 to facilitate the joining process for
nodes in a local private blockchain framework. The current hybrid design proposes a
public blockchain for base stations in Layer-3 of the network model. Cluster head nodes
are registered and authenticated with BS nodes through implementing the smart contracts.
The node’s identity information is recorded in a public blockchain ledger.
_4.6. Layer-4 Off-Chain Storage_
Implementations of Distributed Ledger Technologies (DLT) such as blockchain are limited in terms of the amount of data stored in their ledger. The size of the shared ledger grows incessantly, degrading system performance. The solution to this
challenge in the proposed design includes the use of off-chain storage. The blockchain
in Layer-2 stores only the metadata’s provenance while the actual generated IoT data
are stored in non-blockchain-based storage. The on-chain metadata amount to a small fraction of the total
generated data by the IoT devices. The data checksums are computed, stored, and verified
with the blockchain records to ensure the integrity and immutability of the stored IoT data.
The CC functions and the ledger functionality are independent of the off-chain storage
choice. However, multiple storage (or other) resources can quickly be added based on system requirements.
The current design implements SSHFS [80] as shared storage, while Raspberry Pi devices are
employed as CHs (edge IoT devices). Thus, the choice of external shared storage needs
to be aligned with the ARM64 architecture of the Raspberry Pi system. The SSHFS is a
FUSE-based user-space client. It allows mounting a remote filesystem using SFTP as an
underlying protocol through SSH. Most SSH servers enable and support the SFTP protocol
and provide access by default. Performance evaluation of distributed storage services in the
community network shows that SSHFS is comparable with other network file systems [81].
Moreover, the system can be enhanced with a more resilient distributed file system such as OpenAFS [82] or cloud-based services such as Amazon EFS [83]. A sketch of the checksum verification against the off-chain store follows.
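Below is a minimal sketch of the checksum verification, assuming a hypothetical SSHFS mount point and a checksum value previously read from the ledger:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

// fileChecksum streams a file from the (SSHFS-mounted) off-chain store
// and returns its SHA-256 checksum as a hex string.
func fileChecksum(path string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	// Hypothetical mount point and file name, for illustration only.
	sum, err := fileChecksum("/mnt/offchain/device42/reading-0001.json")
	if err != nil {
		panic(err)
	}
	ledgerChecksum := "..." // value previously read from the blockchain record
	fmt.Println("integrity ok:", sum == ledgerChecksum)
}
```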
**5. Performance Evaluation**
The primary objectives of any deployed blockchain application are to maintain the transactions submitted by network participants, perform transaction verification and ordering, generate blocks, and store the transaction outcomes in a distributed ledger. Therefore, the
blockchain system performance can be evaluated with the following performance metrics:
- Throughput: The maximum number of transactions that the blockchain system can
handle, and record the ledger’s transaction outcomes in a given time.
- Latency: The time between the transaction invoking by a client and writing the
transaction to the ledger.
- Computational Resources: Hardware and network infrastructure required for the
blockchain operation.
The detailed description of the Hyperledger performance metrics is documented in the Hyperledger Performance and Scale Working Group white paper [84]. A minimal sketch of how throughput and average latency can be derived from per-transaction timings follows.
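The sketch below derives both metrics from per-transaction timings; `submitTx` is a placeholder for a real SDK call that submits a transaction and blocks until it is committed:

```go
package main

import (
	"fmt"
	"time"
)

// submitTx stands in for a real SDK call that submits a transaction
// and blocks until it is committed to the ledger.
func submitTx(i int) {
	time.Sleep(20 * time.Millisecond) // placeholder for network + commit delay
}

func main() {
	const n = 100
	start := time.Now()
	var totalLatency time.Duration
	for i := 0; i < n; i++ {
		t0 := time.Now()
		submitTx(i)
		totalLatency += time.Since(t0) // per-transaction latency
	}
	elapsed := time.Since(start)
	fmt.Printf("throughput: %.1f tx/s, avg latency: %v\n",
		float64(n)/elapsed.Seconds(), totalLatency/time.Duration(n))
}
```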
_5.1. Experimental Setup and Implementation_
The experimental setup consists of two different environments of the same network.
The first network was set up and run on virtual desktop nodes. The other system included
Raspberry Pi (RPi) devices acting as IoT edge nodes. These RPis were chosen as IoT cluster
heads and were connected to several small IoT sensors.
The virtual desktop setup had five virtual machines running on VMware virtual
platform environment: 5 Intel(R) Xeon(R) Gold 5220 CPUs @ 2.20 GHz, 2C2T. All nodes run
Ubuntu 18.04. The official Hyperledger Fabric (version 1.4) framework was deployed as
an underlying blockchain application. HLF is a permissioned open-source blockchain
architecture designed for the enterprise ecosystem. Figure 7 shows the system under test
high-level architecture.
The same network setup was implemented on four RPi devices with a Broadcom BCM2711 quad-core Cortex-A72 (ARM v8) 64-bit SoC @ 1.5 GHz, and one virtual desktop used as the CA server. The RPi nodes ran the Debian 64-bit OS and were interconnected in a peer-to-peer network, forming a distributed and decentralized network. Because the official HLF framework cannot be run on Raspberry Pi devices, the Docker images for the ARM64 architecture were modified to support running the HLF on the RPi nodes.
Measurements on both the networks were taken enabling a comparison between
the architectures. The two system setups encompass devices with dissimilar capabilities.
That helped to better understand the system performance and devices’ capabilities in
different scenarios of running the HLF platform. Docker containers consisted of blockchain
components that were orchestrated by the Docker Swarm and deployed across the network
of nodes. A client was considered to be the load-generating node that could submit transactions into the system and invoke transactions and system behaviors.
**Figure 7. Experimental setup and system under test.**
_5.2. System Configurations_
The system configuration encompassed various tasks, including configuring system dependencies. These included Docker Compose configuration, Docker Swarm setup, loading the needed certificates and scripts, CC configuration, external off-chain storage settings, various network access, modifying Docker images for the RPi, etc. Many issues arose from unsupported 64-bit RPi images, including software, library, and kernel issues. A shared Docker Swarm network was implemented to manage and deploy multiple Docker containers to the edge IoT nodes. Docker Compose and the related compose files were the central point for configuring container deployment, modifying variables, initializing scripts, and testing the fabric network. Docker images were built to suit
the RPi 64-bit ARMv8 architecture as the HLF does not officially support ARM architecture.
_5.3. Transaction Throughput_
Transaction Throughput is a performance metric defined by the Hyperledger Performance and Scale Working Group [84]. This metric represents the number of transactions
processed by the blockchain, with the outcome written to the distributed ledger, within a specific time. To measure the throughput, multiple rounds of benchmark applications were run on top of the implemented HLF network with varying transaction batch sizes. The corresponding time for each transaction and batch was measured through the benchmark application. The total time and average time were found to
determine the response times and the number of transactions per minute.
5.3.1. Desktop Measurements
The throughput measurement was conducted by submitting several transactions
together while varying the load intensity levels. Figure 8a indicates rapid growth in throughput as the batch size increases, until it peaks at around 3500 transactions per minute.
Larger batch sizes can help the system to order more messages within the same block while
it is submitted within the same timeout. Furthermore, Figure 8a indicates that blocks must be filled up quickly to achieve higher throughput. The maximum number
of transactions performed by the implemented virtual environment system was around
3500 transactions per minute, the peak system throughput. It is essential to consider that
these large batch sizes were generated to evaluate the system performance. The system
was limited to 58 transactions per second (approximately 3500 transactions per minute)
due to the hardware capability of the virtual desktop.
Transactions response time is illustrated in Figure 8b. The response time increased
with the growth in batch size. A large number of transactions caused system congestion—
more transactions needed to be handled by peers and verified by the Orderer. Therefore,
the individual transaction response time increased accordingly. As shown in Figure 8b, the
transactions were handled quickly at the beginning of the process. However, the response
time increased with the growth in the number of transactions in the queue to be handled
and verified.
With the increase in the transaction arrival rate, the throughput increased linearly
as expected until it flattened out at the peak point. This was because the number of
ordered transactions waiting in the queue during the validation phase grew rapidly while
subsequently affecting the commit latency. It shows that the validation phase was a
bottleneck in the system performance. An increase in the number of Orderer nodes and
validation peers could address this challenge.
**Figure 8. Effects of transaction batch size on (a) throughput and (b) average response times in the Desktop setup.**
5.3.2. Raspberry Pi Measurements
The same system evaluation was performed in the environment consisting of RPi
devices so as to compare with the results obtained using the virtual desktop setup.
The results that are shown in Figure 9a,b confirm the same trend as was observed
previously with the desktop setup. The maximum throughput peak happened at a batch size of around 750 transactions per minute (i.e., 12 per second), which is lower than the result for the virtual desktop case. Moreover, higher response times than in the desktop version were observed. The peak throughput occurred at batch sizes
around 750 transactions per minute due to constraints of RPi devices in terms of the
CPU capabilities.
The blockchain distributed ledger may be limited due to the amount of data stored
in the blockchain system. The growth in the shared ledger causes degradation in the
performance. To address this issue, the provenance of data was kept in the HLF ledger.
External storage was dedicated in layer-4 of the proposed model to store the data verified
by immutable blockchain records.
It should be noted that the results show satisfactory performance for the system
in general. However, it is expected that the same results could be achieved by adding
more clients to the system. Most of the restrictions, in this case, are related to the client’s
hardware on which the applications are run and are related to the peer nodes’ limitations.
The results show that storing information and recording data in the ledger do not affect the system performance much. However, the limitations are mostly related to the time required to perform these operations, as they must be done in sequence, thereby affecting
bandwidth and response times.
**Figure 9. Effects of transaction batch size on (a) throughput and (b) average response times in the Raspberry Pi setup.**
_5.4. Transactions Latency_
Transaction Latency indicates the time between the invoking of a transaction by a client
and recording the transaction on the ledger. In the experimental setup, the measurements
of a single transaction latency were performed by an application that sent a defined number
of transactions to the HLF network while recording the individual transaction time, total
average time, and corresponding statistical metrics. The results for CC operation latency, shown in Figure 10, are the average of 100 separate operations.
Table 1 presents the results for the SET operation in both the desktop and RPi setups. It is evident from Table 1 that for the SET operation, the Raspberry Pi setup measurements
were worse than those associated with the Desktop setup. The reason for this can be found
in the standard deviation of related measures. The results of throughput measurements
in the case of Raspberry Pi show a lot of fluctuations compared to the desktop option. It
can be explained by the capability difference between the two implementations. Indeed, it
took 2109 ms to submit a transaction and confirm it by running the HLF on the Desktop
setup, while the time for the Raspberry Pi was about 2348 ms. The retrieval time for GET operations was about 100 ms in both cases. The results for the RPi indicate more delays
compared to the desktop environment. When the number of ordered transactions waiting
in the verification process queue during the validation phase increased, it significantly
increased the commit latency. Therefore, a validation phase can be considered to be a
bottleneck. However, the increase in the number of involved peers also causes higher
latency. Furthermore, the experiments indicate that for real applications such as IoT to
achieve lower transaction latency, the use of a smaller block size with a low transaction
rate would be needed. In contrast, the higher transaction rates need a larger block size to
achieve higher throughput and lower transaction latency.
**Table 1. Statistical analysis of SET ChainCode latency (ms).**

| Setup Environment | Avg | Std | Med | Max | Min |
|---|---|---|---|---|---|
| Desktop | 2109 | 42.5 | 2105 | 2518 | 2103 |
| RPi | 2348 | 252 | 2306 | 4029 | 2204 |
**Figure 10. Latency for all ChainCode operations.**
The experiment was further developed with multiple rounds of the benchmark to
submit transactions with different sending rates starting from 10 to 500 transactions per
second (TPS) for different block sizes. The experiment aimed to measure the maximum,
average, and minimum transaction latency and transaction throughput. The results are
presented in Figure 11. The minimum latency remained below 1 s during the experiments,
while the maximum latency grew sharply as the send rate reached 100 TPS.
_5.5. Resource Consumption_
Resource measurements encompass CPU computational capability, memory, and
network use. The measurements were carried out with varying load levels, employing small, medium, and large load cases. The operation of storing various data sizes in the network
was performed with different transactions to calculate the resource consumption. The
volumes were different for desktop and Raspberry Pi network setups due to hardware
limitations and RPi devices’ capability.
_5.6. CPU and Memory Use Measurements_
The CPU and memory activities were measured with the Psrecord utility [85] by
attaching to the processes’ PIDs and submitting transactions with varying data sizes. Psrecord is an open-source monitoring tool that records the real-time CPU and memory activity of a defined process. The specific usage is recorded by the Psrecord tool up to a maximum of 400% of system use (i.e., four fully used cores). The result for Orderer
and ChainCode processes indicates that the resource consumption of these two processes
was negligible. The Peer nodes consumed most of the memory and CPU resources. This
was because the verification of the transaction and smart contracts by peer nodes required
high CPU usage. Therefore, the investigation mainly dealt with the peer process and client
application processes.
**Figure 11. Latency vs. transaction sending rate: (a) 5 peers, block size 10; (b) 5 peers, block size 50.**
5.6.1. Desktop Setup
Evaluation of the CPU and memory use by the involved process provided a comprehensive view of the overhead and the impact on the device hardware. Therefore, a series of
measurements were conducted to analyze resources’ consumption, including the resources
of the network, CPU, and memory of the involved devices. Peer, Orderer, ChainCode,
and application client processes were involved. The experiment was initiated by sending
3000 transactions per minute, each of 1 KByte. The initial measurements indicated a strong dependency of the peer and client processes on the data sizes and throughput. However,
Orderer and ChainCode processes used a small CPU capacity percentage (about 9%) and
memory (approximately 16 MByte and 33 MByte). Due to that fact, the evaluation and
analysis were focused more on peer and client processes’ usage of resources. With lower
load sizes, the peer processes showed similar behavior. When increasing the throughput,
the peer process used a higher CPU percentage (about 20%), with memory usage at around 150 MByte. The client process used approximately 40% of the CPU capacity continuously
and used 120 MByte of memory. The reason for this can be attributed to multiple processes
in the client. It mainly involves connecting to a peer for each transaction, invoking CC
and related operators, performing related transactions, executing the proposal requests
and responses related to ordered transactions. The use of resources is also increased if the
client uses external storage. In this case, it needs to calculate the checksums stored in the
ledger as well as storing the data in external storage. These experiments were carried out
with the highest possible load amount (in the real-world scenarios, these values would be
significantly lower). The results are presented in Figure 12.
Similar to the scenario with the client process, the peer process used about 40% of CPU
capacity and 150 MByte of memory. One of the key elements in any HLF network is a peer
node and its related processes, playing a vital role in endorsing and validating transactions. The peer node coordinates responses to and from all components, and peers must keep the ledger consistent across the HLF network. Peers connect with the channels,
and they can receive all the transactions that are getting broadcasted on that channel. Peer
nodes’ measurements show more resource consumption than the orderer, ChainCode, and
clients to synchronize with other components in the HLF network. To better evaluate
and analyze peer and client processes’ behavior, the consumption of resources at different
data size levels with three separate throughputs were investigated. The different levels
selected were low throughput and large data size (small), low throughput and small data
size (medium), and high throughput and small data size (large).
**Figure 12. CPU and memory use for varying data sizes for the peer process in the Desktop setup: (a) 5 tx/min, (b) 50 tx/min, (c) 1500 tx/min.**
The results are plotted in Figure 13 for CPU and Memory use of peer and client
application processes over 10 min span with sampling per second. As seen in the plots, the
peer process required higher CPU use for the larger load, with a 30% increase. Similarly, the
use of memory was higher, as the peer process must handle more transactions. To evaluate
the client process performance and related applications, external storage was added to
assess its impact on CPU and memory use. These values were sampled from a low number of transactions up to many transactions (Figure 13). Larger files required more CPU and memory. Finally, it can be concluded that the client process is influenced by the file size and the level of load intensity it must handle.
**Figure 13. CPU and memory use for varying data sizes for the client process in the Desktop setup: (a) 5 tx/min, (b) 50 tx/min, (c) 1500 tx/min.**
5.6.2. Raspberry Pi Setup
Continuing the analysis of resource use, the RPi system setup was tested.
It is crucial to acknowledge that the RPi hardware was less capable and had hardware
limitations. Therefore, it was necessary to pay attention to the data sizes sent through
and the number of transactions. Consequently, we considered the maximum number of
transactions to be 500 per minute.
As is evident from Figure 14, the difference between the 5 transactions per minute and 50 transactions per minute cases was more visible than in the desktop setup. Continued comparison led to the conclusion that, at the same throughput, the RPi uses more CPU resources (4 to 5 times more), which was interpreted as a hardware restriction inherent to RPi devices. Although it was not possible to make a comprehensive comparison between 500 transactions (tx) per minute in the RPi setup and 1500 tx per minute in the desktop setup, as shown in Figure 14, the CPU and memory usage were approximately the same in both cases.
**Figure 14. CPU and memory use for varying data sizes for the peer process in the RPi setup: (a) 5 tx/min, (b) 50 tx/min, (c) 500 tx/min.**
Similarly, the same measurements were performed for the client application process
in the RPi setup. In this case, external data storage was considered. Figure 15 shows the
results of the experiment. The higher CPU usage was due to the difference in device-related clock rates between the two setups. The peer process memory consumption was higher in the RPi setup compared to the desktop one. This can be attributed to the peer process’s behavior in handling transactions. In both setups, the client application process’s
level of memory use was similar. However, in all cases, the use of 200 MByte to 300 MByte
of memory was sufficient, and it was not considered the system’s main limitation. The
Desktop setup’s resource consumption with a realistic transaction load size of around 50 KByte every five seconds was around 5% CPU, and around 15% in the RPi setup.
**Figure 15. CPU and memory use for varying data sizes for the client process in the RPi setup: (a) 5 tx/min, (b) 50 tx/min, (c) 500 tx/min.**
_5.7. Network Use Measurements_
To assess the consumption of available network resources and check the network overhead, the peer node and client application node were launched locally to send transactions to the orderer, other peers, and the external data storage. Launching the peer node locally allows ledger updates to be monitored, while all traffic transmitted between the involved participants can be inspected. Furthermore, this gives an overview of all characteristics of the transmitted data.
To measure and analyze network traffic, the Speedometer utility running on the Linux
environment [86] was used. Speedometer measured the sent and received network traffic
over a specific network interface. All other network activities were disabled. The HLF
network and external storage-related communication processes were monitored using
the iftop Linux monitoring tool to measure network traffic accurately. The experiments
were initiated without running any processes such as Docker; only the operating system processes of the monitored node were allowed. The results show a baseline of 3–5 KByte/s that can be attributed to background network traffic.
When running the HLF, significant changes in network traffic were detectable. Figure 16
displays that with the onset of the peer process, network traffic increased by about five
times compared to the baseline mode. In this case, there were no transactions between peers.
The main reason for this was the start of communication between the peer process and network components, to maintain ledger consistency and reach synchronization through the gossip protocol. To further analyze how network resources are affected by the offered load, different load levels were applied, and various
would be affected by offered load, different offered load levels were engaged, and various
modes were evaluated with and without external storage resources. The relevant results
are presented in Figure 17.
**Figure 16. Network use for peer process with no transactions.**
The results show that receiving and sending traffic to perform transactions every 5 s
occupies a range of about 1–40 KByte/s. Involving an external storage source significantly increases traffic, extending its range to about 100 KByte/s. This increase was also visible in the incoming traffic, indicated by the confirmation of file storage in the shared folder. A further increase in the number of transactions would increase the sent
and received traffic.
**Figure 17. Network use vs. load sizes with/without external storage.**
**6. Conclusions**
Providing security to massively interconnected IoT devices while ensuring the scalability of IoT systems with minimum resource requirements is a challenging problem.
Additionally, the heterogeneity and diversity of connected devices within the IoT realm
make it even more challenging. Therefore, the interoperability, identity, and privacy of
IoT systems need to be guaranteed securely. The existing centralized solutions, such as
a cloud-centric model, are costly. Moreover, these solutions’ latency is also noticeable.
Furthermore, the single point of failure issue is a considerable risk to the security of the
centralized solutions. Blockchain technology is a promising solution to provide security
for IoT devices while leveraging trust and interoperability.
This paper presented an implementation of the Hyperledger Fabric Blockchain platform as a permissioned blockchain technology integrated with edge IoTs to test and analyze
the performance of the proposed BlockChain-based multi-layer IoT security model. The
presented proof of concept was implemented using two different environment setups on
the Raspberry Pi devices and VMware Virtual desktops. The performance metrics such
as transaction throughput, transaction latency, computational resources, and network use
of the implemented networks were evaluated. The implemented prototype facilitates
the record of sensing data by IoT devices (metadata) in a tamper-proof and transparent
blockchain-based framework to provide data traceability. Moreover, the framework’s
security is guaranteed by implementing a layer-wise blockchain approach and local authentication process for IoT nodes in each cluster. The client application is developed with the
help of Hyperledger Node SDK where various Hyperledger ChainCodes help to perform
local authentication and authorization. Moreover, they facilitate the record of file pointers
to provide checksums traceability and data validation.
The presented findings indicate satisfactory throughput for IoT applications. Peer and client processes are the primary source of resource consumption in the
network. The Orderer and ChainCode use fewer resources compared to the peer process.
Experimental results show a significant increase in throughput of approximately six times
compared to the optimal scale implementation of HLF. The Desktop setup’s resource consumption with a realistic transaction load size of around 50 KByte every five seconds is
around 5% CPU and for the RPi setup is around 15% CPU. Peer and client processes are the
primary resource consumers in HLF as our measurements indicate an average of 40% to
50% CPU consumption respectively at full load, while these measurements for the Orderer
process and ChainCode use an average of about 10% of CPU resources. The deployed
model could retrieve a single record in 100 ms. However, the use of the built-in ChainCode
queries allows retrieving 10 dependent IoT records in 102 ms. The empirical results all
indicate low overhead for running the proposed model.
Further work will consider the deployment of the proposed model in larger-scale IoT scenarios, significantly increasing the number of peers, for an empirical analysis of system performance across both overall and detailed Fabric performance metrics, including throughput, latency, block size, endorsement policy, and scalability.
**Author Contributions: Conceptualization, H.H.P. and M.R.; methodology, H.H.P. and M.R.; software,**
H.H.P.; validation, H.H.P., M.R. and F.A.; formal analysis, H.H.P.; investigation, H.H.P.; writing—
original draft preparation, H.H.P., M.R., F.A. and S.D.; writing—review and editing, H.H.P., M.R.,
F.A. and S.D.; supervision, M.R., F.A. and S.D.; project administration, M.R. All authors have read
and agreed to the published version of the manuscript.
**Funding: This research received no external funding.**
**Institutional Review Board Statement: Not applicable.**
**Informed Consent Statement: Not applicable.**
**Data Availability Statement: The data presented in this study are available on request from the**
corresponding author.
**Acknowledgments: The authors acknowledge Massey University for the resources provided to**
conduct this research. H.H.P. also acknowledges the support received through the Massey University
Doctoral Scholarship.
**Conflicts of Interest: The authors declare no conflict of interest.**
**Abbreviations**
The following abbreviations are used in this manuscript:
BFT Byzantine Fault Tolerant
BS Base Station
CA Certification Authority
CC ChainCodes
CH Cluster Head
CID Client Identity
CRL Certificate Revocation List
CPSs Cyber-Physical Systems
dApps distributed Applications
DDoS Distributed Denial-of-Service
DHE Diffie-Hellman Ephemeral
DLTS Distributed Ledger Technologies
DPoS Delegated Proof of Stake
ECC Elliptic Curve Cryptography
HLF Hyperledger Fabric
IoT Internet of Things
IIoT Industrial IoT
MAC Message Authentication Code
MSP Membership Services Provider
NFS Network File Systems
OPM Open Provenance Model
PBFT Practical Byzantine Fault Tolerance
PoA Proof of Authority
PoB Proof of Bandwidth
PoET Proof of Elapsed Time
PoS Proof of Stake
PoW Proof of Work
RPi Raspberry Pi
RSA Rivest–Shamir–Adleman
SDN Software Defined Networking
SSL Secure Sockets Layer
TLS Transport Layer Security
WSNs Wireless Sensor Networks
| 23,478
|
{
"disclaimer": "Notice: Paper or abstract available at https://pmc.ncbi.nlm.nih.gov/articles/PMC7825674, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "CCBY",
"status": "GOLD",
"url": "https://www.mdpi.com/1424-8220/21/2/359/pdf?version=1610010040"
}
| 2021
|
[
"JournalArticle"
] | true
| 2021-01-01T00:00:00
|
[
{
"paperId": "1a392e71f7b71d16f776e15fce024fe8a672485d",
"title": "A blockchain-based architecture for secure and trustworthy operations in the industrial Internet of Things"
},
{
"paperId": "c1bedf52f2774add23fe9b13ce455c09889f04cc",
"title": "Design and Prototype Implementation of a Blockchain-Enabled LoRa System With Edge Computing"
},
{
"paperId": "0417fd77a1149efafc0b512eb6ad93d641d0d19b",
"title": "Blockchain-Based Decentralized Authentication Modeling Scheme in Edge and IoT Environment"
},
{
"paperId": "51af77e3c4d09adb564645abea0b9b45cbd19b9c",
"title": "Multi-Access Edge Computing and Blockchain-based Secure Telehealth System Connected with 5G and IoT"
},
{
"paperId": "39cc97a588662c125870ae1035dea92213655a6b",
"title": "Industrial IoT: Challenges, Design Principles, Applications, and Security"
},
{
"paperId": "316e4cf65cec97e85aa15138b863f1800becf4ca",
"title": "Hardware Security of Fog End-Devices for the Internet of Things"
},
{
"paperId": "3fca93877075a91b47ac25f89b247921520c0e98",
"title": "End-to-End Design for Self-Reconfigurable Heterogeneous Robotic Swarms"
},
{
"paperId": "8d2d18db5526adff6a29e295cbdcb2121735d857",
"title": "Enhancing Autonomy with Blockchain and Multi-Access Edge Computing in Distributed Robotic Systems"
},
{
"paperId": "d8e49199939494b41ac30fd2672c05cd8cf3546b",
"title": "A Survey of IoT Applications in Blockchain Systems"
},
{
"paperId": "408bffe8a306595cd764f22332ee833b7d875ed4",
"title": "A Blockchain-Based Secure Image Encryption Scheme for the Industrial Internet of Things"
},
{
"paperId": "d32bb9648c0cd0c558410012970b65ea311e4acb",
"title": "Hierarchically Authorized Transactions for Massive Internet-of-Things Data Sharing Based on Multilayer Blockchain"
},
{
"paperId": "3eea996b49b6ed7f60c5d56c4c9b717ac29ad331",
"title": "MBS: Multilevel Blockchain System for IoT"
},
{
"paperId": "aefed5d67013c6492fa32655caa1775e33bfdf3e",
"title": "5G and the Future of Security in ICT"
},
{
"paperId": "cecc70c0e891f8713438bb4ffe4ad2f28417c384",
"title": "Edge Computing for Mobile Robots: Multi-Robot Feature-Based Lidar Odometry with FPGAs"
},
{
"paperId": "620ade9e99d480fefc6405460ff2813b82e6d99f",
"title": "Security of the Internet of Things: Vulnerabilities, Attacks, and Countermeasures"
},
{
"paperId": "f2b360abf82513631fa86f8183fe776cabbf6fa9",
"title": "Hyperledger Fabric Blockchain as a Service for the IoT: Proof of Concept"
},
{
"paperId": "e4752dedf9dd416e244be55be0895d4f090be20e",
"title": "A Security Framework for IoT Authentication and Authorization Based on Blockchain Technology"
},
{
"paperId": "1803665b8e585de9658db627efd013ffa464e2d8",
"title": "An IoT-Blockchain Architecture Based on Hyperledger Framework for Healthcare Monitoring Application"
},
{
"paperId": "cae22ac04006c776a9110802f4b14567e9794d54",
"title": "Blockchain and Deep Reinforcement Learning Empowered Intelligent 5G Beyond"
},
{
"paperId": "3765a0c09976c0e2d0e8825f47388d5c80812e2b",
"title": "VQL: Providing Query Efficiency and Data Authenticity in Blockchain Systems"
},
{
"paperId": "2f9e2fb2022e27bc9efcb4ff5bc2761ea9833a99",
"title": "Benefits of AWS in Modern Cloud"
},
{
"paperId": "2850f5033d29099b6e082ecd3d2f49e1a9d41afa",
"title": "Collaborative Mapping with IoE-based Heterogeneous Vehicles for Enhanced Situational Awareness"
},
{
"paperId": "4031b68c7d32f53f73a55292a214d103096175d0",
"title": "Enabling technologies for fog computing in healthcare IoT systems"
},
{
"paperId": "a4b2837509af0c33ac182a5b84bd47e2e38a61f7",
"title": "Blockchain-Based Mobile Edge Computing Framework for Secure Therapy Applications"
},
{
"paperId": "14bbec33f79a4219546a8903f9056fd078e2133e",
"title": "The Approach to Managing Provenance Metadata and Data Access Rights in Distributed Storage Using the Hyperledger Blockchain Platform"
},
{
"paperId": "55eda9f7c5812384de9e03626fa33a2278e038a5",
"title": "Towards decentralized IoT security enhancement: A blockchain approach"
},
{
"paperId": "cfc8779d875ee8e5574c4aac1f845bf5fb19f444",
"title": "Security Services Using Blockchains: A State of the Art Survey"
},
{
"paperId": "305edd92f237f8e0c583a809504dcec7e204d632",
"title": "Blockchain challenges and opportunities: a survey"
},
{
"paperId": "0919faff858302c01be75739f4dd703fea74a82d",
"title": "Blockchain based hybrid network architecture for the smart city"
},
{
"paperId": "383057f972b11b99cbc8c0d3e6c47170e9d95c1c",
"title": "Blockchain and IoT Integration: A Systematic Survey"
},
{
"paperId": "9727206903eb40d4fa42606711bad3402f2ba9aa",
"title": "Decentralized IoT Data Management Using BlockChain and Trusted Execution Environment"
},
{
"paperId": "3a7022837872ccc7b281087e435209883e12f924",
"title": "Mitigating loT Device based DDoS Attacks using Blockchain"
},
{
"paperId": "01157f7c700e92323a5933e00c71cf001a8bac88",
"title": "Blockchain with Internet of Things: Benefits, Challenges, and Future Directions"
},
{
"paperId": "bd71e3de5b2c1e35ff0f764824c7d524200c1f15",
"title": "Devify: decentralized internet of things software framework for a peer-to-peer and interoperable IoT device"
},
{
"paperId": "f7b5ca0c68cd639fedf962b5cc7b31e25464a282",
"title": "Blockchain Design for Trusted Decentralized IoT Networks"
},
{
"paperId": "ff335c0e20e5bf509df5b7fc438c6fc1ceabeece",
"title": "SmartProvenance: A Distributed, Blockchain Based DataProvenance System"
},
{
"paperId": "98d71a9d5a2235319058b5728a644f29c8c5d850",
"title": "RoboChain: A Secure Data-Sharing Framework for Human-Robot Interaction"
},
{
"paperId": "06a9f7c0977dd1422dda1c7ac207f04dc40d776a",
"title": "EdgeChain: Blockchain-based Multi-vendor Mobile Edge Application Placement"
},
{
"paperId": "461d523f9ba942c7474aef332412fe7b53c731be",
"title": "When Mobile Blockchain Meets Edge Computing"
},
{
"paperId": "f8668b2d5ea74b0c513b7c2fd14926738161a3b2",
"title": "Future of IoT Networks: A Survey"
},
{
"paperId": "76bd712e4908a42c5514c50427a168d6d7952c70",
"title": "DistBlockNet: A Distributed Blockchains-Based Secure SDN Architecture for IoT Networks"
},
{
"paperId": "5c258e7ed26da9b4f2947e4c056471bf68961e58",
"title": "A Practical Evaluation of a High-Security Energy-Efficient Gateway for IoT Fog Computing Applications"
},
{
"paperId": "ca4c0ab7304ebbbb052887332d80dbe673ed4b7c",
"title": "A Survey on the Security of Blockchain Systems"
},
{
"paperId": "ee177faa39b981d6dd21994ac33269f3298e3f68",
"title": "An Overview of Blockchain Technology: Architecture, Consensus, and Future Trends"
},
{
"paperId": "2a91211f54b80a15db2bd1efd36f6caa546e2754",
"title": "A Blockchain-based Approach for Data Accountability and Provenance Tracking"
},
{
"paperId": "84eae1234a6a9d64b6be73756ef9abafd31a83bf",
"title": "ProvChain: A Blockchain-Based Data Provenance Architecture in Cloud Environment with Enhanced Privacy and Availability"
},
{
"paperId": "6a063f13e3a891d14ecf43aa92396cb781ec4e4b",
"title": "Securing Smart Cities Using Blockchain Technology"
},
{
"paperId": "70f455aaef44339e6e00a7ade84ee5df181bad38",
"title": "Using Blockchain to push Software-Defined IoT Components onto Edge Hosts"
},
{
"paperId": "5e36ac3604b1291d872057511404fc6843d2d491",
"title": "On the Security and Scalability of Bitcoin's Blockchain"
},
{
"paperId": "628c2bcfbd6b604e2d154c7756840d3a5907470f",
"title": "Blockchain Platform for Industrial Internet of Things"
},
{
"paperId": "451729b3faedea24771ac4aadbd267146688db9b",
"title": "Blockchain in internet of things: Challenges and Solutions"
},
{
"paperId": "e3a442aa24e5df7e6b2a25e21e75c4c325f9eedf",
"title": "Edge Computing: Vision and Challenges"
},
{
"paperId": "c998aeb12b78122ec4143b608b517aef0aa2c821",
"title": "Blockchains and Smart Contracts for the Internet of Things"
},
{
"paperId": "8db5d1d7169a1f5391cb184332b95835ae668cf4",
"title": "Bitcoin and Beyond: A Technical Survey on Decentralized Digital Currencies"
},
{
"paperId": "9979809e4106b29d920094be265b33524cde8a40",
"title": "In Search of an Understandable Consensus Algorithm"
},
{
"paperId": "d080fb7c7cd1bf65be60ec9cd47dd44fca0abb66",
"title": "The internet of things: a survey"
},
{
"paperId": "046780b3162df2c188f68a600b9f1479753a68b4",
"title": "Decentralized fault tolerance mechanism for intelligent IoT/M2M middleware"
},
{
"paperId": "875d90d4f66b07f90687b27ab304e04a3f666fc2",
"title": "Docker: lightweight Linux containers for consistent development and deployment"
},
{
"paperId": "52f168c6c4f42294c4c9f9305bc88b6d25ffec9a",
"title": "Internet of Things for Smart Cities"
},
{
"paperId": "01ecd06d16b9ee6afef08aff9b0e2448222b097c",
"title": "S2Logger: End-to-End Data Tracking Mechanism for Cloud Data Provenance"
},
{
"paperId": "a5d426d6f4bd5dc3311f0c994f642cf1f5de0488",
"title": "Research and implementation of RSA algorithm for encryption and decryption"
},
{
"paperId": "95071fc20d7212bf6d4af2a7e1bdf102c80f0bec",
"title": "Performance and extension of user space file systems"
},
{
"paperId": "60c5cbc6c966de27f40462aca2ad30afec8c142d",
"title": "SSL and TLS: Theory and Practice"
},
{
"paperId": "fc2a6f4add0e282f10d45f44a57e6523e703710e",
"title": "SSHFS: super easy file access over SSH"
},
{
"paperId": "8132164f0fad260a12733b9b09cacc5fff970530",
"title": "Practical Byzantine fault tolerance"
},
{
"paperId": "c903130c1f1208eb5814909a3fa8eb5dad370584",
"title": "Ethereum, Smart Contracts, DApps"
},
{
"paperId": "433561f47f9416a6500c8350414fdd504acd2e5e",
"title": "Bitcoin Proof of Stake: A Peer-to-Peer Electronic Cash System"
},
{
"paperId": "7c867a649f966d3bb7c49b6c25a8ac09e5186f0e",
"title": "Securing IoT Devices Generated Data Using Homomorphic Encryption"
},
{
"paperId": null,
"title": "2008, Volume 4"
},
{
"paperId": "b238c5f4a23dd2ff36f387ad5b16acfbb50ed7b9",
"title": "Detecting Intrusions in Cyber-Physical Systems of Smart Cities"
},
{
"paperId": "3d267bbcce5a599ac9cc42964fefb40e7b49cbb1",
"title": "Applications of Blockchains in the Internet of Things: A Comprehensive Survey"
},
{
"paperId": "91bfb9e9a222765fcd7e4afd891be47c8ad3dc78",
"title": "ETHEREUM: A SECURE DECENTRALISED GENERALISED TRANSACTION LEDGER"
},
{
"paperId": null,
"title": "Beginning Blockchain: A Beginner’s Guide to Building Blockchain Solutions"
},
{
"paperId": null,
"title": "A Comprehensive Literature Review on the Blockchain as a Technological Enabler for Innovation"
},
{
"paperId": "be45a7e01d17cc2e09e7ddecfabb2d28ac2763d9",
"title": "Performance and Scalability of Blockchain Networks and Smart Contracts"
},
{
"paperId": "16d444cc8a24fd34834e0df0cdbf5f44a11b260b",
"title": "Virtual Resources & Blockchain for Configuration Management in IoT"
},
{
"paperId": "24711b2a7a4dc4d0dad74bbbfeea9140abab047b",
"title": "Managing IoT devices using blockchain platform"
},
{
"paperId": "490369507b3a1e425ff4b7150fc9d083783bb908",
"title": "A review of Internet of Things for smart home: Challenges and solutions"
},
{
"paperId": null,
"title": "Ethereum, Hyperledger Fabric and Cord"
},
{
"paperId": null,
"title": "Psrecord: Record the CPU and Memory Activity of a Process"
},
{
"paperId": "f852c5f3fe649f8a17ded391df0796677a59927f",
"title": "Architecture of the Hyperledger Blockchain Fabric"
},
{
"paperId": "0dbb8a54ca5066b82fa086bbf5db4c54b947719a",
"title": "A NEXT GENERATION SMART CONTRACT & DECENTRALIZED APPLICATION PLATFORM"
},
{
"paperId": "43586b34b054b48891d478407d4e7435702653e0",
"title": "The Tangle"
},
{
"paperId": null,
"title": "Speedometer 2 . 8"
},
{
"paperId": "3c50bb6cc3f5417c3325a36ee190e24f0dc87257",
"title": "ETHEREUM: A SECURE DECENTRALISED GENERALISED TRANSACTION LEDGER"
},
{
"paperId": "058b213e3b407f11d0260f67d703bffdc7042335",
"title": "The controller placement problem"
},
{
"paperId": "826dd6e25651baaf9f50a88077cd91c691dd7327",
"title": "Distributed Services with OpenAFS: for Enterprise and Education"
},
{
"paperId": null,
"title": "Hyperledger Fabric and Cord, FSBC Working Paper"
},
{
"paperId": null,
"title": "Step 2 The answer from the SERVER to the CLIENT with the “HELLO SERVER” string. With that"
}
] | 23,478
|
en
|
[
{
"category": "Law",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/004604a9f58d55c509734450315f02018fd27637
|
[] | 0.874639
|
Legal Analysis of Cryptocurency Utilization in Indonesia
|
004604a9f58d55c509734450315f02018fd27637
|
Rechtsnormen Journal of Law
|
[
{
"authorId": "2226341821",
"name": "Wira Agustian Tri Haryanto"
},
{
"authorId": "2363092868",
"name": "Muhammad Irayadi"
},
{
"authorId": "152692188",
"name": "A. Wahyudi"
}
] |
{
"alternate_issns": null,
"alternate_names": null,
"alternate_urls": null,
"id": null,
"issn": null,
"name": null,
"type": null,
"url": null
}
|
Background. Bitcoin is the world's first digital currency that uses the concept of Cryptocurrency, which is a digital asset designed as a medium of exchange using cryptographic techniques to secure transactions and control the administration of its currency units that are likely to continue to grow in the future. Based on Law No. 7 of 2011 on Currency or cryptocurrencies, Bitcoin cannot be considered as legal tender in Indonesia.
Purpose. It is said not to be a means of payment because the means of payment in Indonesia is the Rupiah, but based on the Regulation of the Minister of Trade of the Republic of Indonesia Number 99 of 2019, crypto assets are one of the commodities that can be used as the subject of futures contracts traded on futures exchanges.
Method. This research uses a statute approach. In addition, a case approach is also used to find out the ratio decidendi used by the Constitutional Court judges in deciding cases of judicial review of laws related to indigenous peoples.
Results. This type of research is normative juridical research. The nature of research in this research is descriptive analytical. The type of data used in this research is library research. The validity of crypto asset transactions based on Indonesian contract law which refers to the Civil Code is valid because it fulfills the terms of the agreement in article 1320 of the Civil Code and is supported by the principles contained in the Civil Code itself, including the principle of freedom of contract, the principle of consensualism, the principle of pacta sunt servanda, and the principle of good faith. Therefore, crypto asset transactions are also legalized according to Law Number 11 of 2008 concerning Electronic Information and Transactions (UU ITE) because crypto asset transactions are carried out online through the internet network.
Conclusion. The Indonesian government then compiled several rules to accommodate interests as guidelines and clarity for the public regarding the government's recognition of the existence of bitcoin and virtual currencies, namely through the policy of the Minister of Trade of the Republic of Indonesia Number 99 of 2019, and based on the rules of the Bappebti Regulation Number 5 of 2019 concerning Technical Provisions for the Implementation of the Crypto Asset Physical Market on the Futures Exchange.
|
# Rechtsnormen Journal of Law | Research Papers
https://journal.ypidathu.or.id/index.php/rjl/
P - ISSN: 2988-4454
E - ISSN: 2988-4462
**Citation:** Haryanto, T, A, W., Irayadi, M.,
Wahyudi, A. (2023). Legal Analysis of
Cryptocurency Utilization in Indonesia.
*Rechtsnormen Journal of Law*, *1* (2), 67–76.
[https://doi.org/10.55849/rjl.v1i2.390](https://doi.org/10.55849/rjl.v1i2.390)
**Correspondence:**
Wira Agustian Tri Haryanto,
[[email protected]](mailto:[email protected])
**Received:** July 12, 2023
**Accepted:** July 15, 2023
**Published:** July 31, 2023
# **Legal Analysis of Cryptocurency Utilization in Indonesia**
## **Wira Agustian Tri Haryanto [1], Muhammad Irayadi [2], Andri Wahyudi [3]**
1 Sekolah Tinggi Ilmu Hukum IBLAM, Indonesia
2 Sekolah Tinggi Ilmu Hukum IBLAM, Indonesia
3 Sekolah Tinggi Ilmu Hukum IBLAM, Indonesia
**ABSTRACT**
**Background.** Bitcoin is the world's first digital currency that uses the
concept of Cryptocurrency, which is a digital asset designed as a
medium of exchange using cryptographic techniques to secure
transactions and control the administration of its currency units that are
likely to continue to grow in the future. Based on Law No. 7 of 2011
on Currency or cryptocurrencies, Bitcoin cannot be considered as legal
tender in Indonesia.
**Purpose.** It is said not to be a means of payment because the means of
payment in Indonesia is the Rupiah, but based on the Regulation of the
Minister of Trade of the Republic of Indonesia Number 99 of 2019,
crypto assets are one of the commodities that can be used as the subject
of futures contracts traded on futures exchanges.
**Method.** This research uses a statute approach. In addition, a case
approach is also used to find out the ratio decidendi used by the
Constitutional Court judges in deciding cases of judicial review of laws
related to indigenous peoples.
**Results.** This type of research is normative juridical research. The
nature of research in this research is descriptive analytical. The type of
data used in this research is library research. The validity of crypto
asset transactions based on Indonesian contract law which refers to the
Civil Code is valid because it fulfills the terms of the agreement in
article 1320 of the Civil Code and is supported by the principles
contained in the Civil Code itself, including the principle of freedom of
contract, the principle of consensualism, the principle of pacta sunt
servanda, and the principle of good faith. Therefore, crypto asset
transactions are also legalized according to Law Number 11 of 2008
concerning Electronic Information and Transactions (UU ITE) because
crypto asset transactions are carried out online through the internet
network.
**Conclusion** . The Indonesian government then compiled several rules
to accommodate interests as guidelines and clarity for the public
regarding the government's recognition of the existence of bitcoin and
virtual currencies, namely through the policy of the Minister of Trade
of the Republic of Indonesia Number 99 of 2019, and based on the
rules of the Bappebti Regulation Number 5 of 2019 concerning
Technical Provisions for the Implementation of the Crypto Asset
Physical Market on the Futures Exchange.
**KEYWORDS**
Legal, Politics, Regulating
## **INTRODUCTION**

The utilization of technology used by the public for electronic transactions must be based on several principles, namely, the principle of legal certainty, which provides a legal basis for the community (Noorsanti et al., 2018). The
principle of benefits, which means that the use of technology aims to improve welfare (Nawari & Ravindran, 2019); the principle of caution, where everyone must pay attention to the possibilities that will occur for themselves and others (Chen et al., 2020); the principle of good faith, where there is no intentional purpose that results in harm to other parties; and the principle of neutral technology, where the use of information technology and electronic transactions can always keep up with the times.

Bitcoin is present as an online payment tool that uses an open-source peer-to-peer payment network. Bitcoin does not take the form of physical currency issued by a bank, nor is it the currency of a country (Abou Jaoude & George Saade, 2019). Bitcoin is the world's first digital currency using the concept of cryptocurrency, which is a digital asset designed as an exchange medium using cryptographic techniques to secure its transactions and control the administration of its currency units, and which is very likely to continue to grow in the future (White et al., 2020). The concept of the currency is identical to the requirements of legal tender, which are unique, non-perishable, and mutually agreed upon between the Bitcoin users themselves.

The phenomenon of Bitcoin as a means of payment has received growing attention from the government and the community; the author also found one scientific work that discusses this, namely a scientific journal by Dhea Nada Safa Prayitno related to the Legality of Bitcoin as a Virtual Payment Instrument in Business Transactions in Indonesia (Troster et al., 2019). The use of Bitcoin is still widely found; bitcoin users still use this means of payment in trade transactions. Cryptocurrency is increasingly recognized by many people in Indonesia (Di Vaio et al., 2020). The recognition of this cryptocurrency can be seen from the blockchain representation, whose impact can be enjoyed directly by the community (consumers), and there are still many other potentials that can be explored, so interest in cryptocurrencies, generally as an investment instrument, actually only increased sharply after the Bitcoin exchange rate experienced a high surge.

Based on Law No. 7 of 2011 regarding Currency, Bitcoin or other cryptocurrency cannot be said to be legal tender in Indonesia (Morel et al., 2020). It is said not to be a means of payment because the means of payment in Indonesia is the Rupiah, but based on the Indonesian Minister of Trade Regulation Number 99 of 2019, crypto assets are one of the commodities that can be used as the subject of futures contracts traded on futures exchanges (Chen et al., 2020).

Bank Indonesia (BI) is a state institution that regulates money circulation throughout Indonesia (Coppola et al., 2019). Apart from being an official regulator, Bank Indonesia is also an institution that has the right to print and circulate official State money (Rupiah) in cooperation with Perum Peruri (Tambe et al., 2019). Regarding Bitcoin and other cryptocurrency policies, Bank Indonesia has taken a firm stance by stating that Bitcoin and other virtual currencies are not legal currencies in the territory of the Republic of Indonesia (Paul et al., 2021). Bank Indonesia initially gave a strong warning to the public and business actors not to use Bitcoin and virtual currencies as a means of payment (Y. Yang et al., 2019). BI's statement regarding this matter was issued in Press Release No. 16/6/6Dkom, which states that Bitcoin and various other virtual currencies are not legal tender in the territory of Indonesia (Chandrasekar et al., 2020). All risks related to the use and ownership of Bitcoin are borne by the owners and users themselves.

It is also explained that Bank Indonesia has currently conducted a study or assessment of the Central Bank Digital Currency-Digital Rupiah to see the potential and benefits of digital currencies, including design, technology, and risk mitigation (W.-Y. Yang et al., 2019). Bank Indonesia is also coordinating with other central banks, including through international forums, to deepen the
issuance of digital currencies or the Central Bank Digital Currency-Digital Rupiah. The Central Bank Digital Currency-Digital Rupiah will be fortified with a firewall to avoid cyber attacks, in terms of both prevention and resolution (Karimi-Maleh et al., 2022). The design and security system must be prepared before the digital rupiah can be used by the public. Bank Indonesia also explains the difference between the Central Bank Digital Currency-Digital Rupiah and electronic money (Riess et al., 2019). The Central Bank Digital Currency-Digital Rupiah is digital money issued by the central bank, so it is an obligation of the central bank to its holders (Luque et al., 2019). Electronic money is a payment instrument issued by a private party or industry and is an obligation of the electronic money issuer to the holder (Zhang et al., 2020). Bank Indonesia also emphasized that the legal currency for transactions at this time, according to Indonesian law, is only the rupiah, both cash and non-cash. Bank Indonesia sees from the monetary side that there will be no difference from the current conditions in society, such as the use of cartal money (paper and metal money), money stored in accounts, and the convenience of using digital banking, electronic money, and electronic wallets (Stuart et al., 2019). The presence of a Central Bank Digital Currency (CBDC) applied throughout the Central Bank provides convenience in digital transformation from the community's side, while from the Central Bank's side management will be easier because it is decentralized.

## **RESEARCH METHODOLOGY**

To discuss the problems that have been formulated and limited as mentioned above, the following research methods and techniques are used in preparing and completing this study (Pretorius et al., 2021). The type of research conducted is normative juridical research (Nosyk et al., 2021). The nature of research in this study is descriptive analytical. The type of data used in this research is library research (Callhoff et al., 2020). The data source used in this research is secondary data in the form of primary legal materials: Law No. 7 of 2011 concerning Currency; Law Number 3 of 2011 concerning fund transfers; and Bank Indonesia Regulation Number 20/PBI/2018 of 2018 concerning Electronic Currency (Makdessi et al., 2019). Secondary legal materials are legal materials obtained from books and reports on the results of legal research related to the problem under study. Tertiary legal materials are legal materials that complement the others by providing additional guidance on or explanation of primary and secondary legal materials (Elvén et al., 2022). This tertiary legal material is found in works such as legal dictionaries, language dictionaries, encyclopedias, and so on (Soerjono Soekanto & Sri Mamudji, 2001).

## **RESULT AND DISCUSSION**

**The Existence of Digital Currency as a Means of Payment Under Indonesian Law**

In carrying out legal payment transactions in the national scope, and in order to ensure legal protection and legal certainty, Bank Indonesia as the Central Bank has the authority to make and issue regulations implementing the Law, so that Bank Indonesia is allowed to impose administrative sanctions (Mao et al., 2019); administrative sanctions are one of the legal consequences arising from Bitcoin transactions as a means of payment in Indonesia (Scarabottolo et al., 2022).

Bank Indonesia, under Law Number 3 of 2004 concerning Amendments to Law of the Republic of Indonesia Number 23 of 1999 concerning Bank Indonesia, has an important role in regulating and maintaining a smooth payment system; one of the powers of Bank Indonesia is to determine payment instruments that can be used by the public, including electronic payment instruments (Ardiano & Rochaeti, 2022).
The regulation of money or currency in Indonesia is based on the Currency Law. In this law, money is a symbol of state sovereignty that must be respected and taken pride in by all Indonesian citizens. As a symbol of sovereignty, the use of money as legal tender applies in the entire territory of Indonesia (Bojanic & Warnick, 2020), including ships and aircraft flying the flag of the Republic of Indonesia, the Embassy of the Republic of Indonesia, and other representative offices of the Republic of Indonesia abroad (article 1). The rupiah must be used in: (a) every transaction that has a payment purpose; (b) settlement of other obligations that must be fulfilled with money; and/or (c) other financial transactions (article 21 paragraph 1), with the exception of: (a) certain transactions in the context of implementing the state revenue and expenditure budget; (b) receipt or provision of grants from or to foreign countries; (c) international trade transactions; (d) deposits in banks in foreign currency; or (e) international financing transactions (article 21 paragraph 2). Furthermore, those who violate this or do not use rupiah shall be punished with a maximum imprisonment of 1 (one) year and a maximum fine of Rp 200,000,000.00 (two hundred million rupiah) (article 33). The rupiah currency consists of "paper rupiah" and "metal rupiah" (article 2). Under the provisions of this law, cryptocurrency clearly cannot be categorized as "money" or "currency". Cryptocurrencies of various types have no legal basis to be used as a transaction tool in Indonesia (Assyamiri & Hardinanto, 2022).

Thus, it is understandable that Bank Indonesia as the Central Bank, which has the responsibility to maintain public trust in banks, issued Bank Indonesia Regulation Number 18/40/PBI/2016 concerning the Implementation of Payment Transaction Processing, which regulates crypto money as virtual currency (Njogu, 2021). The above Bank Indonesia regulation is a response to the development of fintech (financial technology) in the era of industrial revolution 4.0. Bank Indonesia responds to the needs of the community by prioritizing prudential principles and adequate risk management and by paying attention to expanding access, national interests, and consumer protection (consideration of PBI 18/40/PBI/2016). With this regulation, Bank Indonesia actually answers the ambiguity of the legality of crypto money because, based on Law Number 11/2008, crypto money meets the minimum requirements of a legalized electronic system in Indonesia (Bagus & Bhiantara, 2018).

Bank Indonesia Regulation No. 18/40/PBI/2016 is very limited in regulating cryptocurrencies. There is only one article that normatively states that virtual currency is prohibited in the implementation of payment systems (Article 34). The word used is virtual currency, not cryptocurrency (Kharismawan, 2021). However, the statement in article 34 letter a is explained as follows: what is meant by virtual currency is digital money issued by parties other than monetary authorities, obtained by mining, purchasing, or transferring rewards, including Bitcoin, BlackCoin, Dash, Dogecoin, Litecoin, Namecoin, Nxt, Peercoin, Primecoin, Ripple, and Ven. Not included in the definition of virtual currency is electronic money. The definition of virtual currency clearly mentions several examples, such as Bitcoin, Dash, Dogecoin, Litecoin, and Ripple, which are known as popular cryptocurrencies. However, in this regulation virtual currency is grouped as digital money. So it can be understood that the use of virtual currency or crypto money is prohibited because it is not issued by the competent authority.

Oscar Darmawan, CEO of Indodax, has a different opinion because he does not view crypto money as digital money. The way cryptocurrency works, according to him, is like the Visa or Mastercard payment system. Oscar emphasizes that Bitcoin (which is the most popular cryptocurrency) is a protocol, not a form of digital currency. When a country legalizes Bitcoin as a means of payment, it will automatically involve the local currency (Vanani & Suselo, 2021). Bank Indonesia also issued another regulation, namely Bank Indonesia Regulation Number
19/12/PBI/2017 on the Implementation of Financial Technology. In its provisions, Bank Indonesia reiterates that virtual currency is prohibited from being used by financial technology providers (Article 8 paragraph 2). In addition to being required to use rupiah, financial technology providers are also required to "apply the principles of anti-money laundering and prevention of terrorism financing" (Article 8 paragraph 1 point e). The explanation states that what is meant by virtual currency is digital money issued by parties other than monetary authorities, obtained by mining, purchasing, or transferring rewards. The prohibition on conducting payment system activities using virtual currency exists because virtual currency is not legal tender in Indonesia (Puanandini, 2021).

Another regulation that also mentions virtual currency is Bank Indonesia Regulation Number 20/6/PBI/2018 on Electronic Money. Just like the previous two regulations, this regulation is a response to the increasingly strong digital financial climate. Article 62 states that electronic money payment processing is prohibited from using virtual currency, with the same explanation, namely that it is money not issued by the monetary authority (Dwi Kurniawan et al., 2021). Thus, reading the regulations issued by Bank Indonesia, it can be said that both electronic money and virtual currency are digital money. The difference is that electronic money is considered legal, while virtual currency, in this case crypto money, is not legal as a means of payment.

Against the background of providing protection for the public and legal certainty for crypto money, the ministry issued Regulation of the Minister of Trade No. 99/2018 on the General Policy for the Implementation of Crypto Asset Futures Trading. In this regulation, it turns out that there is a shift in provisions or definitions. Crypto money is no longer referred to as digital money, but as a commodity. Crypto assets can be used as the Subject of Futures Contracts traded on the Futures Exchange (article 1). This regulation was then technically followed by Regulation of the Commodity Futures Trading Supervisory Agency (BAPPEBTI) Number 5 of 2019 concerning Technical Provisions for the Implementation of the Physical Market for Crypto Assets on the Futures Exchange (Nurullia, 2021). By turning cryptocurrencies into "merchandise", the benefits and risks of price and exchange rate movements are transferred to investors or members of the Futures Exchange. However, tradable crypto assets must meet strict requirements.

With this shift, regulation has two ways of stipulation. On the one hand, Bank Indonesia defines it as prohibited digital money, while the Ministry of Trade defines it as a tradable "digital asset". The Financial Services Authority is neutral on this distinction and prefers to supervise its financial institutions. This misalignment leaves the law in Indonesia still in the space between (Fajri & Yamin, 2019). The government still has homework to do in building strong economic laws, especially in the regulation of this crypto money, taking into account welfare and all the economic changes that occur. An excuse is a reason that can be used as a basis for erasing (forgiving) the guilt of a defendant who has committed an unlawful act because the defendant is considered innocent. The reasons that can be used as a basis for forgiveness are the forms of acts committed by the defendant, such as acts committed due to force (overmacht) or an act committed outside the realm of consciousness (Noorsanti et al., 2018).

**Factors Causing Criminal Acts Involving Educators and Education Personnel and Legal Efforts in Overcoming Them**

Bank Indonesia as a monetary regulator appealed through a press release circulated through social media on January 13, 2018, entitled Bank Indonesia Warns All Parties Not to Sell, Buy, or Trade Virtual Currency, Number 20/4/Dkom (Nisa & Rofiq, 2021). The release confirms that Bank Indonesia does not recognize Bitcoin or any other digital currency as legal tender. From the broadcast, it can be seen that Bank Indonesia strictly prohibits and does not
-----
Legal Analysis of Cryptocurency Utilization in Indonesia | Research Papers
recognize any digital currency as legal tender. Regulations regarding legal tender in Indonesia are governed by Law Number 7 of 2011 on Currency (the Currency Law). Article 1 number 2 of the Currency Law determines that money is legal tender, and Article 1 number 1 expressly determines that the currency issued by Indonesia is the rupiah (Kusumaningtyas & Derozari, 2019). Under Article 21 paragraph (1) of the Currency Law, the rupiah must be used in every transaction that has the purpose of payment, in the settlement of other obligations that must be fulfilled with money, and/or in other financial transactions carried out in the territory of the Unitary State of the Republic of Indonesia (Rani et al., 2021). Bank Indonesia has even stated that bitcoin and other virtual currencies are not currencies or legal tender in Indonesia, as set out in Bank Indonesia Press Release No. 16/6/Dkom, "Bank Indonesia Statement Regarding Bitcoin and Other Virtual Currencies" (Harahap et al., 2022). In that statement, Bank Indonesia emphasized that all risks arising from the use of bitcoin and other virtual currencies are borne by their users, and that the Government of Indonesia is not responsible for any risks users may experience.

Along with these developments, Indonesia then regulated cryptocurrency as a commodity, that is, as crypto assets that can be bought and sold. The Indonesian government compiled several rules to accommodate the interests of crypto asset trading and to give the public guidance and clarity regarding the government's recognition of bitcoin and virtual currency, namely through Regulation of the Minister of Trade of the Republic of Indonesia Number 99 of 2018 concerning the General Policy for the Implementation of Crypto Asset Futures Trading, which designates crypto assets as commodities that can be used as subjects of futures contracts traded on the Futures Exchange (Nurjannah & Artha, 2019), as specified in Article 1. Further arrangements are regulated by the Commodity Futures Trading Supervisory Agency in Bappebti Regulations Number 3 of 2019 and Number 5 of 2019. Under Bappebti Regulation Number 5 of 2019 concerning Technical Provisions for the Implementation of the Physical Market for Crypto Assets on the Futures Exchange (Dwi Kurniawan et al., 2021), to ensure legal certainty and protection for cryptocurrency asset owners, all cryptocurrency marketplaces must fulfill the conditions stipulated in the Bappebti rules: submitting all requested files, prioritizing sound business-management principles such as the right of futures exchange members to obtain open value, and ensuring that consumers remain protected in order to prevent money laundering and the financing of terrorism and the proliferation of weapons of mass destruction (Disemadi & Delvin, 2021).
PT Indodax, in its effort to obtain an official license from Bappebti as a Crypto Asset Physical Trader, must fulfill the requirements in Bappebti Regulation Number 5 of 2019 concerning Technical Provisions for the Implementation of the Crypto Asset Physical Market on the Futures Exchange, including paid-up capital of IDR 1,500,000,000 and ISO (International Organization for Standardization) certification. The new Bappebti regulations are considered still lacking in terms of consumer protection, in particular regarding complaint procedures for crypto asset owners who suffer a loss where the seller is not a company (institution) but an individual selling their own assets (Aufima, 2019).

In crypto asset transactions on the Futures Exchange, legal relations arise between the parties. Bappebti Regulation Number 5 of 2019 concerning Technical Provisions for the Implementation of the Crypto Asset Physical Market also regulates the parties to crypto asset trading. These parties include the Futures Exchange,
Futures Exchange Members (divided into Crypto Asset Physical Traders and Crypto Asset Customers), the Futures Clearing House, and the Crypto Asset Depository Institution (Amdar, 2021). Bappebti Regulation Number 5 of 2019 thus identifies two parties to a crypto asset trading transaction: Crypto Asset Physical Traders and Crypto Asset Customers. The trader acts as the party that facilitates crypto asset transactions between one customer and another; customers, referred to as Crypto Asset Customers, use the services of Crypto Asset Traders to buy and sell assets in the Crypto Asset Physical Market (Rohman, 2021).

Bappebti's regulation of cryptocurrency investment does not guarantee that no disputes will arise between cryptocurrency asset owners and cryptocurrency marketplaces. Under the Bappebti rules, settlement is prioritized through consensus, namely deliberation. One type of non-litigation dispute resolution is arbitration. Article 1 number 1 of Law Number 30 of 1999 concerning Arbitration and Alternative Dispute Resolution states that arbitration is a way of resolving a civil dispute outside the public courts based on an arbitration agreement made in writing by the parties to the dispute (Tampi, 2017). If no consensus is reached, the disputing parties to a crypto asset physical trading transaction can seek resolution through the forum provided by the Futures Exchange, namely the Commodity Futures Trading Arbitration Board (BAKTI). BAKTI specializes in civil disputes related to commodity futures trading, warehouse receipt systems, and/or other transactions regulated by Bappebti (Honggowongso & Kholil, 2021). If settlement through these alternative channels (mediation, arbitration, and BAKTI) is not achieved, the parties can choose to resolve the dispute through the Consumer Dispute Resolution Agency (BPSK), as provided in Article 52 of Law Number 8 of 1999 concerning Consumer Protection, which authorizes BPSK to handle and settle consumer disputes by way of mediation, arbitration, or conciliation (Akub, 2020).

In connection with legal protection against losses suffered by crypto asset owners, as consumers, in crypto asset transactions involving elements of fraud by business actors selling crypto assets, crypto asset owners can file a dispute resolution claim with BPSK, whose decision is final and binding. Criminal sanctions apply to perpetrators of cybercrimes that cause losses to crypto asset customers or owners in the crypto asset physical market, ranging from theft of crypto assets from a person's wallet to fraud that traps crypto asset owners into transferring funds to the fraudster's wallet address. These crimes are subject to sanctions under Law Number 11 of 2008 concerning Electronic Information and Transactions (the ITE Law), namely Article 45, which sets out criminal provisions and imposes prison sentences and fines (Puanandini, 2021).
There are two types of cybercrime that can target crypto assets (Rsya, 2018):
(1) Hacking: a technique used by hackers, crackers, intruders, or attackers to attack a system, network, or application by exploiting its weaknesses, with the intention of gaining access to data and systems. The perpetrator of hacking may be charged under Article 30 paragraph 1 jo. Article 46 of the ITE Law.
(2) Scams: any form of planned action that aims to obtain money by deceiving or outsmarting other people. Under the ITE Law, online fraud occurs when the perpetrator intentionally and without right spreads false and misleading news that results in
consumer losses in electronic transactions. On this basis, the perpetrator can be charged under Article 28 paragraph 1 jo. Article 45A of the ITE Law, as well as Article 378 of the Criminal Code (KUHP). Civil dispute resolution through the courts is regulated in Articles 38 and 39 of the ITE Law and Article 23 of Law Number 8 of 1999 concerning Consumer Protection: the injured party can file a civil lawsuit based on an unlawful act (PMH), namely fraud (bedrog), in accordance with the provisions of the laws and regulations (Julianti & Apriani, 2021). Under Article 1328 of the Civil Code, fraud may not merely be alleged but must be proven. For a fraud argument to succeed, it must be shown that the false impression was created by a series of deceptions (kunstgrepen). Proving a series of lies or deceptions is best done in a criminal court rather than a civil court. This is in line with the principle of proof that "whoever asserts something is obliged to prove it" (affirmanti incumbit probatio), as stipulated in Article 1865 of the Civil Code (Damar Juniarto, 2019).

**CONCLUSION**

The validity of crypto asset transactions under Indonesian contract law, which refers to the Civil Code, is established because such transactions fulfill the terms of a valid agreement in Article 1320 of the Civil Code and are supported by the principles contained in the Civil Code itself, including freedom of contract, consensualism, pacta sunt servanda, and good faith. Crypto asset transactions are also recognized under Law Number 11 of 2008 concerning Electronic Information and Transactions (the ITE Law), because they are carried out online via the internet. The Indonesian government then compiled several rules to accommodate these interests and to give the public guidance and clarity regarding the government's recognition of bitcoin and virtual currency, namely Regulation of the Minister of Trade of the Republic of Indonesia Number 99 of 2018, as well as Bappebti Regulation Number 5 of 2019 concerning Technical Provisions for the Implementation of the Physical Market for Crypto Assets on the Futures Exchange. To ensure legal certainty and protection for cryptocurrency asset owners, all cryptocurrency marketplaces must fulfill the conditions regulated in the Bappebti rules. With these rules, a marketplace that will trade cryptocurrency, and its funds, are vetted in advance, which should minimize fraud committed by cryptocurrency marketplaces.

The regulation of money or currency in Indonesia is based on Law No. 7 of 2011 concerning Currency. In this law, money is a symbol of state sovereignty that must be respected and taken pride in by all Indonesian citizens, and as a symbol of sovereignty it serves as legal tender. Indonesian law already contains provisions relevant to crypto money: Article 2 paragraph (1) of the Currency Law states that the currency of the Unitary State of the Republic of Indonesia is the rupiah, and paragraph (2) states that the rupiah consists of paper and coin rupiah. Under these provisions, crypto money clearly cannot be categorized as money or currency.
Crypto money of any type therefore has no legal basis to be used as a means of transaction in Indonesia. This shows that the government is aware of the need to establish the rule of law in the new climate of human economic activity in the digital era. In its normative provisions, however, there are still conflicting perspectives on crypto money. On the one hand, Bank Indonesia treats it as digital money and therefore prohibits it as a means of payment; on the other, the Ministry of Trade treats it as a "digital asset" that may be traded on the Futures Exchange. Two legal perspectives on the same object inevitably cause confusion in the use of legal references.
**AUTHORS' CONTRIBUTION**

Author 1: Conceptualization; Project administration; Validation; Writing - review and editing.
Author 2: Conceptualization; Data curation; Investigation.
Author 3: Data curation; Investigation; Other contribution; Resources; Visualization.

**REFERENCES**

Abou Jaoude, J., & George Saade, R. (2019). Blockchain Applications – Usage in Different Domains. IEEE Access, 7, 45360–45381. https://doi.org/10.1109/ACCESS.2019.2902501
Bojanic, D. C., & Warnick, R. B. (2020). The Relationship between a Country's Level of Tourism and Environmental Performance. Journal of Travel Research, 59(2), 220–230. https://doi.org/10.1177/0047287519827394
Callhoff, J., Albrecht, K., Redeker, I., Lange, T., Goronzy, J., Günther, K., Zink, A., Schmitt, J., Saam, J., & Postler, A. (2020). Disease Burden of Patients With Osteoarthritis: Results of a Cross‐Sectional Survey Linked to Claims Data. Arthritis Care & Research, 72(2), 193–200. https://doi.org/10.1002/acr.24058
Chandrasekar, R., Chandrasekhar, S., Sundari, K. K. S., & Ravi, P. (2020). Development and validation of a formula for objective assessment of cervical vertebral bone age. Progress in Orthodontics, 21(1), 38. https://doi.org/10.1186/s40510-020-00338-0
Chen, Z., Li, C., & Sun, W. (2020). Bitcoin price prediction using machine learning: An approach to sample dimension engineering. Journal of Computational and Applied Mathematics, 365, 112395. https://doi.org/10.1016/j.cam.2019.112395
Coppola, L., Cianflone, A., Grimaldi, A. M., Incoronato, M., Bevilacqua, P., Messina, F., Baselice, S., Soricelli, A., Mirabelli, P., & Salvatore, M. (2019). Biobanking in health care: Evolution and future directions. Journal of Translational Medicine, 17(1), 172. https://doi.org/10.1186/s12967-019-1922-3
Di Vaio, A., Palladino, R., Hassan, R., & Escobar, O. (2020). Artificial intelligence and business models in the sustainable development goals perspective: A systematic literature review. Journal of Business Research, 121, 283–314. https://doi.org/10.1016/j.jbusres.2020.08.019
Elvén, M., Kerstis, B., Stier, J., Hellström, C., Von Heideken Wågert, P., Dahlen, M., & Lindberg, D. (2022). Changes in Physical Activity and Sedentary Behavior before and during the COVID-19 Pandemic: A Swedish Population Study. International Journal of Environmental Research and Public Health, 19(5), 2558. https://doi.org/10.3390/ijerph19052558
Fajri, A., & Yamin, M. (2019). Digital Currency like Bitcoin within the International Monetary System Field. Verity: Jurnal Ilmiah Hubungan Internasional (International Relations Journal), 10(20). https://doi.org/10.19166/verity.v10i20.1458
Karimi-Maleh, H., Darabi, R., Shabani-Nooshabadi, M., Baghayeri, M., Karimi, F., Rouhi, J., Alizadeh, M., Karaman, O., Vasseghian, Y., & Karaman, C. (2022). Determination of D&C Red 33 and Patent Blue V Azo dyes using an impressive electrochemical sensor based on carbon paste electrode modified with ZIF-8/g-C3N4/Co and ionic liquid in mouthwash and toothpaste as real samples. Food and Chemical Toxicology, 162, 112907. https://doi.org/10.1016/j.fct.2022.112907
Kusumaningtyas, R. F., & Derozari, R. G. (2019). Tinjauan Yuridis Kepastian Hukum Penggunaan Virtual Currency dalam Transaksi Elektronik (Ditinjau dari Undang-Undang Nomor 7 Tahun 2011 Tentang Mata Uang). Jurnal Penelitian Hukum De Jure, 19(3). https://doi.org/10.30641/dejure.2019.v19.339-348
Luque, A., Carrasco, A., Martín, A., & De Las Heras, A. (2019). The impact of class imbalance in classification performance metrics based on the binary confusion matrix. Pattern Recognition, 91, 216–231. https://doi.org/10.1016/j.patcog.2019.02.023
Makdessi, C. J., Day, C., & Chaar, B. B. (2019). Challenges faced with opioid prescriptions in the community setting – Australian pharmacists' perspectives. Research in Social and Administrative Pharmacy, 15(8), 966–973. https://doi.org/10.1016/j.sapharm.2019.01.017
Mao, S.-J., Shen, J., Xu, F., & Zou, C.-C. (2019). Quality of life in caregivers of young children with Prader–Willi syndrome. World Journal of Pediatrics, 15(5), 506–510. https://doi.org/10.1007/s12519-019-00311-w
Morel, L., Yao, Z., Cladé, P., & Guellati-Khélifa, S. (2020). Determination of the fine-structure constant with an accuracy of 81 parts per trillion. Nature, 588(7836), 61–65. https://doi.org/10.1038/s41586-020-2964-7
Nawari, N. O., & Ravindran, S. (2019). Blockchain and the built environment: Potentials and limitations. Journal of Building Engineering, 25, 100832. https://doi.org/10.1016/j.jobe.2019.100832
Nosyk, B., Slaunwhite, A., Urbanoski, K., Hongdilokkul, N., Palis, H., Lock, K., Min, J. E., Zhao, B., Card, K. G., Barker, B., Meilleur, L., Burmeister, C., Thomson, E., Beck-McGreevy, P., & Pauly, B. (2021). Evaluation of risk mitigation measures for people with substance use disorders to address the dual public health crises of COVID-19 and overdose in British Columbia: A mixed-method study protocol. BMJ Open, 11(6), e048353. https://doi.org/10.1136/bmjopen-2020-048353
Paul, D., Sanap, G., Shenoy, S., Kalyane, D., Kalia, K., & Tekade, R. K. (2021). Artificial intelligence in drug discovery and development. Drug Discovery Today, 26(1), 80–93. https://doi.org/10.1016/j.drudis.2020.10.010
Pretorius, B., Ambuko, J., Papargyropoulou, E., & Schönfeldt, H. C. (2021). Guiding Nutritious Food Choices and Diets along Food Systems. Sustainability, 13(17), 9501. https://doi.org/10.3390/su13179501
Riess, A. G., Casertano, S., Yuan, W., Macri, L. M., & Scolnic, D. (2019). Large Magellanic Cloud Cepheid Standards Provide a 1% Foundation for the Determination of the Hubble Constant and Stronger Evidence for Physics beyond ΛCDM. The Astrophysical Journal, 876(1), 85. https://doi.org/10.3847/1538-4357/ab1422
Scarabottolo, C. C., Tebar, W. R., Gobbo, L. A., Ohara, D., Ferreira, A. D., Da Silva Canhin, D., & Christofaro, D. G. D. (2022). Analysis of different domains of physical activity with health-related quality of life in adults: 2-year cohort. Health and Quality of Life Outcomes, 20(1), 71. https://doi.org/10.1186/s12955-022-01981-3
Stuart, T., Butler, A., Hoffman, P., Hafemeister, C., Papalexi, E., Mauck, W. M., Hao, Y., Stoeckius, M., Smibert, P., & Satija, R. (2019). Comprehensive Integration of Single-Cell Data. Cell, 177(7), 1888-1902.e21. https://doi.org/10.1016/j.cell.2019.05.031
Tambe, P., Cappelli, P., & Yakubovich, V. (2019). Artificial Intelligence in Human Resources Management: Challenges and a Path Forward. California Management Review, 61(4), 15–42. https://doi.org/10.1177/0008125619867910
Troster, V., Tiwari, A. K., Shahbaz, M., & Macedo, D. N. (2019). Bitcoin returns and risk: A general GARCH and GAS analysis. Finance Research Letters, 30, 187–193. https://doi.org/10.1016/j.frl.2018.09.014
White, R., Marinakis, Y., Islam, N., & Walsh, S. (2020). Is Bitcoin a currency, a technology-based product, or something else? Technological Forecasting and Social Change, 151, 119877. https://doi.org/10.1016/j.techfore.2019.119877
Yang, W.-Y., Melgarejo, J. D., Thijs, L., Zhang, Z.-Y., Boggia, J., Wei, F.-F., Hansen, T. W., Asayama, K., Ohkubo, T., Jeppesen, J., Dolan, E., Stolarz-Skrzypek, K., Malyutina, S., Casiglia, E., Lind, L., Filipovský, J., Maestre, G. E., Li, Y., Wang, J.-G., … for The International Database on Ambulatory Blood Pressure in Relation to Cardiovascular Outcomes (IDACO) Investigators. (2019). Association of Office and Ambulatory Blood Pressure With Mortality and Cardiovascular Outcomes. JAMA, 322(5), 409. https://doi.org/10.1001/jama.2019.9811
Yang, Y., Gao, W., Guo, S., Mao, Y., & Yang, Y. (2019). Introduction to BeiDou‐3 navigation satellite system. Navigation, 66(1), 7–18. https://doi.org/10.1002/navi.291
Zhang, S., Yao, L., Sun, A., & Tay, Y. (2020). Deep Learning Based Recommender System: A Survey and New Perspectives. ACM Computing Surveys, 52(1), 1–38. https://doi.org/10.1145/3285029
**Copyright Holder:**
© Wira Agustian Tri Haryanto et al. (2023)

**First Publication Right:**
© Journal Emerging Technologies in Education

**This article is under:**
a Creative Commons Attribution-ShareAlike (CC BY-SA) license.
-----
| 9,774
|
{
"disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.55849/rjl.v1i2.390?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.55849/rjl.v1i2.390, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "CCBYSA",
"status": "HYBRID",
"url": "https://journal.ypidathu.or.id/index.php/rjl/article/download/390/156"
}
| 2,023
|
[
"JournalArticle",
"Review"
] | true
| 2023-07-24T00:00:00
|
[
{
"paperId": "b6d1c2cd07cb87e90a66451331aabcf4ba36ac0b",
"title": "Analysis of different domains of physical activity with health-related quality of life in adults: 2-year cohort"
},
{
"paperId": "8926c6d274e63184818bbe311fa298d655cdd364",
"title": "Determination of D&C Red 33 and Patent Blue V Azo dyes using an impressive electrochemical sensor based on carbon paste electrode modified with ZIF-8/g-C3N4/Co and ionic liquid in mouthwash and toothpaste as real samples."
},
{
"paperId": "daaff8b90c31509b019ef1fd676d5710db4489cf",
"title": "Changes in Physical Activity and Sedentary Behavior before and during the COVID-19 Pandemic: A Swedish Population Study"
},
{
"paperId": "ccb8d20a52be737cc14d2e30e9ada527a7c5ba99",
"title": "Guiding Nutritious Food Choices and Diets along Food Systems"
},
{
"paperId": "78bfe68f1f738172db0ef2662d4aad70f45efa6c",
"title": "Evaluation of risk mitigation measures for people with substance use disorders to address the dual public health crises of COVID-19 and overdose in British Columbia: a mixed-method study protocol"
},
{
"paperId": "e369b0404c4bed6e4aa7835b4e051c7616c27b55",
"title": "Determination of the fine-structure constant with an accuracy of 81 parts per trillion"
},
{
"paperId": "4dabbc4ff5b22def26576b04b078d1f45a997d85",
"title": "Artificial intelligence and business models in the sustainable development goals perspective: A systematic literature review"
},
{
"paperId": "715f30e22ef0b5f9c15be92bafc296cbe570fdd2",
"title": "Development and validation of a formula for objective assessment of cervical vertebral bone age"
},
{
"paperId": "cec3d533193d922b73b96e8556198f113e1de934",
"title": "Bitcoin price prediction using machine learning: An approach to sample dimension engineering"
},
{
"paperId": "d6faa1112ae766f50510fee65b5a8f7ff29515f7",
"title": "Is Bitcoin a currency, a technology-based product, or something else?"
},
{
"paperId": "a73e0b3cbff7fa2d614ee1827757098dd42c8b35",
"title": "Disease Burden of Patients With Osteoarthritis: Results of a Cross‐Sectional Survey Linked to Claims Data"
},
{
"paperId": "f2add59d6c7f4831d4750a3e5a87e392fbbe64b1",
"title": "The Relationship between a Country’s Level of Tourism and Environmental Performance"
},
{
"paperId": "0efc0f514712cd33106d278ca183e39d92524dab",
"title": "Tinjauan Yuridis Kepastian Hukum Penggunaan Virtual Currency dalam Transaksi Elektronik (Ditinjau dari Undang-Undang Nomor 7 Tahun 2011 Tentang Mata Uang)"
},
{
"paperId": "351a63ecb22115ce567c8bbcad349392a02b0677",
"title": "Bitcoin returns and risk: A general GARCH and GAS analysis"
},
{
"paperId": "45b12c0e678b869f450e25291acb7fd8e6eb3390",
"title": "Quality of life in caregivers of young children with Prader–Willi syndrome"
},
{
"paperId": "a5ccb045ec9ffe920af52725fa691a54e35731be",
"title": "Challenges faced with opioid prescriptions in the community setting - Australian pharmacists' perspectives."
},
{
"paperId": "7491760fdef8f8703c18428c08547530dc839bc5",
"title": "The impact of class imbalance in classification performance metrics based on the binary confusion matrix"
},
{
"paperId": "0c723bf1ddf14e71d9cc66a11c48be615db07ee7",
"title": "Biobanking in health care: evolution and future directions"
},
{
"paperId": "80c5d486c73bc43fcbbfcfbbb1971c1a72a8f27b",
"title": "Artificial Intelligence in Human Resources Management: Challenges and a Path Forward"
},
{
"paperId": "aa905a45c2acab62c00fc5b44ef30918fe72cf29",
"title": "Large Magellanic Cloud Cepheid Standards Provide a 1% Foundation for the Determination of the Hubble Constant and Stronger Evidence for Physics beyond ΛCDM"
},
{
"paperId": "1f00c4c1d91f6f38dbe5f5483cf01585d50e7362",
"title": "Digital Currency like Bitcoin within the International Monetary System Field"
},
{
"paperId": "659dd200ab51398295b55c7ea1779684d21d28c7",
"title": "Blockchain Applications – Usage in Different Domains"
},
{
"paperId": "2287a3930a7568a956aae5f3f037efe8fed675e7",
"title": "Comprehensive Integration of Single-Cell Data"
},
{
"paperId": "a2a12e82c0cce1bd72dff5a5edf5ac7eb9cbfe2c",
"title": "Introduction to BeiDou‐3 navigation satellite system"
},
{
"paperId": "4e3cf1f761b8749afbac46ab949ed30896d3f44a",
"title": "Artificial Intelligence in Drug Discovery and Development"
},
{
"paperId": null,
"title": "Legal Analysis of Cryptocurency Utilization in Indonesia |"
}
] | 9,774
|
en
|
[
{
"category": "Medicine",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/0048b090dd3baa9b503c885ab93601fb8b8b6cfd
|
[] | 0.940737
|
Keynote speakers: Charting our future together: Turning discovery science into precision health
|
0048b090dd3baa9b503c885ab93601fb8b8b6cfd
|
[
{
"authorId": "2913503",
"name": "G. Gibbons"
}
] |
{
"alternate_issns": null,
"alternate_names": null,
"alternate_urls": null,
"id": null,
"issn": null,
"name": null,
"type": null,
"url": null
}
| null |
# KEYNOTE SPEAKERS
## GARY H. GIBBONS, M.D.
### NATIONAL HEART, LUNG, AND BLOOD INSTITUTE
MONDAY, 6 NOVEMBER, 2017 8:45AM – 9:45AM
Gary H. Gibbons, M.D., is Director of the National Heart, Lung, and Blood Institute (NHLBI) at the National
Institutes of Health (NIH), where he oversees the third largest institute at the NIH, with an annual budget of
approximately $3 billion and a staff of nearly 1,000 federal employees. NHLBI provides global leadership for
research, training, and education programs to promote the prevention and treatment of heart, lung, and
blood diseases and enhance the health of all individuals so that they can live longer and more fulfilling lives.
Since being named Director of the NHLBI, Dr. Gibbons has enhanced the NHLBI investment in fundamental
discovery science, steadily increasing the payline and number of awards for established and early stage
investigators. His commitment to nurturing the next generation of scientists is manifest in expanded funding for
career development and loan repayment awards as well as initiatives to facilitate the transition to
independent research awards.
Dr. Gibbons provides leadership to advance several NIH initiatives and has made many scientific contributions
in the fields of vascular biology, genomic medicine, and the pathogenesis of vascular diseases. His research
focuses on investigating the relationships between clinical phenotypes, behavior, molecular interactions, and
social determinants on gene expression and their contribution to cardiovascular disease. Dr. Gibbons has
received several patents for innovations derived from his research in the fields of vascular biology and the
pathogenesis of vascular diseases.
Dr. Gibbons earned his undergraduate degree from Princeton University in Princeton, N.J., and graduated
magna cum laude from Harvard Medical School in Boston. He completed his residency and cardiology
fellowship at the Harvard-affiliated Brigham and Women’s Hospital in Boston. Dr. Gibbons was a member of the
faculty at Stanford University in Stanford, CA, from 1990-1996, and at Harvard Medical School from 1996-1999.
He joined the Morehouse School of Medicine in 1999, where he served as the founding director of the
Cardiovascular Research Institute, chairperson of the Department of Physiology, and professor of physiology
and medicine at the Morehouse School of Medicine, in Atlanta. While at Morehouse School of Medicine, Dr.
Gibbons served as a member of the National Heart, Lung, and Blood Advisory Council from 2009-2012.
Throughout his career, Dr. Gibbons has received numerous honors, including election to the former Institute of
Medicine of the National Academies of Sciences (now National Academy of Medicine); selection as a Robert
Wood Johnson Foundation Minority Faculty Development Awardee; selection as a Pew Foundation Biomedical
Scholar; and recognition as an Established Investigator of the American Heart Association (AHA).
-----
#### Charting Our Future Together: Turning Discovery Science into Precision Health
For nearly 70 years, the National Heart, Lung, and Blood Institute (NHLBI) has provided global leadership for
research, training, and education programs to promote the prevention and treatment of heart, lung, blood,
and sleep (HLBS) diseases and disorders. Throughout this period, NHLBI-supported research discoveries have
helped fuel dramatic declines in death and disability from HLBS diseases and disorders and continued
improvements in quality of life in the United States and abroad. Despite these successes, heart disease remains
the leading cause of death in the United States and at the global level while other diseases in the NHLBI mission
areas such as chronic obstructive lung disease, asthma, and sickle cell disease contribute significant mortality,
morbidity, and lost economic productivity worldwide. Additionally, disparities in access to quality care based
on race, ethnicity, sex, geography, and socioeconomic status remain pervasive in the United States and
abroad. Despite these persistent challenges, the NHLBI remains committed to advancing discovery science
and related translation to promote precision health for all and enhance human health through several
enduring principles that have sustained the NHLBI legacy of excellence. These principles include: valuing
investigator-initiated fundamental discovery science; maintaining a balanced portfolio across basic,
translational, clinical, population, and implementation science; training a diverse new generation of scientists;
supporting implementation science that empowers patients and partners to improve the nation’s health; and
innovating an evidence-based elimination of health inequities. Successful pursuit of this endeavor requires the
collective effort of a diverse community of partners including patients, researchers, policymakers, care
providers, professional organizations, and the private sector. The NHLBI Strategic Vision released in 2016
provides a unique opportunity for a mission-driven focus on building on our past successes, leveraging
technological advances, and importantly, taking the next leap forward in precision health for all. This focus
addresses both the quality and longevity of life as well as the reduction and elimination of health inequities. In
particular, our ability to integrate truly diverse biomedical datasets with social and environmental determinants
could usher in a new era of precision health that emphasizes the right treatment, in the right amount, tailored
for the right individual patient, delivered at the right time, that yields the right outcomes. Our strategic vision for
turning discovery science into precision health is perfectly aligned with the theme of the 2017 NIH-IEEE Special
Topics Conference on Healthcare Innovations and Point of Care Technologies: Technology in Translation. We
are excited about the opportunity to translate discovery science into health impact and chart our future
together with our community of investigators and our key partners – our patients, their family members, and the
public. The future has never looked brighter.
-----
## GEORGE M. WHITESIDES, PH.D.
### HARVARD UNIVERSITY
TUESDAY, 7 NOVEMBER, 2017 8:30AM – 9:30AM
George M. Whitesides received his AB degree from Harvard University in 1960, and his PhD from the California
Institute of Technology in 1964 (with J. D. Roberts). He began his independent career at M.I.T., and is now the
Woodford L. and Ann A. Flowers University Professor at Harvard University. His current research interests include
physical and organic chemistry, materials science, biophysics, water, self-assembly, complexity and simplicity,
origin of life, dissipative systems, affordable diagnostics, and soft robotics.
### PRESENTATION ABSTRACT
#### The Point of Care and the Developing World
This talk will describe bioanalytical/medical methods designed for use in resource-limited environments, for
public health, at the point of care, and in related applications in food and water safety, forensics, and others.
These methods include paper diagnostics, electrochemical methods, and cell-phone based methods. The talk
will also ask what strategies in academic research will be most successful in translating results from university
bench science into real solutions to problems in health in the hands of users, and who else must be involved in
this translation.
-----
## ERIC DISHMAN
### PRECISION MEDICINE COHORT PROGRAM, NATIONAL INSTITUTES OF HEALTH
WEDNESDAY, 8 NOVEMBER, 2017 8:30AM – 9:30AM
Eric Dishman is the Director of the All of Us Research Program at the National Institutes of Health. In this role, he
leads efforts to build a research cohort of one million U.S. participants to advance precision medicine.
Previously, Dishman was an Intel Fellow and Vice President of the Health and Life Sciences Group at Intel
Corporation, where he was responsible for driving Intel’s cross-business strategy, research and development,
and product and policy initiatives for health and life science solutions.
He is widely recognized as a global leader in health care innovation with specific expertise in home and
community-based technologies and services for chronic disease management and independent living. Trained
as a social scientist, Dishman is known for pioneering innovation techniques that incorporate anthropology,
ethnography, and other social science methods into the development of new technologies. He also brings his
own experience as a cancer patient for 23 years—finally cured thanks to precision medicine—to drive a
person-centric view of health care transformation.
-----
# INVITED SPEAKERS
## ERIC BERSON, PH.D. – UNIVERSITY OF LOUISVILLE
Dr. Eric Berson is currently an Associate Professor of Chemical Engineering at the University of Louisville. Dr.
Berson’s research program has focused on the development and/or improvement of bio-processes where
existing techniques are limited due to complexities with the working media such as multi-phases, high-solids
content, and complex flow fields. Integrating computational fluid dynamics with experimental work has been
instrumental in overcoming limitations when experimental observations or measurements are difficult or
impractical. Example applications include bioreactor design, kinetic and mechanistic modeling of enzymatic
reactions, characterization of fluid forces in complex flow fields, correlation of fluid forces to mammalian
cellular responses, and most recently a non-invasive technique for detecting and assessing coronary stenosis.
The interdisciplinary work has resulted in collaborations with other engineering disciplines, MDs, and
microbiologists from universities in the US and Europe plus national labs and industry. He earned his BS in
Chemical Engineering from Florida State University in 1991 and PhD from the University of Louisville in 2000.
## JODI BLACK, PH.D. – OFFICE OF EXTRAMURAL RESEARCH,
NATIONAL INSTITUTES OF HEALTH
Dr. Jodi Black is the Deputy Director of the Office of Extramural Research, where she oversees and supports
initiative development and grants management policy and processes and the small business and extramural
technology development programs.
Dr. Black has over 25 years of scientific research and research administration leadership experience with a
diverse background in basic and clinical science, and programmatic administration. In her career, she has
developed, implemented, and managed large, diverse, multidisciplinary scientific programs in areas including
infectious diseases, cancer and genomics and has developed strategic alliances between academic,
healthcare and commercial organizations to leverage resources and capacity across institutions. While at the
National Heart, Lung, and Blood Institute (NHLBI), she provided leadership and management for initiative
development, the peer review process, policy development and implementation, grants and contracts
including training, small business and international awards, as well as the development and implementation of
programs and partnerships to enhance the translation of innovative technologies from the bench to the market
to enhance health.
Dr. Black earned a PhD in pathology and a Master of Medical Science in infectious diseases from Emory
University.
-----
## CAROLE C. CAREY, BSEE, MENG, C3-CAREY CONSULTANTS, LLC
Carole Carey is an IEEE senior member and a member of the IEEE Eta Kappa Nu Honor Society. She currently
serves as chair of the EMBS Standards Committee, liaison to the IEEE Standards Association, and was recently
selected as a recipient of the 2016 IEEE-SA Standards Medallion Award. She is a former U.S. FDA official in the
Center for Devices and Radiological Health (CDRH) with over 23 years of regulatory science experience as a
Scientific Reviewer and International Advisor. As a Reviewer, she was team leader of highly complex,
innovative cardiovascular devices and a peer-reviewed expert regulatory review scientist. In this capacity, she
was also active in the development of industry consensus standards in her areas of specialization, both at the
national and international levels. As a Mansfield Fellow, she trained side-by-side and collaborated with
regulatory counterparts in Japan’s Ministry of Health, Labour and Welfare (MHLW) and its scientific review arm,
the Pharmaceutical and Medical Devices Agency (PMDA) -- on regulatory device issues, scientific matters
concerning device safety and effectiveness, the recognition of international standards and global
harmonization initiatives. Later, she served as Director of International Staff in FDA CDRH. Furthermore, she
conducted device regulatory workshops in Europe, Asia and Latin America. Currently, she is a regulatory
consultant providing advice and strategic approaches in premarket submissions, investigational device clinical
trials and postmarket issues for regulated industry. Carole earned her engineering degrees from Johns Hopkins
University and Loyola University of Maryland.
### PRESENTATION ABSTRACT
#### The Role of Standards and Regulations in Translation of Biomedical Technology
As healthcare is becoming increasingly dependent on new and potentially disruptive technologies, the
biomedical engineering community is more engaged in collaborative efforts with academia, clinicians, the
health service and industry. Some examples of biomedical engineering developments driving innovation are
latest sensors, new biocompatible materials, and novel approaches in measuring techniques. The aspirations
are to reduce the cost of medical devices and improve the performance of healthcare technology with
reliable products that are safe and effective. The goal of multidisciplinary partnerships is to apply research
discoveries and preclinical studies, investigational trials in humans, and finally early access to benefit public
health. For medical devices to be marketed legally around the globe, the device industry must overcome
many translation challenges in order to seek and obtain regulatory approvals. Innovation, biomedical
technology, and use of standards play influential roles in the regulatory process to market a device. They can
shorten the translation process and lead to successful commercialization. This presentation will highlight the
importance of using consensus standards as well as pursuing the development of new international standards.
Examples of existing standards and active projects in development under the IEEE Standards Association (IEEE-SA) and Engineering in Medicine and Biology (EMB) standards committee will be introduced. We will also
explain how standards are used and are becoming an important part in carrying out the regulatory mission.
-----
## JUE CHEN, PH.D. – NATIONAL HEART, LUNG, AND BLOOD INSTITUTE
Jue Chen received the Bachelor of Medicine Degree in Preventive Medicine in 2001 and a Master's Degree in
Toxicology in 2004, both from Fudan University, Shanghai, China. She obtained her Ph.D. degree in Pharmacology
from Emory University in 2011 before joining the Laboratory of Biochemistry in the Intramural Program of the
National Heart, Lung, and Blood Institute (NHLBI). In May 2015, she joined the NHLBI extramural program as a
program director in the Division of Cardiovascular Sciences (DCVS).
Dr. Chen’s past research focused on redox signaling and she has expertise in public health, pharmacology,
environmental toxicology, and protein chemistry. She currently manages basic and preclinical research grants
studying atherosclerosis in the Atherothrombosis and Coronary Artery Disease Branch of the DCVS in the NHLBI.
## JEAN-PHILIPPE COUDERC, PH.D. – STRONG MEMORIAL HOSPITAL,
UNIVERSITY OF ROCHESTER
Dr. Couderc is a biomedical engineer who obtained his PhD degree with highest
honors from the French National Institute of Applied Sciences in Lyon, France in 1997. He is Associate Professor
of Medicine in the Cardiology Department of Strong Memorial Hospital (Rochester, NY) and Research Associate
Professor of Electrical and Computer Engineering at University of Rochester (NY). He is leading the Telemetric
and Holter ECG Warehouse initiative (THEWproject.org), and he is the Assistant Director of the Heart Research
Follow-up Program (HRFUP). Dr. Couderc is a principal investigator and a co-investigator in several federally
funded research grants involving the development of ECG and wearable technologies. In addition, he holds
the position of Chief Technology Officer at iCardiac Technology Inc., a Rochester-based research spin-off
delivering high-precision ECG-based safety and efficacy metrics to international pharmaceutical companies.
Dr. Couderc has been invited to lecture by universities in the US and Europe and by private and national
laboratories (NIH and EPA). He is currently holding a position of Special Governmental Employee at the Center
for Drug Evaluation and Research (CDER) for the Food and Drug Administration of the U.S. Department of Health
and Human Services. Currently, he has more than 80 publications and his work has been highlighted in the Wall
Street Journal.
-----
## MICHAEL DEMPSEY, FOUNDER & CEO – SECORA CARE
Mike Dempsey has been working in the field of medical devices for more than 30 years; during this time he has
invented or worked on products that have treated over twelve million people. Mike holds over 40 patents on
various medical devices and has ten more patents pending. Mike is currently the founder and CEO of Secora.Care,
an early-stage company that uses “Big Data” to help older people live safely at home as long as possible.
Mike is also the Entrepreneur in Residence at the Center for the Integration of Medicine and Innovative
Technology (CIMIT), the Director of the CIMIT Accelerator Program, the Co-Executive Director of the Center for
Biomedical and Interventional Technology (CBIT) at Yale University, and a faculty member at MIT. Mike’s
primary responsibilities in these academic settings are to lead academic innovators through the
commercialization journey and to teach students the fundamentals of building medical companies. At CIMIT
and Yale, Mike leads a team of highly experienced med-tech executives who join the academic team with up
to a full-time commitment and for as long as two years, effectively acting as an interim CEO. This intensive,
practical, and focused approach to facilitating the academic-to-commercial transition has led to a
commercialization success rate of 42% and an average time to commercialize of 18 months. Mike is also the PI
on several NIH SBIR grants, a frequent grant reviewer for the NIH, and has received a special citation from the
Commissioner of the FDA for “exceptional initiative and leadership to protect the public health.”
## ATAM P DHAWAN, PHD – NEW JERSEY INSTITUTE OF TECHNOLOGY
Atam P. Dhawan obtained his bachelor’s and master’s degrees from the Indian Institute of Technology,
Roorkee, and Ph.D. from the University of Manitoba, all in Electrical Engineering. From 1985-2000, he held faculty
positions in Electrical & Computer Engineering, and Radiology departments at University of Houston, University
of Cincinnati, University of Texas, University of Texas Medical Center (Dallas) and University of Toledo. In July
2000, he joined NJIT where he served as the Chair of the Department of Electrical and Computer Engineering
for nine years. Currently he is Distinguished Professor of Electrical & Computer Engineering and Executive
Director of Undergraduate Research and Innovation. He is also an Adjunct Professor of Radiology at the
University of Medicine and Dentistry of New Jersey.
Dr. Dhawan is a Fellow of the IEEE for his contributions in medical imaging and image analysis. He has published
more than 215 research articles in refereed journals, books, and conference proceedings. His current research
interests are medical imaging, multi-modality medical image analysis, adaptive learning and pattern
recognition. His research work has been funded by NIH, NSF and several industries.
Dr. Dhawan is a recipient of Martin Epstein Award (1984), National Institutes of Health FIRST Award (1988),
Sigma-Xi Young Investigator Award (1992), University of Cincinnati Faculty Achievement Award (1994) and the
prestigious IEEE Engineering in Medicine and Biology Early Career Achievement Award (1995) and University of
Toledo Doermann Distinguished Lecture Award (1999). He served as the Senior Editor of IEEE Transactions of
Biomedical Engineering and Editor-In-Charge of IEEE TBME Letters (2007-2012). He is Co-Editor-In-Chief of the
IEEE Journal of Translational Engineering in Health and Medicine.
-----
Biomedical Image Analysis in IEEE EMBS International Conferences (1996, 1997, 2000, 2003). He served as the
Chair of the “Emerging Technologies Committee” of the IEEE-EMB Society from 1997-99, and 2009-11. He is also
a member of the IEEE Life Sciences Committee. He was the Chair of the “New Frontiers in Biomedical
Engineering” Symposium at the World Congress 2000 on Medical Physics and Biomedical Engineering. He was
the Conference Chair of the IEEE 28th International Conference of Engineering in Medicine and Biology
Society, New York in 2006. He has initiated and served as the Conference Chair/Co-Chair of the series of IEEE-NIH Special Topics Conferences on Healthcare Innovations and Point-of-Care Healthcare Technologies held in
Bangalore, India (2013), Seattle (2014), Bethesda (2015), and Cancun, Mexico (2016).
Dr. Dhawan has chaired numerous NIH special emphasis and review panels including the NIH Chartered Study
Section on Biomedical Computing and Health Informatics (2008-11). He is listed in Who’s Who in the World,
Who’s Who in America, Who’s Who in Engineering, and Who’s Who Among America’s Teachers.
## ECHEZONA EZEANOLUE, M.D., MPH – UNIVERSITY OF NEVADA
Echezona Ezeanolue, MD, MPH is Professor of Pediatrics and Public Health at the University of Nevada, Las
Vegas. He is a Nigeria-born Infectious Disease specialist and physician-epidemiologist with an extensive record
of community-based maternal and child health research. His research focuses on the use of implementation
science to enhance the effectiveness and quality of health services. He serves as the Director of the HRSA-funded comprehensive maternal-child HIV program in Nevada (H12HA24832) and PI on multiple NIH-funded
grants including the Baby Shower Trial (R01HD075050; R21TW010252; R01HD087994; R01HD089871) that seek to
identify feasible, acceptable, and sustainable approaches to test, engage and retain individuals with HIV
infection to achieve viral suppression and improve health outcomes. Dr. Ezeanolue has been recognized as
Nevada Public Health Leader of the Year (2007), Nevada Health Care Hero (2008), Nevada Immunization
Champion (2009) and AAP Local Hero (2010) for his contributions to public health.
### PRESENTATION ABSTRACT
#### Patient-Held Smartcard to Increase Data Quality and Improve Health Outcome
Despite the availability of evidence-based interventions for prevention, HIV and hepatitis B virus (HBV) infections
remain endemic in sub-Saharan African countries. To implement evidence based interventions to prevent these
infections, pregnant women need to be screened during pregnancy and infected women identified and
treated. Additionally, maternal information including laboratory test results should be available at the point-of-delivery (POD) to enhance implementation of evidence-based interventions to improve health outcomes. Until
recently, the use of information technology to make prenatal data available at the POD has been limited to
high-income countries due to poor infrastructure in developing countries. Fortunately, the unprecedented
spread of mobile technology has made it possible to develop mHealth platforms that provide similar services to
hard-to-reach communities in resource-limited settings. This has led to improved quality of care, decreased rate
of unnecessary testing and allowed for early institution of evidence-based interventions that improve birth
outcomes. We developed an integrated mHealth platform that can: (1) store prenatal data obtained from
community- and facility-based screening programs including laboratory test results for HIV, HBV and genotype
in a secure, web-based database, (2) encrypt this data into a “smartcard”, and (3) make these data available
at the POD using a mobile-phone based application to read the card.
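To make the flow above concrete, here is a minimal sketch, in Python, of step (2) (encrypting a screening record into an opaque card payload) and step (3) (reading it back at the point of delivery). It assumes a symmetric Fernet key managed by the secure web-based backend; the record fields, key handling, and card I/O shown are illustrative assumptions, not the platform's actual implementation.

```python
# Minimal sketch of the encrypt-to-card / read-at-delivery flow (assumptions,
# not the authors' implementation). Requires: pip install cryptography
import json
from cryptography.fernet import Fernet

# In a real deployment the key would be managed by the secure web-based
# backend and never stored on the card; it is generated locally for the sketch.
key = Fernet.generate_key()
cipher = Fernet(key)

# Hypothetical prenatal screening record from a community-based program.
record = {
    "participant_id": "ANC-0001",   # illustrative identifier
    "hiv_result": "negative",
    "hbv_result": "positive",
    "genotype": "AA",
}

# Step (2): encrypt the record into an opaque payload written to the card.
card_payload = cipher.encrypt(json.dumps(record).encode("utf-8"))

# Step (3): at the point of delivery, the mobile app reads the card and
# decrypts the payload so the birth attendant can act on the results.
restored = json.loads(cipher.decrypt(card_payload).decode("utf-8"))
assert restored == record
print(restored["hbv_result"])  # e.g. prompts a timely HBV birth-dose decision
```

One design point worth noting: because the card carries only ciphertext, a lost or stolen card reveals nothing without the backend-held key, which is what allows prenatal results to travel with the patient to hard-to-reach delivery points.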
-----
## MIKE FISHER – THE GLOBAL CENTER FOR MEDICAL INNOVATION
Mike Fisher has 20 years of experience developing and commercializing medical products, managing sustaining
engineering efforts, performing international manufacturing scale-up, achieving regulatory concurrence,
navigating patent landscapes, and executing development plans. He is a named inventor on over 20 issued US
patents with almost twice as many applications in prosecution. In 2015, Mike joined GCMI, a not-for-profit
medical device development company that is affiliated with Georgia Tech. Here, he gets to develop disruptive
medical technologies and mentor med tech entrepreneurs. Prior to GCMI, Mike spent 17 years working for CR
Bard, Johnson & Johnson’s DePuy Franchise, the Orthopaedic Research Lab at the University of Virginia, and
several start-up companies in the tissue engineering industry. He earned BS and MS degrees in engineering
mechanics from Virginia Tech where he met his wife. When Mike is not working on medical products, he enjoys
spending time with his wife, children, and Boy/Cub Scouts across Northwestern Georgia.
## BRIAN FITZGERALD – US FOOD AND DRUG ADMINISTRATION
Brian Fitzgerald was educated in England and received his engineering degree from University College Cardiff
in Wales. He became a US citizen in 2003.
He left the private sector in 1992 after a multidisciplinary engineering career, and joined Underwriters
Laboratories (UL) in Raleigh, NC helping to start their software safety initiative. He has contributed to the
development of several national and international standards for programmable systems UL 1998, IEC 60601-1-4,
AAMI SW68 and most recently IEC 62304, IEC 80001 and the IEC ACSEC Guide for Privacy and Security. He was
nominated as a US National Expert by AAMI to WG22 of IEC SC62a dealing with programmable systems, to ISO
TC210 WG1 dealing with quality systems and to JWG7 of IEC and ISO for Medical IT networks.
He is a member of the AAMI software committee, the AAMI IT committee and the AAMI Cybersecurity
committee. Prior to joining FDA he was an accredited software expert and lead auditor for two European
notified bodies. He continues to conduct public seminars in software safety, risk management, medical device
cybersecurity, software related regulatory affairs and medical quality systems. He is a member of the US
National Council of the International Electro-technical Commission.
He joined FDA’s CDRH in October 2003 in the Office of Science and Engineering Laboratories to specialize in
systems, software evaluation and safety research activities. He is currently Senior Technical Advisor for
Cybersecurity and High-Performance Computing.
Current projects include researching the use of formal methods as they relate to generalized ‘assurance cases’
including safety cases and compliance cases, and the development of forensic techniques for detecting and
investigating software failure. He leads the technical and research aspects of the FDA cybersecurity team. He
also contributes to FDA Guidance development and product review activities, and works with several other Federal
Regulatory Agencies in the field of cybersecurity.
## CINDY J. FLACKS, MPH, M.T., ASCP – CENTERS FOR MEDICARE AND
MEDICAID SERVICES
CDR Flacks serves as Medical Technologist/MLS Regulatory Compliance Lead for the Centers for Medicare and
Medicaid Services (CMS), Survey and Certification Group/Division of Laboratory Services where she oversees
several projects, including the oversight of CLIA-certified international laboratories. She also served as a
member of the IQCP Planning team, which was charged with creating and implementing IQCP policy
nationwide; co-authored an educational workbook for laboratories implementing IQCP; and co-wrote and
helped to produce a 20-minute video on the CLIA survey process, among other notable accomplishments.
CDR Flacks was commissioned as an officer in the United States Public Health Service in June 2003 and worked in
the Federal Bureau of Prisons before joining CLIA in March 2008. She was deployed to New Orleans in February
2006 to lead a Public Health Service clinic in support of the first Mardi Gras celebration post Hurricane Katrina.
A native of Petersburg, Illinois, CDR Flacks received her MPH with honors from American Military University and a
BS, Summa Cum Laude in Clinical Laboratory Science from UMass, Lowell. CDR Flacks is a Certified Medical
Technologist by the American Society of Clinical Pathologists.
Currently residing in Downtown Baltimore, MD with her husband, daughter and two dogs, CDR Flacks enjoys
Yoga, cross-training, and watching professional football, specifically the NY Football Giants. She is also involved
on the board of her daughter’s school Parent Teacher Organization.
## JOHN J. GARGUILO, M.S. – NATIONAL INSTITUTE OF STANDARDS
**AND TECHNOLOGY**
John J. Garguilo is a supervisory computer scientist at the National Institute of Standards and Technology (NIST)
of the United States Department of Commerce. John’s the Group Leader of the Systems Interoperability Group
and leader of the Semantic Interoperability of Medical Devices (SIMD) project focused on medical device
communication research and testing and aimed at enabling the adoption of medical device communication
standards by acute, point-of-care, and personal health medical device manufacturers.
John currently serves as the Health Level Seven (HL7) Healthcare Device Working Group Co-Chair and over ten
years as the test lead, as well as four years as the Technical Committee Co-chair for the Integrating the
Healthcare Enterprise (IHE) Patient Care Device (PCD) domain, and a term as the Secretary of the IEEE 11073
Medical Device Communications Point-of-Care Device (PoCD) working group.
John’s focus over the past ten years has been on developing conformance test tooling in support of
standardization of medical device information exchange and working with device standards and Standards
Development Organizations (including HL7 V2 and ISO/IEEE 11073). His work includes testing and promoting
adoption of standards for medical device communications throughout the healthcare enterprise as well as
integrating it into the electronic health record. John works and is closely engaged with medical device experts
within the HL7, IHE-PCD domain, and ISO/IEEE Healthcare Devices and Personal Health Devices working groups.
John also leads the HL7 message validation test tooling effort and development of an industry adopted
harmonized medical device terminology database containing ISO/IEEE 11073 terminology.
John holds a Master’s degree from the Johns Hopkins University and an undergraduate degree from the State
University of New York, Potsdam, both in computer science. John has extensive experience over the past 30
years working on and managing software systems to support research, testing, automating work flow
applications, data communications, and electronic commerce.
### PRESENTATION ABSTRACT
#### Testing Semantic Interoperability of Medical Device Communication Information
John Garguilo, a computer scientist at the National Institute of Standards and Technology, will present applied
black-box test methods and research approaches, based on well-recognized international standards, used to
help chip away at device-to-device interoperability and at integrating device data throughout the healthcare
enterprise, including electronic health records. Conformance test tooling will be described, built in support of
the common exchange of information via standardization and via work with the medical device domain and
Standards Development Organizations (including Health Level Seven [HL7] and the ISO/IEEE 11073 Medical
Device Communication family of standards). Core to the described approach are informational modeling
techniques, a harmonized medical device nomenclature, and a foundational health information
technology test framework that provides users with implementation guide and test case authoring and
management capabilities, leading to automatic test tool generation. Such approaches to testing and
communication research are aimed at enabling the adoption of medical device communication standards by
acute, point-of-care, and personal health medical device manufacturers, thus improving healthcare
(including patient safety, clinical decision support, and semantically intact retrospective data) as well as
delivering financial impact through more informed medical device and system procurement practices.
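To give a concrete flavor of the black-box structural checks described above, here is a minimal sketch in Python. It is purely illustrative (not NIST's actual tooling), and the sample message and required-segment list are assumptions invented for the demonstration: it simply verifies that an HL7 V2 ORU^R01 observation message contains the segments a very small conformance profile might require.

```python
# Illustrative sketch only, not NIST's tooling: check that an HL7 V2
# ORU^R01 observation message contains a minimal set of required segments.

SAMPLE_ORU = "\r".join([
    "MSH|^~\\&|DEVICE|ICU|EHR|HOSP|20170101120000||ORU^R01|MSG0001|P|2.6",
    "PID|1||123456^^^HOSP^MR||DOE^JANE",
    "OBR|1|||28562-7^Vital signs^LN",
    "OBX|1|NM|8867-4^Heart rate^LN||72|/min|||||F",
])

# A hypothetical minimal profile: segment IDs that must be present.
REQUIRED_SEGMENTS = ("MSH", "PID", "OBR", "OBX")

def missing_segments(message):
    """Return the required segment IDs absent from an HL7 V2 message."""
    present = {seg.split("|", 1)[0] for seg in message.split("\r") if seg}
    return [seg_id for seg_id in REQUIRED_SEGMENTS if seg_id not in present]

missing = missing_segments(SAMPLE_ORU)
print("structurally conformant" if not missing else "missing: %s" % missing)
```

Real conformance tooling goes far beyond segment presence, of course, validating data types, cardinalities, and vocabulary bindings (such as ISO/IEEE 11073 terms) against machine-readable profiles.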
## CRISTINA GIACHETTI, PH.D. – BILL AND MELINDA GATES FOUNDATION
Cristina Giachetti is the Deputy Director of Diagnostics at the Bill and Melinda Gates Foundation, where she
oversees the development and implementation of diagnostic tools to support the Foundation’s programs in
Global Health. Previously, she was Senior Vice President, Research and Development for the Diagnostics Division
of Hologic and Vice President, Research and Development for Gen-Probe, with responsibilities over Research,
Development, Clinical, Medical and Scientific Affairs. During her tenure at Gen-Probe/Hologic she oversaw the
development of numerous molecular diagnostics and blood-screening tests that were successfully
commercialized worldwide for the TIGRIS and Panther instruments under the APTIMA and PROCLEIX brand names.
In particular, she led the technical team that developed the first FDA-licensed blood-screening test for detection
of HIV-1 and HCV nucleic acids, and her work in increasing the safety of the blood supply earned Gen-Probe
the National Medal of Technology from the US President. Cristina received her degrees in Clinical Analysis and
-----
molecular virology and rapid viral evolution at the University of California, San Diego, Department of Biology and
at the University of California, Irvine, Department of Microbiology and Molecular Genetics.
### PRESENTATION ABSTRACT
#### Critical Considerations When Introducing Diagnostics in Global Health
Nascent health care markets in low-resource settings can present considerable challenges in the design,
implementation, and impactful scale-up of diagnostic products. Diagnostics developers often face significant
challenges introducing products to these settings because they may lack a clear understanding of the multiple
customers and their needs, the restricted physical infrastructure and resources available, the regulatory and
policy frameworks, and the dynamics of these emerging markets. While technology innovation is profuse, scale-up of diagnostic interventions has not been particularly successful, and this is especially true with POC
diagnostics, where many new ideas and proofs of concept with the intent of overcoming infrastructural hurdles
abound, but the true realization of their value is lacking.
Unlike vaccines and drugs, the utility and ultimate impact of diagnostic products depend on many intricacies of
the health system (e.g., health-professional skill-set and training, quality, treatment availability, and linkage to care),
and the confounded delivery logistics (e.g., supply chain, procurement mechanisms, funding agencies), not all of
which are under complete control of the diagnostics developer or able to be solved by technology alone.
The objective of this talk is to highlight some of the critical aspects a developer of a new technology would need
to address upfront, and continuously throughout the development process, to be successful in global health.
## JULIAN GOLDMAN, M.D. – PARTNERS HEALTHCARE
Dr. Goldman is the Medical Director of Biomedical Engineering for Partners HealthCare, an anesthesiologist at
the Massachusetts General Hospital, and Director/PI of the Program on Medical Device Interoperability (MD
PnP) – a multi-institutional research program founded in 2004 to advance medical device interoperability to
improve patient safety and HIT innovation.
Dr. Goldman performed his clinical anesthesia and research training at the University of Colorado, and is Board
Certified in Anesthesiology and Clinical Informatics. He served as a Visiting Scholar in the FDA Medical Device
Fellowship Program as well as an executive of a medical device company. At MGH, Dr. Goldman served as a
principal anesthesiologist in the “OR of the Future” – a multi-specialty OR that studies diverse technologies and
clinical practices to enable broad adoption.
Dr. Goldman chairs the international standardization committee for the safety and performance of anesthesia
and respiratory equipment (ISO TC 121), and serves in leadership positions of AAMI, UL, and IEC standardization
committees. He Co-Chaired the HHS HIT Policy Committee FDASIA Regulations Subcommittee and the FCC
mHealth Task Force, and co-chairs the healthcare task group of the Industrial Internet Consortium. He was
recently appointed as a Distinguished Lecturer for the IEEE EMBS.
-----
International Council on Systems Engineering Pioneer Award, the American College of Clinical Engineering
award for Professional Achievement in Technology, and American Society of Anesthesiologists awards for
advanced technology applications to improve patient safety.
E-card: www.jgoldman.info
## UMUT A. GURKAN – CASE WESTERN RESERVE UNIVERSITY
Umut A. Gurkan holds BS degrees in Chemical Engineering and Mechanical Engineering from Middle East
Technical University, and a PhD degree in Biomedical Engineering from Purdue University. He completed his
Postdoctoral Training in Medicine at Brigham and Women’s Hospital (Harvard Medical School) and Harvard-MIT
Health Sciences and Technology after which he joined Case Western Reserve University as Assistant Professor of
Mechanical and Aerospace Engineering. Dr. Gurkan is leading the CASE Biomanufacturing and
Microfabrication Laboratory (CASE-BML). CASE-BML’s mission is to improve human health and quality of life by a
fundamental understanding of cell biomechanics, and through innovations in micro/nano-engineering,
microfluidics, biosensors, and point-of-care systems. Dr. Gurkan has received national and international
recognitions and awards for research and education, including the NSF CAREER Award, the “Rising Star” Award from
Biomedical Engineering Society (Cellular and Molecular Bioengineering and Advanced Biomanufacturing
Divisions), MIT Technology Review Innovator Under 35 Award (Turkey), Case-Coulter Translational Research
Partnership Award, Clinical and Translational Science Collaborative Award, Case School of Engineering
Research Award, Doris Duke Innovations in Clinical Research Award, Belcher-Weir Family Pediatric Innovation
Award, Translational Research Featured New Investigator Award from Central Society for Clinical and
Translational Research, and Glennan Fellowship from the University Center for Innovation in Teaching and
Education. Dr. Gurkan has authored over 55 research and review articles in leading peer-reviewed journals, in
addition to numerous book chapters and patents. Three of his patents have been licensed for
commercialization, one of them being on a microchip electrophoresis system for point-of-care diagnosis of
hemoglobin disorders in low resource settings. Dr. Gurkan is a member of the following societies: American
Society of Hematology, American Society of Mechanical Engineers, IEEE Engineering in Medicine and Biology
Society, and Biomedical Engineering Society. Email: [email protected] Web: http://www.case-bml.net
## SHOSHANA HERZIG, M.D. – HARVARD MEDICAL SCHOOL, BETH ISRAEL DEACONESS MEDICAL CENTER
Shoshana Herzig, MD, MPH, FACP is a hospitalist and Director of Hospital Medicine Research in the Division of
General Medicine at Beth Israel Deaconess Medical Center, an Assistant Professor of Medicine at Harvard
Medical School, and a Senior Deputy Editor at the Journal of Hospital Medicine. Her research focuses on the
interplay between medication decisions and adverse outcomes in the hospital setting in an effort to inform
development of clinical decision rules and computer-based interventions to promote evidence-based
prescribing practices and reduce complications from medical care.
-----
## ERIN ITURRIAGA – NATIONAL HEART, LUNG, AND BLOOD INSTITUTE
Erin Iturriaga serves as a Program Officer and Clinical Trials Specialist at the National Heart, Lung, and Blood
Institute (NHLBI). She led an RFA called Onsite Tools and Technologies for Heart, Lung, and Blood Clinical
Research Point-of-Care and has an interest in technology for home use especially in the aging population. She
led a workshop with the Computer Research Association’s Computing Community Consortium (CCC) funded
by the National Science Foundation to discuss the use and development of technologies for assisting older
adults and people with chronic diseases to live independently. She brings a strong background in clinical
research, including clinical trials management, education, and regulatory responsibilities.
## ZACHARY IVES, PH.D. – UNIVERSITY OF PENNSYLVANIA
Zachary Ives is a Professor of Computer and Information Science at the University of Pennsylvania, where he
also serves as the Associate Dean for Masters and Professional Programs in Penn’s School of Engineering and
Applied Science. His research interests include data integration and sharing, managing “big data,” sensor
networks, and data provenance and authoritativeness. He has worked extensively in applying these
techniques in scientific applications, especially in the field of neuroscience (where he and collaborators built
the IEEG.org portal for sharing epilepsy data). He is a recipient of the NSF CAREER award, and an alumnus of
the DARPA Computer Science Study Panel and Information Science and Technology advisory panel. He is a
co-author of the textbook Principles of Data Integration, and received an ICDE 2013 ten-year Most Influential
Paper award. He has been an Associate Editor for Proceedings of the VLDB Endowment (2014) and a Program
Co-Chair for SIGMOD (2015). He is also a co-founder of Blackfynn, Inc., a company focused on providing
infrastructure for biomedical data science.
## JEFFREY KAYE, M.D. – OREGON HEALTH AND SCIENCE UNIVERSITY
Jeffrey Kaye is the Layton Endowed Professor of Neurology and Biomedical Engineering at Oregon Health and
Science University (OHSU). He directs ORCATECH – the National Institute on Aging (NIA) – Oregon Center for Aging
and Technology and the NIA – Layton Aging and Alzheimer’s Disease Center at OHSU. Dr. Kaye’s research has
focused over the past two decades on the question of why some individuals remain protected from functional
decline and dementia with advancing age while others succumb at much earlier times. This work has relied on a
-----
Brain Aging Study, the Intelligent Systems for Detection of Aging Changes (ISAAC), the Life Laboratory, the
Ambient Independence Measures for Guiding Care Transitions, and the Collaborative Aging (in Place) Research
using Technology (CART) studies using ubiquitous, unobtrusive technologies for assessment of older adults in their
homes to detect changes signaling imminent functional decline. He is co-principal investigator for the Integrated
Analysis of Longitudinal Studies of Aging (IALSA), a worldwide effort to harmonize aging and dementia data for
improved analysis. Dr. Kaye has received the Charles Dolan Hatfield Research Award for his work. He is listed in
Best Doctors in America. He serves on many national and international panels and review boards in the fields of
geriatrics, neurology and technology including as a commissioner for the Center for Aging Services and
Technology (CAST), on the Advisory Council of AgeTech West, the International Scientific Advisory Committee of
AGE-WELL Canada, and Past Chair of the International Society to Advance Alzheimer’s Research & Treatment
(ISTAART). He is an author of over 400 scientific publications and holds several major grant awards from federal
agencies, national foundations and industrial sponsors.
## MONICA KERRIGAN, MPH – JHPIEGO
Monica Kerrigan serves as Jhpiego’s Vice President for Innovations, leading a multidisciplinary team to identify
novel solutions and harness the power of innovations to accelerate progress in preventing needless deaths
among the world’s most vulnerable women, girls and their families. Ms. Kerrigan brings together global and
country experts, innovators and “unlike minds” from diverse backgrounds in public, private, technology and
non-government organizational sectors to address intractable problems in reproductive, maternal, newborn
and adolescent health. In her role, she is forging new partnerships with governments, private sector entities,
donors and philanthropists to advance innovative products, policies and processes that transform health
through positive disruption.
Ms. Kerrigan is a pioneering leader and expert in family planning, maternal health and sexual and reproductive
health and rights. Prior to joining Jhpiego, she worked at the Bill and Melinda Gates Foundation from 2007–2016,
serving most recently as Deputy Director of Family Planning. In that position, she played a pivotal role in
launching the London Summit on Family Planning in 2012. She worked in partnership with the Department of
International Development (DFID), United States Agency for International Development (USAID) and United
Nations Population Fund (UNFPA) to promote the long-term goal of universal access to reproductive health and
support the rights of an additional 120 million women and girls to access quality family planning information,
services and supplies. At the Gates Foundation, Ms. Kerrigan also energized the landscape of family planning
by developing partnerships with governments, donors and private sector and civil society organizations, which
resulted in the design and implementation of the Urban Reproductive Health Initiative; seminal launch of the
Ouagadougou Partnership for Francophone Africa; coordination of the first Implant Volume Guarantee; and
inauguration of global strategies and investments in postpartum family planning.
Prior to joining the Bill and Melinda Gates Foundation, Ms. Kerrigan served as Team Leader for Maternal and
Newborn Health at UNICEF in Indonesia. For more than a decade at USAID, she served as a Senior Technical Advisor
in the Office of Family Planning/Reproductive Health, where she led initiatives on frontline provider performance,
commodity security and post-abortion care. In the early 1990s, Ms. Kerrigan led Jhpiego’s Africa Office, developing
the capacity of countries to deliver high-quality training and services in reproductive and maternal health.
She earned her Master of Public Health degree in maternal and child health from the University of North
Carolina at Chapel Hill. She is a former Peace Corps Volunteer who served as a primary health care
trainer in rural Mali.
-----
## SHAWNA KHOURI, MBID – GEORGIA INSTITUTE OF TECHNOLOGY
Shawna Khouri, MBID is the Managing Director of the Coulter Translational Fund at Georgia Institute of
Technology and Emory University where she provides business leadership and commercialization strategy at the
intersection of academia, medicine, investment and industry to successfully bridge early-stage technologies
into successful start-ups and licenses to industry. In addition, Shawna provides commercialization coaching to
national clients, including the NIH-C3i Commercialization Training Program, where she mentors R01 and SBIR
recipients in business and development strategies for their medical innovations. Shawna is also a medical
device engineer with patents pending on emergency medicine and orthopedic technologies. These
technologies have received national innovation awards and been featured in a special exhibition at the
Smithsonian. She has both a Master’s Degree in Biomedical Innovation and Development and Bachelor of
Science in Biomedical Engineering from Georgia Institute of Technology.
## MOKA LANTUM, M.D. – MICROCLINIC TECHNOLOGIES
Dr. Lantum is a serial entrepreneur with 20 years’ experience in health care management in resource-limited
settings, and with specific expertise in m-Health and e-Health in the Africa health setting. As managing director
and founder of MicroClinic Technologies, he carried out extensive market research in public and private clinics
to establish the optimal user experience for mobile electronic medical records systems in Africa. This led to the
development of a) ZiDi™, the first enterprise health management system to be adopted by a Ministry of Health
in Kenya, and subsequently, b) iSikCure™, the first mobile information exchange platform in Africa, for which we
now seek funding to scale. He has grown ZiDi™ to become the leading EMR solution in Kenya, with a turnover
of $650,000 in 2016, through partnerships with the MoH, counties, private provider networks, CSR partners (GSK
Health Innovation Award, Pfizer Foundation, and strategic partners including Huawei Technologies, Philips East
Africa, and other stakeholders in Every Woman Every Childconsortium). Through ZiDi™, he has built relationships
with providers and owners of hospitals in over 12 counties in Kenya and a database with over 1,000 health
providers and 600,000 patients.
Prior to founding MicroClinic Technologies, Dr. Lantum played multiple executive roles in a Fortune 500
manufacturing company and was director of business process improvement for a $6 billion New York-based
health insurance company in the USA.
Dr. Lantum obtained his Doctor of Medicine training at the Faculty of Medicine and Biomedical Sciences, University
of Yaoundé, Cameroon; a Diploma in Nutrition and International Child Health from Uppsala University, Uppsala,
Sweden; and a Doctorate in Pharmacology from the University of Rochester, Rochester, New York. He is a graduate
of the Master’s in Health Care Management program at the Harvard School of Public Health. He is a frequent featured
guest speaker on social entrepreneurship. Dr. Lantum is the recipient of numerous international awards,
including the 2014 Sankalp Award, the 2013 and 2015 GSK-Save the Children Healthcare Innovation Award,
and was runner-up for the 2014 IFC/Financial Times Sustainable Business Award. He was named one of Foreign
Policy magazine’s Top 100 Global Thinkers in 2016.
-----
## TIFFANI BAILEY LASH, PH.D. – NATIONAL INSTITUTES OF HEALTH
Dr. Tiffani Bailey Lash serves as a Program Director/Health Scientist Administrator at the National Institutes of
Health. She manages the research portfolios for Point of Care Technologies, Microfluidic and Bioanalytical
Systems, and Connected Health programs at the National Institute of Biomedical Imaging and Bioengineering
(NIBIB). Dr. Lash is also the Program Director for the NIBIB Point of Care Technologies Research Network,
consisting of three centers charged with developing point-of-care diagnostic technologies through
collaborative efforts that merge scientific and technological capabilities with clinical need.
Prior to her current position, Dr. Lash worked within the NIH science policy administration. During that time, she
worked at the National Institute of General Medical Sciences and National Heart Lung and Blood Institute, as
well as the NIH Office of the Director. Dr. Lash has been selected as a Science Policy Fellow for both the
American Association for the Advancement of Science (AAAS) and the National Academy of Engineering. She
also has a background in small business innovation and intellectual property. Dr. Lash earned her Ph.D. in
Physical Chemistry from North Carolina State University via a collaboration between the Departments of
Chemistry and Chemical and Biomolecular Engineering. Her interdisciplinary research interests include
microfluidics, biopolymers with controlled molecular architecture, and biosensor technologies.
## EDWARD LIVINGSTON, M.D. – THE JOURNAL OF THE AMERICAN MEDICAL ASSN.
Edward H. Livingston, M.D., F.A.C.S., A.G.A.F., has served as Deputy Editor for Clinical Content of JAMA, The
Journal of the American Medical Association since July 1, 2012. Before that, he was a Contributing Editor at
JAMA for 3 years.
Born and raised in Los Angeles, Dr. Livingston received his Medical Degree from UCLA. He completed a
General Surgery Residency at UCLA and served as the Administrative Chief Resident for Surgery in 1992. After
Residency, he remained on the faculty at UCLA eventually serving as Assistant Dean of the Medical School and
Surgical Service Line Director for the VA Greater Los Angeles Health Care System. He also founded the UCLA
bariatric surgery program.
In 2003, he moved to Dallas to become Professor and Chairman of GI and Endocrine Surgery at the
University of Texas Southwestern School of Medicine. During this period, Dr. Livingston headed the VA’s
national effort in bariatric surgery quality improvement. He was appointed as a Professor of Biomedical
Engineering in 2007 at the University of Texas Arlington. Dr. Livingston became Chairman of the Graduate
Program in Biomedical Engineering at UTSW in 2010.
Dr. Livingston has held peer-reviewed funding and has published in excess of 150 peer-reviewed papers as well as
numerous other scientific writings. He has also served on numerous local and national committees and is a past
president of the Association of VA Surgeons. He continues to serve as a Professor of Surgery at UTSW.
-----
## MICHAEL LAUER, M.D. – NATIONAL INSTITUTES OF HEALTH
Michael Lauer, M.D., is the Deputy Director for Extramural Research at the National Institutes of Health (NIH). He
received his education and training at Rensselaer Polytechnic Institute, Albany Medical College, Massachusetts General
Hospital, Boston’s Beth Israel Hospital, Harvard School of Public Health, and the NHLBI’s Framingham Heart
Study. A board-certified cardiologist, he spent 14 years at Cleveland Clinic as Professor of Medicine,
Epidemiology, and Biostatistics. From 2007 to 2015 he served as a Division Director at the National Heart, Lung,
and Blood Institute (NHLBI). He has received numerous awards including the NIH Equal Employment
Opportunity Award of the Year and the Arthur S. Flemming Award for Exceptional Federal Service.
## ANAND K. IYER, PH.D. – WELLDOC INC.
Anand is a respected global digital health leader, best known for his insights on and experience with
technology, strategy and regulatory policy. Anand has been instrumental in WellDoc’s success and the
development of BlueStar®, the first FDA-cleared mobile prescription therapy for adults with type 2 diabetes.
Since joining WellDoc in 2008, he has held core leadership positions that included Chief Data Science Officer,
President and Chief Operations Officer. In 2013, Anand was named “Maryland Healthcare Innovator of the
Year” in the field of mobile health.
Prior to joining WellDoc, Anand was already an established thought leader in the field. He had served as the
Director of PRTM’s wireless practice, where he helped companies take advantage of disruptive technologies,
business models and process models offered by and enabled by advanced wireless communications.
Anand was the founder and immediate-past president of the In-Building Wireless Alliance, and teaches
advanced wireless courses to senior officers in the US Department of Defense at the Institute for Defense and
Business. Prior to joining PRTM, Anand was a member of the scientific staff at Bell Northern Research and Nortel
Networks. He holds an MS and a PhD in electrical and computer engineering, and an MBA from Carnegie
Mellon University. He also holds a BS in electrical and computer engineering from Carleton University.
-----
## TIM MCCARTHY – TELEMEDICINE AND ADVANCED TECHNOLOGY RESEARCH CENTER
Tim McCarthy joined TATRC’s “Command Team” after serving 26 years in the Army Medical Department
(AMEDD) in a variety of assignments as a Healthcare Administrator which led to functional and technical
innovation. He also spent 11 years with Electronic Data Systems (EDS) and Hewlett Packard (HP) working in the
technology industry, providing strategic information technology support to the Army Medical Department,
Recruiting Command, and Army Knowledge Online (AKO). Before joining TATRC, Mr. McCarthy spent 6+ years
working for the Defense Center of Excellence (DCoE) for PH and TBI, as Deputy in the Primary Care Behavioral
Health Directorate, providing program development and IT support for case and risk management tracking, as
well as program evaluation. While on active duty, Mr. McCarthy’s focus was on human resources, operations,
leadership development and executive skills, training technology, distance learning, IM/ IT training and
knowledge management. He retired from the AMEDD as the Chief of the Leadership and Instructional
Innovations Branch, where among other things, he was responsible for the creation of the AMEDD’s IM/IT
training program, the Joint Medical Executive Skills Institute, and helped to inspire the creation of AKO.
He also taught in the Army/Baylor University Master’s program in Healthcare Administration. Working for EDS
and HP, Mr. McCarthy led the efforts to bring a knowledge management focus to the IT community and
created “Recruiting Central”, an initial virtual community for Recruiting Command. He served as the on-site
Program Manager providing key technology support and strategy for the development of AKO. For the Army
Surgeon General, he was responsible for the creation of many virtual medical communities in AKO, as well as
several other technology projects. During his time at the Primary Care Behavioral Health Directorate, DCoE, he
was responsible for central development of an automated patient tracking/case-management system, and
provided program development, implementation support, the development and collection of metrics and a
flat-file database capability for program evaluation for all DoD Services. Mr. McCarthy currently serves as the
Deputy Director for TATRC working in conjunction with the Director, Chief Scientist, Executive Officer as well as
all Lab Managers, to provide insight to the advancement of technology supporting the MHS.
Tim holds an M.A. in College Student Personnel and Counseling/Higher Education from Bowling Green State
University in Ohio and a B.S. in Biology/Education from SUNY at Geneseo.
## MATTHEW MCMAHON, PH.D. – NATIONAL HEART, LUNG, AND BLOOD INSTITUTE
Dr. McMahon leads the Office of Translational Alliances and Coordination to enable the development and
commercialization of research discoveries funded by the Heart, Lung, and Blood Institute. His office manages
NHLBI’s $100 million/year Small Business Program and a national network of six proof-of-concept centers that
support the translation of academic discoveries into product development projects. He recently served as the
NIH representative on the National Evaluation System for health Technology (NEST) planning board and the
associated registry task force. Dr. McMahon previously created and led the National Eye Institute’s Office of
Translational Research to advance ophthalmic technologies through public-private partnerships with the
pharmaceutical and biotechnology industries. His previous experience includes service as the principal scientist
for the bionic eye company Second Sight Medical Products and as a staff member on the Senate and House
-----
## GEORGE MENSAH, M.D. – NATIONAL HEART, LUNG, AND BLOOD INSTITUTE
Dr. George Mensah is a clinician-scientist who currently serves as the Director of the Center
for Translation Research and Implementation Science (CTRIS). He also serves as a senior
advisor in the Office of the Director at the National Heart, Lung, and Blood Institute (NHLBI),
part of the National Institutes of Health (NIH). In these roles, Dr. Mensah leads a trans-NHLBI
effort to advance late-stage translational research and implementation science at NHLBI.
Dr. Mensah’s primary focus is the application of late-stage translational research and
implementation science approaches to address gaps in the prevention and treatment of
heart, lung, and blood diseases and the elimination of related health inequities. His goal is to maximize the
population health impact of advances made in fundamental discovery science and pre-clinical or early-stage
translational research. Dr. Mensah is an honors graduate of Harvard University. He obtained his medical
degree from Washington University and trained in internal medicine and the subspecialty of cardiovascular
diseases at Cornell. His professional experience includes more than 20 years of public service between the U.S.
Department of Veterans Affairs (VA), the Centers for Disease Control and Prevention (CDC), and the NIH. He
has had management experience as a chief of cardiology; head of a clinical care department; and a past
member of the Board of Governors of the American College of Cardiology as Governor for Public Health. In
addition to his public service at CDC, Dr. Mensah had 15 years of experience in direct patient care, teaching,
and research at Cornell, Vanderbilt, and the Medical College of Georgia. He was a professor with tenure at
MCG and is currently a Visiting Full Professor at the University of Cape Town, South Africa. He holds a merit of
proficiency from the American Society of Echocardiography and has been designated a hypertension
specialist by the American Society of Hypertension. He has been admitted to fellowships in several medical
societies in Africa, Europe and the US. He maintains active collaboration with several international groups to
advance research on the global burden of diseases, injuries, and risk factors.
## AMIT MISTRY, PH.D. – FOGARTY INTERNATIONAL CENTER
Amit Mistry is a Senior Scientist in NIH’s Fogarty International Center where he advises on science policy issues
and leads multi-disciplinary projects on critical global health challenges. Previously, Amit served as a program
manager in USAID’s Global Development Lab and USAID’s Bureau for Food Security. Amit has also served as a
Congressional Fellow for health, education, and science policy and worked as a high school science teacher
with Teach for America. Amit earned a bachelor’s degree in chemical engineering in 2000 and a doctorate in
bioengineering in 2007, both from Rice University.
-----
## WENDY J. NILSEN, PH.D. – NATIONAL SCIENCE FOUNDATION
Wendy Nilsen, Ph.D. is a Program Director for the Smart and Connected Health Program in the Directorate for
Computer & Information Science & Engineering at the National Science Foundation. Her work focuses on the
intersection of technology and health. This includes a wide range of methods for data collection, advanced
analytics and the creation of effective cyber-human systems. Her interests span the areas of sensing, analytics,
cyber-physical systems, information systems, big data and robotics. More specifically, her efforts include:
serving as co-chair of the Health Information Technology Research and Development working group of the
Networking and Information Technology Research and Development Program; the lead for the NSF/NIH Smart
and Connected Health announcement; convening workshops to address methodology in mobile technology
research; serving on numerous federal technology initiatives; and, leading training institutes. Previously, Wendy
was at the National Institutes of Health.
## LUCILA OHNO-MACHADO, M.D., PH.D. – UNIVERSITY OF CALIFORNIA, SAN DIEGO
Lucila Ohno-Machado, MD, MBA, PhD, received her medical degree from the University of São Paulo and her
doctoral degree in medical information sciences and computer science from Stanford. She is Associate Dean
for Informatics and Technology, and the founding chair of the Health System Department of Biomedical
Informatics at UCSD, where she leads a group of faculty with diverse backgrounds in medicine, nursing,
informatics, and computer science. Prior to her current position, she was faculty at Brigham and Women’s
Hospital, Harvard Medical School and at the MIT Division of Health Sciences and Technology. Dr. Ohno-Machado is an elected fellow of the American College of Medical Informatics, the American Institute for
Medical and Biological Engineering, and the American Society for Clinical Investigation. She has served as editor-in-chief of the Journal of the American Medical Informatics Association since 2011. She directs the patient-centered Scalable National Network for Effectiveness Research funded by PCORI (and previously AHRQ), a
clinical data research network with over 24 million patients and 14 health systems, as well as the NIH/BD2K-funded Data Discovery Index Consortium. She was one of the founders of UC-Research eXchange, a clinical
data research network that connected the data warehouses of the five University of California medical
centers. She was the director of the NIH-funded National Center for Biomedical Computing iDASH (integrating
Data for Analysis, ‘anonymization,’ and Sharing), based at UCSD with collaborators in multiple institutions. iDASH
funded collaborations involving the study of consent for data and biospecimen sharing in underserved and underrepresented populations.
-----
## PAUL C. PEARLMAN, PH.D. – NATIONAL CANCER INSTITUTE
Dr. Pearlman received his BSEE from the Georgia Institute of Technology. His graduate work took place at Yale
University where he earned an MS, MPhil, and PhD, all in Electrical Engineering. He has conducted research in
the Georgia Tech Biomedical Engineering Department, Georgia Tech Research Institute, Yale Medical School,
and University Medical Center Utrecht. His focus was biomedical image analysis, with emphasis on
development, evaluation, and application of pathology-driven, clinically applicable computer-aided diagnosis
and treatment planning techniques with additional focus on low-cost modalities. After years in basic and
translational research, Dr. Pearlman transitioned to the fields of science policy and diplomacy, obtaining a
prestigious AAAS Science and Technology Policy Fellowship. He is currently a Program Director and the Lead for
Global Health Technology at the United States National Cancer Institute’s Center for Global Health, where he
coordinates global cancer research funding opportunities and engages in cancer control planning activities in
low- and middle-income countries around the world.
## NIRA POLLOCK, M.D., PH.D. – BOSTON CHILDREN’S HOSPITAL
Dr. Pollock is the Associate Medical Director of the Infectious Diseases Diagnostic Laboratory at Boston
Children’s Hospital and a faculty member of the Division of Infectious Diseases at Beth Israel Deaconess
Medical Center (BIDMC) in Boston. She is jointly appointed in the Departments of Medicine and Pathology at
Harvard Medical School. She completed her MD/PhD at the University of California, San Francisco; her medical
residency at Brigham and Women’s Hospital in Boston; and her infectious diseases/clinical microbiology
fellowships at BIDMC.
Dr. Pollock has an active research program focused on the development and evaluation of novel diagnostics
for infectious diseases and related applications. Her diagnostics research has spanned a range of diseases
including C. difficile infection, active and latent tuberculosis, influenza, Lyme disease, and Ebola virus disease
(EVD), and has involved many different technologies, ranging from simple paper-based lateral flow and
microfluidic platforms to novel automated platforms for protein and nucleic acid detection. Her experience in
the point-of-care (POC) diagnostics space includes development and evaluation of a paper-based POC
fingerstick transaminase test, field evaluation of a POC rapid diagnostic test for EVD during the 2014-16
outbreak in Sierra Leone, and recent development of a novel device for collection and dispensation of
fingerstick blood to enable POC testing.
-----
## LAURA POVLICH, PH.D. – FOGARTY INTERNATIONAL CENTER
Laura Povlich is a Program Officer in the Division of International Training and Research at the Fogarty
International Center, part of the National Institutes of Health, where she was previously an American
Association for the Advancement of Science (AAAS) Science & Technology Policy Fellow. Dr. Povlich
administers a portfolio of grants that covers a range of research, research training, and research education
projects related to global health technology, with a significant focus on information and communication
technology. Additionally, she works with U.S. and international researchers to identify gaps in the global health
technology landscape and develops funding opportunity announcements to address these gaps. Prior to
working at Fogarty, Dr. Povlich was the 2011-2012 Materials Research Society/Optical Society Congressional
Science and Engineering Fellow in the Office of Congressman Sander Levin.
Dr. Povlich earned a B.S.E. in Materials Science and Engineering (2006) and a Ph.D. in Macromolecular Science
and Engineering (2011), both from the University of Michigan. Her research focused on the synthesis of
functionalized conjugated polymers for biological sensor applications and for neural probe and prosthetic
device electrode coatings.
## NIMMI RAMANUJAM, PH.D. – GLOBAL WOMEN’S HEALTH TECHNOLOGIES
Dr. Ramanujam is a Professor of Biomedical Engineering, Global Health and Pharmacology and directs the
center for Global Women’s Health Technologies, a partnership between the Pratt School of Engineering and
the Duke Global Health Institute. The center’s mission is to increase research, training and education in
women’s diseases, with a focus on breast and cervical cancer. Her team is involved in three distinct activities:
(1) closing the gap between screening and treatment to reduce cancer disparities through innovative
diagnostic and therapeutic tools, (2) improving the efficacy of local and systemic cancer therapies and (3)
perpetuating biomedical and human centered design concepts to underserved communities and
underrepresented groups through student ambassadors.
Prof. Ramanujam has received several awards for her work in cancer research and technology development
for women’s health. She received the TR100 Young Innovator Award from MIT in 2003, the Global Indus
Technovator Award from MIT in 2005, and several Era of Hope Scholar awards from the DOD. She is a member of
the NIH BMIT-A study section and chair-elect of the DOD’s Breast Cancer Research Program (BCRP) Integration
Panel (IP), which sets the vision of the BCRP program and plans the dissemination of over $100M of funds for
breast cancer research annually. She is co-editor of the Handbook of Biomedical Optics (Taylor and
Francis). Nimmi earned her PhD in Biomedical Engineering from the University of Texas at Austin in 1995 and then
trained as an NIH postdoctoral fellow at the University of Pennsylvania from 1996-2000. Prior to her tenure at
Duke, she was an assistant professor in the Department of Biomedical Engineering at the University of Wisconsin, Madison,
from 2000-2005.
-----
#### Preventing Cervical Cancer through a Package of High Quality, Cost Effective Interventions
Cervical cancer prevention is based on well-established interventions including human papillomavirus (HPV)
vaccination and screening followed by treatment of pre-invasive disease. In the U.S., cervical cancer
incidence and mortality have decreased by 70% over the last 60 years due to screening with the Pap smear
[10] and, more recently, the HPV test; however, women living in medically underserved regions experience a
disproportionately high burden of cervical cancer. In the U.S. alone, for example, half of cervical cancers occur
in women in medically underserved communities. There has been significant effort both in the U.S. and globally
to increase access to screening, and these services are often subsidized, but screen-positive women need a
confirmatory test at a referral setting followed by biopsy, which, if positive, requires yet another visit for
treatment. The three-visit model is required because test results at each visit are not immediate and the
technologies required for confirmatory testing and treatment are not effective in communities where access to
health care is fragile. We aim to prevent cervical cancer via a single visit “see and treat” model. We will talk
about our efforts to prevent cancer by developing an evidence-based, transformative, single visit “see and
treat” model with a package of high quality, cost-effective innovations.
## KATHLEEN ROUSCHE – NATIONAL HEART, LUNG, AND BLOOD INSTITUTE
Dr. Rousche manages the NIH Centers for Accelerated Innovations (NCAI) program within the Office of
Translational Alliances and Coordination (OTAC), Division of Extramural Research Activities, National Heart Lung
and Blood Institute, National Institutes of Health. The NCAI program creates an academic research
environment that encourages innovators to validate the commercial potential of their discoveries to more
effectively transition laboratory discoveries to benefit public health. The three main goals of the network are to
improve the likelihood of individual technologies transitioning from academia to the private sector, improve the
efficiency and effectiveness of the processes supporting biomedical product development, and educate
academic innovators about commercialization.
## STEVEN SCHACHTER, M.D. – CIMIT, HARVARD MEDICAL SCHOOL
Dr. Steven Schachter attended medical school at Case Western Reserve University in Cleveland, Ohio. He
completed an internship in Chapel Hill, North Carolina, a neurological residency at the Harvard Longwood
Neurological Training Program, and an epilepsy fellowship at Beth Israel Hospital in Boston, Massachusetts. He is
Chief Academic Officer and Program Leader of NeuroTechnology at the Consortia for Improving Medicine with
Innovation & Technology (CIMIT) and a Professor of Neurology at Harvard Medical School (HMS). Dr. Schachter
is Past President of the American Epilepsy Society. He is also past Chair of the Professional Advisory Board of the
Epilepsy Foundation and serves on their Board of Directors. He has directed over 70 research projects involving
-----
edited or written 30 other books on epilepsy and behavioral neurology. Dr. Schachter is the founding editor and
editor-in-chief of the medical journals Epilepsy & Behavior and Epilepsy & Behavior Case Reports.
Dr. Schachter is a member of the Administrative Committee (AdCom) of the IEEE Engineering in Medicine and
Biology Society (EMBS) and the Clinical Editor for Journal of Translational Engineering in Health and Medicine.
## ROB TAYLOR – BILL AND MELINDA GATES FOUNDATION
Rob joined the Bill & Melinda Gates Foundation in 2011, and is currently a Program Officer in the Global Health
Innovative Technology Solutions group. Currently Rob’s work is centered around developing new low-cost
diagnostic concepts and developing technology platforms for host and pathogen analysis. Previously, Rob
worked on the foundation’s Point-of-Care Initiative, which was aimed at creating a decentralized platform to
transform diagnostics for the developing world. Prior to that, Rob consulted for the foundation’s Discovery group
and supported Grand Challenges Explorations (GCE), the foundation’s innovative idea engine, among other
programs. Prior to moving to Seattle, Rob supported the Department of Homeland Security’s Advanced
Research Projects Agency (HSARPA), and the Defense Advanced Research Projects Agency (DARPA) in the
management and technical evaluation of next-generation biodetection technologies. Rob received his M.S. in
Microbiology from Virginia Tech.
## IRENE TEBBS, PH.D. – US FOOD AND DRUG ADMINISTRATION
Dr. Tebbs works at FDA as a lead reviewer of premarket submissions and pre-submissions for chemistry,
toxicology and diabetes devices. She also reviews Investigational Device Exemption (IDE) applications for
clinical studies. Dr. Tebbs received her B.S. at The University of Virginia and her Ph.D. from Yale University.
-----
## SRINI TRIDANDAPANI, M.D., PH.D. – EMORY UNIVERSITY
Srini Tridandapani received his MSEE and PhD degrees in electrical engineering from the University of
Washington. He then served as a tenure-track faculty member at Iowa State University for two years
before taking the bold plunge into medical school at the University of Michigan, where he received his MD and
completed his residency training in Radiology. Subsequently, he earned an MS in Clinical Research and an MBA
from Emory University. Dr. Tridandapani is an Associate Professor of Radiology and Imaging Sciences at Emory
University and Adjunct Professor of Electrical & Computer Engineering at the Georgia Institute of Technology.
Dr. Tridandapani’s current research involves the development of novel gating strategies for optimizing cardiac
computed tomography and innovative tools to increase patient safety in medical imaging.
## PAUL YAGER, PH.D. – UNIVERSITY OF WASHINGTON
Paul Yager, a native of Manhattan, received his A.B. in Biochemistry from Princeton in 1975, and a Ph.D. in
Chemistry from the University of Oregon in 1980, specializing in vibrational spectroscopy of biomolecules. After
an NRC Fellowship at the Naval Research Laboratory (1980-1982), he joined the NRL staff as a Research
Chemist. He moved to the Center (now Department) of Bioengineering at the University of Washington as
Associate Professor in 1987, advancing to Professor in 1995; he served as Chair of the department from 2007 to
2013. Initially working on both self-organizing lipid microstructure and optically based biomedical sensors, since
1992, his lab has focused primarily on development of microfluidics for the analysis of biological fluids for use in
low-cost point-of-care biomedical diagnostics for the developed and developing worlds.
From 2005-2010 a team led by Yager was supported by the Bill & Melinda Gates Foundation to develop a low-cost, rugged point-of-care system for pathogen identification. Since 2008, most lab activity (with several close
partners) has focused on developing two-dimensional porous networks for ultra-low-cost instrument-free
pathogen identification for human diagnosis. Readout is often coupled with cell phones for quantitative
analysis and data transmission; this has been under support of NIH, NSF, DARPA and DTRA. He has authored
>150 publications in refereed journals, and has almost 40 issued patents. Specifics are at
http://faculty.washington.edu/yagerp/.
-----
## MAUREEN BEANAN, NIAID
RAO DIVI, NCI
MARIA GIOVANNI, NIAID
JAMES LUO, PH.D., NHLBI
MIGUEL OSSANDON, NCI
WILLIAM RILEY, PH.D., NIH
SHIVKUMAR SABESAN, PH.D., GOOGLE
NINA SILVERBERG, NIA
-----
| 17,092
|
{
"disclaimer": "Notice: The following paper fields have been elided by the publisher: {'references', 'abstract'}. Paper or abstract available at https://api.unpaywall.org/v2/10.1109/HIC.2017.8227564?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1109/HIC.2017.8227564, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": null,
"status": "BRONZE",
"url": "https://ieeexplore.ieee.org/ielx7/8187165/8227561/08227564.pdf"
}
| 2017
|
[] | true
| 2017-11-01T00:00:00
|
[] | 17,092
|
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Medicine",
"source": "s2-fos-model"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/0048e20fee860d38fecbabf42fcd01b1737b297b
|
[
"Computer Science"
] | 0.878784
|
Blockchain-Based Healthcare Workflows in Federated Hospital Clouds
|
0048e20fee860d38fecbabf42fcd01b1737b297b
|
European Conference on Service-Oriented and Cloud Computing
|
[
{
"authorId": "1615345582",
"name": "Armando Ruggeri"
},
{
"authorId": "143806018",
"name": "M. Fazio"
},
{
"authorId": "1790992",
"name": "A. Celesti"
},
{
"authorId": "1809861",
"name": "M. Villari"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"Eur Conf Serv Cloud Comput",
"ESOCC"
],
"alternate_urls": null,
"id": "e9426e94-3ba3-4852-9f81-3b107070b694",
"issn": null,
"name": "European Conference on Service-Oriented and Cloud Computing",
"type": "conference",
"url": "http://www.wikicfp.com/cfp/program?id=920"
}
|
Nowadays, security is one of the biggest concerns against the wide adoption of on-demand Cloud services. Specifically, one of the major challenges in many application domains is the certification of exchanged data. For these reasons, since the advent of bitcoin and smart contracts respectively in 2009 and 2015, healthcare has been one of the major sectors in which Blockchain has been studied. In this paper, by exploiting the intrinsic security feature of the Blockchain technology, we propose a Software as a Service (SaaS) that enables a hospital Cloud to establish a federation with other ones in order to arrange a virtual healthcare team including doctors coming from different federated hospitals that cooperate in order to carry out a healthcare workflow. Experiments conducted in a prototype implemented by means of the Ethereum platform show that the overhead introduced by Blockchain is acceptable considering the obvious gained advantages in terms of security.
|
# **Blockchain-Based Healthcare Workflows in Federated Hospital Clouds**
## Armando Ruggeri, Maria Fazio, Antonio Celesti, Massimo Villari
**To cite this version:**
#### Armando Ruggeri, Maria Fazio, Antonio Celesti, Massimo Villari. Blockchain-Based Healthcare Workflows in Federated Hospital Clouds. 8th European Conference on Service-Oriented and Cloud Computing (ESOCC), Sep 2020, Heraklion, Crete, Greece. pp.113-121, 10.1007/978-3-030-44769-4_9. hal-03203266
## **HAL Id: hal-03203266** **https://inria.hal.science/hal-03203266v1**
#### Submitted on 20 Apr 2021
#### HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
#### Distributed under a Creative Commons Attribution 4.0 International License
-----
## **Blockchain-Based Healthcare Workflows in Federated Hospital Clouds**
Armando Ruggeri [1], Maria Fazio [1,2], Antonio Celesti [1,3], and Massimo Villari [1]
1 University of Messina, MIFT Department, Italy
{armruggeri, mfazio, acelesti, mvillari}@unime.it
2 IRCCS Centro Neurolesi “Bonino-Pulejo”, Italy
[email protected]
3 on behalf of INdAM - GNCS Group, Italy
**Abstract.** Nowadays, security is one of the biggest concerns against the
wide adoption of on-demand Cloud services. Specifically, one of the major
challenges in many application domains is the certification of exchanged
data. For these reasons, since the advent of bitcoin and smart contracts
respectively in 2009 and 2015, healthcare has been one of the major sectors in which Blockchain has been studied. In this paper, by exploiting
the intrinsic security feature of the Blockchain technology, we propose a
Software as a Service (SaaS) that enables a hospital Cloud to establish a
federation with other ones in order to arrange a virtual healthcare team
including doctors coming from different federated hospitals that cooperate in order to carry out a healthcare workflow. Experiments conducted
in a prototype implemented by means of the Ethereum platform show
that the overhead introduced by Blockchain is acceptable considering the
obvious gained advantages in terms of security.
**Keywords:** Blockchain, Smart Contract, Healthcare, Cloud, SaaS, Hospital, Federation.
### **1 Introduction**
The demographic growth of the last century combined with the increased life
expectancy and shortage of specialized medical personnel in Europe [1] [2] has
made access to proper medical treatments one of the major concerns of the
last decade. The recent advancements brought by the Cloud computing paradigm
have so far been only partially taken into consideration by hospitals and, more
generally, medical centers, in spite of a considerable number of scientific initiatives in
eHealth [3]. In particular, a crucial aspect that has slowed the “Cloudisation”
of hospitals concerns the security of exchanged data. It is essential that shared
pieces of healthcare data are certified and their integrity guaranteed, in order to
prevent pieces of clinical information from being either intentionally or accidentally
altered.
In recent years different solutions have been proposed to solve such an issue:
among these, the Blockchain technology, thanks to its intrinsic features of data
-----
2 A. Ruggeri et al.
non-repudiation and immutability, has aroused great interest in both the scientific
and industrial communities. Introduced in 2009 as the technology behind Bitcoin
[4], it has completely revolutionized traditional encryption-based security systems,
introducing a new approach in which information is saved in blocks and each block
is linked to the previous one via a cryptographic hash. One of the major applications
of Blockchain is the smart contract, i.e., a computer protocol aimed at digitally
facilitating, verifying, and enforcing the negotiation of an agreement between parties
without the need for a third-party certifier.
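To make the linking just described concrete, the following minimal Python sketch (our illustration, not the implementation discussed later in this paper) chains blocks by storing each predecessor's hash, so that altering any block invalidates every subsequent link:

```python
import hashlib
import json
import time

def block_hash(block):
    # Hash the block's canonical JSON serialization.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def new_block(data, prev=None):
    # Each block stores the hash of its predecessor, forming the chain.
    return {
        "timestamp": time.time(),
        "data": data,
        "prev_hash": block_hash(prev) if prev else "0" * 64,
    }

def chain_is_valid(chain):
    # Valid iff every block still points at the hash of the block before it.
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

chain = [new_block("genesis")]
chain.append(new_block("therapy record A", chain[-1]))
chain.append(new_block("therapy record B", chain[-1]))
assert chain_is_valid(chain)

chain[1]["data"] = "tampered"       # altering any block...
assert not chain_is_valid(chain)    # ...breaks every later link
```

In a real deployment the same effect is obtained by the consensus protocol of the Blockchain platform rather than by local verification alone.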
Blockchain has been increasingly recognized as a technology able to address
existing information access problems in different application domains, including
healthcare. In fact, it can potentially enhance the perception of safety among
medical operators, improving access to healthcare services through greater
transparency, security and privacy, traceability, and efficiency.
In this paper, by exploiting the intrinsic security feature of the Blockchain
technology, we propose a clinical workflow that:
**–** enables the creation of a virtual healthcare team including doctors belonging to different federated hospitals;
**–** enables patients’ electronic health records to be shared among virtual healthcare team members while preserving sensitive data;
**–** adopts smart contracts in order to make the transactions related to applied therapies trackable and irreversible (see the sketch after this list);
**–** secures electronic medical records when they are accessed by patients and medical professionals;
**–** guarantees the authenticity of the whole federated healthcare workflow.
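As a rough sketch of how the smart-contract step might look from the client side, the snippet below uses the web3.py library against an Ethereum node. The contract address, ABI, and the recordTherapy function are placeholders invented for illustration; they are not the actual interface of the system described in this paper.

```python
from web3 import Web3

# Connect to an Ethereum node run within the hospital federation (placeholder URL).
w3 = Web3(Web3.HTTPProvider("http://localhost:8545"))

# Placeholders: in practice these come from compiling and deploying the
# federation's smart contract; both are invented here for illustration.
THERAPY_ADDRESS = "0x0000000000000000000000000000000000000000"
THERAPY_ABI = []  # ABI of the deployed contract (omitted)

therapy_log = w3.eth.contract(address=THERAPY_ADDRESS, abi=THERAPY_ABI)
doctor = w3.eth.accounts[0]  # the prescribing doctor's account

# Record an applied therapy; once the transaction is mined, the entry is
# trackable and cannot be reverted without rewriting the chain.
tx_hash = therapy_log.functions.recordTherapy(
    "patient-pseudonym-42",        # anonymized patient identifier
    "drug X, 20 mg, twice daily",  # applied therapy
).transact({"from": doctor})

receipt = w3.eth.wait_for_transaction_receipt(tx_hash)
print("therapy recorded in block", receipt.blockNumber)
```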
In general, the proposed solution allows the treatment of patients, which may
take place in different federated hospitals, to be tracked from hospitalization to
discharge, supporting the whole medical staff in planning treatments. Moreover,
we discuss a Software as a Service (SaaS) through which the workflow can be applied.
The remainder of this paper is organized as follows. A brief overview of the most
recent initiatives on the adoption of Blockchain in healthcare is provided in
Section 2. Motivations are discussed in Section 3. The design of the SaaS is
presented in Section 4, whereas its implementation adopting Flask, MongoDB
and Ethereum is described in Section 5. Experiments demonstrating that the
overhead introduced by Blockchain is acceptable, considering the clear
advantages gained in terms of security, are discussed in Section 6. Finally, conclusions
and future directions are discussed in Section 7.
### **2 Related Work**
In recent years numerous research studies have been conducted in the healthcare
domain, with particular attention to the application of the Blockchain technology
[5].
Blockchain can drastically improve the security of hospital information systems, as discussed in many recent scientific works [6] [7] [8] [9]. However, up to
now, most scientific initiatives are either theoretical or at an early stage, and
it is not always clear which protocols and frameworks should be used in order
to carry out system implementations that can be deployed in real healthcare
environments.
Blockchain has been increasingly recognized as a tool able to address existing
open information access issues [10]. In fact, it is possible to improve access to
health services by using the Blockchain technology in order to achieve greater
transparency, security and privacy, traceability and efficiency. In this regard, a
solution adopting Blockchain with the purpose of guaranteeing authorized access to
patients’ medical information is discussed in [11]. In particular, mechanisms
to preserve both the patient’s identity and the integrity of his/her clinical history are
proposed.
Another application of Blockchain concerns the supply chain in the pharmaceutical sector and the development of measures against counterfeit drugs. While
the development of new drugs involves substantial costs related to studies evaluating the safety and updating of the drug, the use of smart contracts
supports informed consent procedures and allows certifying the quality of
data [12].
Differently from the aforementioned recent scientific initiatives, this
paper describes a practical implementation showing how Blockchain can be used to
improve medical analysis and treatment, empowering collaboration among a group
of federated hospitals.
### **3 Motivation**
This paper aims at recommending new approaches able to harmonize health procedures with new technologies, in order to guarantee patients’ safety and therapeutic certification, verifying that every doctor’s choice is immutably recorded,
with the purpose of guaranteeing and tracking that all hospital protocols have been
scrupulously followed. Furthermore, the proposed system was designed and implemented in order to support a virtual healthcare team, including a selected group
of doctors, in forming a clear picture of the patient’s clinical status, especially in
critical conditions. The anonymized patient’s health data and clinical
analyses are shared among doctors participating in the federation of hospitals,
while the patient’s personal data are never shared.
Figure 1 describes a scenario where a patient’s clinical data are shared across
participants in a federation of hospitals for cooperation and knowledge sharing,
and the exchanged data are certified on a private Blockchain where all participants
are known and trusted.
Specifically, the healthcare workflow adopted in the proposed system includes the following phases:
1. **Hospitalization** : patient reaches the hospital and personal details, date and
type of visit are recorded;
**Fig. 1.** Federation of hospitals: clinical data is shared across participants for cooperation.
2. **Analysis** : the patient follows the procedures to ascertain the nature of the disease (e.g., blood tests, clinical examinations, possible CT scans, RX laboratory tests, etc.) and the results of the analyses are saved on a Cloud storage
space inside the hospital Cloud, managed in a dedicated directory for the
patient identified by a visit identification code;
3. **MD evaluation** : doctor analyzes the results of clinical analysis and prepares
a report with the therapy to be followed;
4. **Federated teleconference** : a selected pool of doctors belonging to the
hospital federation is invited to participate in a virtual healthcare team via
teleconference in order to clarify the patient’s clinical situation. The patient’s
health data and clinical analyses are shared with the other doctors belonging
to the virtual healthcare team; the patient’s personal details are never shared;
5. **Drug administration** : the hospitalized patient is constantly monitored by
nurses who apply treatments based on therapeutic indications; each treatment is recorded.
### **4 System Design**
Once the virtual healthcare team has identified the disease, it writes a prescription for the treatment, indicating the disease to be cured and a drug description
including dosage and mode of use. It is important to guarantee that only authorized doctors are allowed to create a new prescription or to update an existing
one, because a wrong diagnosis can lead to a worsening of the clinical condition or
to death, so it is mandatory to know who created each new electronic health
record.
The system was designed as a Software as a Service (SaaS) in order to store:
i) patients’ electronic health records; ii) treatments for specific diseases resulting
from medical examinations. The objective of the whole system is to harmonize
health procedures by means of the following technologies:
**– Blockchain engine** : to use the features of a decentralized and distributed
certification system with the technology offered by the development and
coding of smart contract;
**– Cloud storage** : to use an open-source and open-architecture file hosting
service for file sharing managed with authorizations to archive all the files
required to support the analysis of the nature of the disease such as blood
tests, CT scans and laboratory tests;
**– NoSQL database** : to exploit the potential of a document-oriented database
to store and manage patient data and diseases through tags, for fast and
efficient search, and to store Blockchain transaction hashes and links to files
stored in Cloud storage (an illustrative record shape is sketched below).
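As a concrete, purely illustrative example, a single treatment record tying the three layers together could be shaped as follows; all field names and values are our assumptions, not the authors' schema:

```python
# One treatment record linking the NoSQL store, the Cloud storage folder
# and the Blockchain transaction hash (illustrative values only).
treatment_record = {
    "patient_id": "anon-42",          # anonymized identifier only
    "doctor_id": "doc-07",
    "disease_code": "ICD10-J18",      # searchable tag
    "pharma_code": "ATC-J01",
    "cloud_folder": "treatments/anon-42/visit-001",   # NextCloud directory
    "tx_hash": "0xabc123",            # Ethereum transaction hash
}
```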
### **5 Implementation**
The SaaS was designed in order to apply the previously described healthcare workflow, supporting a virtual healthcare team whose members are doctors
belonging to different federated hospitals. Figure 2 shows the main software
components used to implement the SaaS.
**Fig. 2.** SaaS software components.
A graphical web interface implemented with HTML5, CSS and JavaScript
serves as the entry point of the SaaS. All requests coming from patients and
doctors flow through this interface and are processed by a server built in
Python3, leveraging Flask as the Web Server Gateway Interface (WSGI) application framework and Gunicorn to handle multiple requests with a production-ready setup. All the components are configured as Docker containers in order to take advantage of
virtualization technology, allowing the service portability, resiliency and automatic
updates that are typical of a Cloud Infrastructure as a Service (IaaS).
The Python web server provides a front-end that allows retrieving all existing
patients’ information (such as personal details, disease and pharmaceutic codes,
links to documentation and Blockchain hash verification); adding new patients;
and submitting new treatments, specifying all the required pieces of information.
Specifically, one web page is dedicated to registering a new patient, saving his/her
primary personal information, and a separate web page is dedicated to the registration of a new treatment, where it is possible to select the medical examination date,
the patient (chosen from those already registered and available in the database)
and the doctor performing the registration.
Since patients’ sensitive data must be anonymized, and health records and
treatments must be trackable and irreversible, the related pieces of information
were stored by combining a NoSQL DataBase Management System (DBMS) with
a Blockchain system. Therefore, all pieces of information are stored in the MongoDB NoSQL DBMS and in an Ethereum private network through a smart
contract developed in Solidity. Ethereum with a private network installation was
chosen considering what is reported in Blockbench [13], which highlights the impossibility for Hyperledger Fabric, an alternative Blockchain
platform, to scale above 16 nodes, an important limitation for
the scope of this work, which aims at creating a trusted and federated
network among multiple hospital Clouds, and considering that Ethereum is more
mature in terms of its code base, user base and developer community.
The smart contract accepts input parameters such as the anonymized patient
id and doctor id, and disease and pharmaceutic codes, and stores these pieces of
information in a simple data structure. The hash code resulting from the mining
of each transaction is stored in the MongoDB database and can be used for
verification using services like etherscan.io.
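As a hedged sketch of this registration step (using the web3.py and pymongo client libraries; the contract address, ABI, function name and record fields below are placeholders we introduce for illustration, not the authors' code), the flow could look like:

```python
import pymongo
from web3 import Web3

CONTRACT_ADDRESS = "0x0000000000000000000000000000000000000000"  # placeholder
CONTRACT_ABI = []  # placeholder: ABI of the deployed Solidity contract

# Connect to a private-network Ethereum node and bind the contract.
w3 = Web3(Web3.HTTPProvider("http://127.0.0.1:8545"))
contract = w3.eth.contract(address=CONTRACT_ADDRESS, abi=CONTRACT_ABI)

# Submit the treatment registration and wait for it to be mined.
tx_hash = contract.functions.registerTreatment(
    "anon-42", "doc-07", "ICD10-J18", "ATC-J01"
).transact({"from": w3.eth.accounts[0]})
receipt = w3.eth.wait_for_transaction_receipt(tx_hash)

# Keep the transaction hash next to the anonymized record for later audits.
db = pymongo.MongoClient("mongodb://localhost:27017")["hospital"]
db.treatments.insert_one({
    "patient_id": "anon-42",
    "tx_hash": receipt.transactionHash.hex(),
})
```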
All the clinical documentation produced is uploaded to a local instance of
NextCloud storage, using one folder per treatment, which does not contain any of the patient’s personal data other than the patient’s anonymized identification number,
in order to be compliant with the General Data Protection Regulation (GDPR).
Every change to the files or content of the folder is tracked, making it possible to keep a history of the documentation and its modifications.
This service is capable of detecting any modification to files or folders
using a listener called *External script* . It is then possible to store the fingerprint
and timestamp of each modification in the database, thus making it possible to
track the history of the treatment. This is important to guarantee the system’s
overall anti-tampering capability.
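A minimal Python sketch of this fingerprinting idea (paths and record shape are illustrative assumptions, not the authors' *External script*):

```python
import hashlib
import time
from pathlib import Path

# Fingerprint every file in a treatment folder and record the digest with a
# timestamp, so any later modification of the documentation is detectable.
def fingerprint_folder(folder):
    records = []
    for path in sorted(Path(folder).rglob("*")):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            records.append({"file": str(path), "sha256": digest,
                            "timestamp": time.time()})
    return records
```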
### **6 Performance Assessment**
Experiments focused on the Blockchain mechanism of our SaaS implementation,
in order to assess the performance of the certified treatment prescription system.
In particular, the system assessment was conducted by analysing the total
execution time required to perform a varying number of transactions, i.e., treatment registrations through Ethereum, in combination with a varying number of
doctors’ accounts. The testbed was arranged considering a server with the following hardware/software configuration: Intel® Xeon® E3-12xx v2 @ 2.7 GHz, 4-core CPU, 4 GB RAM, running Ubuntu Server 18.04.
All analyses were performed by sending transactions to the server, varying the number of total and simultaneous requests. Specifically, each request
invokes a new treatment registration and the mining of an Ethereum transaction for
it. Experiments were conducted considering 100, 250 and 500 transactions
and 25, 50 and 100 accounts. Each test was repeated 30 times, considering
95% confidence intervals.
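A toy load generator mirroring this setup might look as follows (the endpoint URL and payload are illustrative assumptions, not the authors' test harness):

```python
import concurrent.futures
import requests

# Fire n_total registration requests with n_concurrent in flight at once and
# return the per-request latencies, from which means and 95% confidence
# intervals can be computed over repeated runs.
def run_test(n_total=100, n_concurrent=25,
             url="http://localhost:5000/treatment"):
    payload = {"patient_id": "anon-42", "doctor_id": "doc-07",
               "disease_code": "ICD10-J18", "pharma_code": "ATC-J01"}
    with concurrent.futures.ThreadPoolExecutor(n_concurrent) as pool:
        futures = [pool.submit(requests.post, url, json=payload)
                   for _ in range(n_total)]
        return [f.result().elapsed.total_seconds() for f in futures]
```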
To simulate a real private instance of the Ethereum Blockchain, all tests were
performed using the Ropsten Ethereum public test network, leveraging 300+
available nodes with a real server load status. It must be considered that the
Ropsten environment is based on the Proof of Work (PoW) consensus
protocol, which makes it difficult to obtain scalability and system speed.
Figure 3(a) describes a new treatment registration request without sending
transactions to the Ethereum Blockchain. This demonstrates how the server scales, as
the execution time is consistent for simultaneous requests (25, 50, 100) regardless of
the total number of requests. Figure 3(b) shows an expected degradation of the
system compared to the requests made without Ethereum Blockchain mining,
growing with the total number of sent transactions. This is the worst-case scenario
based on the number of accounts, as one account can only send one transaction
at a time due to the nonce preventing replay attacks.
**Fig. 3.** Total execution time variation: (a) test execution without Blockchain mining; (b) test execution with Blockchain mining.
### **7 Conclusion and Future Work**
This project demonstrates how Blockchain can be used in the healthcare environment to improve hospital workflows, guaranteeing the authenticity of stored data.
Experimental results highlight that the certified treatment prescription system
introduces an acceptable overhead in terms of response time, considering the
clear advantages introduced by the Blockchain technology.
The Blockchain technology is set to evolve in the near future,
improving system capabilities and robustness, and public test instances with
different consensus protocols will become available, with benefits for performance
and scalability.
In future developments, this work can be extended by integrating a comprehensive healthcare scenario with different involved organizations, such as pharmaceutical companies registering in the Blockchain all the phases of drug production,
up to the sealing of the final package and its shipment. Thus, when a patient buys a prescribed
medicine, it is possible to link the patient with the medicine box, which would
be an important step towards ending drug falsification and an important
assurance for the end user, who can be identified in case a specific drug package
is recalled.
### **ACKNOWLEDGMENT**
This work has been partially supported by the TALISMAN Italian PON project
and by the Italian Healthcare Ministry funded Young Researcher (under 40 years) project entitled “Do Severe acquired brain injury patients benefit from
Telerehabilitation? A Cost-effectiveness analysis study” - GR-2016-02361306.
### **References**
1. Hassenteufel, P., Schweyer, F.X., Gerlinger, T., Henkel, R., Lückenbach, C., Reiter,
R.: The role of professional groups in policy change: Physician’s organizations and
the issue of local medical provision shortages in France and Germany. European
Policy Analysis (2019)
2. Dubas-Jakóbczyk, K., Domagała, A., Mikos, M.: Impact of the doctor deficit on
hospital management in Poland: A mixed-method study. The International Journal
of Health Planning and Management **34** (2019) 187–195
3. Jha, A.K., Ferris, T.G., Donelan, K., DesRoches, C., Shields, A., Rosenbaum, S.,
Blumenthal, D.: How common are electronic health records in the United States? A
summary of the evidence. Health Affairs **25** (2006) W496–W507 PMID: 17035341.
4. Nakamoto, S.: Bitcoin: A peer-to-peer electronic cash system. (2009)
5. Griggs, K., Ossipova, O., Kohlios, C., Baccarini, A., Howson, E., Hayajneh, T.:
Healthcare blockchain system using smart contracts for secure automated remote
patient monitoring. Journal of Medical Systems **42** (2018)
6. Chakraborty, S., Aich, S., Kim, H.: A secure healthcare system design framework
using blockchain technology. In: 2019 21st International Conference on Advanced
Communication Technology (ICACT). (2019) 260–264
7. Dasaklis, T.K., Casino, F., Patsakis, C.: Blockchain meets smart health: Towards
next generation healthcare services. In: 2018 9th International Conference on Information, Intelligence, Systems and Applications (IISA). (2018) 1–8
8. Srivastava, G., Crichigno, J., Dhar, S.: A light and secure healthcare blockchain
for iot medical devices. In: 2019 IEEE Canadian Conference of Electrical and
Computer Engineering (CCECE). (2019) 1–5
9. Hossein, K.M., Esmaeili, M.E., Dargahi, T., khonsari, A.: Blockchain-based
privacy-preserving healthcare architecture. In: 2019 IEEE Canadian Conference of
Electrical and Computer Engineering (CCECE). (2019) 1–4
10. Zhang, P., White, J., Schmidt, D., Lenz, G., Rosenbloom, S.: Fhirchain: Applying blockchain to securely and scalably share clinical data. Computational and
Structural Biotechnology Journal **16** (2018)
11. Ramani, V., Kumar, T., Bracken, A., Liyanage, M., Ylianttila, M.: Secure and
efficient data accessibility in blockchain based healthcare systems. 2018 IEEE
Global Communications Conference (GLOBECOM) (2018) 206–212
12. Razak, O.: Revolutionizing pharma — one blockchain use case at a time. (2018)
13. Dinh, T.T.A., Wang, J., Chen, G., Liu, R., Ooi, B.C., Tan, K.L.: Blockbench:
A framework for analyzing private blockchains. In: Proceedings of the 2017 ACM
International Conference on Management of Data, Association for Computing Machinery (2017) 1085–1100
| 5,009
|
{
"disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.1007/978-3-030-44769-4_9?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1007/978-3-030-44769-4_9, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "CCBY",
"status": "GREEN",
"url": "https://hal.inria.fr/hal-03203266/file/493832_1_En_9_Chapter.pdf"
}
| 2,020
|
[
"JournalArticle"
] | true
| 2020-09-28T00:00:00
|
[
{
"paperId": "5c838f8604b3629c683adf486dc2482c909ded3e",
"title": "The role of professional groups in policy change: Physician's organizations and the issue of local medical provision shortages in France and Germany"
},
{
"paperId": "e0bc5a7d121a2fbf70e2b3300ecb5b15f1016636",
"title": "Blockchain-Based Privacy-Preserving Healthcare Architecture"
},
{
"paperId": "cd26743760a99328b168153516c972b0fd1b1475",
"title": "A Light and Secure Healthcare Blockchain for IoT Medical Devices"
},
{
"paperId": "9c0797bbadd482c90dab6c1b44b016e082f9c53d",
"title": "A Secure Healthcare System Design Framework using Blockchain Technology"
},
{
"paperId": "41db775646e5909053126a13ced6c35199377f3f",
"title": "Secure and Efficient Data Accessibility in Blockchain Based Healthcare Systems"
},
{
"paperId": "ae7ac2c993d6159e26dd6e60d2944c3df9e8adb0",
"title": "Impact of the doctor deficit on hospital management in Poland: A mixed‐method study"
},
{
"paperId": "493897a42c53209994787eea20c34f700e1bba63",
"title": "FHIRChain: Applying Blockchain to Securely and Scalably Share Clinical Data"
},
{
"paperId": "967b5daa2fc1378d3b227531d74d090f054f8c49",
"title": "Blockchain Meets Smart Health: Towards Next Generation Healthcare Services"
},
{
"paperId": "6d661299a8207a4bff536494cec201acee3c6c1c",
"title": "Healthcare Blockchain System Using Smart Contracts for Secure Automated Remote Patient Monitoring"
},
{
"paperId": "97b4375a71e98fb5b4628b3cf9bf80c4e006e891",
"title": "BLOCKBENCH: A Framework for Analyzing Private Blockchains"
},
{
"paperId": "c1a2eee23eef31f5678a5622a18cfa46fcc488b2",
"title": "How common are electronic health records in the United States? A summary of the evidence."
},
{
"paperId": null,
"title": "Revolutionizing pharma — one blockchain use case at a time"
},
{
"paperId": "4e9ec92a90c5d571d2f1d496f8df01f0a8f38596",
"title": "Bitcoin: A Peer-to-Peer Electronic Cash System"
},
{
"paperId": null,
"title": "Blockchain-Based Healthcare Workflows in Federated Hospital Clouds"
}
] | 5,009
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Engineering",
"source": "external"
},
{
"category": "Engineering",
"source": "s2-fos-model"
},
{
"category": "Environmental Science",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/00496a036e553b7ddc4215df2d5901dbb5129aa2
|
[
"Computer Science",
"Engineering"
] | 0.842257
|
Practical Considerations of DER Coordination with Distributed Optimal Power Flow
|
00496a036e553b7ddc4215df2d5901dbb5129aa2
|
2020 International Conference on Smart Grids and Energy Systems (SGES)
|
[
{
"authorId": "112952066",
"name": "Daniel Gebbran"
},
{
"authorId": "3364581",
"name": "Sleiman Mhanna"
},
{
"authorId": "1996149",
"name": "Archie C. Chapman"
},
{
"authorId": "1697657",
"name": "Wibowo Hardjawana"
},
{
"authorId": "1705795",
"name": "B. Vucetic"
},
{
"authorId": "2448835",
"name": "G. Verbič"
}
] |
{
"alternate_issns": null,
"alternate_names": null,
"alternate_urls": null,
"id": null,
"issn": null,
"name": null,
"type": null,
"url": null
}
|
The coordination of prosumer-owned, behind-the-meter distributed energy resources (DER) can be achieved using a multiperiod, distributed optimal power flow (DOPF), which satisfies network constraints and preserves the privacy of prosumers. To solve the problem in a distributed fashion, it is decomposed and solved using the alternating direction method of multipliers (ADMM), which may require many iterations between prosumers and the central entity (i.e., an aggregator). Furthermore, the computational burden is shared among the agents with different processing capacities. Therefore, computational constraints and communication requirements may make the DOPF infeasible or impractical. In this paper, part of the DOPF (some of the prosumer subproblems) is executed on a Raspberry Pi-based hardware prototype, which emulates a low processing power, edge computing device. Four important aspects are analyzed using test cases of different complexities. The first is the computation cost of executing the subproblems in the edge computing device. The second is the algorithm operation on congested electrical networks, which impacts the convergence speed of DOPF solutions. Third, the precision of the computed solution, including the trade-off between solution quality and the number of iterations, is examined. Fourth, the communication requirements for implementation across different communication networks are investigated. The above metrics are analyzed in four scenarios involving 26-bus and 51-bus networks.
|
# Practical Considerations of DER Coordination with Distributed Optimal Power Flow
### Archie C. Chapman
University of Queensland
Brisbane, Australia
[email protected]
### Gregor Verbiˇc
University of Sydney
Sydney, Australia
[email protected]
### Daniel Gebbran
University of Sydney
Sydney, Australia
[email protected]
### Wibowo Hardjawana
University of Sydney
Sydney, Australia
[email protected]
### Sleiman Mhanna
University of Melbourne
Melbourne, Australia
[email protected]
### Branka Vucetic
University of Sydney
Sydney, Australia
[email protected]
**_Abstract—The coordination of prosumer-owned, behind-the-_**
**meter distributed energy resources (DER) can be achieved**
**using a multiperiod, distributed optimal power flow (DOPF),**
**which satisfies network constraints and preserves the privacy**
**of prosumers. To solve the problem in a distributed fashion, it is**
**decomposed and solved using the alternating direction method**
**of multipliers (ADMM), which may require many iterations**
**between prosumers and the central entity (i.e., an aggregator).**
**Furthermore, the computational burden is shared among the**
**agents with different processing capacities. Therefore, computa-**
**tional constraints and communication requirements may make**
**the DOPF infeasible or impractical. In this paper, part of the**
**DOPF (some of the prosumer subproblems) is executed on a**
**Raspberry Pi-based hardware prototype, which emulates a low**
**processing power, edge computing device. Four important aspects**
**are analyzed using test cases of different complexities. The first**
**is the computation cost of executing the subproblems in the**
**edge computing device. The second is the algorithm operation**
**on congested electrical networks, which impacts the convergence**
**speed of DOPF solutions. Third, the precision of the computed**
**solution, including the trade-off between solution quality and the**
**number of iterations, is examined. Fourth, the communication**
**requirements for implementation across different communication**
**networks are investigated. The above metrics are analyzed in four**
**scenarios involving 26-bus and 51-bus networks.**
**_Index Terms_**—Distributed optimal power flow (DOPF), distributed energy resources (DER), ADMM, prosumers, demand response, communication latency, edge computing.
I. INTRODUCTION

Decentralizing power systems by integrating distributed energy resources (DER) at the prosumer level offers economic and technical benefits for both owners and network operators, but requires careful coordination to minimize negative impacts on the grid [1]. In this context, distributed optimal power flow (DOPF) methods have been shown to successfully coordinate DER [2]–[5], ensuring network constraints are always satisfied whilst also preserving prosumer privacy and prerogatives.

However, there is currently limited literature analyzing practical applications of DOPF [2], [6], and important implementation aspects have not been discussed in sufficient detail, such as: (i) the solution time of DOPF on actual distributed hardware, (ii) operation of the algorithm in congested electrical networks, (iii) the precision of the solution, and (iv) the communication requirements for implementation, such as latency requirements. To fill this gap in the literature, in this paper we present a DOPF deployment on edge computing devices, and discuss its characteristics and real-world performance.

Corresponding author: [email protected].

_A. Background_
Because the AC optimal power flow (OPF) problem is nonconvex, it is typically solved using interior point methods. Although these methods cannot guarantee global optimality in general (since they solve to local optimality), the resulting solution is guaranteed to be feasible. However, the OPF quickly becomes intractable when considering DER, due to the sheer number of variables involved. This motivates investigations into distributed approaches, of which several methods have been applied: dual decomposition, analytic target cascading, auxiliary problem principle, optimality condition decomposition, gradient dynamics, dynamic programming with message passing, and the alternating direction method of multipliers (ADMM). A comprehensive review of their implementations can be found in [7].
ADMM [8] has been widely used to solve large-scale OPF problems [7], as it allows for flexible decompositions of the original OPF problem. These range from network subregions [9] down to an element-wise (e.g., generators, buses, and lines) decomposition [10]. In ADMM, each of the resulting decomposed parts solves a subproblem and exchanges messages with a central aggregator (or with other agents) until convergence is achieved [11]. A decomposition at the point of connection between prosumers and the network was deemed a practical balance for DER coordination [2]–[4]. It preserves the privacy of prosumers and allows for parallelization of subproblems (benefits over centralized approaches), and offers quicker solutions (a smaller number of iterations) when compared to fully decentralized approaches. This approach has been demonstrated to successfully coordinate DER in real-world scenarios in a recent Australian trial [2], and can be implemented on edge computing devices (at individual prosumers), benefiting from subproblem parallelization to distribute the computational load [6].

Because this approach is very recent, there is sparse literature and a dearth of information regarding practical considerations for this DER coordination method.
_B. Contributions_
This work offers important technical insights into modeling
and deploying DER coordination methods using DOPF. To
offer a solid testbed, part of the subproblems is deployed on
a hardware prototype, based on Raspberry Pis 3B+ (RPis)
– a small, single-board computer. This allows for a more
realistic analysis, emulating an edge computing archetype
where prosumer computations are conducted on embedded
hardware. The remainder of the problem is solved on a
PC. Four different test cases are simulated, involving two
networks and two time horizons, which allows for comparison
across different setups. The paper focuses on four principal
characteristics of the problem, which can be summarized in
the following contributions:
_• Quantification of computation times for the DOPF imple-_
mented across edge computing devices.
_• Investigation of algorithm execution on normal operation_
versus congested system conditions.
_• Analysis of solution precision, including trade-offs be-_
tween solution quality and computational burden.
_• Discussion of communication requirements for imple-_
mentation on modern communication networks.
_C. Paper Structure_
The remainder of the paper is structured as follows: Section
II formulates the DOPF, including the initial problem, the decomposition and the resulting distributed problem formulation.
Section III discusses details of the implementation, including
algorithm specifications, hardware description and details of
the test networks. Section IV presents the results and discusses
each of the four main proposed metrics. Finally, Section V
presents a general discussion on the results and Section VI
finishes with concluding remarks.
II. MOPF FORMULATION

The proposed approach for DER coordination is formulated as a multi-period optimal power flow (OPF) problem. It consists of two levels. At the lower level, prosumers schedule their DER, minimizing energy expenditure.¹ At the upper level, the distribution network system operator (DNSP) coordinates prosumers’ actions to minimize the network objective, whilst abiding by network limits and operational constraints.

The objective function of this problem is:
$$\underset{\mathbf{x},\,\mathbf{z}}{\text{minimize}}\;\; F(\mathbf{x},\mathbf{z}) := f(\mathbf{x}) + \sum_{h\in\mathcal{H}} g_h(\mathbf{z}_h) = \sum_{t\in\mathcal{T}} \big[ c_2\,(p^+_{g,t})^2 + c_1\,p^+_{g,t} + c_0 \big] + \sum_{h\in\mathcal{H}}\sum_{t\in\mathcal{T}} \big[ c^{\mathrm{tou}}\,p^+_{h,t} - c^{\mathrm{fit}}\,p^-_{h,t} \big], \tag{1}$$

¹When $c^{\mathrm{tou}} > c^{\mathrm{fit}}$, as is the case in Australia, this corresponds to PV self-consumption.
where $f(\mathbf{x})$ represents the network OPF objective function (which can include, for example, loss minimization, peak load reduction or minimizing the use of backup diesel as in [2]), $g_h(\mathbf{z}_h)$ are the prosumer objective functions for each household $h$, with a fixed time-of-use tariff for purchasing energy and a feed-in tariff for selling energy, $\mathcal{H}$ is the set of prosumers, $\mathbf{x}$ is the set of network variables (active/reactive power flows and voltages, for each $t \in \mathcal{T}$), and $\mathbf{z}_h$ is the set of internal variables of prosumer $h$ for each $t \in \mathcal{T}$ (e.g., battery power flows), which compose the set of variables for all prosumers $\mathbf{z} := \{\mathbf{z}_h\}_{h\in\mathcal{H}}$.
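As a small illustration of the prosumer term of (1), the snippet below builds it for one household in Pyomo, the modeling tool used later in Section III; the tariff values are our assumptions:

```python
from pyomo.environ import (ConcreteModel, NonNegativeReals, Objective,
                           RangeSet, Var)

# Prosumer cost over the horizon: pay c_tou for imports, earn c_fit for
# exports (objective term only; network and device constraints omitted).
c_tou, c_fit = 0.25, 0.08      # assumed $/kWh tariffs, with c_tou > c_fit
m = ConcreteModel()
m.T = RangeSet(0, 47)          # half-hourly horizon T1
m.p_imp = Var(m.T, within=NonNegativeReals)   # p+_{h,t}, imported power
m.p_exp = Var(m.T, within=NonNegativeReals)   # p-_{h,t}, exported power
m.cost = Objective(expr=sum(c_tou * m.p_imp[t] - c_fit * m.p_exp[t]
                            for t in m.T))
```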
The network constraints for a single-phase OPF are shown below.² They are given for each bus $i \in \mathcal{B}$ and for each time interval $t \in \mathcal{T}$:

$$p_{g,t} - p_{h,t} = v_{i,t} \sum_{j\in\mathcal{B}} v_{j,t}\,(g_{ij}\cos\theta_{ij,t} + b_{ij}\sin\theta_{ij,t}), \tag{2a}$$

$$q_{g,t} - q_{h,t} = v_{i,t} \sum_{j\in\mathcal{B}} v_{j,t}\,(g_{ij}\sin\theta_{ij,t} - b_{ij}\cos\theta_{ij,t}), \tag{2b}$$

$$v_{r,t} = 1, \qquad \theta_{r,t} = 0, \tag{2c}$$

$$\underline{v}_i \le v_{i,t} \le \overline{v}_i, \tag{2d}$$

$$\underline{p}_{g,t} \le p_{g,t} \le \overline{p}_{g,t}, \qquad \underline{q}_{g,t} \le q_{g,t} \le \overline{q}_{g,t}, \tag{2e}$$

where $p_{g,t}, q_{g,t}$ are the total net active/reactive power from the reference bus, $p_{h,t}, q_{h,t}$ are the total net active/reactive power to prosumer $h$ connected to bus $i$, and $\theta_{ij,t} = \theta_{i,t} - \theta_{j,t}$ is the angle difference between bus $i$ and its neighboring bus $j$. Additionally, (2a), (2b) model the power flow equations, (2c) models the reference, and (2d), (2e) represent voltage and generator (lower and upper) limits. Moreover, let $p_{h,t} = p^+_{h,t} - p^-_{h,t}$ be composed of the non-negative terms $p^+_{h,t}, p^-_{h,t}$, representing imported and exported power. The same applies for $p_{g,t}$.³
Each prosumer $h \in \mathcal{H}$ is subject to its own constraints. The equation modeling the power balance is, $\forall\, t \in \mathcal{T},\ h \in \mathcal{H}$:

$$p_{h,t} = p^{\mathrm{bat}}_{h,t} + p^{\mathrm{d}}_{h,t} - p^{\mathrm{PV}}_{h,t}, \tag{3}$$

where $p_{h,t}$ is the total net power (exchanged with the grid) of household $h$, with $\underline{p}_{h,t} \le p_{h,t} \le \overline{p}_{h,t}$; $p^{\mathrm{bat}}_{h,t}$ is the scheduled battery charging power, with $\underline{p}^{\mathrm{bat}}_{h,t} \le p^{\mathrm{bat}}_{h,t} \le \overline{p}^{\mathrm{bat}}_{h,t}$; $p^{\mathrm{d}}_{h,t}$ is the household non-controllable (fixed) demand; and $p^{\mathrm{PV}}_{h,t}$ is the PV generation power output, which can be curtailed if necessary (the total available PV power is $\tilde{p}^{\mathrm{PV}}_{h,t} \ge p^{\mathrm{PV}}_{h,t} \ge 0$).

The battery constraints are, $\forall\, t \in \mathcal{T},\ h \in \mathcal{H}$:

$$p^{\mathrm{bat}}_{h,t} = p^{\mathrm{ch}}_{h,t} - p^{\mathrm{dis}}_{h,t}, \tag{4a}$$

$$SoC_{h,0} \le SoC_{h,T}, \tag{4b}$$

$$SoC_{h,t} = SoC_{h,t-\Delta t} + (\eta^{\mathrm{ch}}_h\,p^{\mathrm{ch}}_{h,t} - p^{\mathrm{dis}}_{h,t}/\eta^{\mathrm{dis}}_h)\,\Delta t, \tag{4c}$$

²A balanced three-phase network is assumed for simplicity. It can be modeled as a single phase. However, the single-phase model can be readily extended, e.g. including unbalanced networks with a combination of single- and three-phase connections [2], increasing the formulation’s complexity.
³Note that because the second term in (1) is a convex piecewise linear function, at least one of the variables $p^+_{h,t}$ and $p^-_{h,t}$ can be zero at time slot $t$. This therefore obviates the need to use binary variables.
where $p^{\mathrm{ch}}_{h,t}, p^{\mathrm{dis}}_{h,t} \ge 0$ compose the battery charging/discharging power; $SoC_{h,t}$ is the battery state-of-charge, with $\underline{SoC}_{h,t} \le SoC_{h,t} \le \overline{SoC}_{h,t}$;⁴ $\eta_h$ is the battery charge or discharge efficiency; and $\Delta t$ is the time interval within $\mathcal{T}$.
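A minimal Pyomo sketch of the battery model (4a)-(4c); capacity, power limits and efficiencies are illustrative assumptions:

```python
from pyomo.environ import (ConcreteModel, Constraint, NonNegativeReals,
                           RangeSet, Var)

dt, eta_ch, eta_dis = 0.5, 0.95, 0.95   # assumed interval and efficiencies
m = ConcreteModel()
m.T = RangeSet(0, 47)
m.p_ch = Var(m.T, within=NonNegativeReals, bounds=(0, 5.0))    # p^ch_{h,t}
m.p_dis = Var(m.T, within=NonNegativeReals, bounds=(0, 5.0))   # p^dis_{h,t}
m.soc = Var(m.T, bounds=(0.5, 10.0))                           # SoC_{h,t}

# State-of-charge dynamics (4c); the initial condition and (4b) would be
# added separately, e.g. on a rolling-horizon basis as footnote 4 suggests.
def soc_dynamics(m, t):
    if t == m.T.first():
        return Constraint.Skip
    return m.soc[t] == m.soc[t - 1] + (eta_ch * m.p_ch[t]
                                       - m.p_dis[t] / eta_dis) * dt
m.soc_dyn = Constraint(m.T, rule=soc_dynamics)
```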
To rewrite the problem in its compact form, let the network constraints (2) define a feasible set $\mathcal{X}$ for the network variables $\mathbf{x}$, and the prosumer constraints (3), (4) define a feasible set $\mathcal{Z}_h$ for the variables $\mathbf{z}_h$ of each prosumer $h \in \mathcal{H}$. Henceforth, $\mathbf{x} \in \mathcal{X}$ and $\mathbf{z}_h \in \mathcal{Z}_h$, with $\mathbf{z} \in \mathcal{Z}$ (the feasible set for all prosumer variables). We can now write:

$$\underset{\mathbf{x}\in\mathcal{X},\;\mathbf{z}\in\mathcal{Z}}{\text{minimize}}\;\; F(\mathbf{x}, \mathbf{z}) \tag{5}$$
Two problems arise if we are to solve this MOPF centrally. First, the privacy of all prosumers is violated, since all data (battery information, consumption data, etc.) for each house have to be sent to the central computing entity. Second, the problem is computationally hard, because it consists of a non-convex network problem [12]. Solving such a large-scale nonlinear problem is extremely challenging, especially given a potentially large number (several tens or even hundreds) of prosumer subproblems. Hence, a distributed approach is applied to solve this MOPF with demand response (DR) problem.
_A. Decomposed Model_

Normally, we would not be able to solve (5) in a distributed fashion. This is because the variables corresponding to the prosumer power consumption appear in both $\mathcal{X}$ and $\mathcal{Z}$. To enable a decomposable structure for the problem, we create two copies of all prosumer power profiles, as shown in Fig. 1, introducing the following coupling constraints:

$$\hat{p}_{h,t} = p_{h,t}, \quad \forall\, h \in \mathcal{H},\ t \in \mathcal{T}, \tag{6}$$

where the left-hand term is a copy for the network problem, $\hat{p}_{h,t} \in \mathcal{X}$, and the right-hand term is a copy for the prosumer problem, $p_{h,t} \in \mathcal{Z}_h$.
Now, we can treat prosumer subproblems separately from the network, coupled only through the prosumer power consumption. Problem (5) can now be decomposed because $f(\mathbf{x})$ and $g_h(\mathbf{z}_h)$ are themselves separable. In more detail, duplicating the variables as in (6) enables us to rewrite (5) as:

$$\underset{\hat{\mathbf{x}}\in\hat{\mathcal{X}},\;\mathbf{z}\in\mathcal{Z}}{\text{minimize}}\;\; F(\hat{\mathbf{x}}, \mathbf{z}), \tag{7a}$$

$$\text{subject to: } (6), \tag{7b}$$

where $\hat{\mathbf{x}}$ is the original set of problem variables with the addition of the network copies of prosumers’ power profiles (6), and $\hat{\mathcal{X}}$ is the new feasible region of the network problem. Now, the sets of variables $\hat{\mathcal{X}}$ and $\mathcal{Z}$ are decoupled, and (7a) is separable if (7b) is relaxed. The resulting decoupled problem is illustrated in Fig. 1. We will exploit this structure to solve (7) in a distributed fashion.

⁴Including (4b) avoids full battery depletion without considering the next time horizon. Replacing it is recommended for algorithm implementation using a rolling horizon basis.
Fig. 1: Example of network decomposition depicted over a single time period by duplication of coupling variables. [figure omitted]

Finally, we write the augmented (partial) Lagrange function:

$$L := f(\hat{\mathbf{x}}) + \sum_{h\in\mathcal{H}} \Big[ g_h(\mathbf{z}_h) + \sum_{t\in\mathcal{T}} \Big( \frac{\rho}{2}\,(\hat{p}_{h,t} - p_{h,t})^2 + \lambda_{h,t}\,(\hat{p}_{h,t} - p_{h,t}) \Big) \Big] = F(\hat{\mathbf{x}}, \mathbf{z}) + \sum_{h\in\mathcal{H}} L_h, \tag{8}$$

where $\rho$ is a penalty parameter and $\lambda_{h,t}$ is the dual variable associated with each coupling constraint.
_B. ADMM Formulation_

The ADMM [8] makes use of the decoupled structure in (7) by performing alternating minimizations over the sets $\hat{\mathcal{X}}$ and $\mathcal{Z}$. At any iteration $k$, ADMM generates a new iterate by solving the following subproblems, until a satisfactory convergence is achieved:

$$\hat{\mathbf{x}}^{k+1} := \operatorname*{argmin}_{\hat{\mathbf{x}}\in\hat{\mathcal{X}}} \Big[ F(\hat{\mathbf{x}}, \mathbf{z}) + \sum_{h\in\mathcal{H}} L_h \Big], \tag{9a}$$

$$\mathbf{z}^{k+1}_h := \operatorname*{argmin}_{\mathbf{z}_h\in\mathcal{Z}_h} \big[ g_h(\mathbf{z}_h) + L_h \big] \quad \forall\, h \in \mathcal{H}, \tag{9b}$$

$$\lambda^{k+1}_{h,t} := \lambda^{k}_{h,t} + \rho\,(\hat{p}^{k+1}_{h,t} - p^{k+1}_{h,t}) \quad \forall\, h \in \mathcal{H},\ t \in \mathcal{T}, \tag{9c}$$

where (9a) is the subproblem solved at each step by an aggregator (holding $\mathbf{p}$ constant at $k$), (9b) denotes the subproblem of each individual household (holding $\hat{\mathbf{p}}$ constant at $k+1$, the result of the network subproblem), and (9c) is the dual update. Since the household problems are decoupled, they can be solved in parallel.
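A hedged numpy sketch of this loop, including the residuals (10) and the stopping rule (11)-(12) defined in the next section; `solve_network` and `solve_household` stand in for the Ipopt/Pyomo subproblems and are our assumptions, not the authors' code:

```python
import numpy as np

def admm_dopf(solve_network, solve_household, H, T,
              rho=1.0, eps_abs=1e-4, max_iter=500):
    eps_rel = 10 * eps_abs                 # relation used in Section IV
    p_hat = np.zeros((H, T))               # network copies of profiles
    p = np.zeros((H, T))                   # prosumer-side profiles
    lam = np.zeros((H, T))                 # dual variables
    for _ in range(max_iter):
        p_hat = solve_network(p, lam)      # (9a) aggregator subproblem
        p_old = p.copy()
        p = np.stack([solve_household(h, p_hat[h], lam[h])
                      for h in range(H)])  # (9b), parallelizable
        lam = lam + rho * (p_hat - p)      # (9c) dual update
        r = np.linalg.norm(p_hat - p)      # primal residual (10a)
        s = np.linalg.norm(p - p_old)      # dual residual (10b)
        eps_pri = (np.sqrt(H) * eps_abs
                   + eps_rel * max(np.linalg.norm(p_hat), np.linalg.norm(p)))
        eps_dual = np.sqrt(H) * eps_abs + eps_rel * np.linalg.norm(lam)
        if r <= eps_pri and s <= eps_dual:  # termination criteria (11)
            break
    return p_hat, p, lam
```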
III. IMPLEMENTATION
_A. Algorithm Specifications_

Primal and dual residuals are used to define the stopping criteria [10], which are, respectively:

$$\mathbf{r}^k = (\hat{p}^k_{h,t} - p^k_{h,t})^\top, \tag{10a}$$

$$\mathbf{s}^k = (p^k_{h,t} - p^{k-1}_{h,t})^\top, \tag{10b}$$

where (10a) represents the constraint violations (i.e., of (7b)) at the current solution, and (10b) represents the violation of the Karush-Kuhn-Tucker (KKT) stationarity constraints at the current iteration. The termination criteria are then given by:

$$\|\mathbf{r}^k\|_2 \le \epsilon^{\mathrm{pri}} \quad \text{and} \quad \|\mathbf{s}^k\|_2 \le \epsilon^{\mathrm{dual}}, \tag{11}$$

where $\epsilon^{\mathrm{pri}}$ and $\epsilon^{\mathrm{dual}}$ are feasibility tolerances determined by the following equations [8]:

$$\epsilon^{\mathrm{pri}} = \sqrt{H}\,\epsilon^{\mathrm{abs}} + \epsilon^{\mathrm{rel}} \max\!\big\{\|\hat{\mathbf{p}}^k\|_2,\ \|\mathbf{p}^k\|_2\big\}, \tag{12a}$$

$$\epsilon^{\mathrm{dual}} = \sqrt{H}\,\epsilon^{\mathrm{abs}} + \epsilon^{\mathrm{rel}} \|\boldsymbol{\lambda}^k\|_2, \tag{12b}$$
where $\hat{\mathbf{p}}$ and $\mathbf{p}$ are the vectors composed of all variables $\hat{p}_{h,t}$ and $p_{h,t}$ in (7b), $\boldsymbol{\lambda}^k$ is the vector composed of all $\lambda^k_{h,t}$ in (9c), and $\epsilon^{\mathrm{abs}}, \epsilon^{\mathrm{rel}} \in \mathbb{R}_+$; their values are, in turn, part of the analysis described in Section V. Using smaller values for these tolerances yields more accurate results. However, this requires a higher number of iterations, which directly impacts the total computation time and may lead to inefficient tolerance values, which is investigated. Finally, an adaptive residual balancing method is used to update the value of $\rho$ according to the magnitude of the residuals, as described in [10].

Fig. 2: 26- and 51-bus networks showing buses, lines, and generator in red. The blue area encompasses 25 prosumers, and the black area 50 prosumers. [figure omitted]

TABLE I: Test cases and problem complexity.

|Case|Network|$\mathcal{T}$|No. of variables|No. of constraints|
|---|---|---|---|---|
|1|A|$\mathcal{T}_1$|11088|9840|
|2|B|$\mathcal{T}_1$|21888|19440|
|3|A|$\mathcal{T}_2$|22176|19680|
|4|B|$\mathcal{T}_2$|43776|38880|

Fig. 3: Results for all four cases: a) depicts the number of iterations $k$, and b) shows the total parallel computation time, across different values of $\epsilon^{\mathrm{abs}}$. [figure omitted]
_B. Hardware description_
The aggregator subproblem (9a) is solved on a 32 GB RAM,
Intel i7-7700, 3.60 GHz PC. Five prosumer subproblems (9b)
are solved in parallel on five different Raspberry Pis model
3B+, 1 GB RAM, BCM2837B0, 1.4 GHz (RPis), and the
remaining prosumer subproblems are solved serially on the
PC. All problems were implemented in Python using Pyomo
[13] as a modeling interface, and solved using Ipopt v3.12.11
[14], with linear solver MA27 [15], in both the RPis and the
PC. The PC is connected to the internet with a standard cable
connection, and acts as a multi-client UDP server. All RPis
are connected to the internet via WiFi, and act as UDP clients
in an edge computing framework.
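A minimal sketch of one prosumer-side message in this UDP exchange; the aggregator address, port and message fields are illustrative assumptions:

```python
import json
import socket

AGGREGATOR = ("192.0.2.10", 5005)   # assumed aggregator address and port

# Send household house_id's updated power profile p_h after solving (9b).
def send_profile(house_id, profile):
    msg = json.dumps({"house": house_id, "p": list(profile)}).encode()
    assert len(msg) < 2048          # even T2 messages stay under 2 KB
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(msg, AGGREGATOR)
    sock.close()
```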
TABLE II: Average computation time per iteration, in seconds.

|Case|$t_{(9a)} + t_{(9c)}$ [s]|$t_{(9b)}$ [s]|$t_{\mathrm{comp}}$ [s]|
|---|---|---|---|
|1|2.09|0.25|2.34|
|2|4.13|0.25|4.38|
|3|4.65|0.41|5.06|
|4|9.36|0.41|9.77|
_C. Test networks_
Two low-voltage distribution networks A and B, with 25
and 50 prosumers respectively, have been used for testing the
proposed algorithm. They have 26 and 51 buses respectively;
their configuration is illustrated in Fig. 2.
Prosumers’ load and PV data are actual power measurements from an Australian low-voltage network, with half-hourly resolution, on a spring day (2011/11/07). As such, we initially define $\mathcal{T}_1 = \{0, 1, ..., 47\}$, $\Delta t_1 = 0.5$. Additionally, we have further split these into 15-minute resolution data sets, in which $\mathcal{T}_2 = \{0, 1, ..., 95\}$, $\Delta t_2 = 0.25$.
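A one-line numpy sketch of deriving the quarter-hourly horizon from the half-hourly data, here by simple repetition since the authors' splitting method is not detailed; a random profile stands in for the measured data:

```python
import numpy as np

p_T1 = np.random.rand(48)      # stand-in half-hourly profile, dt1 = 0.5 h
p_T2 = np.repeat(p_T1, 2)      # 96 quarter-hourly intervals, dt2 = 0.25 h
```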
We have combined networks A and B with T1 and T2,
resulting in a total of four different test cases, as seen in Table
I. The complexity of problem (9a), which takes the longest for
each iteration, is also shown.
IV. RESULTS
The results for the four test cases, with varying tolerances, are depicted in Fig. 3. Throughout our tests, we have used $\epsilon^{\mathrm{rel}} = 10\,\epsilon^{\mathrm{abs}}$, with $\epsilon^{\mathrm{abs}} \in [10^{-2}, 5\times10^{-3}, 10^{-3}, ..., 5\times10^{-6}, 10^{-6}]$, for a total of nine tolerances.
Fig. 3a) shows the number of iterations $k$ each case takes to converge, across different tolerances. It is notable that $k$ is very similar across all four cases, and therefore mostly independent of the problem size, which demonstrates the scalability of ADMM [10].
We discuss the results in four areas, namely: computation
time, system operation under congested conditions, precision
of solutions, and communication requirements.
_A. Computation Time_
The computation time per iteration is shown in Table II. The term $t_{(9a)}$ refers to the execution time of (9a) on the PC, $t_{(9b)}$ is determined by the slowest execution time of (9b) on the RPis, and $t_{(9c)}$ refers to the execution of the dual update (9c) on the PC. The average total parallel computation time per iteration is shown in the last column, $t_{\mathrm{comp}}$, representing the time per iteration a fully distributed implementation would require.
The solution time for the aggregator subproblem is the dominant component of the total solution time. Although the number of iterations $k$ remains very similar as the problem size increases, the central computation time increases linearly, as seen in Table II; consequently, most of the computation load in Fig. 3b) stems from solving (9a).
Fig. 4: Number of iterations $k$ across different mixes of energy, for case 1 and $\epsilon^{\mathrm{abs}} = 10^{-4}$ (x-axis: total daily energy import [kWh]; over-voltage and under-voltage regions marked). [figure omitted]
_B. System Operation under Congested Conditions_
In real systems, demand and generation vary, which may
lead to operation under congested conditions (e.g., over- or
under-voltage). DOPF implementations need to be robust
against these changes, even if they cause a higher number
of iterations.
To test the impact of congested conditions, demand and generation have been modified for Case 1, with a fixed tolerance of $\epsilon^{\mathrm{abs}} = 10^{-4}$. The results in Fig. 4 show the number of iterations $k$ across different mixes of energy. The points at which constraints are active (under- and over-voltage, or the input feeder limit) are also denoted in the figure, showing a clear correlation between increased $k$ and operation under congested conditions. The maximum value of $k$ does not exceed 70, roughly twice the average $k$ for normal operating conditions. Moreover, a system with a surplus of energy generation visibly converges more rapidly than one that needs to import more energy from the upstream network.
_C. Precision of Solutions_
A comparison between the optimal solution $F(\mathbf{x}, \mathbf{z})$ of the central problem (5) and each test case is shown in the third column of Table III. The values demonstrate the evolution of the solution precision, showing that there is almost no variation in the end result when using very low tolerance values. Moreover, at such tolerances the number of iterations to reach convergence (and consequently the computation time) becomes prohibitive, as seen in Fig. 3b).
The physical implications of different tolerances are shown in Table III, which depicts the maximum ($r^{\mathrm{max}}$) and average ($\bar{r}$) violations of constraint (7b), i.e., the primal residual (10a): in other words, the difference between the copies of the prosumer power profiles for the network and for the household. The performance across all cases is similar even if the network sizes and $\mathcal{T}$ are different.
_D. Communication Requirements_
The message size at each iteration between prosumers and aggregator is proportional to the choice of $\mathcal{T}$. For $\mathcal{T}_1$, the message size is smaller than 1 KB, and for $\mathcal{T}_2$ it is smaller than 2 KB. The choice of communication protocol (UDP/TCP/HTTP) is only marginally relevant: all are capable of dealing with these message sizes, which are much smaller than the lower limits of current mobile broadband networks’ download and upload speeds [16], [17].

TABLE III: Solution deviation versus central optimal, maximum and average primal residuals over five different tolerances for test cases 1, 2, 3 and 4.

|$\epsilon^{\mathrm{abs}}$|Case|$F\%$|$r^{\mathrm{max}}$ [W]|$\bar{r}$ [W]|
|---|---|---|---|---|
|$10^{-2}$|1|+57.9|198.64|45.21|
|$10^{-2}$|2|+56.2|260.50|58.89|
|$10^{-2}$|3|+52.1|101.25|31.39|
|$10^{-2}$|4|+61.2|98.12|38.26|
|$10^{-3}$|1|+5.98|70.958|5.547|
|$10^{-3}$|2|+7.42|33.697|6.174|
|$10^{-3}$|3|+6.65|10.000|3.032|
|$10^{-3}$|4|+7.95|10.000|3.439|
|$10^{-4}$|1|+1.34|0.8082|0.5882|
|$10^{-4}$|2|+1.47|0.8295|0.6237|
|$10^{-4}$|3|+1.35|0.4813|0.3351|
|$10^{-4}$|4|+1.50|1.0317|0.3732|
|$10^{-5}$|1|+1.05|0.2894|0.0495|
|$10^{-5}$|2|+1.24|0.2088|0.0663|
|$10^{-5}$|3|+1.01|0.0408|0.0052|
|$10^{-5}$|4|+1.32|0.1290|0.0050|
|$10^{-6}$|1|+0.99|0.0212|0.0031|
|$10^{-6}$|2|+1.18|0.0285|0.0043|
|$10^{-6}$|3|+0.97|0.0147|0.0011|
|$10^{-6}$|4|+1.28|0.0065|0.0011|
The actual implementation of the DOPF can utilize different structures between prosumers and the aggregator. The recent Australian trial [2] utilized a hierarchical structure where groups of prosumers send their information to local computers (Reposit boxes⁵), which then compute the prosumer subproblems and communicate with a central aggregator at every iteration, sending the final solution (i.e., their scheduling information) back to prosumers when the solution is achieved. However, it is possible to make full use of a decentralized implementation at the prosumers with edge computing hardware, as shown by the computation times of the prosumer subproblem on the RPis.

This would require communication between the aggregator and prosumers at every iteration, all of which would be located within the same geographical region (e.g., in the same low-voltage network neighborhood). The communication could be achieved, for example, with the use of last mile networks (4G and 5G). Modern network technologies offer low latencies for this kind of application. For example, 4G network latency⁶ ranges from 30 to 160 ms, and upcoming 5G networks will further reduce these values [16]. In parallel, network technologies tailored for the Internet of Things [18], such as LTE-M, NB-IoT and EC-GSM-IoT, could also be used to deploy this communication. These networks have latencies of 300 to 600 ms in areas within the normal cell edge of the radio cell [19].
From the technical standpoint, the solution time per iteration of the DOPF, as shown in Table II, is more significant than the latency delay of last mile networks. If implemented in a 4G network, the latency (assuming an average of 100 ms) in cases 1 to 4 would take, respectively, 4.3 %, 2.2 %, 2 % and 1 % of the total time per iteration. Economic aspects could weigh in more when choosing the appropriate technology to deploy this infrastructure, as well as limiting factors such as low area coverage or poor internet connection [2], [11].

⁵https://repositpower.com/
⁶We refer to [16] when defining latency as the delay between agents as data makes a round trip through the communications network.
V. GENERAL COMMENTS
The computation time of the DOPF approach grows linearly with the size of the problem, which in turn imposes a limit relative to the available solution time. For instance, when using a rolling horizon, the window interval for each horizon to be completed must be compatible with the DOPF solution time. For example, larger networks with over one hundred prosumers, as simulated by the authors in [5], require a longer computation time. This may not be compatible with a five-minute window interval as used by the DOPF in [2], with under fifty prosumers.
The choice of an appropriate tolerance and time horizon $\mathcal{T}$ must take into account the problem size and the available solution time. Moreover, the communication latency and other limitations imposed by the geographical location of prosumers and the aggregator must be accounted for. The computational burden introduced by transforming interval $\mathcal{T}_1$ into $\mathcal{T}_2$ is associated with doubling the number of variables and constraints, which in turn doubles the resolution of the problem variables.
Communication networks may not handle well the transmission of data from a very large number of prosumers to the aggregator, which happens within a very short amount of time. This may lead to congestion (data traffic above the network bandwidth) or contention (many prosumers trying to transmit data simultaneously) on the communication network. These problems are prone to happen when a large number of prosumers (hundreds or thousands) are concentrated in the same geographical location, sharing the same communication network and a limited quantity of available resources (e.g., spectrum) from the wireless network. Nonetheless, the network latency and the message size of the communication between prosumers and aggregator are not bottlenecks when implementing the DOPF.
_A. Future Work_
As shown in Table II, reducing the computation cost per iteration is of paramount importance for a practical implementation of the DOPF. This may include a number of strategies to reduce the computation time of each step, such as splitting (9a) into smaller subproblems solved in parallel [2].

Moreover, a model to prevent the aforementioned congestion and contention problems is another suggestion for further research. This would allow for a better utilization of the available communication network resources, by allocating these resources and coordinating data transmission according to the characteristics of the DER coordination problem.
Finally, using an asynchronous ADMM may be of interest,
which could improve the robustness of the algorithm against
possible communication failures.
VI. CONCLUSION
We have implemented a DER coordination problem using
DOPF, on a PC and a hardware prototype of five RPis.
The central problem was decomposed and decoupled into a
formulation suitable for solution using ADMM. We analyzed
four different test cases, investigating the computation time
and the number of iterations k across different tolerances. The
effect of operation under congested conditions was shown to
impact k. We have shown trade-offs between convergence and
computation speed according to solution precision. Finally,
the communication requirements for the deployment of similar
problems were discussed.
REFERENCES
[1] AEMO, Energy Networks Australia, “Open Energy Networks,” Tech.
Rep., 2018.
[2] P. Scott, D. Gordon, E. Franklin, L. Jones, and S. Thiébaux, “Network-aware coordination of residential distributed energy resources,” IEEE
_Transactions on Smart Grid, vol. 10, no. 6, pp. 6528–6537, Nov. 2019._
[3] P. Andrianesis and M. C. Caramanis, “Optimal grid-distributed energy
resource coordination,” in 2019 57th Annual Allerton Conference on
_Communication, Control, and Computing (Allerton), Sep. 2019._
[4] A. Attarha, P. Scott, and S. Thiébaux, “Affinely adjustable robust
ADMM for residential DER coordination in distribution networks,”
_IEEE Transactions on Smart Grid, vol. 11, no. 2, pp. 1620–1629, March_
2020.
[5] J. Guerrero, D. Gebbran, S. Mhanna, A. C. Chapman, and G. Verbič,
“Towards a transactive energy system for integration of distributed
energy resources: Home energy management, distributed optimal power
flow, and peer-to-peer energy trading,” Renewable and Sustainable
_Energy Reviews, vol. 132, p. 110000, Oct. 2020._
[6] D. Gebbran, G. Verbič, A. C. Chapman, and S. Mhanna, “Coordination
of prosumer agents via distributed optimal power flow,” in Proceedings
_of the 19th International Conference on Autonomous Agents and Multi-_
_agent Systems (AAMAS 2020), May 2020, pp. 1–3._
[7] D. K. Molzahn, F. Dörfler, H. Sandberg, S. H. Low, S. Chakrabarti,
R. Baldick, and J. Lavaei, “A survey of distributed optimization and
control algorithms for electric power systems,” IEEE Transactions on
_Smart Grid, vol. 8, no. 6, pp. 2941–2962, Nov. 2017._
[8] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, “Distributed
optimization and statistical learning via the alternating direction method
of multipliers,” Foundations and Trends in Machine Learning, vol. 3,
pp. 1–122, Jan. 2011.
[9] B. Kim and R. Baldick, “Coarse-grained distributed optimal power flow,”
_IEEE Transactions on Power Systems, vol. 12, no. 2, pp. 932–939, May_
1997.
[10] S. Mhanna, G. Verbič, and A. C. Chapman, “Adaptive ADMM for
distributed AC optimal power flow,” IEEE Transactions on Power
_Systems, vol. 34, no. 3, pp. 2025–2035, May 2019._
[11] J. Guo, G. Hug, and O. K. Tonguz, “On the role of communications
plane in distributed optimization of power systems,” IEEE Transactions
_on Industrial Informatics, vol. 14, no. 7, pp. 2903–2913, July 2018._
[12] D. Bienstock and A. Verma, “Strong NP-hardness of AC power flows
feasibility,” Operations Research Letters, vol. 47, no. 6, pp. 494–501,
Nov. 2019.
[13] W. E. Hart, J.-P. Watson, and D. L. Woodruff, “Pyomo: modeling and
solving mathematical programs in python,” Mathematical Programming
_Computation, vol. 3, no. 3, pp. 219–260, Sep. 2011._
[14] A. Wächter and L. T. Biegler, “On the implementation of a primal-dual interior point filter line search algorithm for large-scale nonlinear
programming,” Mathematical Programming, vol. 106, no. 1, pp. 25–57,
March 2006.
[15] J. Smith, “HSL archive: A collection of Fortran codes for large scale scientific computation,” 2018. [Online]. Available: http://www.hsl.rl.ac.uk/
[16] OpenSignal, “The State of Mobile Network Experience,” Tech. Rep.,
2019.
[17] I. Grigorik, High Performance Browser Networking. O’Reilly Media,
Inc., 2013.
[18] D. Gebbran, A. C. Chapman, and G. Verbič, “The Internet of Things as a
facilitator of smart building services,” in 2018 Australasian Universities
_Power Engineering Conference (AUPEC), Nov. 2018, pp. 1–6._
[19] O. Liberg, M. Sundberg, E. Wang, J. Bergman, and J. Sachs, Cellular
_Internet of Things, 1st ed._ Elsevier Science, 2017.
| 10,484
|
{
"disclaimer": "Notice: Paper or abstract available at https://arxiv.org/abs/2203.04819, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": null,
"status": "GREEN",
"url": "https://arxiv.org/pdf/2203.04819"
}
| 2,020
|
[
"JournalArticle",
"Conference"
] | true
| 2020-11-01T00:00:00
|
[
{
"paperId": "e6ca7fc9d2e6bcd700635225421f347ade348b3c",
"title": "Towards a transactive energy system for integration of distributed energy resources: Home energy management, distributed optimal power flow, and peer-to-peer energy trading"
},
{
"paperId": "977112221e8c928974ba8a9b4616d76da8e79163",
"title": "Affinely Adjustable Robust ADMM for Residential DER Coordination in Distribution Networks"
},
{
"paperId": "8c772a6a248916e1d874df7ed3a88ec0821e9df0",
"title": "Optimal Grid – Distributed Energy Resource Coordination: Distribution Locational Marginal Costs and Hierarchical Decomposition"
},
{
"paperId": "c1d1a77468533c3456566ab78911990369383dc3",
"title": "Adaptive ADMM for Distributed AC Optimal Power Flow"
},
{
"paperId": "196aef01eeeae106d771d85c583aab17321575dc",
"title": "Network-Aware Coordination of Residential Distributed Energy Resources"
},
{
"paperId": "fb1cfc0329b5c4affa0d3612adc5c184c497fc21",
"title": "The Internet of Things as a Facilitator of Smart Building Services"
},
{
"paperId": "6f58c497759ea72fcfcc322b0be1c7f32d67a301",
"title": "On the Role of Communications Plane in Distributed Optimization of Power Systems"
},
{
"paperId": "6685d35225152d56a6d234e6f4e7159da89a3709",
"title": "A Survey of Distributed Optimization and Control Algorithms for Electric Power Systems"
},
{
"paperId": "e7ab1a9ba5c88c0f8838a39226d7ec913e3a862a",
"title": "Strong NP-hardness of AC power flows feasibility"
},
{
"paperId": "e3504ad0646241fa161a81921d24b73564d88c24",
"title": "Pyomo: modeling and solving mathematical programs in Python"
},
{
"paperId": "802fa87c798e0c5e347e2110d80a814ca7049dc5",
"title": "Coarse-grained distributed optimal power flow"
},
{
"paperId": "35d339e603e9fe005e9c9a0fe26493b88c3641c2",
"title": "Coordination of Prosumer Agents via Distributed Optimal Power Flow: An Edge Computing Hardware Prototype"
},
{
"paperId": null,
"title": "The State of Mobile Network Experience"
},
{
"paperId": null,
"title": "Open Energy Networks"
},
{
"paperId": null,
"title": "Cellular Internet of Things , 1st ed"
},
{
"paperId": null,
"title": "High Performance Browser Networking"
},
{
"paperId": null,
"title": "HSL archive : A collection of Fortran codes for large scale scientific computation , ” 2018 . [ Online ]"
},
{
"paperId": null,
"title": "On the implementation of a primaldual interior point filter line search algorithm for large-scale nonlinear programming"
}
] | 10,484
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Mathematics",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Engineering",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/004c641ddd4914e877747ba941ea9f8cb71cb6b1
|
[
"Computer Science",
"Mathematics"
] | 0.90902
|
Market-based Short-Term Allocations in Small Cell Wireless Networks
|
004c641ddd4914e877747ba941ea9f8cb71cb6b1
|
arXiv.org
|
[
{
"authorId": "2351738",
"name": "S. Mukherjee"
},
{
"authorId": "1794321",
"name": "B. Huberman"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"ArXiv"
],
"alternate_urls": null,
"id": "1901e811-ee72-4b20-8f7e-de08cd395a10",
"issn": "2331-8422",
"name": "arXiv.org",
"type": null,
"url": "https://arxiv.org"
}
|
Mobile users (or UEs, to use 3GPP terminology) served by small cells in dense urban settings may abruptly experience a significant deterioration in their channel to their serving base stations (BSs) in several scenarios, such as after turning a corner around a tall building, or a sudden knot of traffic blocking the direct path between the UE and its serving BS. In this work, we propose a scheme to temporarily increase the data rate to/from this UE with additional bandwidth from the nearest Coordinated Multi-Point (CoMP) cluster of BSs, while the slower process of handover of the UE to a new serving BS is ongoing. We emphasize that this additional bandwidth is additional to the data rates the UE is getting over its primary connection to the current serving BS and, after the handover, to the new serving BS. The key novelty of the present work is the proposal of a decentralized market-based resource allocation method to perform resource allocation to support Coordinated Beamforming (CB) CoMP. It is scalable to large numbers of UEs and BSs, and it is fast because resource allocations are made bilaterally, between BSs and UEs. Once the resource allocation to the UE has been made, the coordination of transmissions occurs as per the usual CB methods. Thus the proposed method has the benefit of giving the UE access to its desired amount of resources fast, without waiting for handover to complete, or reporting channel state information before it knows the resources it will be allocated for receiving transmissions from the serving BS.
|
# Market-based Short-Term Allocations in Small Cell Wireless Networks
#### Sayandev Mukherjee and Bernardo A. Huberman
#### CableLabs
#### {s.mukherjee, b.huberman}@cablelabs.com
#### May 12, 2020
**Abstract**
Mobile users (or UEs, to use 3GPP terminology) served by small cells in dense urban
settings may abruptly experience a significant deterioration in their channel to their serving
base stations (BSs) in several scenarios, such as after turning a corner around a tall building,
or a sudden knot of traffic blocking the direct path between the UE and its serving BS. In
this work, we propose a scheme to temporarily increase the data rate to/from this UE with
additional bandwidth from the nearest Coordinated Multi-Point (CoMP) cluster of BSs, while
the slower process of handover of the UE to a new serving BS is ongoing. We emphasize
that this additional bandwidth is additional to the data rates the UE is getting over its primary
connection to the current serving BS and, after the handover, to the new serving BS. The key
novelty of the present work is the proposal of a decentralized market-based resource allocation
method to perform resource allocation to support Coordinated Beamforming (CB) CoMP. It is
scalable to large numbers of UEs and BSs, and it is fast because resource allocations are made
bilaterally, between BSs and UEs. Once the resource allocation to the UE has been made, the
coordination of transmissions occurs as per the usual CB methods. Thus the proposed method
has the benefit of giving the UE access to its desired amount of resources fast, without waiting
for handover to complete, or reporting channel state information before it knows the resources
it will be allocated for receiving transmissions from the serving BS.
### 1 Introduction
Mobile users (or UEs, to use 3GPP terminology) in dense urban settings may abruptly experience
a significant deterioration in their channel to their serving base stations (BSs) in several scenarios,
such as after turning a corner around a tall building, or a sudden knot of traffic blocking the direct
path between the UE and its serving BS. Although networks are usually planned such that total
radio link failure is unlikely before such a UE is either handed over to a new serving BS or a strong
connection to the current serving BS is re-established, the UE does experience a sudden and severe
drop in data rate for some time. While this issue has always existed in cellular networks, it takes
on new urgency in the age of 5G millimeter wave (mmWave) cells because these cells are small
relative to the typical LTE macrocell. Small cells are traversed more quickly than larger cells, but
handover to a single small cell serving BS each time means that there is a high rate of handovers
with corresponding control signaling overheads in the network. Thus, in the context of small cells,
it makes sense to have the UE be served not by a single small cell but by a cluster of small cells
that transmit to the UE simultaneously, a method called Coordinated Multi-Point (CoMP) [1].
In this work, we propose to temporarily augment the data rate to and from this UE with a short-term dose of additional bandwidth from the nearest CoMP cluster. The UE uses this additional
bandwidth even as the slower process of handover to a new serving BS is ongoing. Note that
this additional bandwidth is indeed additional to the data rates the UE is getting over its primary
connection to the current serving BS and, after the handover, to the new serving BS. When the
handover process is complete, the additional bandwidth is expected to be no longer necessary and
will be relinquished by the UE.
The key novelty of the present work is the proposal of a decentralized market-based resource
allocation method to perform resource allocation to support CoMP. It is scalable to large numbers
of UEs and BSs, and it is fast because resource allocations are made bilaterally, between BSs and
UEs. Once the resource allocation to the UE has been made, the coordination of transmissions
occurs as per the usual CoMP methods. Thus the proposed method has the benefit of giving the
UE access to its desired amount of resources fast, without first waiting for handover to complete,
or having to report channel state information in order to know the resources that it will be allocated
for receiving transmissions from the serving BS.
### 2 Resource allocation in a CoMP cluster
A CoMP cluster of BSs can serve UEs in different ways. A baseline version of CoMP is _coordinated beamforming_ (CB) [2, Sec. 5.3], where a BS uses only its own antennas to serve the UEs
in its cell, albeit with beamforming across these antennas coupled with coordination across BSs
so as to mitigate inter-cell interference. A more sophisticated CoMP system can perform _joint transmission_ (JT) [2, Sec. 6.3] across all BSs in the CoMP cluster, treating all resources (such as
bandwidth and antennas) in the CoMP cluster as available to serve all UEs served collectively by
the BSs in the cluster. Recently, a third CoMP scheme called dynamic point selection (DPS) was
introduced as an alternative to handover to support rapid re-routing of data streams as a means to
mitigate rapid signal degradation. In DPS, all coordinating BSs have access to the data streams of
all their served UEs, but the specific BS that transmits to that UE can change on a frame-by-frame
basis. This is similar to JT in requiring extensive signaling, communication, and synchronization
between the BSs of the CoMP cluster, with the difference from JT being that transmission is by
just one BS.
In 3GPP, the process of handover has high latency and imposes a high overhead on the (logical)
control signaling channels. Hence, it is not advisable to do frequent handovers. Thus, we conceive
of a dual-connectivity approach to retaining session quality: after the UE’s channel to the present
serving BS deteriorates enough to require a handover, we allow the slow and high-overhead handover mechanism to proceed as usual. However, in order to retain session quality, we will also
enable the UE to quickly acquire and aggregate bandwidth from the BSs of the local CoMP cluster
in whose service area the UE is present. The question therefore arises as to how we decide on the
relative fraction of resources deployed at each BS in the cluster to support this UE.
For concreteness, let us consider the case of allocation of just one resource, namely “bandwidth” (strictly speaking, for an LTE or 5G system, this quantity should be measured in terms of
Physical Resource Blocks, or PRBs).
The traditional approach, applied to many kinds of similar problems in various fields of engineering, is to frame the resource allocation problem for a given UE as an optimization problem
and solve it at a central controller that handles the coordinated transmissions of the CoMP cluster and therefore has the relevant information on resources in use at each BS in the cluster. A
recent treatment of this approach is given in the “water-filling” formulation of the resource allocation problem described in [3], especially Example 1 in Section II.C, with the quantity $p_k$ being the bandwidth allocation and $a_k$ the spectral efficiency (in bits/s/Hz). However, if we want to solve
this problem simultaneously for multiple UEs, as is likely in the dense urban settings where small
cell deployments will exist, the centralized optimization approach does not scale well in terms of
computation, storage, or latency.
We note again that this resource allocation from the CoMP cluster is additional to the resources
the UE is already getting through its connection to the serving BS, which may itself change as a
result of the normal handover process. However, the CoMP resource allocation is designed to
complete much faster than the handover from the previous serving BS to the next serving BS
(which is probably, though not necessarily, one of the BSs in the CoMP cluster).
### 3 Factors hindering conventional CoMP deployment
JT should be capable of delivering greater gains (as measured in total cell throughput and especially
the throughputs to UEs that have poor channels to the strongest BS, called “cell edge UEs”), but
field trials have been disappointing [4].
JT requires sharing of served UEs’ data streams and Channel State Information (CSI) across all
BSs in a CoMP cluster that cooperate in JT (called the “cooperation area”), which imposes strict
requirements on timing synchronization and severe loads on the signaling and communication
between the BSs of the CoMP cluster. As summarized recently in [5], “these requirements are
actually constituting the major downfall of JT CoMP in practical cellular networks, rendering hard
to achieve its theoretical gains in practice. On top of that, ... imperfect and/or outdated CSI and
uncoordinated interference have a very large impact on the performance of conventional JT CoMP
schemes. Practical Radio-frequency (RF) components, such as oscillators with phase noise, were
also shown to have a similar effect.”
Note that the above issues with deployment of JT because of its high signaling, communications, and synchronization requirements also apply to DPS. In other words, the problems bedeviling practical deployment of CoMP have remained unchanged for the greater part of a decade, from
the time they were enumerated in [2, p. 457]: “... the importance of having precisely synchronized
base station oscillators for downlink joint transmission” and “... the fact that ... pilot overhead
increases linearly in the cooperation size, limits CoMP to scenarios of moderate cooperation size.
Also, ... CoMP gains have to be carefully traded against potentially significant feedback overhead.”
Note that CB, which does not promise the high gains of JT but at the same time makes fewer
demands on signaling and synchronization than JT or DPS, has been identified as a potential candidate for deployment on both LTE [5] and 5G [6, 7] networks. However, the tradeoff between
CoMP gains and the feedback overhead is a general problem with all CoMP schemes. The unfortunate truth is that in spite of theoretical promise and a fair amount of hooks in successive 3GPP
standards to support it, CoMP has not yet been deployed to a significant extent in any cellular
network today.
In the present work, we will consider only CB, where the coordinating BSs share the CSI
among themselves, but without any need for synchronization. While CB does not require the same
heavy signaling loads and stringent synchronization requirements as JT, it is still susceptible to
out-of-cluster interference. However, for the particular application scenario of a CoMP cluster in
a high-density urban area with urban canyons being considered here, it is expected that the out-of-cluster interference will be mitigated merely by the presence of the tall buildings and other
obstacles to radio wave propagation.
### 4 A new market-based approach to resource allocation
In the present work, we take a market-based approach to the resource allocation problem, which has
the advantage of being scalable to large numbers of UEs and BSs in a CoMP cluster. As has been
pointed out by several researchers (see, for example, [8]), a market-based approach is by definition
both decentralized (matching buyers and sellers) and efficient (both buyers and sellers maximize
some version of utility and/or profits). Thus, a market-based approach applied to CoMP resource
allocation should be expected to ease some of the signaling load and simplify the synchronization
requirements.
We will discuss two market-based resource allocation schemes to support UEs from a CoMP
cluster. The important common feature of both markets is they are games with strategic actors (the
buyers), i.e., the actions of one buyer influence the actions of other buyers and determine the prices
charged by the sellers for the resources sold on the market. There do exist other market-based
frameworks where the buyers are mere price-takers, i.e., they cannot influence the prices charged
by the sellers (see for example the PSCP scheme in [9]), but we shall not consider such schemes
in the present work.
An early market-based approach to bandwidth assignment (by an MNO) to multiple UEs all
using a single application (voice calling) with a small number of distinct quality of service satisfaction levels (QSLs) was proposed in [10] and named “Bandwidth Market Price” (BMP). In BMP,
each UE has a “QoS profile” with a possibly different budget for each bandwidth allocation. Independently and later, [11] proposed a scheme named “BidPacket” with continuously-valued pricing
(and corresponding QSLs and budgets) that adapt to the allocated bandwidth, and applicable to
many classes of data applications. BMP may therefore be seen as a special case of BidPacket
adapted for voice calling. We will defer the details of BidPacket to Section 6, as our proposed
scheme is a modified version of it.
Several market-based resource allocation schemes have been studied in the context of allocating computing resources to processes and users in a cluster of servers. The so-called Proportional Sharing
scheme (also called Trading Post or Shapley-Shubik Game) was proposed in [12]. Applied to the
CoMP scenario, it means that the prospective buyers (UEs) submit bids for the resources, and each
UE gets allocated a fraction of the total available resources which is proportional to its bid. A Nash
equilibrium was proved to exist in [12], which was then shown in [13] to approximately maximize
the Nash social welfare (i.e., the sum of log-throughputs of the UEs). One advantage of Proportional Sharing is that it can be readily extended to resources of more than one class, e.g., bandwidth
and serving BS/antennas in a CoMP cluster. Unfortunately, the allocation in Proportional Sharing
always fully exhausts each UE’s budget for the resource, which results in overpayment by UEs (or
equivalently, inflated bids for resources).
A modified Proportional Share with a penalty term was proposed in [14] to reduce bid inflation
by making each bidder pay a cost (to participate in the market) that is proportional to its bid. It
was shown in [14] that such a scheme has a Nash equilibrium that also maximizes the Nash social welfare. The scheme in [14] was simplified and applied to resource allocation in a wireless
network in [15][1]. Unfortunately, the iterative allocation algorithm in [14] requires solving a system of nonlinear equations at each step of the iteration, which is computationally expensive (see
Appendix A). Moreover, this system of equations involves the bids from all the UEs. Therefore,
this scheme is better suited for a centralized allocation scheme, say JT, where the solution of the
system of equations is done in the CoMP cluster. We do not discuss Proportional Sharing or its
variants in the present work, opting instead to focus exclusively on CB.
### 5 Description of the problem
We now describe the details of the CoMP cluster resource allocation problem, followed by our
proposed market-based framework to solve the resource allocation problem.
1. Each UE gets, with its subscription to the MNO operating the CoMP cluster, a budget to
acquire additional bandwidth when needed in the scenario described above. This budget
can be periodically refreshed (say at the start of each MNO billing cycle), or topped up as
needed, and leftover budget from the previous billing cycle could be carried over to the next,
or converted into a credit toward the MNO’s subscription, or into travel miles, vouchers, etc.
2. Suppose UE i is about to turn a corner or do something else that requires a rapid allocation
of additional bandwidth in order to maintain session quality. Say UE i has a budget of wi,
which we call its wealth.
3. If the quality of UE i’s connection to the serving BS or CoMP cluster begins to degrade
rapidly during a specified interval of duration t, UE i becomes a buyer and applies its wealth
_wi to purchase bandwidth from the new, local, CoMP cluster._
4. We assume that a UE can only purchase additional bandwidth from a single CoMP cluster at
any given time. Thus, if UE i was already using additional bandwidth purchased from some
CoMP cluster before while served by its serving BS, and now the link to that serving BS and
old CoMP cluster has deteriorated enough that the UE needs a rapid allocation of additional
bandwidth from a new CoMP cluster to maintain its session, then the bandwidth purchased
1It appears, however, that the simplified version of [14] that is proposed in [15] has a significant shortcoming,
rendering it largely ineffective – see Appendix B.
from the old CoMP cluster is freed up in anticipation of a bandwidth purchase from the new
CoMP cluster.
5. Further, each BS in the new CoMP cluster may be viewed as a seller of bandwidth on the
market defined by the BSs in the CoMP cluster and the UEs that are in the area served by that
cluster. Note that a single UE may purchase bandwidth from more than one BS in the CoMP
cluster, and the aggregated bandwidth will be exploited through coordinated transmissions
from these sellers.
6. The bandwidth allocation in the above steps to any UE i is only valid for a pre-defined, fixed,
short interval (which could, for example, be selected so as to cover the mean time taken to
complete the handover of this UE to the next serving BS). Thus, at the end of this fixed
interval, this additional bandwidth that the UE has purchased will be relinquished unless the
UE re-enters the market and purchases bandwidth again.
The following analysis is for a single interval after the rapid deterioration of the channel of
an arbitrary UE i to its present serving BS or CoMP cluster has triggered a resource purchase.
A typical scenario for this analysis is when we: (i) predict that UE i will soon need additional
bandwidth from a new CoMP cluster and start the timer T, (ii) then observe, within the sub-interval
of duration t at the beginning of interval T, that UE i’s channel to its serving BS or old CoMP
cluster has worsened by more than some threshold amount, say θ, where θ > 0 is in decibels (dB).
Note also that the analysis applies to bandwidth purchases for transmissions in a single direction
(i.e., either the uplink from UEs to BSs, or the downlink, from BSs to UEs).
### 6 BidPacket resource allocation
BidPacket [11] is a market-based bandwidth allocation scheme originally designed for a collection
of user devices seeking to transmit on the uplink (i.e., to an access point). In the original proposal
in [11], the sellers and buyers are both WiFi users – a user with nothing to transmit sells its bandwidth to a user that has a file to transmit and wants more bandwidth than the default allocation
(which is the same for all users). In the present work, the buyers are the UEs, and the sellers are
the BSs of the CoMP.
#### 6.1 Utility model for UEs
Let p be the price per unit of bandwidth on the bandwidth market comprising the new CoMP cluster
BSs as the sellers, and the UEs in the service area of this CoMP cluster as the buyers. We employ
the buyer utility function proposed in [11]: if UE $i$ purchases bandwidth $B_i$, its utility is

$$U_i = b_i B_i - \frac{1}{2w_i}\, p B_i^2, \qquad (1)$$

where $w_i$ is the wealth of UE $i$, and $b_i \leq 1$ is a measure of the need of UE $i$ for bandwidth.
For example, suppose UE $i$'s channel to its serving BS or old CoMP cluster at the end of the sub-interval of length $t$ is $\tau_i > \theta$ (both $\tau_i$ and $\theta$ are in dB) worse than at the beginning of this sub-interval. Say $\tau_{\max}$ (in dB) is the maximum deterioration in the channel to the serving BS or old CoMP cluster that can be tolerated before the session is interrupted. Then we could define $b_i$ to be the ratio $\tau_i/\tau_{\max}$ in the linear scale, i.e., $b_i = 10^{(\tau_i - \tau_{\max})/10}$. In other words, the greater the deterioration of the UE's channel to its serving BS or old CoMP cluster, the greater its need for additional bandwidth in order to maintain the session, and the greater the utility it derives from a bandwidth purchase from sellers in the new CoMP cluster.
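A minimal sketch (in Python, with illustrative numbers of our own choosing; the function name is hypothetical) of this mapping from measured channel deterioration to the need $b_i$:

```python
def need_from_deterioration(tau_db: float, tau_max_db: float) -> float:
    """Need b_i of UE i, defined as the linear-scale ratio tau_i / tau_max of
    the measured deterioration tau_i (dB) to the maximum tolerable
    deterioration tau_max (dB): b_i = 10 ** ((tau_i - tau_max) / 10)."""
    return 10.0 ** ((tau_db - tau_max_db) / 10.0)

# Example: the channel has worsened by 12 dB, and at most 20 dB of
# deterioration can be tolerated before the session is interrupted.
b_i = need_from_deterioration(tau_db=12.0, tau_max_db=20.0)
print(f"b_i = {b_i:.3f}")  # ~0.158; b_i -> 1 as tau_i -> tau_max
```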
As shown in [11], UE $i$ maximizes its utility by spending the fraction $b_i$ of its wealth $w_i$ to purchase bandwidth, i.e., by purchasing an amount of bandwidth $B_i$ given by

$$p B_i = w_i b_i. \qquad (2)$$
Note that the price paid by UE i for acquiring bandwidth Bi is pBi. Now, paying by bandwidth is
not the usual pricing scheme in cellular networks today. MNOs either charge a monthly subscription or price by data usage. The latter is more appropriate for our scenario, since these bandwidth
purchases are for short durations of time defined by allocation epochs. In the above, the price p is
actually the price per unit of bandwidth per allocation epoch.
If the allocation epoch is a unit of time, and Si is the average spectral efficiency (throughput per
unit of bandwidth used) to UE i over that unit of time, then the total data to or from UE i over that
unit of time is Ti = BiSi. Thus, in conventional terms, the charge levied by the MNO for the data
transmitted to or from UE $i$ during the allocation period may be seen as $\tilde{p} T_i = \tilde{p} S_i B_i = p B_i$, where the price per unit of data is $\tilde{p} = p/S_i$, the bid price $p$ paid by the UE per unit of bandwidth, divided by the average spectral efficiency $S_i$ over the allocation period. Note that $\tilde{p}$ is precisely the BMP
of [10]. Moreover, (2) shows that p is inversely proportional to the demanded bandwidth, exactly
as in the QoS profile proposed in [10] without the restriction to finitely many QoS satisfaction
levels. Thus, the BMP scheme of [10] may be seen as a special case of BidPacket.
#### 6.2 Profit model for BSs in the CoMP cluster
Each BS in the CoMP cluster, being a seller of bandwidth, seeks to maximize its profit from
bandwidth sales. Although the BSs (or more precisely, the MNOs operating these BSs) have
already paid a fixed price (at an FCC spectrum auction) for the bandwidth that they are selling, it
is prudent for each BS not to seek to sell all of its available bandwidth all the time, but to conserve
the amount of total bandwidth it sells, i.e., minimize the total bandwidth in use at this BS. This
way, the BS could cope with a sudden surge of demand arising from a spike in traffic caused, for
example, by an influx of UEs into the service area of this BS and CoMP cluster.
Therefore we shall use the cost function defined in [11]: the cost of selling bandwidth $B^{(j)}$ for BS $j$ in the CoMP cluster is

$$C(B^{(j)}) = \frac{\left(B^{(j)}\right)^2}{2a_j}, \qquad (3)$$

where $a_j$ is a measure of the importance of conserving bandwidth at BS $j$. Note that $C(B^{(j)})/B^{(j)}$ increases with $B^{(j)}$, which means that the cost per unit of bandwidth increases with the bandwidth sold.

The profit to BS $j$ from selling bandwidth $B^{(j)}$ on the bandwidth market is therefore the difference between its revenue and its cost:

$$\rho_j = p B^{(j)} - C(B^{(j)}), \qquad (4)$$

and, as shown in [11], the BS's profit is maximized by selling the amount of bandwidth given by

$$B^{(j)} = a_j p. \qquad (5)$$
#### 6.3 Equilibrium pricing on the bandwidth market
The utility-maximizing total demanded bandwidth from all UEs in the service area of this CoMP cluster is

$$B_{\mathrm{demand}} = \sum_{\mathrm{UEs}\ i} B_i = \frac{1}{p} \sum_{\mathrm{UEs}\ i} w_i b_i, \qquad (6)$$

and the profit-maximizing total supplied bandwidth from all BSs in this CoMP cluster is

$$B_{\mathrm{supply}} = \sum_{\mathrm{BSs}\ j} B^{(j)} = p \sum_{\mathrm{BSs}\ j} a_j. \qquad (7)$$

It follows that at equilibrium, the price per unit of bandwidth on the market is such that the bandwidth supply equals the bandwidth demand [11]:

$$p = \sqrt{\frac{\sum_{\mathrm{UEs}\ i} w_i b_i}{\sum_{\mathrm{BSs}\ j} a_j}}. \qquad (8)$$

Recall from (2) that at the price (8), each UE $i$ will purchase an amount $B_i$ of bandwidth such that the total price it pays is $w_i b_i$. In other words, at equilibrium, $w_i b_i$ is precisely the value of UE $i$'s bid for bandwidth. Thus we may rewrite (8) as

$$p = \sqrt{\frac{\sum_{\mathrm{UEs}\ i} \mathrm{bid}_i}{\sum_{\mathrm{BSs}\ j} a_j}}. \qquad (9)$$
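As a concrete illustration of (2), (5) and (6)–(9), the following sketch (with hypothetical wealths, needs and conservation weights of our own choosing) computes the market-clearing price and the resulting per-UE purchases and per-BS sales:

```python
import math

wealth = [500.0, 1000.0, 5000.0]   # w_i for each UE (illustrative)
need   = [0.2, 0.5, 0.9]           # b_i for each UE (illustrative)
a      = [10.0, 25.0, 40.0, 25.0]  # a_j: bandwidth-conservation weight per BS

bids = [w * b for w, b in zip(wealth, need)]  # UE i's bid is w_i * b_i
p = math.sqrt(sum(bids) / sum(a))             # equilibrium price, eqs. (8)/(9)

B_demand = [bid / p for bid in bids]          # per-UE purchases, from eq. (2)
B_supply = [aj * p for aj in a]               # per-BS sales, eq. (5)

print(f"price p = {p:.3f}")
print("UE purchases:", [round(x, 2) for x in B_demand])
print("BS sales:    ", [round(x, 2) for x in B_supply])
# Sanity check: total demand equals total supply at the equilibrium price.
assert abs(sum(B_demand) - sum(B_supply)) < 1e-6
```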
Note that BidPacket buyers’ budgets may be funded with virtual currency, but they should be
linked to some currency or credit (like airline miles or discount coupons) with monetary value in
the real world. Otherwise, with a purely virtual currency with no real-world value, it is optimal for
the buyers to overstate their urgency/need and spend their entire budgets on every bandwidth purchase.
In the CoMP scenario, the buyers are the UEs, as stated earlier, and the sellers (under the assumption of coordinated beamforming) are the BSs in the CoMP cluster. The BidPacket scheme is
in the form of transactions between individual pairs of buyers and sellers. It thus has the advantage
of maximizing both buyer and seller utility at equilibrium, while being scalable to large numbers
of buyers and sellers.
BidPacket is, however, only applicable to a single resource (like bandwidth), whereas in a CoMP cluster the available resources are multi-dimensional (like bandwidth and antenna selection, for example). Lastly, there are no theoretical results on whether or not a Nash equilibrium exists among the UEs, i.e., a set of bids from which no UE can unilaterally deviate so as to improve its own utility.
### 7 Bandwidth allocation algorithm
1. At the start of each epoch, (a) the BSs in the CoMP cluster get assigned a random order for
serving UE requests for bandwidth purchases; (b) the UEs submit their total bid amounts to
a bid table that is accessible to all BSs in the CoMP cluster.
2. Following the order assigned to the BSs, the UEs’ bandwidth requests are served by those
BSs, starting from the UE with the highest bid, then the UE with the next highest bid, and
so on. If a UE’s bandwidth request is too large to be satisfied by a single BS, the next BS
satisfies the remaining part of the request. When a BS satisfies the last remaining part of a
UE’s request, that UE’s bid is removed from the bid table.
3. All BSs that together satisfy a UE’s bandwidth request now coordinate their transmissions
to that UE in the next epoch, following conventional CoMP protocols.
Note that the algorithm requires only a single pass through all UEs requesting bandwidth and all
BSs that provide that bandwidth.
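A minimal sketch of this single-pass allocation (in Python; the names are ours, and we assume each bid converts into a bandwidth request at the common market price p via (2)):

```python
import random

def allocate(bids: dict, bs_capacity: dict, price: float) -> dict:
    """Single-pass BidPacket-style allocation. BSs are visited in a random
    order; UEs are served in decreasing order of bid; a request exceeding one
    BS's remaining bandwidth spills over to the next BS in the order."""
    bs_order = list(bs_capacity)
    random.shuffle(bs_order)                  # step 1(a): random BS order
    remaining = dict(bs_capacity)             # bandwidth left at each BS
    allocation = {ue: [] for ue in bids}      # list of (BS, bandwidth) per UE

    # Step 2: serve UEs from the highest bid down; a request reaching zero
    # plays the role of removing that UE's bid from the bid table.
    for ue, bid in sorted(bids.items(), key=lambda kv: -kv[1]):
        request = bid / price                 # bandwidth demanded, eq. (2)
        for bs in bs_order:
            if request <= 0:
                break
            grant = min(request, remaining[bs])
            if grant > 0:
                allocation[ue].append((bs, grant))
                remaining[bs] -= grant
                request -= grant
    return allocation

# Example: 3 UEs bidding, 2 BSs with 25 bandwidth units each, price 2.0.
print(allocate({"ue1": 30.0, "ue2": 20.0, "ue3": 10.0},
               {"bs1": 25.0, "bs2": 25.0}, price=2.0))
```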
### 8 Numerical results
We simulate a scenario where 10 UEs are bidding for bandwidth from a CoMP cluster of 4 BSs.
Any UE can purchase bandwidth from any of the 4 BSs. Each BS has 25 units of bandwidth, so that
the total bandwidth available in the CoMP cluster is 100 units. In this simple scenario we further
assume perfect beamforming, so any bandwidth can be used for simultaneous transmissions by
multiple BSs. Each UE starts with an initial budget of C units in some virtual currency (where
_C = 500, 1000, 5000), and a default bandwidth allocation of 100/10 = 10 units._
Note that once a bandwidth assignment is made, the actual transmissions are exactly those of
conventional CB CoMP, hence we have simulated only the bandwidth assignments themselves.
The simulation setup follows that in [11]: at each allocation epoch, a UE wants to receive, with
fixed probability, a video file with length modeled by a Gaussian random variable with mean 150
units and standard deviation 50 units. For simplicity, we assume that one unit of file length requires
a single transmission over one unit of bandwidth, and we ignore any channel imperfections or
possibility of packet error. A UE can either use the default bandwidth 10 that it has been originally
assigned, or purchase more bandwidth for a certain amount of time as per the utility function
described above. In the simulation of the utility, the need $b_i$ for bandwidth at UE $i$ is drawn from the uniform distribution on the interval (0, 1). Similarly, the quantity $a_j$ for each BS $j$ is also drawn from the uniform distribution on (0, 1), and all $a_j$ and all $b_i$ are independent.
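The simulation loop can then be sketched as follows (a rough Python reconstruction under assumptions of our own: the unspecified file-request probability is taken as 0.5, the budget is C = 500, and the equilibrium price (8) is recomputed each epoch from the active bids):

```python
import random

EPOCHS, N_UE, DEFAULT_BW, BUDGET = 100, 10, 10.0, 500.0
need = [random.random() for _ in range(N_UE)]   # b_i ~ U(0, 1), fixed per UE
a = [random.random() for _ in range(4)]         # a_j ~ U(0, 1), 4 BSs
budget = [BUDGET] * N_UE                        # never replenished here
total_data = 0.0

for _ in range(EPOCHS):
    # Each UE wants a video file with fixed probability (0.5 assumed);
    # file length ~ Gaussian(mean 150, std 50), truncated at 0.
    wants = [random.random() < 0.5 for _ in range(N_UE)]
    bids = [budget[i] * need[i] if wants[i] else 0.0 for i in range(N_UE)]
    p = (sum(bids) / sum(a)) ** 0.5 if any(bids) else float("inf")  # eq. (8)
    for i in range(N_UE):
        bought = bids[i] / p if bids[i] > 0 else 0.0  # purchased BW, eq. (2)
        budget[i] -= bids[i]                          # UE pays p*B_i = bid_i
        file_len = max(0.0, random.gauss(150.0, 50.0)) if wants[i] else 0.0
        # One unit of file length needs one transmission over one BW unit.
        total_data += min(file_len, DEFAULT_BW + bought)

print(f"total data over {EPOCHS} epochs: {total_data:.0f}")
```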
Fig. 1 is a plot of the total data transmitted (in terms of the above units) over 100 and 1000
time periods by the 10 UEs for the three different values of each UE’s initially assigned budget
_C, under the market described above versus the baseline non-market scenario when UEs cannot_
purchase additional bandwidth and must use their default bandwidth of 10 units. Note that no UE’s
budget is replenished during 100 or 1000 epochs over which the throughput is aggregated. Thus,
if a UE exhausts its budget after a certain number of epochs, it will have to fall back on its default
bandwidth for subsequent epochs.
For comparison, note that with the default allocation of bandwidth of 10 units per UE, the total
data over 100 epochs is 10,000, whereas over 1000 epochs the total data is 100,000. It is clear that
the market-based bandwidth purchasing significantly increases the total data transmitted over the
baseline which does not permit the UEs to purchase additional bandwidth.
For the small budget of C = 100, we observe that even for 100 epochs, the total data is actually
less than that with the default allocation of bandwidth, meaning that the UEs exhaust their budgets
earlier than 100 epochs. However, with a relatively modest increase of budget per UE from 100 to
500, we observe that each UE has nonnegative budget at the end of even 1000 epochs. Thus in this
simple scenario the concerns about inflated bids with virtual currency do not apply.
Figure 1: Plot of total data transmitted versus total overall bandwidth, for the BidPacket bandwidth purchase strategy and the baseline with fixed allocation of bandwidth to each UE. (Curves "1", "2", and "3" correspond to UE budgets of 500, 1000, and 5000, respectively.)
### A Overview of allocation scheme in [14]
In [14], a variant of Proportional-Share allocation with a penalty term is proposed and analyzed.
Suppose there are a total of $R$ units of bandwidth, and $n$ UEs with bids[2] $\mathbf{b} = [b_1, \ldots, b_n]^T$. The penalty term is proportional to the total bid amount, i.e., the utility function of UE $i$ is

$$u_i(\mathbf{b}, q_i) = v_i(r_i(\mathbf{b})) - q_i b_i, \qquad i = 1, \ldots, n,$$

where $r_i(\mathbf{b})$ is the assigned bandwidth to UE $i$ under Proportional-Share, i.e., $r_i(\mathbf{b}) = R b_i/(b_1 + \cdots + b_n)$, $v_i(\cdot)$ is the valuation function for UE $i$ (see below for details), and $q_i$ is the cost (per unit bid amount) to UE $i$ to participate in the auction, and is set by the seller.
The valuation function $v_i(r)$ for UE $i$ is the logarithm of the data rate to $i$ when it is served by the CoMP cluster with bandwidth $r$:

$$v_i(r) = \ln(1 + r\,\mathrm{SE}_i), \qquad \mathrm{SE}_i = \log_2\!\left(1 + \frac{P_i H_i}{N_0}\right), \qquad (10)$$

where $\mathrm{SE}_i$ is the spectral efficiency on the downlink to UE $i$, given by the Shannon formula in (10), $P_i$ is the downlink transmit power of the CoMP cluster to UE $i$ (from one or several serving BSs in the CoMP cluster) per unit of bandwidth, $H_i$ is the channel gain of UE $i$, and $N_0$ is the noise power spectral density.
The following results are proved in [14]:

1. For $n > 1$, any strictly positive resource assignment $\mathbf{r} = [r_1, \ldots, r_n]^T$ can be obtained as the Proportional-Share assignment of the unique Nash equilibrium (NE)[3] $[\tilde{b}_1, \ldots, \tilde{b}_n]^T$ for some set of penalties $\mathbf{q} = [q_1, \ldots, q_n]^T$, which are themselves unique when normalized by their sum [14, Thm. 5].

2. The penalties $\mathbf{q}^*$ at the NE yielding the Proportional-Share assignment

$$\mathbf{r}^* = \arg\max\left\{ \sum_{i=1}^{n} v_i(r_i) \;\middle|\; \sum_{i=1}^{n} r_i \leq R,\; r_i \geq 0,\; i = 1, \ldots, n \right\} \qquad (11)$$

that optimizes the social welfare [14, Thm. 8] are given by the following indirect expression [14, eqn. (13)]:

$$r_i^* = R\left(1 - \frac{(n-1)\, q_i^*}{\sum_{j=1}^{n} q_j^*}\right), \qquad i = 1, \ldots, n. \qquad (12)$$
2Note that we are now using the notation b for a bid rather than for the need/urgency as in Section 6.
3At NE, for these bid amounts and penalties, no UE can unilaterally change its bid in order to improve its utility.
3. The above $\mathbf{q}^*$ can be found as follows: from any initial $\mathbf{q}(0)$, the price trajectory $\mathbf{q}(t)$ governed by the differential equation

$$\frac{d}{dt} q_i(t) = \frac{R - r_i(t)}{n - 1} - \frac{R\, q_i(t)}{\sum_{j=1}^{n} q_j(t)}, \qquad i = 1, \ldots, n, \qquad (13)$$

converges to $\mathbf{q}^*$ as $t \to \infty$ [14, Thm. 9], where $r_i(t) \equiv r_i(\mathbf{q}(t))$, $i = 1, \ldots, n$, the NE allocations under penalty $\mathbf{q}$ at time $t$, are the solutions to the system of $n$ equations [14, eqn. (7), Thm. 2]

$$\frac{[R - r_1(t)]\, v_1'(r_1(t))}{q_1(t)} = \frac{[R - r_2(t)]\, v_2'(r_2(t))}{q_2(t)} = \cdots = \frac{[R - r_n(t)]\, v_n'(r_n(t))}{q_n(t)}, \qquad (14)$$

$$r_1(t) + r_2(t) + \cdots + r_n(t) = R. \qquad (15)$$
In practice, we change (13) to the following discrete version: at the $k$th iteration, update

$$q_i^{(k)} = q_i^{(k-1)} + \delta\left[\frac{R - r_i^{(k-1)}}{n - 1} - \frac{R\, q_i^{(k-1)}}{\sum_{j=1}^{n} q_j^{(k-1)}}\right], \qquad (16)$$

where $\delta$ is a small positive step size. Let $x_i = (R - r_i^{(k)})\, v_i'(r_i^{(k)})$, $i = 1, \ldots, n$. From (14), we have

$$\frac{x_1}{q_1^{(k)}} = \cdots = \frac{x_n}{q_n^{(k)}} = \frac{X}{\sum_{j=1}^{n} q_j^{(k)}} \;\Rightarrow\; x_i = \rho_i X, \qquad \rho_i = \frac{q_i^{(k)}}{\sum_{j=1}^{n} q_j^{(k)}}, \qquad X = \sum_{j=1}^{n} x_j,$$

and from (10), we have

$$r_i^{(k)} = \frac{R - x_i/\mathrm{SE}_i}{1 + x_i} = \frac{R - \rho_i X/\mathrm{SE}_i}{1 + \rho_i X}, \qquad i = 1, \ldots, n, \qquad (17)$$

so substituting in (15) yields the following polynomial equation for $X$:

$$\frac{R - \rho_1 X/\mathrm{SE}_1}{1 + \rho_1 X} + \cdots + \frac{R - \rho_n X/\mathrm{SE}_n}{1 + \rho_n X} = R. \qquad (18)$$

Note that at each step $k$ of the iteration above, we have to solve the polynomial equation (18), which in general is of degree $n$ in $X$, for a real root $X$. In general, the complexity of finding the roots of an $n$th-degree polynomial is $O(n^2)$ Boolean (bitwise) operations [16, Thm. 7]. Not only must this computational burden be borne by the CoMP cluster, because only it knows all the terms of (18), but also the root-finding algorithm is iterative, requiring $d$ iterations to approximate the real roots to an accuracy of about $2^{-d}$ at each step $k$ of the allocation algorithm (16), (18), (17).
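To make the cost of this step concrete, here is a numerical sketch (with illustrative values of our own; bracketing the root and using SciPy's brentq is our choice, not part of [14]) that solves (18) for $X$ and recovers the allocations (17):

```python
import numpy as np
from scipy.optimize import brentq

R = 100.0                                  # total bandwidth
SE = np.array([1.0, 2.0, 4.0])             # spectral efficiencies SE_i
q = np.array([0.3, 0.5, 0.2])              # current penalties q_i^(k)
rho = q / q.sum()                          # rho_i = q_i / sum_j q_j

def f(X):
    # Left-hand side of eq. (18) minus R; a root of f solves (18).
    return np.sum((R - rho * X / SE) / (1.0 + rho * X)) - R

# f(0) = (n-1)R > 0 and f(X) -> -sum(1/SE_i) - R < 0 as X -> infinity,
# so a real root exists; bracket it by doubling the upper endpoint.
hi = 1.0
while f(hi) > 0:
    hi *= 2.0
X = brentq(f, 0.0, hi)                     # real root of the degree-n polynomial
r = (R - rho * X / SE) / (1.0 + rho * X)   # allocations, eq. (17)
print(f"X = {X:.4f}, r = {np.round(r, 3)}, sum(r) = {r.sum():.3f}")  # sum = R
```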
### B Problems with the iterative algorithm proposed in [15]
In [15], an iterative algorithm is proposed for resource allocation that seemingly avoids the need
to solve the nth-degree polynomial equation (18) at each step of the iteration as required by [14].
However, as we shall show below, the algorithm in [15] creates a circular sequence of updates that
leads to very undesirable outcomes.
#### B.1 Iterative algorithm for bidding and allocation under Proportional-Share with penalty
The iterative algorithm in [15, Algorithm 1] starts by setting $q_i^{(0)}$ to some small value, $\mu_i^{(0)} = 0$, an initial assignment of resources $r_i^{(0)}$, $i = 1, \ldots, n$, and initial bids calculated as follows:

$$b_i^{(0)} = \frac{1}{q_i^{(0)}}\, r_i^{(0)} v_i'(r_i^{(0)}) \left[1 - \mu_i^{(0)}\right] = \frac{1}{q_i^{(0)}}\, r_i^{(0)} v_i'(r_i^{(0)}), \qquad i = 1, \ldots, n. \qquad (19)$$

Subsequently, at iteration $k$, we make the following updates in the order written:

$$\mu_i^{(k)} = 1 - \frac{b_i^{(k-1)} q_i^{(k-1)}}{r_i^{(k-1)} v_i'(r_i^{(k-1)})}, \qquad (20)$$

$$q_i^{(k)} = q_i^{(k-1)} + \delta\left[\frac{R - r_i^{(k-1)}}{n - 1} - \frac{R\, q_i^{(k-1)}}{\sum_{j=1}^{n} q_j^{(k-1)}}\right], \qquad (16)$$

$$b_i^{(k)} = \frac{1}{q_i^{(k)}}\, r_i^{(k-1)} v_i'(r_i^{(k-1)}) \left[1 - \mu_i^{(k)}\right], \qquad (21)$$

$$r_i^{(k)} = \frac{R\, b_i^{(k)}}{\sum_{j=1}^{n} b_j^{(k)}}. \qquad (22)$$
A key observation is that the updates (20) and (21) are circular. The undesirable consequence is that _the final assignment depends only on the (random) initial assignment_. We prove this below.

First, we note that from (20), we have for all $k = 1, 2, \ldots$,

$$r_i^{(k-1)} v_i'(r_i^{(k-1)}) \left[1 - \mu_i^{(k)}\right] = b_i^{(k-1)} q_i^{(k-1)}, \qquad i = 1, \ldots, n. \qquad (23)$$

Applying (23) in (21), we then have

$$b_i^{(k)} q_i^{(k)} = r_i^{(k-1)} v_i'(r_i^{(k-1)}) \left[1 - \mu_i^{(k)}\right] = b_i^{(k-1)} q_i^{(k-1)},$$
which when applied repeatedly yields

$$b_i^{(k)} q_i^{(k)} = b_i^{(k-1)} q_i^{(k-1)} = \cdots = b_i^{(0)} q_i^{(0)} = r_i^{(0)} v_i'(r_i^{(0)}) = c_i, \text{ say}, \qquad i = 1, \ldots, n, \qquad (24)$$

where in the final step we have used (19).

It follows from (24) that at each iteration $k$, the resource allocation to UE $i$ is given by

$$r_i^{(k)} = \frac{R\, b_i^{(k)}}{\sum_{j=1}^{n} b_j^{(k)}} = \frac{R\, c_i/q_i^{(k)}}{\sum_{j=1}^{n} c_j/q_j^{(k)}} = \frac{R\, c_i/q_i^{(k)}}{H^{(k)}}, \qquad i = 1, \ldots, n, \qquad H^{(k)} = \sum_{j=1}^{n} \frac{c_j}{q_j^{(k)}}. \qquad (25)$$
Thus the allocation algorithm (19)–(22) can be written in the following mathematically equivalent form: start by initializing $q_i^{(0)}$ to some small value, as before. At each iteration $k$, update (16):

$$q_i^{(k)} = q_i^{(k-1)} + \delta\left[\frac{R}{n-1}\left(1 - \frac{c_i/q_i^{(k-1)}}{H^{(k-1)}}\right) - \frac{R\, q_i^{(k-1)}}{Q^{(k-1)}}\right], \qquad Q^{(k-1)} = \sum_{j=1}^{n} q_j^{(k-1)}, \qquad (26)$$

until convergence to $q_i^*$, $i = 1, \ldots, n$. From (25), the final resource assignments are

$$r_i^* = \frac{R\, c_i/q_i^*}{H^*}, \qquad i = 1, \ldots, n, \qquad H^* = \sum_{j=1}^{n} \frac{c_j}{q_j^*}.$$

From (26) and (23) it follows that the final assignments $r_i^*$ depend only on $c_i = r_i^{(0)} v_i'(r_i^{(0)})$, $i = 1, \ldots, n$. This is completely undesirable because it means the algorithm deterministically yields final assignments depending only on the random initializations to the iterative algorithm.
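The circularity can be checked numerically. The following sketch (our own, with arbitrary illustrative values and a step size chosen small enough that the penalties stay positive here) runs the updates (20), (16), (21), (22) and confirms that $b_i^{(k)} q_i^{(k)}$ stays pinned at $c_i = r_i^{(0)} v_i'(r_i^{(0)})$:

```python
import numpy as np

R, n, delta = 100.0, 3, 0.001
SE = np.array([1.0, 2.0, 4.0])
vprime = lambda r: SE / (1.0 + r * SE)  # v_i'(r) for v_i(r) = ln(1 + r*SE_i)

q = np.full(n, 1.0)                     # initial penalties q_i^(0)
r = np.array([40.0, 35.0, 25.0])        # arbitrary initial allocation, sums to R
b = r * vprime(r) / q                   # initial bids, eq. (19) (mu^(0) = 0)
c = r * vprime(r)                       # the invariant c_i = r_i^(0) v_i'(r_i^(0))

for _ in range(1000):
    mu = 1.0 - b * q / (r * vprime(r))                      # eq. (20)
    q = q + delta * ((R - r) / (n - 1) - R * q / q.sum())   # eq. (16)
    b = r * vprime(r) * (1.0 - mu) / q                      # eq. (21)
    r = R * b / b.sum()                                     # eq. (22)

print("b*q     =", np.round(b * q, 8))  # equals c at every iteration, eq. (24)
print("c       =", np.round(c, 8))
print("final r =", np.round(r, 3))      # determined by c alone, i.e., by r^(0)
```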
### References
[1] D. Lee, H. Seo, B. Clerckx, E. Hardouin, D. Mazzarese, S. Nagata and K. Sayana, “Coordinated Multipoint Transmission and Reception in LTE-Advanced: Deployment Scenarios
and Operational Challenges,” IEEE Communications Magazine, vol. 50, no. 2, pp. 148-155,
Feb. 2012.
[2] P. Marsch and G.P. Fettweis, eds., Coordinated Multi-Point in Mobile Communications: From
_Theory to Practice, Cambridge University Press, 2011._
[3] C. Xing, Y. Jing, S. Wang, S. Ma and H.V. Poor, “New Viewpoint and Algorithms for Water-Filling Solutions in Wireless Communications,” IEEE Transactions on Signal Processing,
vol. 68, pp. 1618-1634, Feb. 2020.
[4] R. Irmer, H. Droste, P. Marsch, M. Grieger, G. Fettweis, S. Brueck, H.-P. Mayer, L. Thiele and
V. Jungnickel, “Coordinated Multipoint: Concepts, Performance, and Field Trial Results,”
_IEEE Communications Magazine, vol. 49, no. 2, pp. 102-111, Feb. 2011._
[5] G.C. Alexandropoulos, P. Ferrand, J.-M. Gorce, C.B. Papadias, “Advanced Coordinated
Beamforming for the Downlink of Future LTE Cellular Networks,” IEEE Communications
_Magazine, vol. 54, no. 7, pp. 54-60, Jul. 2016._
[6] M.U. Sheikh, R. Biswas, J. Lempiainen, R. Jantti, “Assessment of coordinated multipoint
transmission modes for indoor and outdoor users at 28 GHz in urban macrocellular environment,” Advances in Science, Technology and Engineering Systems Journal, vol. 4, no. 2,
pp. 119-126, 2019.
[7] G.R. MacCartney, Jr. and T.S. Rappaport, “Millimeter-Wave Base Station Diversity for 5G
Coordinated Multipoint (CoMP) Applications,” _IEEE Transactions on Wireless Communications_, vol. 18, no. 7, pp. 3395-3410, Jul. 2019.
[8] N. Haque, N.R. Jennings and L. Moreau, “Resource Allocation in Communication Networks
Using Market-Based Agents.” In M. Bramer, F. Coenen and T. Allen (eds.), Research and
_Development in Intelligent Systems XXI, pp. 187-200. Springer, London, 2005._
[9] U. Mir, L. Nuaymi, M.H. Rehmani and U. Abbasi, “Pricing strategies and categories for LTE
networks,” Telecommunication Systems, vol. 68, pp. 183-192, Jun. 2018.
[10] W. Ibrahim, J.W. Chinneck, S. Periyalwar and H. El-Sayed, “QoS satisfaction based charging
and resource management policy for next generation wireless networks,” Proceedings of the
_2005 International Conference on Wireless Networks, Communications and Mobile Comput-_
_ing, Maui, HI, 2005, vol. 2, pp. 868-873._
[11] B.A. Huberman and S. Asur, “BidPacket: trading bandwidth in public spaces,” Netnomics,
vol. 17, pp. 223-232, 2016.
[12] M. Feldman, K. Lai and L. Zhang, “The Proportional-Share Allocation Market for Computational Resources,” IEEE Transactions on Parallel and Distributed Systems, vol. 20, no. 8,
pp. 1075-1088, Aug. 2009.
[13] S. Brˆanzei, V. Gkatzelis and R. Mehta, “Nash Social Welfare Approximation for Strategic
Agents,” _Proceedings of the 2017 ACM Conference on Economics and Computation_, pp. 611-628, Jun. 2017.
[14] R.T.B. Ma, “Efficient Resource Allocation and Consolidation with Selfish Agents: An Adaptive Auction Approach,” Proceedings of the 2016 IEEE 36th International Conference on
_Distributed Computing Systems, pp. 497-508, Jun. 2016._
[15] Y.K. Tun, N.H. Tran, D.T. Ngo, S.R. Pandey, Z. Han and C.S. Hong, “Wireless Network Slicing: Generalized Kelly Mechanism Based Resource Allocation,” IEEE Journal on Selected
_Areas in Communications, vol. 37, no. 8, pp. 1794-1807, Aug. 2019._
[16] V.Y. Pan and L. Zhao, “Polynomial Root Isolation by Means of Root Radii Approximation,”
[https://arxiv.org/abs/1501.05386, Jun. 2015.](https://arxiv.org/abs/1501.05386)
| 11,550
|
{
"disclaimer": "Notice: Paper or abstract available at https://arxiv.org/abs/2005.04326, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": null,
"status": "GREEN",
"url": "https://arxiv.org/pdf/2005.04326"
}
| 2,020
|
[
"JournalArticle"
] | true
| 2020-05-09T00:00:00
|
[
{
"paperId": "6c3cd6a96c2de814485e1af08f2d8b320c7a3ef5",
"title": "Wireless Network Slicing: Generalized Kelly Mechanism-Based Resource Allocation"
},
{
"paperId": "089fe3777b4c5a5ed54987581e38cf0ff4b562b3",
"title": "Millimeter-Wave Base Station Diversity for 5G Coordinated Multipoint (CoMP) Applications"
},
{
"paperId": "a3640705049e6eb8e54a8ce71b1feed5fcf07564",
"title": "New Viewpoint and Algorithms for Water-Filling Solutions in Wireless Communications"
},
{
"paperId": "308e2ed53e0a97fde7c26e7b5663d968e3e0a21c",
"title": "Pricing strategies and categories for LTE networks"
},
{
"paperId": "48e5e8a915baa481882d80fb20384daabc00a5c4",
"title": "BidPacket: trading bandwidth in public spaces"
},
{
"paperId": "e12cbd5c9eeea2a4c2bb0c428076bd3aa0c7aa47",
"title": "Nash Social Welfare Approximation for Strategic Agents"
},
{
"paperId": "dae46dcb9e1a34bf3eb8939715a44a66c6eaa961",
"title": "Efficient Resource Allocation and Consolidation with Selfish Agents: An Adaptive Auction Approach"
},
{
"paperId": "67ec96a2d64064b4b874f7fc4903b8d7bb2acf2f",
"title": "Advanced coordinated beamforming for the downlink of future LTE cellular networks"
},
{
"paperId": "2b3631f7665c661e245395a7ce2e231400c801d9",
"title": "Polynomial Root Isolation by Means of Root Radii Approximation"
},
{
"paperId": "748e25eda5e3557bba4296b42a9af25a61125f8c",
"title": "Coordinated multipoint transmission and reception in LTE-advanced: deployment scenarios and operational challenges"
},
{
"paperId": "30dfe7a6173be9d2de358c0c53e549d5c56d7568",
"title": "Coordinated multipoint: Concepts, performance, and field trial results"
},
{
"paperId": "f032904f34e6f3236f2d1e1e2f145e841228199b",
"title": "The Proportional-Share Allocation Market for Computational Resources"
},
{
"paperId": "67dd1a791a2ee623488e6b19467c12ac98ee7010",
"title": "QoS satisfaction based charging and resource management policy for next generation wireless networks"
},
{
"paperId": "f094d29660efd3f4f32bf0917b9fdf684f18da60",
"title": "Resource allocation in communication networks using market-based agents"
},
{
"paperId": "a1720de8059f04f47ca164cc96fb9fcc0e6993a7",
"title": "Assessment of Coordinated Multipoint Transmission Modes for Indoor and Outdoor Users at 28 GHz in Urban Macrocellular Environment"
}
] | 11,550
|
en
|
[
{
"category": "Business",
"source": "s2-fos-model"
},
{
"category": "Environmental Science",
"source": "s2-fos-model"
},
{
"category": "Agricultural and Food Sciences",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/004fdaf86c0e2d6cebd3380e2fdabec843876a0b
|
[] | 0.912204
|
Interdisciplinary challenges associated with rapid response in the food supply chain
|
004fdaf86c0e2d6cebd3380e2fdabec843876a0b
|
Supply Chain Management
|
[
{
"authorId": "2260483838",
"name": "Pauline van Beusekom – Thoolen"
},
{
"authorId": "2260484166",
"name": "Paul Holmes"
},
{
"authorId": "2260482997",
"name": "Wendy Jansen"
},
{
"authorId": "2260486825",
"name": "Bart Vos"
},
{
"authorId": "2260485427",
"name": "Alie de Boer"
}
] |
{
"alternate_issns": [
"2627-2938",
"1359-8546"
],
"alternate_names": [
"Supply Chain Manag",
"Supply chain management",
"Supply chain manag"
],
"alternate_urls": [
"http://www.emeraldgrouppublishing.com/scm.htm"
],
"id": "9417c3df-a8da-4b0b-8ace-8096329a9ea9",
"issn": "1359-852X",
"name": "Supply Chain Management",
"type": "journal",
"url": "https://www.emerald.com/insight/publication/issn/1359-8546"
}
|
Purpose
This paper aims to explore the interdisciplinary nature of coordination challenges in the logistic response to food safety incidents while distinguishing the food supply chain positions involved.
Design/methodology/approach
This adopts an exploratory qualitative research approach over a period of 11 years. Multiple research periods generated 38 semi-structured interviews and 2 focus groups. All data is analysed by a thematic analysis.
Findings
The authors identified four key coordination challenges in the logistics response to food safety incidents: first, information quality (sharing information and the applied technology) appears to be seen as the biggest challenge for the response; second, more emphasis on external coordination focus is required; third, more extensive emphasis is needed on the proactive phase in the logistic response; fourth, a distinct difference exists in the position’s views on coordination in the food supply chain. Furthermore, the data supports the interdisciplinary nature as disciplines such as operations management, strategy and organisation but also food safety and risk management, have to work together to align a rapid response, depending on the incident’s specifics.
Research limitations/implications
The paper shows the need for comprehensively reviewing and elaborating on the research gap in coordination decisions for the logistic response to food safety incidents while using the views of the different supply chain positions. The empirical data indicates the interdisciplinary nature of these coordination decisions, supporting the need for more attention to the interdisciplinary food research agenda. The findings also indicate the need for more attention to organisational learning, and an open and active debate on exploratory qualitative research approaches over a long period of time, as this is not widely used in supply chain management studies.
Practical implications
The results of this paper do not present a managerial blueprint but can be helpful for practitioners dealing with aspects of decision-making by the food supply chain positions. The findings help practitioners to systematically go through all phases of the decision-making process for designing an effective logistic response to food safety incidents. Furthermore, the results provide insight into the distinct differences in views of the supply chain positions on the coordination decision-making process, which is helpful for managers to better understand in what phase(s) and why other positions might make different decisions.
Social implications
The findings add value for the general public, as an effective logistic response contributes to consumer’s trust in food safety by creating more transparency in the decisions made during a food safety incident. As food sources are and will remain essential for human existence, the need to contribute to knowledge related to aspects of food safety is evident because it will be impossible to prevent all food safety incidents.
Originality/value
As the main contribution, this study provides a systematic and interdisciplinary understanding of the coordination decision-making process for the logistic response to food safety incidents while distinguishing the views of the supply chain positions.
|
# Interdisciplinary challenges associated with rapid response in the food supply chain
### Pauline van Beusekom – Thoolen
#### Department of Marketing and Supply Chain Management, School of Business and Economics, Maastricht University, Maastricht, The Netherlands
### Paul Holmes
#### Independent Researcher, Best, The Netherlands
### Wendy Jansen
#### Independent Researcher, Breda, The Netherlands
### Bart Vos
#### Department of Marketing and Supply Chain Management, School of Business and Economics, Maastricht University, Maastricht, The Netherlands, and
### Alie de Boer
#### Food Claims Centre Venlo, Maastricht University, Maastricht, The Netherlands
Abstract
Purpose – This paper aims to explore the interdisciplinary nature of coordination challenges in the logistic response to food safety incidents while
distinguishing the food supply chain positions involved.
Design/methodology/approach – This adopts an exploratory qualitative research approach over a period of 11 years. Multiple research periods
generated 38 semi-structured interviews and 2 focus groups. All data is analysed by a thematic analysis.
Findings – The authors identified four key coordination challenges in the logistics response to food safety incidents: first, information quality
(sharing information and the applied technology) appears to be seen as the biggest challenge for the response; second, more emphasis on external
coordination focus is required; third, more extensive emphasis is needed on the proactive phase in the logistic response; fourth, a distinct difference
exists in the position’s views on coordination in the food supply chain. Furthermore, the data supports the interdisciplinary nature as disciplines such
as operations management, strategy and organisation but also food safety and risk management, have to work together to align a rapid response,
depending on the incident’s specifics.
Research limitations/implications – The paper shows the need for comprehensively reviewing and elaborating on the research gap in
coordination decisions for the logistic response to food safety incidents while using the views of the different supply chain positions. The empirical
data indicates the interdisciplinary nature of these coordination decisions, supporting the need for more attention to the interdisciplinary food
research agenda. The findings also indicate the need for more attention to organisational learning, and an open and active debate on exploratory
qualitative research approaches over a long period of time, as this is not widely used in supply chain management studies.
Practical implications – The results of this paper do not present a managerial blueprint but can be helpful for practitioners dealing with aspects of
decision-making by the food supply chain positions. The findings help practitioners to systematically go through all phases of the decision-making
process for designing an effective logistic response to food safety incidents. Furthermore, the results provide insight into the distinct differences in
views of the supply chain positions on the coordination decision-making process, which is helpful for managers to better understand in what phase(s)
and why other positions might make different decisions.
Social implications – The findings add value for the general public, as an effective logistic response contributes to consumer’s trust in food safety
by creating more transparency in the decisions made during a food safety incident. As food sources are and will remain essential for human
existence, the need to contribute to knowledge related to aspects of food safety is evident because it will be impossible to prevent all food safety
incidents.
Originality/value – As the main contribution, this study provides a systematic and interdisciplinary understanding of the coordination decision-making process for the logistic response to food safety incidents while distinguishing the views of the supply chain positions.
Keywords Supply-chain management, Food industry, Coordination, Food security, Information transparency, Quick response
Paper type Research paper
### 1. Introduction
Globally, every year, the food industry deals with an estimated
600 million cases of foodborne diseases and 420,000 deaths that
are attributed to unsafe food (WHO, 2023). Over the past few
years, various developments in the food supply chain impacted
the response to food safety incidents, such as the introduction of
more stringent food legislation in Europe, an increase in the
number of monitoring programmes, a growing awareness of
corporate social responsibility and more focus on enabling
technological solutions (Jose and Shanmugam, 2020; Pandey
et al., 2022; Possas et al., 2022). Even so, food safety incidents
such as the Salmonella contamination of chocolate products marketed to
children in April 2022 (EFSA, 2022) still illustrate how
vulnerable and interdependent the food chain is and how quickly
the chain can collapse. It also demonstrates the importance of
transparent response processes, and despite the many
investments in technology developments in recent years, supply
chains still appear to struggle with these challenges in the
decision-making process (Astill et al., 2019; Hofmann
et al., 2015; Holgado and Niess, 2023; Li et al., 2023). As
stakeholders in the food supply chain demand (full)
transparency, and as it is impossible to prevent every food safety
incident, there is a need for more research into an effective
decision-making process for the logistic response to food
safety incidents, because health risks, branding and food safety
costs are at stake (Arun and Prasanna Venkatesan, 2019; Song
et al., 2020).
Food safety is defined as “the assurance that food will not
cause adverse health effects to the consumer when it is prepared
and/or eaten according to its intended use” (FAO and WHO,
As no world-wide legislation applies to food safety,
countries take varying approaches to, and set different
requirements for, the response to food safety incidents, creating
challenges between the food actors. A further challenge to the
supply chain response is that it involves not only the formal
structures and procedures of the food actors but also
informal values and cultural norms (Horak et al.,
2020). It is essential to ensure that the coordination plans
specified on paper are in agreement with how they work in
actual practice, as the gap between stipulated and practised
coordination in crisis management also merits further
theoretical considerations (Christensen and Ma, 2020). So, it is
of interest to gain a better understanding of how logistic
response decisions to food safety incidents are made in the food
supply chain. Furthermore, as each food safety incident is
unique, further research is needed to get more insight into the
required countermeasures that can react to unique risks
(Manning and Soon, 2016). Research states that the logistic
response to incidents should be, first and foremost, a
coordinated process (Wankmüller and Reiner, 2020), and
for food security, in particular, there is also a need for
interdisciplinary collaboration with involved parties to face the
challenges of food safety (Doherty et al., 2019).
Food supply chains are being studied from a wide range of
disciplines and differing theoretical perspectives, indicating
that they are by nature interdisciplinary and boundary spanning
(Acevedo et al., 2018; Doherty et al., 2019). Interdisciplinary
research refers to cooperation between several disciplines, with
more emphasis on knowledge exchange than on integration by
the involved actors (Choi and Pak, 2006). These disciplines
include food safety management, organisational sciences,
sociology, marketing, sales and logistics and supply chain
management. In the food supply chain, the actors can be
positioned more or less upstream, midstream or downstream
(Nardi et al., 2020a; Van Hoek, 1999). Interest in the concept
of supply chain positions relates to “power dependencies in the
chain” but is also apparent in recent studies that identify
“perception” as a key element for determining how the
positions will deal with emerging topics in the supply chains,
such as risks and emergency food preparedness (Gerhold et al.,
2019). More upstream positions, such as producers, are more
able to gather information at the supplier side, whereas
more downstream positions, like wholesalers and retailers, have
more (in)direct contact with the consumer. Moreover, theories
suggest that more upstream positions tend to be more reactive
and conservative in nature concerning topics related to risks
than retailers downstream (Lo, 2013). According to Li et al.
(2019), an important element in decision-making is the
dominance in the relationship between two supply chain
positions. Previous research indicates the relevance of
understanding the relationship between the supply chain
positions and how they deal with specific topics and disciplines
related to coordination in the food supply chain (Minnens et al.,
2019; Nordin et al., 2010; Schmidt et al., 2017).
So far, no research has been dedicated to exploring the
interdisciplinary challenges of coordination in the response to
food safety incidents and the views of various supply chain
positions in relation to the logistic response. A better
understanding and more knowledge about this will help to
improve the inter-organisational development practice and
alignment in decision-making for an effective logistic response
to food safety incidents. The research questions that will be
answered in this paper are:
RQ1. What are the key coordination challenges in the
logistics response to food safety incidents?
RQ2. To what extent are the identified coordination
challenges interdisciplinary in nature?
The results of this study are based on PhD research into food
supply chains (Van Beusekom – Thoolen, 2022).
We identified four key coordination challenges in the
logistics response to food safety incidents:
1 firstly, information quality (IQ) (sharing information and
the applied technology) appears to be seen as the biggest
challenge for the response;
2 secondly, more emphasis on external coordination focus is
required;
3 thirdly, more extensive emphasis is needed on the proactive phase in the logistic response; and
4 fourthly, a distinct difference exists in the positions’ views
on coordination in the food supply chain.
Furthermore, our data supports the interdisciplinary nature of these challenges, as
disciplines such as operations management, strategy and
organisation, but also food safety and risk management have to
work together to align a rapid response, depending on the
incident’s specifics.
We first describe the theoretical background of various
disciplines that relate to coordination in the logistic response to
food safety incidents. This is followed by an explanation of the
research methodology used. Thirdly, this paper presents the
results of the data collected in various research rounds spanning a
period of 11 years. Fourthly, this paper provides a thematic case
study analysis, which leads to the discussion and suggestions for
further avenues of research.
### 2. Theoretical background
2.1 Food safety incidents
Food safety is a concept that has been discussed by many
researchers in various disciplines over the years, as well as by
authorities world-wide, to monitor and ensure food safety
(Auler et al., 2017; Nardi et al., 2020a). In the literature,
definitions of food safety incidents are very similar, mostly
originating from the legislature (governmental agencies) due to their
statutory basis. According to the UK Food Standards Agency
(FSA), the definition of a food safety incident is:
[. . .] any event where, based on the information available, there are
concerns about actual or suspected threats to the safety, quality or integrity
of food and/or feed that could require intervention to protect consumers’
interests (FSA, 2017).
This definition has a statutory foundation for some
requirements of the response, which may lead to a withdrawal
or recall that will result in costs and related responsibilities that
need to be part of an unequivocal policy.
Food safety incidents can vary, from a relatively high to a
relatively low level of uncertainty and complexity, and anything
in between (Soon et al., 2020). The higher the level of
uncertainty and complexity of a food safety incident, the harder
it becomes to accurately evaluate the implementation of
response plans, which may negatively affect their
effectiveness (Song et al., 2020). So, it is important to have
insight into the key aspects of food safety incidents. Besides the
literature review of food supply chains and food safety
incidents, we also reviewed additional literature from various
disciplines such as risk, crisis, disaster and emergency
management, as they also focus on preventing and minimising
consequences that can be caused by natural factors and
technological or human errors, similar to food safety incidents
(Al-Dahash et al., 2016; Al Kurdi, 2021). Based on this review,
the key distinctive interdisciplinary aspects of food safety
incidents are presented in Table 1.
The emergence and extensiveness of distinct aspects in an
incident underscore that each food safety incident is unique,
and further research is needed to get more insight into the
required countermeasures that can act against unique risks
(Manning and Soon, 2016).
2.2 Logistics response in food supply chain
As food supply chains become more complex and consumers
more demanding, the appropriate effective response to food
safety incidents is challenged by the ability to align and manage
food safety by all (inter)nationally related supply chain actors
(Song et al., 2020). Formulating an adequate effective response
to food safety incidents is complicated by several factors
(Wiegerinck, 2006):
- increased complexity of the production, manufacturing, distribution and retailing of products;
- increased distance between place and time of production and place and time of consumption;
- more advanced technical knowledge of food ingredients;
- technical development of the media; and
- links between firms in the supply chain.
Furthermore, the response to a food safety incident requires a
relatively high level of traceability and transparency; it must
move quickly and decisively under time pressure while
complying with strict legislation (Astill et al., 2019). To manage
the response to food safety incidents, most food organisations
have specific procedures and tools in place. These distinguish
different risk levels in food safety incidents, for example,
“routine incidents” (relatively small and innocent incidents)
and “major incidents” (involving a significantly high level of
health and political risks) (CA Commission, 2013).
According to the response model of Van Beusekom –
Thoolen (2022), the decisions relating to the procedures and
tools in response to food safety incidents refer to the ex ante
(pro-active) part of the logistics response model (see Figure 1).
Moreover, this response model suggests that in the ex ante
part, the impact of a food safety incident is moderated or
regulated by the firm’s own rules, processes and structures. In
the assessment phase, the requirements are determined for the
further response strategies to be executed in the ex post phase
to ensure that the final result of the response is sufficient.
Finally, the lessons-learned phase enhances continuous
improvement through feedback and learning, forming an “open
system” that interacts with the environment and “continually
takes in new information, transforms that information and gives
information back to the environment” (Shockley-Zalabak,
1999, p. 43). Other response models from various theoretical
perspectives in the literature were also reviewed (CFIA, 2020;
Vlajic et al., 2012; Våland and Heide, 2005), but since we are
interested in the decision-making process for the logistic
response in food supply chains, we chose the management
response model of Van Beusekom – Thoolen (2022) to get
more insight into the underlying set of decisions made in this
process. Moreover, this model is based on process-tracing,
which makes it feasible to identify the key events, processes or
decisions that link the hypothesized cause or causes with the
outcomes (George and McKeown, 1985).
Table 1 Key interdisciplinary aspects of food safety incidents

| Author(s) | Food safety incident aspects | Description |
| --- | --- | --- |
| FSA (2017); Gizaw (2019); Lin (2010) | Health risks | May have negative health consequences due to physical, chemical or microbiological hazard |
| | Political risks | May affect political sensitivity on (inter)national level |
| | Business risks | May cause financial and reputational damage in short term and long term |
| Charlier and Valceschini (2008); EFSA (2022); Wilson et al. (2016) | Compliance aspects | Legislation plays an important role in the response to an incident |
| Adeseun et al. (2018); Assefa et al. (2017); Auler et al. (2017); Jose and Shanmugam (2020); Manning and Soon (2016); Soon et al. (2020); Song et al. (2020); Trienekens et al. (2012) | Interdependency | Actors involved may be dependent on other food supply chain actors in their response |
| | Parties involved | May involve multiple food actors such as producers, retailers and logistic service providers |
| | Supply chain stage | Incidents can occur at any stage in the food supply chain, whether more upstream or more downstream |
| Assefa et al. (2017); Zhang et al. (2014); Diabat et al. (2012); FAO and WHO (2022); Hamer et al. (2014); Song et al. (2020); Soon et al. (2020); Verbeke et al. (2007) | Scale impact | May affect wide geographical areas and large population groups |
| | Time pressure | Time is critical for the response to health risks, and there is time pressure for quick decision-making and action |
| | Response action | An incident requires some form of action by food supply chain actors |
| | Level of uncertainty | The level of uncertainty can vary from rather low to high depending on the nature of the incident, which is often unpredicted and unprecedented |

Source: Authors’ own work

Figure 1 Logistics response model in case of food safety incidents

As each food safety incident has its unique elements, it is
virtually impossible to have procedures that cover the response
details for every possible incident. Each incident involves a
unique supply chain response consisting of “multiple, single
actor logistic responses” that need to be aligned and managed
to be effective. An effective logistic response must have “the
intended or expected effect” on the individual consumer’s
health risk, political risk and business continuity. Van Asselt
et al. (2017) indicate in their research findings that time
pressure and real-time decision-making are important
coordination challenges in the response to food safety
incidents. Furthermore, to enhance an effective response, the
intention of the single actor plays an important role in this
logistic response, as each actor is focused primarily on their
own business, more than on the performance of the food supply
chain as a whole (Speranza, 2018). This illustrates the interest
in better understanding how the involved positions perceive the
coordination challenge in the response.
2.3 Supply chain positions
To manage incidents effectively, it is critical for any incident
management system to create collective and cooperative incident
teamwork from all supply chain positions (Subramaniam et al.,
2010). According to Li et al. (2019), an important element in
decision-making on the logistic response is the dominance in
the relationship between the supply chain positions. Previous
research indicates the relevance of understanding the
relationship between the supply chain positions and how they
deal with specific topics (Lo, 2013; Schmidt et al., 2017;
Tacheva et al., 2020). The interest in the concept of supply chain
positions relates to “power dependencies in the chain” but is also
apparent in recent studies that identify “company size”,
“industry”, “perception” and “extent of operability” as key
elements for determining how the positions will deal with
emerging topics in the supply chains, such as sustainability
(Gallo and Jones-Christensen, 2011). Even so, the critical role of
power dependence in supply chain relationship management
deserves more attention in food supply chains to gain fuller insight
and knowledge (Schmidt and Wagner, 2019).
The generic food supply chain has four distinct types of
key stakeholders (see Table 2): food business, consumer,
(business) community and food regulatory and enforcement
agencies (Minnens et al., 2019). These stakeholders all play a
role in the supply chain response to food safety incidents, each
from their own perspective. Based on the main logistic activity
of the stakeholders in the food supply chain (Aung and Chang,
2014), supply chain positions are distinguished.
Table 2 Overview of stakeholders and the related food supply chain positions

| Stakeholder | Chain position | Up-/downstream | Main (logistic) activity |
| --- | --- | --- | --- |
| Food business | Producer | Upstream | Adding value to the product/service |
| | Wholesaler/retailer | Downstream | Storage and sales |
| | Logistic service provider | Overall | Transport and distribution |
| Consumer | Consumer | Downstream | Consumption and disposal |
| (Business) community | Branch organisation | Overall | Representing industry members as a front man for the supply chain positions |
| Regulatory agencies | Authority | Overall | Monitoring, and if required, enforcement to ensure food safety |

Source: Authors’ own work

As stated above, the positions can be more or less upstream,
midstream or downstream in the food supply chain (Van Hoek,
1999) and are defined by the structural position of an
organisation’s logistic value creation activities, measured on the
basis of the tier distance from the consumer (Schmidt et al.,
2017). As the response to food safety incidents requires a joint
approach, all positions may play a role in the supply chain
response to food safety incidents, each based on their own
stakeholder’s perspective and main logistic discipline; this calls
for alignment in monitoring, prevention and response to food
safety incidents by food organisations from all over the world
(Leialohilani and De Boer, 2020).
2.4 Coordination
Food safety incidents require coordination and information
exchange by the actors in the food chain. It is essential to
understand the specifics of the incident and, moreover, to know
what kind of decisions are necessary to take coordinated
countermeasures. Critical elements in decision-making may
differ as the response objective may differ per incident
(Jiang and Yuan, 2019). Effective decision metrics can
help practitioners make quick decisions and improve
responsiveness, but can also benefit the coordination of several
interdependent tasks among various actors and streamline the
response to the food safety incident (Balcik et al., 2010).
Research into overcoming destructive incidents indicates that
coordination is an essential critical element for decision-making
(Wankmüller and Reiner, 2020).
Various definitions of coordination in the field of supply
chain management are given in the literature, such as “the
process of managing dependencies between activities” (Malone
and Crowston, 1994). In relief supply chain management,
Wankmüller and Reiner (2020) defined coordination as
“the process of organizing, aligning and differentiating of
participating non-governmental organisations’ (NGOs’)
actions based on regional knowledge, know-how, specialisation
and resource availability to reach a shared goal in the context of
disasters”. In essence, strong coordination adds to an efficient
and effective logistic response, and it is often seen as a
prerequisite for cooperation and collaboration (Ergun et al.,
2014).
For the purpose of this study, we define coordination based
on Wankmüller and Reiner (2020) as: “The process of
organizing, aligning and differentiating of participating actors’
actions based on knowledge, know-how, specialisation and
resource availability to add to an effective and efficient
process”. This stipulates that the primary intent is to organise,
manage and align the activities in the food supply chain
(Charlier and Valceschini, 2008) by decomposition or the
division of labour among partners, communication and
integration between partners (Castañer and Oliveira, 2020).
2.5 Interdisciplinarity in food research
Food research covers agricultural and nutritional science but
also includes scientific aspects of food safety and food
processing, next to the science of enabling food technology
(Ward et al., 2015). This interdisciplinary approach involves
scientists from multiple disciplines, such as chemistry,
physics, physiology, microbiology, biochemistry, food safety
management, marketing, sales, risk management, branding
value, organisational sciences and supply chain management
(Wynstra et al., 2019). Despite the demarcations for each
research field, “disciplinary boundaries [. . .] do not have
sharp edges” (Tarafdar and Davison, 2018, p. 6). The
interdisciplinary competences support the enhancement of
knowledge on how to deal with risks in the food supply chain,
and create an interdisciplinary research agenda (Doherty et al.,
2019; Horton et al., 2017).
Our study seeks to identify the coordination challenges in the
logistics response to food safety incidents while distinguishing
the views of the supply chain positions. Finally, we explore to
what extent these coordination challenges are interdisciplinary
in nature.
### 3. Research methods
3.1 Exploratory qualitative study
Given the relatively scarce availability of interdisciplinary
research into logistic responses in relation to food safety
incidents in general and supply chain positions’ views in
particular, there is a need to better understand the coordination
decisions made in response to food safety incidents. We used an
exploratory qualitative study design over a long period of time,
spanning a total of 11 years, to provide a more robust outcome. In our
research, we aim to study and understand the phenomenon of
logistic response to food safety incidents, with its interaction
between the various contexts and the views on coordination of
the supply chain positions. We have opted for a research
approach in which we gathered the information over a longer
period of time, to better understand the context and provide
more compelling results; the overall research is therefore regarded
as being more robust (Yin, 2018). By remaining open to emergent
phenomena in the research period, our understanding of the
dynamics of food safety incident processes within its complex
social reality may be expected to increase. Qualitative research
supports researchers in situations where there are no simple
explanations or simple solutions, where the problems are complex
and have a specific, often unique, context. Many variables play a
part, and decisions are made at the end of a complex decision-making chain in which many stakeholders play an important role.
Our aim to study the supply chain decisions made in response to
food safety incidents suggests that a qualitative approach may help
us to explore what happens during these incidents.
By analysing according to an abductive research approach, we
followed neither a purely deductive nor a purely inductive
pattern: we adopted theory-building elements by simultaneously
performing data collection and theory development over
the different research periods (Håkan and Gyöngyi, 2005). The
logistic response model of Van Beusekom - Thoolen (2022) was
applied as a loose framework (Lämsä and Takala, 2000) to
organise and categorise the findings from a process-tracing
perspective (George and McKeown, 1985). This approach helps
us to go back in time and identify key events, processes or decisions
that link to the logistic response.
3.2 Quality requirements
To evaluate the quality of the research design in this study, the
assessment approach by Lincoln and Guba (1985) is chosen, as
we followed a pragmatic research philosophy to develop
knowledge that can be used to improve a situation. Simply put,
the pragmatic value of the research is that “it works” for
managers and practitioners. A qualitative researcher must be
transparent about the way the research is conducted to enable
the readers of the study report to establish that the research is
trustworthy. Trustworthiness is refined by Lincoln and Guba
(1985) in four criteria, which are widely recognised and used
to evaluate the quality of qualitative research. These four
evaluation criteria are credibility, transferability, dependability
and confirmability (Nowell et al., 2017). The credibility of a
study is established when readers (co-researchers) can recognise the
findings and match these with their own experiences. In our
study, credibility is realised by peer briefings and by prolonged
engagement with the team of researchers and the actors in the
research. Transferability refers to the generalisability of
the research findings. In qualitative research, findings and
conclusions do not go beyond the applicability in the studied
cases. However, transferability is important in this kind of
research and refers to how the reporting of the research enables
the reader to judge if the findings are also useful to his/her case or
situation. With the underlying pragmatic paradigm in this study,
we tried to achieve that by providing a thick description and
quotes to give the reader a feeling of “being there”. We chose
multiple research periods and data sets to provide more
compelling support for the propositions and strengthen
the transferability of the findings (Lincoln and Guba, 1985).
Dependability is assured by demonstrating that the research
process is logical, traceable and clearly documented. In this
study, we show that data analysis has been conducted in a
precise, consistent and exhaustive manner through archiving of
the raw data, systematising and disclosing the methods of
analysis. Finally, confirmability refers to the quality of the study,
that the researcher’s interpretations and findings are clearly
derived from the data. The researcher has to clearly state how
interpretations of the data and the conclusions have been
reached so that the reader/co-researcher is able to understand
which decisions are made and why during the research process.
Confirmability is realised by the audit trail and reflexivity, as the
research team discussed the interpretations of all research
rounds.
3.3 Data collection method
Over a period of 11 years, representatives from the various
supply chain positions were asked to elaborate on their logistic
decision-making process and response to food safety incidents
(see Table 3); this ensured data triangulation to improve the
robustness of our research findings to better understand the
decisions made in responding to food safety incidents
throughout food supply chains. The selection process of the
participants was defined by a combination of factors. Firstly, we
aimed to select participants from each of the five supply chain
positions: producer, logistic service provider, wholesaler/
retailer, branch organisation and (food safety) authority (in
particular, the enforcement department). Furthermore, the
participants should be responsible for the logistic decision-making process in the case of a food safety incident within their
organisation. Some participants were selected from the existing
network of contacts of the researchers involved, but most were
selected by snowball-sampling from our networks. After the
pilot study, we conducted 38 semi-structured interviews and
organised two focus groups with the various supply chain
positions. We wanted to collect data from the same participants
(units of observation) over time, but participants switched jobs
and organisations, and so there were 26 units of observation in
total. On average, the participants were involved in two of the
research periods (at least once and at most four times).
The aim of the one-on-one semi-structured interviews was to
collect rich and in-depth data, experiences and views, whereas the
aim of the focus groups was to explore and capture the
experiences and views of the various supply chain positions with
regard to the critical decision-making element of coordination.
Subjects of the discussions in the focus groups were related to the
challenges or opportunities in the decision-making process for the
logistic response to food safety incidents.

Table 3 Overview of the research periods and supply chain positions

| Research period | Data | Participating supply chain positions |
| --- | --- | --- |
| 2010 | Pilot interview | Producer |
| 2010 | Four interviews | Producer and wholesale/retail |
| 2012 | Focus group A | Producer and wholesale/retail |
| 2012/2013 | Twenty-one interviews | Producer, wholesale/retail, logistic service provider, branch organisation, authority |
| 2013 | Focus group B | Producer, wholesale/retail, logistic service provider, branch organisation, authority |
| 2015 | Six interviews | Producer, wholesale/retail and logistic service provider |
| 2020 | Seven interviews | Producer and wholesale/retail |

Source: Authors’ own work

All semi-structured
interviews and focus groups were transcribed and coded in NVivo
(Miles and Huberman, 1994). Finally, the data was analysed by
coding the transcripts or minutes, which led to the thematic
analysis (Braun and Clarke, 2006). Similar coding was used for all
interviews and focus group meetings based on the question,
“What are coordination challenges and opportunities for
designing an effective logistic response to food safety incidents?”
After coding, the code list was checked for duplication and
similarities, and codes were combined or deleted.
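To make the tallying step concrete, the following is a minimal, hypothetical Python sketch of how coded references could be aggregated into per-position category counts and relative emphasis shares (the study itself used NVivo; the positions, categories and records below are illustrative placeholders, not the study’s data):

```python
from collections import Counter

# Each coded reference pairs a supply chain position with a
# coordination category; these records are illustrative only.
coded_references = [
    ("wholesale/retail", "information quality"),
    ("wholesale/retail", "internal coordination"),
    ("producer", "information quality"),
    ("producer", "external coordination"),
    ("authority", "external coordination"),
]

# Tally references per (position, category) pair, analogous to a
# matrix-coding query in qualitative analysis software.
counts = Counter(coded_references)

# Relative emphasis: the share of a position's references that fall
# into each category (cf. the percentages underlying Figures 2-4).
totals = Counter(position for position, _ in coded_references)
for (position, category), n in sorted(counts.items()):
    print(f"{position:18s} {category:22s} {n / totals[position]:.0%}")
```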
Our research team has expertise in many relevant disciplines
(logistics and supply chain management, food safety, food law,
social theory and organisational science), strengthening the
interdisciplinary character of the study.
### 4. Findings
We started by analysing the 38 interviews and two focus groups
on the basis of the textual data generated, collected from 2010
to 2020. In total, 1,391 references were coded to coordination in
NVivo.
4.1 Thematic analysis: emergences of categories for
coordination
A thematic analysis in NVivo of the rich data led to the
identification of four categories for coordination: internal
coordination, external coordination, IQ and branding (see
Table 4).
Comparing the results from the supply chain positions,
distinct differences are apparent in the emphasis on the
categories per position (see Figure 2). In all research periods, all
positions stressed the category of IQ by far the most, as a
challenge or opportunity in the logistic response to food safety
incidents.

Figure 2 Overview of emphasis on categories of coordination per position (average references per position, all research periods; n = 23 wholesale/retail, 14 producer, 6 logistic service provider, 6 branch organisation, 2 authority; categories: internal coordination, external coordination, information quality, branding)

**Source: Authors’ own work**
Of all positions, the FSA is seen to place by far the most
emphasis on challenges or opportunities of coordination. This
relatively high emphasis by the authority on these elements may
indicate that they perceive coordination as the key challenge in
the logistic response to food safety incidents. Another possible
explanation is that coordination challenges directly relate to
their main task priority in their daily work as FSA staff, in which
they are involved in all food safety incidents and not just one
food supply chain.
Table 4 Identified categories of coordination based on thematic analysis

| Identified categories of coordination | Description |
| --- | --- |
| Internal coordination | Aspects of organising, managing and aligning of activities from an intra-organisational perspective |
| External coordination | Aspects of organising, managing and aligning of activities from an inter-organisational perspective |
| Information quality (IQ) | Information that is shared and distributed to the involved food actors and used to manage efforts for the logistic response, including aspects of the applied technology |
| Branding | Aspects of the process of establishing and growing a relationship between a brand and consumers (by, e.g. a name, term, sign, symbol or a combination of these) |

Source: Authors’ own work

4.2 Analysis of coordination in relation to the phases of the response model
To create a better understanding of the coordination
challenges, we next analysed the coordination references in
relation to the phases of the logistics response model (see
Figure 1). The results show that although coordination
emerged in all five phases in all research periods, a distinct and
persistent picture is the relative emphasis of coordination
references between the five phases (see Figure 3).
The ex post phase is discussed by far the most extensively,
accounting for two-thirds of all coordination references. The
ex ante phase always came in second, with participants
discussing aspects of the ex post phase twice as much
as aspects of the ex ante phase. This indicates that the
participants emphasised aspects of the reactive part of the
logistic response far more than the proactive aspects. Given this
distinct and persistent pattern of emphasis across all research
periods, the further analysis is discussed in terms of the key
results over time and the positions’ views.

Figure 3 Overall overview of coordination references to response phases
4.3 Analysis per position from 2010 to 2020
Analysis of the coordination references suggests that all
positions primarily emphasise (reactive) ex post phase
challenges, during the whole research period, although the
logistic service provider also paid considerable attention to ex
ante (pro-active) aspects. Another marked finding of this
analysis over time is that both the producer and wholesale/retail
show a pattern of a gradual shift in emphasis from internal
coordination towards external coordination (ex post) over the
years (see Figure 4).
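The dotted lines in Figure 4 are simple linear interpolations between research periods. As a minimal, hypothetical Python sketch (the period midpoints and shares below are illustrative values, not the study’s data), such interpolation can be expressed as follows:

```python
# Hypothetical ex post emphasis shares per research period for one
# position; the period midpoints and shares are illustrative only.
periods = [2010.0, 2012.5, 2015.0, 2020.0]
shares = [0.55, 0.62, 0.68, 0.74]  # fraction of references coded ex post

def interpolate(year: float) -> float:
    """Linearly interpolate the emphasis share for a year between periods."""
    for i in range(len(periods) - 1):
        x0, x1 = periods[i], periods[i + 1]
        if x0 <= year <= x1:
            t = (year - x0) / (x1 - x0)
            return shares[i] + t * (shares[i + 1] - shares[i])
    raise ValueError("year outside the observed research periods")

print(f"{interpolate(2017):.0%}")  # estimated share for 2017 -> 70%
```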
Over time, both the producer and wholesale/retail increasingly
stress the need for adequate external alignment, information
sharing and traceability in the food supply chain for an effective
logistic response. It is also interesting that they emphasise that in
the decision-making process, they have no other option but to
rely on their suppliers to share reliable and complete information.
However, we found no consistency in the interpretation of what
defines adequate external coordination. Some participants define
it as “correct information from the outset”, whereas others see it
as “sharing information with the whole supply chain immediately
after a notification of a food safety incident”.

Figure 4 Analysis results from 2010 to 2020 for producer and wholesale/retail positions, ex post; linear interpolation shown in dotted lines (panels: producer and wholesale/retail emphasis on coordination ex post, across the 2010, 2012/2013, focus groups 2012/2013, 2015 and 2020 research periods)

**Source: Authors’ own work**
4.4 Analysis of the views of the various supply chain positions
4.4.1 Wholesale/retail
Firstly, a marked finding is that wholesale/retail rated branding
as a key decision-making element for an effective logistic
response strategy, second only to health risks, and that costs
play a less important role. Secondly, the data did not indicate
that the severity of the food safety incident had any formal
relationship to the logistic response procedures. Wholesale/
retail has to deal with various food safety incidents every week,
and they indicated that in most cases, they also have to deal
with update(s) of each individual incident (also referred to as
revisions). These revisions, and even revision on revision, occur
when new information requires a re-assessment of the food
safety incident specifics. They usually imply that more products
are affected, which leads to an additional workload and also to
more potential mistakes. As a precaution, wholesale/retail
mentioned that they often remove more than the required
affected products after the initial assessment notification:
Quote wholesale/retail: “We tell our supplier: ‘Right guys, we have decided
to remove this product. We are done with it!’ [. . .] This is based on the
batches initially listed for recall; in our experience, the number of recalled
batches usually increases over time”.
Another finding is that the primary focus of wholesale/retail is
on internal aspects of the logistic response, the challenges
or opportunities from their own internal organisational
perspective rather than the external supply chain. Market
power and (consumer) trust in the food supply chain are also
mentioned as important aspects for an effective logistic
decision-making process:
Quote wholesale/retail: “Yes, I worked with companies that try to turn their
back on issues. Suppliers who want to slow us down or just do not want to
see the issue. They are a problem! But what can you do? [. . .] I hesitate to
say it [. . .] but I’ll say it anyway: you use your market power”.
Wholesale/retail notes that many coordination decisions are
made under time pressure and without full information. One
other finding is an inconsistency in the level of organisational
learning, as some in wholesale/retail applied aspects of single-loop learning while others did not.
4.4.2 Producer
The lack of (full) chain transparency poses severe challenges for
the producer when dealing with food safety incidents under
time pressure. Costs and branding are seen as highly important
decision-making criteria, although health risks are the first
priority:
Quote producer: “Yes [. . .] having a private label or not, makes quite a
difference for the choices to be made”.
The producers also emphasised that when working in an
internationally oriented supply chain, the various cultures and
the large variety of foreign governments involved also challenge
the decision-making process. Other findings are that the
producer is mainly interested in internal coordination aspects
of the logistic response, implying that the primary focus is on
issues and/or opportunities for the logistic response from their
own internal organisational perspective rather than the external
supply chain perspective. Procedures, tools and aspects of the
physical goods flow are discussed: matters such as removing
products, blocking products and managing the product return
flow. Many of these discussions include challenges of data
gathering and available information systems, which makes
rapid traceability almost impossible from their perspective:
Quote producer: “What makes the logistic response successful are your
procedures, your information systems, and your personnel. That
combination is what needs to work. You also need to mobilise your internal
organization. That is certainly also a success factor”.
4.4.3 Logistic service provider
In marked contrast to the other positions, the logistic service
provider put as much emphasis on coordination challenges
or opportunities from the proactive perspective of the logistic
response as from the reactive one. Also, the logistic service provider stressed issues of
external communication, discussing the challenges of getting in
contact with external actors, and how communication is done
via phone, email, in person, etc. More personal contact is seen
to improve the speed of communication, compared to email,
for example.
Due to relatively short-term contracts in the market, the
logistic service provider indicated the need to balance relatively
high information technology investments against the level of
traceability and transparency provided. Finally, we again found
inconsistency in organisational learning: not all logistic service
providers applied aspects of single-loop learning.
4.4.4 The branch organisation
The branch organisation strongly emphasised external aspects
of coordination from a reactive perspective. This suggests that
they are focused on external challenges in the food supply chain
and appear to strive for external integration. Main topics
discussed are challenges related to trust, (reliable)
information sharing and market power. It is interesting that
they stressed the need for a consumer perspective by creating
chain integration to deliver more services for the end-consumer. Reliability of information and health safety are seen
as essential starting points for the logistic response, although
they suggested the need for speedier information sharing
between the various actors in the food supply chain in cases of
food safety incidents. Another finding is that the branch
organisation emphasised the impact of social media on the
decision-making process. They mentioned, for example, that
social media is one of the most powerful tools that NGOs could
use to influence the logistic decisions made, both upstream and
downstream in the food supply chain. Finally, all branch
organisations emphasised the importance of incident
evaluation to stimulate organisational and even supply chain
learning.

4.4.5 Authority
A key finding is that the FSA put far more emphasis on aspects
of coordination than the other positions. They stressed the
need for external integration, although they noted that (full)
chain transparency is still a long way off due to aspects such as
the various levels of automation available within food
organisations world-wide:

Quote authority: “If a supervisory authority asks ‘Shouldn’t we do more to
create a chain approach?’, I’ll say ‘Yes, that is a nice concept. Now try to
follow through [. . .]’”.

Finally, a main issue when creating (full) chain transparency has
to do with trust. This issue is strongly emphasised, according to
the authority, in relation to certification. The predictability of the
auditing process was mentioned, as well as issues related to the
reliability and trustworthiness of certificates, for example, as a
result of inconsistent certification authorities:

Quote authority: “[. . .] let me just put it this way [. . .] the supply chain network
relies on certification; however, that same chain knows how unreliable that
certification is. So, it is a tightrope act. That is what this sector does.”
### 5. Discussion
In this section, we will discuss the key findings of the identified
coordination challenges in the logistic response to food safety
incidents and, secondly, to what extent these findings are
interdisciplinary in nature.
5.1 Information quality perceived as biggest
coordination challenge
IQ is the most prominent challenge found in all research
periods when discussing challenges related to information
sharing, (full) transparency and traceability. This corresponds
to the findings of Astill et al. (2019), as described in the
literature review, where they conclude that transparency is a
challenge for all food supply chain actors. Recent research into
enabling new technology, such as blockchain, which strives to
increase the level of transparency, traceability and trust, also
concludes that many challenges must be overcome to
incorporate it into the food supply chain (Duan et al., 2020;
Pandey et al., 2022; Schmidt and Wagner, 2019).
It is interesting that over time, the positions show a growing
tendency towards more emphasis on the categories of both
external coordination and IQ. This is in line with the studies of
Kaipia (2021), Wankmüller and Reiner (2020) and Yu and
Ren (2018), which indicate that, on the one hand, the attention
to (full) chain transparency is growing, while, on the other
hand, this creates more challenges. In all research periods, we
found the perception of an ongoing challenge related to the
need for (full) transparency and traceability in the food supply
chain. This might be explained by the unique character of each
food safety incident (see Table 1), requiring countermeasures
aligned with the needs of the incidents (Manning and Soon,
2016). Concurring with the research gaps defined in the
literature review on blockchain-enabled information sharing in
the supply chain by Wan et al. (2020), our study also indicates
issues such as trust and relatively low automation levels of some
chain actors through all research periods.
The findings also indicate the great and varied importance of
information when designing effective logistic responses to food
safety incidents. As food safety incidents require fast, full
and reliable supply chain traceability, a primary implication for
food organisations based on this study is that all positions
considered accessibility of information as a kind of ongoing
hygiene factor. Therefore, it is recommended that future
studies pay extra attention to defining what criteria need to be
met to create an adequate level of accessibility of information in
the food supply chain. It is interesting that our findings correspond
with previous research by Van der Vorst (2004): our results also
suggest a need for more research into “full food traceability” in
relation to supply chain process integration. Apparently, nearly
two decades later, the food industry is still struggling with issues
of traceability, as already pointed out in 2004 by Van der Vorst.
5.2 Increasing emphasis on external coordination poses
challenges
Another key finding is the gradual shift towards more emphasis
on external coordination by the producer and wholesale/retail,
which may be due to an increasing awareness of the need for
supply chain collaboration and coordination. However,
according to Christopher (2016), the extensive focus on
challenges from internal aspects suggests that there is room for
improvement in the level of internal integration. Since 2015,
the data indicates that producer and wholesale/retail are
gradually shifting towards an external orientation and focus far
less on internal orientation. This is in line with the literature in
Section 2, in which we concluded that supply chain thinking is
becoming more central in and outside the food industry.
According to the participants, the food industry faces a long,
bumpy road ahead to create (full) supply chain integration:
Quote Authority (2012/2013): “How much energy should we put into this
chain transparency? I could put 100 people on this, but it would still be
impossible to figure out the how and what”.
Quote Wholesale/retail (2015): “As retail, it is very difficult to oversee the
whole supply chain”.
According to various studies, such as Huo (2012) and
Pradabwong et al. (2017), internal integration should generally
precede external integration. This suggests that the producer
and wholesale/retail have improved their level of internal
integration over the years. To achieve external integration,
previous research in the agri-food industry suggests the
relevance of the concepts trust and commitment as enablers
(Ramirez et al., 2020). The need to study the relationship
between supply chain collaboration and performance is also
recognised in research by Paciarotti and Torregiani (2021) in
the context of sustainable collaboration. It appears that the
importance of supply chain integration has featured more
prominently on the agenda of the food industry recently
because of aspects such as the occurrence of severe food safety
incidents like the E. coli O104 outbreak in 2011, which was the
deadliest bacterial foodborne outbreak in Europe. It is not yet
clear whether new technologies, such as blockchain or smart
packaging, can support transparency and traceability for the
regular food business activities, as well as for responding to
food safety incidents (Astill et al., 2019; Bechtsis et al.,
2019; Chen et al., 2020; Moreno et al., 2020; Song et al., 2020).
5.3 Extensive emphasis on ex post phase
Furthermore, the research findings indicate that the
coordination challenges most strongly relate to aspects of the ex
post phase, referring to the reactive aspects of the response. An
explanation for the strong emphasis on the ex post phase could
be that responding reactively is perceived as far more
challenging than preparing proactively in the ex ante phase.
Another explanation might be that our results concur with
findings in a study into risk management by Kırılmaz and Erol
(2017), who found that, in general, supply chain managers
are more focused on reactive (mitigation) parts of risk
management, primarily to reduce costs, than proactive aspects.
Also, the result may reflect the relatively smaller amount of
effort put into creating a culture and organisation that
withstands issues from a proactive perspective (Coleman, 2011).
According to Cadden et al. (2013), more attention to cultural
evaluation in the supply chains might also lead to enhanced
trust and openness. Recently, studies indicate that managers
have begun to attach more importance to supply chain
continuity and resilience from a proactive perspective (Kırılmaz
and Erol, 2017). Our data does not confirm this, however, as
from the first until the last round of research, the main
emphasis was on reactive aspects of the logistic response (in the
ex post response phase). Although aspects of the ex ante phase
received less attention than the ex post phase, the data does
suggest that this phase is considered important, with many
proactive aspects of the logistic response being discussed
extensively. The difference found between these two phases
might also be explained by the fact that we interviewed mainly
experts on operational and tactical levels, who might address
other aspects of decision-making. This concurs with the
literature of Kotler et al. (2020), which suggests that decisions are
made on several hierarchical organisational levels by different
individuals in the decision-making unit; and all this in the
context of dealing with unique characteristics per incident.
5.4 Distinct difference in views of the supply chain
positions on coordination
Comparing the results from the supply chain positions, distinct
differences appear to exist in the emphasis on coordination in
the five response phases, as shown in Figure 3. Of all positions,
the FSA appears to stress by far most strongly the challenges or
opportunities of coordination ex post. This relatively high
emphasis by the authority on these elements may indicate that
they perceive these as the main challenges in the logistic
response to food safety incidents. Another possible explanation
is that coordination challenges directly relate to their main task
priority in their daily work as FSA staff, in which they are
involved in all food safety incidents and not just one food
supply chain.
Another key finding is a distinct difference that appears to
exist between the positions wholesale/retail and producer.
Wholesale/retail considers risks to branding and name
reputation as outweighing cost-effectiveness, whereas the
producer balances aspects of branding, reputation, health
impact and also related costs in each incident. This is consistent
with our literature review (Gerhold et al., 2019; Nardi et al.,
2020b), where we found that more upstream positions, such as
producers, appear to have more access to information at the
supplier end; on the other hand, more downstream positions,
such as wholesale/retailers, appear to be more in contact with
the consumer end and therefore more focused on branding and
reputation aspects. Branding includes the perceptions held by
current, past and potential customers about a company’s
products and services (Czinkota et al., 2014). Reputation is far
more than that. Reputation is “the expression of corporate
conduct aimed to differentiate the company from competitors
in the perception of competitive rivalry” (Czinkota et al., 2014,
p. 95). Theories of branding and reputation posit that these
factors play an important role in food supply chains and that
food safety incidents might lead to new and often threatening
trends and pressures that negatively impact the company’s
reputation and its supply chain (León-Bravo et al., 2019). Our
data supports this as we found that branding plays an important
role in the decision-making for the logistic response:
Producer (2020): “There is always a risk. The financial side is the most
obvious one. It will always cost you more to refund their products and
produce them once again. But reputation of your company is more
important because if that goes to the news media then the damage will be
greater”.
The fact that branding is perceived as an important factor in the
logistics response may be a consequence of the awareness that
brands are used to identify the company more readily as the
source of risks in situations of foodborne illness (Parker et al.,
2020). As a final point, it is interesting to note that name
branding trade-offs for a private label producer differ from
those for an A or B brand producer because their name is not
printed on the product label. So, they primarily face only
branding damage within the food industry itself, but not
towards the end-consumers.
5.5 Interdisciplinary nature of coordination challenges
Finally, our study seeks to contribute to insights into the
discipline-based origins of coordination challenges in the
context of food supply chains. Our findings support previous
research into food science (Acevedo et al., 2018; Doherty et al.,
2019; Horton et al., 2017), as they also exhibit interdisciplinary
aspects. The findings indicate that coordination involves
challenges related to aspects such as information sharing, risk
analysis, collaboration, branding and (human) decision-making. Therefore, the expertise of various disciplines must be
integrated in a joint, synchronised response to be effective. This
implies that theories, such as supply chain management,
information processing, operations management, strategy and
organisation, risk management, decision-making, food safety
management, marketing and consumer behaviour should all be
considered as part of an adequate effective response to food
safety incidents, depending on the incident’s specifics (see
Table 1).
Furthermore, we explored the disciplinary origins of the
categories of coordination in the logistic response to food safety
incidents in particular. The codes of the category “internal
coordination” appear to be mainly related to aspects addressed
by the discipline of operations management, as the data
primarily emphasises elements of creating effective and efficient
transformation processes. Within this category, elements of
management and organisational processes were also discussed,
albeit less extensively, linked to aspects addressed by the
discipline strategy and organisation. Within the category
“external coordination”, aspects of the discipline strategy and
organisation are most strongly
emphasised, and the discipline of operations management to a
lesser extent, to arrive at an effective and efficient chain. For the
category “information quality”, aspects addressed by the
discipline of information processing are most strongly
discussed, stressing the elements of information sharing and its
applied technology to enhance the process flow. Finally, the
category “branding” is most strongly linked to aspects
addressed by the disciplines of marketing and consumer
behaviour.
As the data collected is mainly based on insights of logistics
and supply chain experts, there was little or no discussion
involving disciplines such as chemistry, physics, physiology,
microbiology and biochemistry. Even so, we recognise that
these may also play an important role in the interdisciplinary
response to food safety incidents (Acevedo et al., 2018; Doherty
et al., 2019; Horton et al., 2017).
The results clearly indicate the need for robust
interdisciplinary research. Moreover, the need for (full) chain
transparency and external integration suggests that this should
have a high priority on the food research agenda for researchers
from multiple disciplines, as accessibility of information
throughout the supply chain is perceived as a kind of hygiene
factor for achieving an effective logistic response.
### 6. Conclusion and implications
6.1 Recommendations to the interdisciplinary field of food
supply chain management
This research dealt with the interdisciplinary coordination
challenges associated with the rapid response in food supply
chains. Logistics and supply chain management in the realm of
food safety are usually separate fields that are studied by
different groups in academia. This research integrates both
fields and shows that decision-making theory is useful to better
understand the complexity of the logistic response to food
safety incidents in a supply chain perspective while using the
views of the different supply chain positions on coordination.
The theory of supply chain management is mainly focused on
integrating vertical and horizontal collaborations between the
actors, whereas the theory of logistics is primarily focused on
aligning internal functions, such as procurement, production,
distribution and transport, in which the trade-off between
costs, quality and time is leading. Food safety theory
emphasises aspects such as nutrition and contamination of
ingredients. Our empirical data supports the need to integrate
these theories, as the food industry strives for a more integrated
and effective approach while facing many interdisciplinary
coordination challenges in mounting an effective logistic response
to food safety incidents that minimises health, political and
business risks. More attention needs to be paid to
the views of the supply chain positions on the decision-making
process for the logistic response to improve this process.
To answer the first research question (RQ1), we identified
four key challenges of coordination in the logistics response to
food safety incidents while distinguishing the supply chain
positions. Firstly, the study findings show that IA (by sharing
information and its applied technology) appears to be seen by all
positions as the biggest challenge for the response over the past
decade. This leaves much room for improvement in making the response
more transparent and intensifying collaboration
between food actors. Moreover, it is recommended that future
studies pay extra attention to defining what criteria need to be
met to create an adequate level of information accessibility
within the food supply chain. As new technologies are
continuously in development to enhance traceability, such as
blockchain and smart packaging, this might create possibilities to
support information sharing within food organisations but also
throughout the food supply chain. Further research is
recommended to better understand how these technologies can
support an effective response in case of food safety incidents.
Moreover, as trust appears to be of high importance for information
sharing, further research on how trust impacts the willingness to
share information in food supply chains is recommended.
Secondly, a marked finding is that the identified challenges
primarily relate to the ex post phase, leaving many research
opportunities to enhance insight and knowledge concerning
proactive measures (in procedures, guidelines and tools).
Thirdly, the findings of research conducted over a decade
suggest an increase in attention to external coordination
challenges by the producer and wholesale/retail positions. More empirical
research is needed on how the positions deal with the internal
versus external focus to support them in improving their
response performance. Finally, food supply chain positions differ
in their perception of coordination challenges. This suggests the
need for more empirical research on how each of these positions
should coordinate an effective response to food safety issues. To
create a more holistic interdisciplinary approach, research into
food science would benefit from the involvement of researchers
from various disciplines, such as behavioural science, food
safety, supply chain management, information processing theory
and risk management. When meeting contemporary challenges,
such as sustainability, interdisciplinary research could also help
to develop knowledge, guidelines and procedures that may be
more effective (Kumar et al., 2022).
To some extent, the above already answers the second
research question (RQ2), “To what extent are the identified
coordination challenges interdisciplinary in nature?”. Aside from
these findings, our data supports the interdisciplinary nature of
the challenges, as disciplines such as operations management,
strategy and organisation, but also food safety and risk management,
have to work together to align a rapid response, depending on the
incident’s specifics. So, we can conclude that food safety, and an
adequate response to incidents, should be considered
from an interdisciplinary perspective in the food supply chain.
To this end, a high priority on the interdisciplinary food research
agenda is required to stimulate progress towards (full) chain
transparency and external integration, integrating the various
disciplines to ensure food safety. An interesting question is also
how interdisciplinarity, impacted by topics such as legislation,
(social) media, marketing and cultures, will evolve in the near
future. Insight into the decisions made to respond to food safety
incidents is a pre-requisite, as consumers appear to expect (full)
transparency and a joint response. Mistakes in the response to food
safety incidents might attract more severe and diverse attention,
impacting not only branding and reputation but also food safety
itself. Therefore, we emphasise the need for more robust
interdisciplinary research in food supply chains.
The study findings also indicate a need for more attention
to organisational learning, in the “lessons learned” phase,
contributing to the academic debate on logistics and supply
chain decisions in cases of food safety incidents. Our results
show that this debate should not only improve health and
cost-effectiveness but also shift the attention to the supply chain
perspective, as the end-consumer perceives the logistic
response by all involved organisations. To the best of our
knowledge, no empirical research has been conducted into the
coordination decision-making process for the logistic response
to food safety incidents that incorporates the views of the
supply chain positions. Focusing on the views of the different
supply chain positions supports a better understanding of why
challenges in the logistic response still occur and, therefore,
deserves more attention from researchers.
Furthermore, the applied exploratory qualitative research
approach over a long period of time is not widely used in logistics
and supply chain management studies. Methodology designs
and protocols for this type of research design are still scarce,
resulting in some challenges and debates on the design but also
on the evaluation of this type of qualitative research (Welch and
Piekkari, 2017). The debate on the evaluation of qualitative
research stems mainly from the institutionalised nature of the
academy, which suggests that there is a continuous pressure to
standardise the evaluation criteria (Welch, 2018). However,
Welch states (p. 410): “It is highly inappropriate to insist that all
qualitative research conform to a particular template for
demonstrating quality”. To our understanding, the current
debate between positivist (Eisenhardt, 1989; Yin, 2018) and
naturalist (Lincoln and Guba, 1985) criteria for evaluating
qualitative research in the field of logistics and supply chain
management is rather underexposed. We hope to encourage an
active debate and stimulate researchers to maintain an open
dialogue and raise awareness for methodological advances to
further stimulate creativity and innovation.
6.2 Recommendations to the interdisciplinary field of food
supply chain management practice
The results of this study do not present a managerial blueprint but
can be helpful as a sense-making decision framework for
practitioners dealing with the design of coordination in the logistic
response to food safety incidents. Firstly, the findings help
practitioners to systematically go through all phases of the
decision-making process for designing an effective logistic response
to food safety incidents. A systematic approach helps them to reflect
on their own business processes to improve the effectiveness of the
logistic response to food safety incidents through managerial
sense-making. According to all positions, this is perceived as
important since the decision-making process is highly challenged by
the lack of (full) transparency in combination with existing legal
time pressure. Furthermore, the results provide insight into the
views of the supply chain positions on the coordination
decision-making process. As those views appear to be distinctly
different with respect to coordination in the five phases, it is
helpful for managers to better understand in what phase(s), and why,
other positions might make different decisions. The food industry can
apply these insights to further enhance the effectiveness of the
logistic response to food safety incidents where health, political
and business risks may be at stake. An important insight is that
accessibility of information is perceived by all positions as
something of a hygiene factor for creating an effective logistic
response to food safety incidents, which should make the food
industry aware of the need to focus on this aspect.
Finally, besides the managerial contributions, the findings
add value for the general public, as an effective logistic response
contributes to consumers’ trust in food safety by creating more
transparency in the decisions made during a food safety
incident. As food sources are and will remain essential for
human existence, the need to contribute to knowledge related
to aspects of food safety is evident because it will be impossible
to prevent all food safety incidents.
6.3 Limitations
While this study is based on extensive empirical data obtained
over a considerable period from various supply chain
perspectives, our approach has some limitations. Firstly, there
are no clear guidelines for conducting the abductive and
exploratory research approach over a longer period of time used
in this study. There is no single best way of matching theory
and reality in abductive research, according to Dubois and
Gadde (2002), and what works or does not work “can only be
evaluated afterwards”. What we found effective in the research
process is the collaboration with both the participants of the
study, i.e. the actors in the logistic food chain, and the team of
fellow researchers. The data collection was initiated and
interpreted within our own experiences and existing ideas as
researchers and humans. Future studies are recommended to
explore and develop guidelines for exploratory and abductive
research. A second limitation relates to the participants, the
experts. As the topic of food safety is perceived as highly
sensitive throughout the food supply chain, participation in the
focus groups and/or interviews was sometimes difficult to
achieve. As a result, not all positions were included in each
research round. Furthermore, the level of experience, skills and
knowledge of each expert may differ. It is recommended that
future studies apply a similar protocol and collect more data
with respect to the supply chain positions to provide more
generalisability. Future studies are also recommended to
explore the cause-and-effect interrelationship between the
logistic response characteristics and the response performance.
### References
Acevedo, M.F., Harvey, D.R. and Palis, F.G. (2018), “Food
security and the environment: interdisciplinary research
to increase productivity while exercising environmental
conservation”, Global Food Security, Vol. 16, pp. 127-132.
Adeseun, M.A., Anosike, A.I., Reyes, J.A.G. and Al-Talib, M.
(2018), “Supply chain risk perception: understanding the
gap between theory and practice”, IFAC-PapersOnLine,
Vol. 51 No. 11, pp. 1701-1706.
Al Kurdi, O.F. (2021), “A critical comparative review of
emergency and disaster management in the Arab world”,
Journal of Business and Socio-Economic Development, Vol. 1
[No. 1, doi: 10.1108/JBSED-02-2021-0021.](http://dx.doi.org/10.1108/JBSED-02-2021-0021)
Al-Dahash, H., Thayaparan, M. and Kulatunga, U. (2016),
“Understanding the terminologies: disaster, crisis and
emergency”, Proceedings of the 32nd annual ARCOM
conference, ARCOM 2016, pp. 1191-1200.
Assefa, T.T., Meuwissen, M.P. and Lansink, A.G.O. (2017),
“Price risk perceptions and management strategies in
selected European food supply chains: an exploratory
approach”, NJAS: Wageningen Journal of Life Sciences,
Vol. 80 No. 1, pp. 15-26.
Arun, J. and Prasanna Venkatesan, S. (2019), “Supply chain
issues in SME food sector: a systematic review”, Journal of
Advances in Management Research, Vol. 17, pp. 19-65.
Astill, J., Dara, R.A., Campbell, M., Farber, J.M., Fraser, E.D.G.,
Sharif, S. and Yada, R.Y. (2019), “Transparency in food supply
chains: a review of enabling technology solutions”, Trends in
Food Science & Technology, Vol. 91, pp. 240-247.
Auler, D., Teixeira, R. and Nardi, V.A. (2017), “Food safety as
a field in supply chain management studies: a systematic
literature review”, International Food and Agribusiness
Management Review, Vol. 20 No. 1, pp. 99-112.
Aung, M.M. and Chang, Y.S. (2014), “Traceability in a food
supply chain: safety and quality perspectives”, Food Control,
Vol. 39, pp. 172-184.
Balcik, B., Beamon, B.M., Krejci, C.C., Muramatsu, K.M. and
Ramirez, M. (2010), “Coordination in humanitarian relief
chains: practices, challenges and opportunities”, International
Journal of Production Economics, Vol. 126 No. 1, pp. 22-34.
Bechtsis, D., Tsolakis, N., Bizakis, A. and Vlachos, D. (2019),
“A blockchain framework for containerized food supply
chains”, Computer Aided Chemical Engineering, Elsevier,
Amsterdam, Vol. 46, pp. 1369-1374.
Braun, V. and Clarke, V. (2006), “Using thematic analysis in
psychology”, Qualitative Research in Psychology, Vol. 3 No. 2,
pp. 77-101.
Codex Alimentarius Commission (CAC) (2013), Principles and Guidelines
for National Food Control Systems (CAC/GL 82-2013), FAO/
WHO, Rome.
Cadden, T., Marshall, D. and Cao, G. (2013), “Opposites
attract: organisational culture and supply chain performance”,
Supply Chain Management: An International Journal, Vol. 18
No. 1, pp. 86-103.
Castañer, X. and Oliveira, N. (2020), “Collaboration,
coordination, and cooperation among organizations: establishing
the distinctive meanings of these terms through a systematic
literature review”, Journal of Management, Vol. 46 No. 6,
pp. 965-1001.
CFIA (2020), “Food incident response process”, Government
of Canada.
Charlier, C. and Valceschini, E. (2008), “Coordination for
traceability in the food chain. A critical appraisal of European
regulation”, European Journal of Law and Economics, Vol. 25
No. 1, p. 1.
Chen, S., Brahma, S., Mackay, J., Cao, C. and Aliakbarian, B.
(2020), “The role of smart packaging system in food supply
chain”, Journal of Food Science, Vol. 85 No. 3, pp. 517-525.
Choi, B. and Pak, A. (2006), “Multidisciplinarity,
interdisciplinarity and transdisciplinarity in health research,
services, education and policy: 1. definitions, objectives, and
evidence of effectiveness”, Clinical and Investigative Medicine,
Vol. 29 No. 6, pp. 351-364.
Christensen, T. and Ma, L. (2020), “Coordination structures
and mechanisms for crisis management in China: challenges
of complexity”, Public Organization Review, Vol. 20 No. 1,
pp. 19-36.
Christopher, M. (2016), Logistics and Supply Chain Management,
Pearson Education, London.
Coleman, T. (2011), “A practical guide to risk management”,
CFA Institute Research Foundation M2011-2.
Czinkota, M., Kaufmann, H.R. and Basile, G. (2014), “The
relationship between legitimacy, reputation, sustainability
and branding for companies and their supply chains”,
Industrial Marketing Management, Vol. 43 No. 1, pp. 91-101.
Diabat, A., Govindan, K. and Panicker, V.V. (2012), “Supply
chain risk management and its mitigation in a food industry”,
International Journal of Production Research, Vol. 50 No. 11,
pp. 3039-3050.
Doherty, R., Ensor, J.E., Heron, T. and Prado Rios, P.A.D.
(2019), “Food systems resilience: towards an interdisciplinary
research agenda”, Emerald Open Research, Vol. 1.
Duan, J., Zhang, C., Gong, Y., Brown, S. and Li, Z. (2020), “A
content-analysis based literature review in blockchain
adoption within food supply chain”, International Journal of
Environmental Research and Public Health, Vol. 17 No. 5,
p. 1784.
Dubois, A. and Gadde, L. (2002), “Systematic combining: an
abductive approach to case research”, Journal of Business
Research, Vol. 55 No. 7, pp. 553-560.
EFSA (2022), “Update: multi-country Salmonella outbreak
[linked to chocolate products”, available at: www.efsa.](http://www.efsa.europa.eu/en/news)
[europa.eu/en/news (accessed 18 May 2022).](http://www.efsa.europa.eu/en/news)
Eisenhardt, K.M. (1989), “Building theories from case study
research”, The Academy of Management Review, Vol. 14
No. 4, pp. 532-550.
Ergun, Ö., Gui, L., Heier Stamm, J.L., Keskinocak, P. and
Swann, J. (2014), “Improving humanitarian operations
through technology-enabled collaboration”, Production and
Operations Management, Vol. 23 No. 6, pp. 1002-1014.
FAO & WHO (2022), CXC 1-1969 International Food
Standards: General Principles of Food Hygiene, Codex
Alimentarius, Rome.
FSA (2017), “Incident management plan for non-routine
incidents”, Food Standards Agency, Version 6.
Gallo, P. and Jones-Christensen, L. (2011), “Firm size matters:
an empirical investigation of organizational size and
ownership on sustainability-related behaviors”, Business &
Society, Vol. 50 No. 2, pp. 315-349.
George, A. and McKeown, T.J. (1985), “Case studies and
theories of organizational decision making”, Advances in
Information Processing in Organizations, Vol. 2, pp. 21-58.
Gerhold, L., Wahl, S. and Dombrowsky, W.R. (2019), “Risk
perception and emergency food preparedness in Germany”,
International Journal of Disaster Risk Reduction, Vol. 37,
p. 101183.
Gizaw, Z. (2019), “Public health risks related to food safety
issues in the food market: a systematic literature review”,
Environmental Health and Preventive Medicine, Vol. 24,
pp. 1-21.
Kovács, G. and Spens, K.M. (2005), “Abductive reasoning in
logistics research”, International Journal of Physical
Distribution & Logistics Management, Vol. 35 No. 2,
pp. 132-144.
Hamer, M., Terlau, W., Breuer, O., van der Roest, J. and
Petersen, B. (2014). “The EHEC-crisis–impact and lessons
learned–sustainable cross-border crisis control and
communication”, in XXIX International Horticultural
Congress on Horticulture: Sustaining Lives, Livelihoods and
Landscapes (IHC2014): Plenary 1126, pp. 51-58.
Hofmann, M., Betke, H. and Sackmann, S. (2015), “Process-oriented disaster response management: a structured
literature review”, Business Process Management Journal,
Vol. 21 No. 5, pp. 966-987.
Holgado, M. and Niess, A. (2023), “Resilience in global supply
chains: analysis of responses, recovery actions and strategic
changes triggered by major disruptions”, Supply Chain
Management: An International Journal.
Horak, S., Afiouni, F., Bian, Y., Ledeneva, A., Muratbekova-Touron, M. and Fey, C.F. (2020), “Informal networks: dark
sides, bright sides, and unexplored dimensions”, Management
and Organization Review, Vol. 16 No. 3, pp. 511-542.
Horton, P., Banwart, S.A., Brockington, D., Brown, G.W.,
Bruce, R., Cameron, D., Holdsworth, M., Lenny Koh, S.,
Ton, J. and Jackson, P. (2017), “An agenda for integrated
system-wide interdisciplinary agri-food research”, Food
Security, Vol. 9 No. 2, pp. 195-210.
Huo, B. (2012), “The impact of supply chain integration on
company performance: an organizational capability perspective”,
Supply Chain Management: An International Journal, Vol. 17
No. 6.
Jiang, Y. and Yuan, Y. (2019), “Emergency logistics in a large-scale disaster context: achievements and challenges”,
International Journal of Environmental Research and Public
Health, Vol. 16 No. 5.
Jose, A. and Shanmugam, P. (2020), “Supply chain issues in
SME food sector: a systematic review”, Journal of Advances in
Management Research, Vol. 17 No. 1, pp. 19-65.
Kaipia, R. (2021), “Supply chain coordination – studies on
planning and information sharing mechanisms”, Helsinki
University of Technology.
Kırılmaz, O. and Erol, S. (2017), “A proactive approach to
supply chain risk management: shifting orders among
suppliers to mitigate the supply side risks”, Journal of
Purchasing and Supply Management, Vol. 23 No. 1,
pp. 54-65.
Kotler, P.J., Armstrong, G. and Opresnik, M.O. (2020),
Principles of Marketing, 18th global ed. Pearson, London.
Kumar, A., Mangla, S.K. and Kumar, P. (2022), “An
integrated literature review on sustainable food supply
chains: exploring research themes and future directions”,
Science of the Total Environment, Vol. 821, p. 153411.
Lämsä, A.M. and Takala, T. (2000), “Downsizing and ethics
of personnel dismissals—the case of Finnish managers”,
Journal of Business Ethics, Vol. 23 No. 4, pp. 389-399.
Leialohilani, A. and De Boer, A. (2020), “EU food legislation
impacts innovation in the area of plant-based dairy
alternatives”, Trends in Food Science & Technology, Vol. 104,
pp. 262-267.
León-Bravo, V., Caniato, F. and Caridi, M. (2019),
“Sustainability in multiple stages of the food supply chain in
Italy: practices, performance and reputation”, Operations
Management Research, Vol. 12 Nos 1/2, p. 40.
Li, K., Lee, J.Y. and Gharehgozli, A. (2023), “Blockchain in
food supply chains: a literature review and synthesis analysis
of platforms, benefits and challenges”, International Journal of
Production Research, Vol. 61 No. 11, pp. 3527-3546.
Li, Z., Yang, W. and Hassan, T. (2019), “Coordination strategies
in dual-channel supply chain considering innovation investment
and different game ability”, Kybernetes, Vol. 49 No. 6.
Lin, C.F. (2010), “Global food safety: exploring key elements
for an international regulatory strategy”, Virginia Journal of
International Law, Vol. 51, p. 637.
Lincoln, Y.S. and Guba, E.G. (1985), Naturalistic Inquiry,
Sage, London.
Lo, S.M. (2013), “Effects of supply chain position on the
motivation and practices of firms going green”, International
Journal of Operations & Production Management, Vol. 34
No. 1, pp. 93-114.
Malone, T.W. and Crowston, K. (1994), “The interdisciplinary
study of coordination”, ACM Computing Surveys, Vol. 26
No. 1, pp. 87-119.
Manning, L. and Soon, J.M. (2016), “Food safety, food fraud,
and food defense: a fast evolving literature”, Journal of Food
Science, Vol. 81 No. 4, pp. 823-834.
Miles, M.B. and Huberman, A.M. (1994), Qualitative Data
Analysis: An Expanded Sourcebook, Sage.
Minnens, F., Lucas Luijckx, N. and Verbeke, W. (2019),
“Food supply chain stakeholders’ perspectives on sharing
information to detect and prevent food integrity issues”,
Foods, Vol. 8 No. 6, p. 225.
Moreno, J., Serrano, M.A., Fernandez, E.B. and Fernández-Medina, E. (2020), “Improving incident response in big data
ecosystems by using blockchain technologies”, Applied
Sciences, Vol. 10 No. 2, p. 724.
Nardi, V.A.M., Auler, D. and Teixeira, R. (2020a), “Food
safety in global supply chains: a literature review”, Journal of
Food Science, Vol. 85 No. 4.
Nardi, V.A.M., Teixeira, R., Ladeira, W.J. and De Oliveira
Santini, F. (2020b), “A meta-analytic review of food safety
risk perception”, Food Control, Vol. 112, p. 107089.
Nordin, F., Öberg, C., Kollberg, B. and Nord, T. (2010),
“Building a new supply chain position: an exploratory study
of companies in the timber housing industry”, Construction
Management and Economics, Vol. 28 No. 10, pp. 1071-1083.
Nowell, L.S., Norris, J.M., White, D.E. and Moules, N.J.
(2017), “Thematic analysis: striving to meet the trustworthiness
criteria”, International Journal of Qualitative Methods, Vol. 16
No. 1.
Paciarotti, C. and Torregiani, F. (2021), “The logistics of the
short food supply chain: a literature review”, Sustainable
Production and Consumption, Vol. 26, pp. 428-442.
Pandey, V., Pant, M. and Snasel, V. (2022), “Blockchain
technology in food supply chains: review and bibliometric
analysis”, Technology in Society, Vol. 69, p. 101954.
Parker, O., Krause, R. and Devers, C. (2020), “Firm
reputation, managerial discretion, and conceptual clarity”,
Academy of Management Review, Vol. 45 No. 2, pp. 475-478.
Possas, A., Valero, A. and Pérez-Rodríguez, F. (2022), “New
software solutions for microbiological food safety assessment
and management”, Current Opinion in Food Science, Vol. 44,
p. 100814.
Pradabwong, J., Braziotis, C., Tannock, J.D. and Pawar, K.S.
(2017), “Business process management and supply chain
collaboration: effects on performance and competitiveness”,
Supply Chain Management: An International Journal, Vol. 22
No. 2.
Ramirez, M.J., Roman, I.E., Ramos, E. and Patrucco, A.S.
(2020), “The value of supply chain integration in the Latin
American agri-food industry: trust, commitment and
performance outcomes”, The International Journal of Logistics
Management, Vol. 32 No. 1, pp. 281-301.
Schmidt, C.G. and Wagner, S.M. (2019), “Blockchain and
supply chain relations: a transaction cost theory perspective”,
Journal of Purchasing and Supply Management, Vol. 25 No. 4,
p. 100552.
Schmidt, C.G., Foerstl, K. and Schaltenbrand, B. (2017),
“The supply chain position paradox: green practices and firm
performance”, Journal of Supply Chain Management, Vol. 53
No. 1, pp. 3-25.
Shockley-Zalabak, P. (1999), Fundamentals of Organizational
Communication: Knowledge, Sensitivity, Skills, Values, 4th ed.,
Longman, New York, NY.
Song, Y.-H., Yu, H.-Q., Tan, Y.-C., Lv, W., Fang, D.-H. and
Liu, D. (2020), “Similarity matching of food safety incidents
in China: aspects of rapid emergency response and food
safety”, Food Control, Vol. 115, p. 107275.
Soon, J.M., Brazier, A.K. and Wallace, C.A. (2020),
“Determining common contributory factors in food safety
incidents–a review of global outbreaks and recalls 2008–
2018”, Trends in Food Science & Technology, Vol. 97,
pp. 76-87.
Speranza, M.G. (2018), “Trends in transportation and
logistics”, European Journal of Operational Research, Vol. 264
No. 3, pp. 830-836.
Subramaniam, C., Ali, H. and Shamsudin, F.M. (2010),
“Understanding the antecedents of emergency response: a
proposed framework”, Disaster Prevention and Management:
An International Journal, Vol. 19 No. 5, pp. 571-581.
Tacheva, Z., Simpson, N. and Ivanov, A. (2020), “Examining
the role of top management in corporate sustainability: does
supply chain position matter?”, Sustainability, Vol. 12
No. 18, p. 7518.
Tarafdar, M. and Davison, R.M. (2018), “Research in
information systems: intra-disciplinary and inter-disciplinary
approaches”, Journal of the Association for Information Systems,
Vol. 19 No. 6, p. 2.
Trienekens, J.H., Wognum, P.M., Beulens, A.J. and van der
Vorst, J.G. (2012), “Transparency in complex dynamic food
supply chains”, Advanced Engineering Informatics, Vol. 26
No. 1, pp. 55-65.
Våland, T. and Heide, M. (2005), “Corporate social
responsiveness: exploring the dynamics of ‘bad episodes’”,
European Management Journal, Vol. 23 No. 5, pp. 495-506.
Van Asselt, E., van der Fels-Klerx, H., Breuer, O. and
Helsloot, I. (2017), “Food safety crisis management—a
comparison between Germany and The Netherlands”,
Journal of Food Science, Vol. 82 No. 2, pp. 477-483.
Van Beusekom - Thoolen, P. (2022), “Supply chain response
to food safety incidents”, PhD dissertation, Maastricht
University, Maastricht.
Van der Vorst, J.G.A.J. (2004), “Performance levels in food
traceability and the impact on chain design: results of an
international benchmark study. Dynamics in chains and
networks”, Proceedings of the Sixth International Conference on
Chain and Network Management in Agribusiness and the Food
Industry, Ede.
Van Hoek, R.I. (1999), “From reversed logistics to green
supply chains”, Supply Chain Management: An International
Journal, Vol. 4 No. 3, pp. 129-135.
Verbeke, W., Frewer, L.J., Scholderer, J. and De Brabander,
H.F. (2007), “Why consumers behave as they do with
respect to food safety and risk information”, Analytica
Chimica Acta, Vol. 586 Nos 1/2, pp. 2-7.
Vlajic, J.V., Van der Vorst, J.G. and Haijema, R. (2012), “A
framework for designing robust food supply chains”,
International Journal of Production Economics, Vol. 137 No. 1,
pp. 176-189.
Wan, P.K., Huang, L. and Holtskog, H. (2020), “Blockchain-enabled information sharing within a supply chain: a
systematic literature review”, IEEE Access, Vol. 8,
pp. 49645-49656.
Wankmüller, C. and Reiner, G. (2020), “Coordination,
cooperation and collaboration in relief supply chain
management”, Journal of Business Economics, Vol. 90 No. 2,
p. 239.
Ward, J.D., Ward, L.T. and Riedel, J.S. (2015), Principles
of Food Science, Goodheart-Willcox Company, Tinley
Park, IL.
Welch, C. (2018), “Good qualitative research: opening up the
debate”, Collaborative Research Design, Springer, Singapore,
pp. 401-412.
Welch, C. and Piekkari, R. (2017), “How should we (not)
judge the ‘quality’ of qualitative research? A re-assessment of
current evaluative criteria in international business”, Journal
of World Business, Vol. 52 No. 5, pp. 714-725.
Wilson, A.M., McCullum, D., Henderson, J., Coveney, J.,
Meyer, S.B., Webb, T. and Ward, P.R. (2016), “Management
of food incidents by Australian food regulators”, Nutrition &
Dietetics, Vol. 73 No. 5, pp. 448-454.
WHO (2023), “Food safety—key facts”, FAO/WHO, available
[at: www.who.int/health-topics/food-safety](http://www.who.int/health-topics/food-safety)
Wiegerinck, J.V.V. (2006), “Consumer trust and food safety.
An attributional approach to food safety incidents and
channel response”, Tilburg University, School of Economics
and Management.
Wynstra, F., Suurmond, R. and Nullmeier, F. (2019),
“Purchasing and supply management as a multidisciplinary
research field: unity in diversity?”, Journal of Purchasing and
Supply Management, Vol. 25 No. 5, p. 100578.
Yin, R.K. (2018), Case Study Research and Applications: Design
and Methods, 6th ed. Sage, London.
Yu, X. and Ren, X. (2018), “The impact of food quality
information services on food supply chain pricing decisions
and coordination mechanisms based on the O2O e-commerce
mode”, Journal of Food Quality, Vol. 2018, pp. 1-18.
Zhang, D., Jiang, Q., Ma, X. and Li, B. (2014), “Drivers for
food risk management and corporate social responsibility; a
case of Chinese food companies”, Journal of Cleaner
Production, Vol. 66, pp. 520-527.
### Corresponding author
Pauline van Beusekom – Thoolen can be contacted at:
[[email protected]](mailto:[email protected])
Neural Processing Letters (2023) 55:689–707
https://doi.org/10.1007/s11063-022-10904-8
# Graph-Based LSTM for Anti-money Laundering: Experimenting Temporal Graph Convolutional Network with Bitcoin Data
Ismail Alarab [1] · Simant Prakoonwit [1]
Accepted: 25 May 2022 / Published online: 16 June 2022
© The Author(s) 2022
**Abstract**
Elliptic data—one of the largest Bitcoin transaction graphs—has yielded promising results
in many studies using classical supervised learning and graph convolutional network models
for anti-money laundering. Despite the promising results provided by these studies, only few
have considered the temporal information of this dataset, wherein the results were not very
satisfactory. Moreover, there is very sparse existing literature that applies active learning to
this type of blockchain dataset. In this paper, we develop a classification model that combines
long short-term memory with GCN—referred to as temporal-GCN—that classifies the illicit
transactions of Elliptic data using its transaction’s features only. Subsequently, we present
an active learning framework applied to the large-scale Bitcoin transaction graph dataset,
unlike previous studies on this dataset. Uncertainties for active learning are obtained using
Monte-Carlo dropout (MC-dropout) and Monte-Carlo based adversarial attack (MC-AA)
which are Bayesian approximations. Active learning frameworks with these methods are
compared using various acquisition functions that have appeared in the literature. To the best of
our knowledge, this is the first time the MC-AA method has been examined in the context of active
learning. Our main finding is that the temporal-GCN model has attained significant success in
comparison to the previous studies with the same experimental settings on the same dataset.
Moreover, we evaluate the performance of the provided acquisition functions using MC-AA
and MC-dropout and compare the result against the baseline random sampling model.
**Keywords** Temporal GCN · Uncertainty estimation · Active learning · Bitcoin data ·
Anti-money laundering
Corresponding author: Ismail Alarab ([email protected]); Simant Prakoonwit ([email protected])
1 Bournemouth, UK
### 1 Introduction
The blockchain intelligence and forensics company CipherTrace has reported a global amount
of US$4.5 billion of Bitcoin crime related to illicit services in 2019 [1]. Money launderers exploit the pseudonymity of Bitcoin ledgers by transforming money obtained illegally
from serious crimes into legitimate funds via the Bitcoin network. On the other hand, the Bitcoin
blockchain has attracted intelligence companies and financial regulators, who transact on the
blockchain and need to be aware of its risks, given the technical development and societal adoption
of the cryptocurrency Bitcoin [2]. The rise of illegal services and the public availability of
Bitcoin data have urged the need to develop intelligent methods that exploit the transparency
of the blockchain records [3]. Such methods can boost anti-money laundering (AML) in
Bitcoin and enhance safeguarding cryptocurrency ecosystems. In the past few years, Elliptic
company—a cryptocurrency intelligence company focusing on safeguarding cryptocurrency
systems—has released a graph network of Bitcoin transactions, known as Elliptic data. This
data has been a great support to the research and AML community in order to develop machine
learning methods. Elliptic data acquires a graph of Bitcoin transactions that spans handcrafted
local features (associated with transactions itself) and aggregated features (associated with
neighbouring transactions) with partially labelled nodes. Furthermore, the labelled nodes
denote licitly-transacted payments (e.g. miners) and illicit transactions (e.g. theft, scams).
Previous researchers have attempted to apply this dataset to classical supervised learning
methods [3, 4], graph convolutional network (GCN) [3, 5], EvolveGCN for dynamic graphs
[6], the signature vectors in blockchain transactions (SigTran) model [7] and uncertainty estimation with a multi-layer perceptron (MLP) model [8]. Despite the promising results achieved
by these studies, the highest accuracy achieved considering only the set of local features is
about 97.4%, with an f1-score of 77.3%. Furthermore, only a few have considered the temporal
information of this dataset. On the other hand, the Bitcoin blockchain is a large-scale dataset
for which the labelling process is very hard and time-consuming. Active learning
(AL) approach tackles this problem by querying the labels of the most informative data points,
attaining high performance with fewer labelled examples. Using Elliptic data, the only work
by Lorenz et al. [9] has presented an active learning solution which has shown the capability
of matching the performance of a fully supervised model by using 5% of the labelled data.
However, the preceding framework has been presented with classical supervised learning
methods, which do not consider the graph topology or temporal sequence of Elliptic data. In
this paper, we aim:
- To present a model in a novel way that considers the graph structure and temporal sequence
of Elliptic data to predict illicit transactions that belong to illegal services in the Bitcoin
blockchain network.
- To perform active learning on Bitcoin blockchain data that mimics a real-life situation,
since Bitcoin blockchain is a massively growing technology and its data is time-consuming
to label.
The presented classification model comprises long short-term memory (LSTM) and GCN
models, wherein the overall model attains an accuracy of 97.7% and an f1-score of 80%, which
outperforms previous studies with the same experimental settings. On the other hand, the
presented active learning framework requires an acquisition function that relies on the model’s
uncertainty to query the most informative data. In this paper, the model’s uncertainty estimates are obtained using two comparable methods based on Bayesian approximations, namely
Monte-Carlo dropout (MC-dropout) [10] and Monte-Carlo adversarial attack
(MC-AA) [11]. We examine these two uncertainty methods due to their simplicity and efficiency; to our knowledge, this is the first time the MC-AA method has been applied in the context of active learning.
Hence, we use a variety of acquisition functions to test the performance of the active learning
framework using Elliptic data. For each acquisition function, we evaluate the active learning performance that relies on each of MC-AA and MC-dropout uncertainty estimates. We
compare the performance of the presented active learning framework against the random
sampling acquisition as a baseline model.
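To make the combination concrete before the formal description (the full architecture is specified in Sect. 5), the sketch below shows one minimal way to couple a GCN with an LSTM over time-stepped transaction graphs. It is an illustrative sketch only, not the authors' exact temporal-GCN: the class name `TemporalGCN`, the single `GCNConv` layer, the mean-pooled snapshot context and all layer widths are assumptions made for this example, and PyTorch with PyTorch Geometric is assumed to be available.

```python
# Minimal sketch (not the authors' exact temporal-GCN): a shared GCN layer
# encodes each time-step snapshot of the transaction graph, an LSTM carries
# a temporal context across snapshots, and each node is classified from its
# own embedding concatenated with that context. All sizes are illustrative.
import torch
import torch.nn as nn
from torch_geometric.nn import GCNConv

class TemporalGCN(nn.Module):
    def __init__(self, in_dim, hid_dim, num_classes, p_drop=0.5):
        super().__init__()
        self.gcn = GCNConv(in_dim, hid_dim)
        self.lstm = nn.LSTM(hid_dim, hid_dim, batch_first=True)
        self.dropout = nn.Dropout(p_drop)  # kept active at test time for MC-dropout
        self.clf = nn.Linear(2 * hid_dim, num_classes)

    def forward(self, snapshots):
        # snapshots: list of (x, edge_index) pairs, one per time step;
        # x: [n_t, in_dim] node features, edge_index: [2, m_t] connectivity.
        node_embs, contexts = [], []
        for x, edge_index in snapshots:
            h = torch.relu(self.gcn(x, edge_index))   # [n_t, hid_dim]
            node_embs.append(h)
            contexts.append(h.mean(dim=0))            # snapshot-level summary
        ctx_seq = torch.stack(contexts).unsqueeze(0)  # [1, T, hid_dim]
        ctx_out, _ = self.lstm(ctx_seq)               # temporal context per step
        logits = []
        for t, h in enumerate(node_embs):
            c = ctx_out[0, t].expand(h.size(0), -1)   # broadcast to nodes at step t
            logits.append(self.clf(self.dropout(torch.cat([h, c], dim=1))))
        return logits  # one [n_t, num_classes] tensor per time step
```

The dropout layer is deliberately placed before the classifier so that the same network can later be reused for MC-dropout uncertainty estimation.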
This paper is structured as follows: Section 2 describes the related work. Section 3 demonstrates the uncertainty estimation methods used by the active learning framework. Section 4
provides various acquisition functions to be examined in the experiments. Section 5 provides
the methods used to perform the classification task. Experiments are detailed in Sect. 6, followed by the results and discussions in Sect. 7. An ablation study of the proposed model is
given in Sect. 8. Section 9 states the conclusion to wrap up the whole methodology.
### **2 Overview of Related Work**
With the appearance of illicit services in the public blockchain systems, intelligent methods
have undoubtedly become a necessary need for AML regulations with the rapidly increasing
amount of blockchain data. Many studies have adopted the machine learning approach in
detecting illicit activities in the public blockchain. Harlev et al. [2] have tested the performance of classical supervised learning methods to predict the type of the unidentified entity
in Bitcoin. Farrugia et al. [12] have applied XGBoost classifier to detect fraudulent accounts
using the Ethereum dataset. Weber et al. [3] have introduced Elliptic data—a large-scale
graph-structured dataset of a Bitcoin transaction graph with partially labelled nodes—to predict licit and illicit Bitcoin transactions, and have discussed the outperformance of the random forest model against the graph convolutional network (GCN) in classifying the licit and illicit transactions derived from the Bitcoin
blockchain. Subsequently, the classification results using an ensemble learning model in [4] have
revealed a significant success over other benchmark methods to classify illicit transactions
of Elliptic data. Also, Pareja et al. [6] have introduced EvolveGCN which is formed of GCN
with a recurrent neural network such as Gated-Recurrent-Unit (GRU) and LSTM. This study
has revealed the outperformance of EvolveGCN over the GCN model used by Weber et al.
[3] on the same dataset. Another work in [5] has considered the neighbouring information
of the Bitcoin transaction graph of Elliptic data using GCN accompanied by linear hidden
layers. Without utilising any temporal information from this dataset, the latter reference has
achieved an accuracy of 97.4% outperforming the GCN based models that were presented in
[3, 6].
Active learning, a subfield of machine learning, is a way to make the learning algorithm
choose the data to be trained on [13]. Active learning mitigates the bottleneck of the manual
labelling process, such that the learning model queries the labels of the most informative data.
Since it is so expensive to obtain labels, active learning has witnessed a resurgence with the
appearance of big data where large-scale datasets exist [14]. Lorenz et al. [9] have presented
an active learning framework in an attempt to reduce the labelling process of the large-scale
Elliptic data of Bitcoin. The presented active learning solution has shown its capability in
matching the performance of a fully supervised model with only 5% of the labels. The authors
have focused on querying strategies based on uncertainty sampling [13, 15] and expected
model change [13, 16]. For instance, the used uncertainty sampling strategy is based on
the predicted probabilities provided by the random forest in [9]. Yet, no study presents an
active learning framework that utilises the recent advances in Bayesian methods on Bitcoin
data. On the other hand, Gal et al. [17] have presented active learning frameworks on image
data where the authors have combined the recent advances in Bayesian methods into the
active learning framework. This study has performed MC-dropout to produce the model’s
uncertainty which is utilised by a given acquisition function to choose the most informative
queries for labelling. Concisely, the authors in [18] have applied the entropy [19], mutual
information [20], variation ratios [21], and mean standard deviation (Mean STD) [22, 23]
acquisition functions which are compared against the random acquisition.
In this study, we conduct experiments using a classification model that exploits the graph
structure and the temporal sequence of Elliptic data derived from the Bitcoin blockchain.
Motivated by the studies in [9, 17], we perform the active learning frameworks using a pool-based scenario [13] in which the classifier iteratively samples the most informative
instances for labelling from an initially unlabelled pool. For each iteration, the classifier
samples a batch of unlabelled data points according to their uncertainty estimates from
Bayesian models using the sampling acquisition function.
### **3 Model Uncertainty: MC-Dropout Versus MC-AA**
The two major types of uncertainty in a machine learning model are epistemic and aleatoric
uncertainties [24]. Epistemic, also known as model uncertainty [10], is induced from the
uncertainty in the parameters of the trained model. This uncertainty is reducible by training the model on enough data. Aleatoric uncertainty is the uncertainty tied with the noisy
instances that lie on the decision boundary or in the overlapping region for class distributions,
and therefore it is irreducible. MC-dropout has gained popularity as a prominent method in
producing the two types of uncertainties [10]. Although MC-dropout is easy to perform and
efficient, this method has failed, to some extent, to capture data points lying in the overlapping region of different classes where noisy instances reside [11]. The latter reference has
provided an uncertainty method that is capable of reaching noisy instances with high uncertainty
estimates. This method, called MC-AA, mainly targets the instances that fall in the
neighbourhood of a decision boundary. Although MC-dropout and MC-AA are both simple
and promising methods, MC-AA has provided more reliable uncertainty estimates in [11]. In
the light of these studies, we utilise these uncertainty methods as a part of the active learning
process.
#### **3.1 MC-Dropout: Monte-Carlo Dropout**
Initially, dropout has been provided as a simple regularisation technique that reduces the
overfitting of the model [25]. The work in [10] has introduced MC-dropout as a probabilistic approach
based on Bayesian approximation to produce uncertainty estimates. MC-dropout uses dropout
after every weight layer in a neural network. Uncertainty estimates are produced by activating
dropout during the testing phase by multiple stochastic forward passes wherein uncertainty
measurement (e.g., mutual information) is computed.
Let $\hat{y}$ be the output of an input $x$ mapped by a neural network, trained on a set $D_{train}$, with $L$ layers and learnable weights $w = \{W_i\}_{i=1}^{L}$. Consider $y$ as an observed output associated with $x$. Then, we can express the predictive distribution as:
$$p(y|x, D_{train}) = \int p(y|x, w)\, p(w|D_{train})\, dw, \quad (1)$$

where $p(y|x, w)$ is the model's likelihood and $p(w|D_{train})$ is the posterior over the weights. Since the posterior distribution is analytically intractable [10], the posterior is replaced by an approximating variational distribution $q(w)$. $q(w)$ is obtained by minimising the Kullback–Leibler (KL) divergence so that it approximately matches $p(w|D_{train})$:

$$KL(q(w)\, \| \, p(w|D_{train}))$$

Hence, variational inference leads to an approximated predictive distribution:

$$q(y|x) = \int p(y|x, w)\, q(w)\, dw \quad (2)$$
The work in [10] has chosen $q(w)$ to be the distribution over the matrices whose columns are randomly set to zero for posterior approximation. Then, $q(w)$ can be defined as:

$$W_i = M_i \cdot \mathrm{diag}\left(\left[z_{i,j}\right]_{j=1}^{K_i}\right) \quad (3)$$

where $z_{i,j} \sim \mathrm{Bernoulli}(p_i)$, as realisations from a Bernoulli distribution, for $i = 1, \dots, L$ and $j = 1, \dots, K_{i-1}$, with $K_i \times K_{i-1}$ the dimension of matrix $W_i$.
Thus, drawing $T$ samples from the Bernoulli distribution produces $\left\{W_1^t, \dots, W_L^t\right\}_{t=1}^{T}$. These are obtained from $T$ stochastic forward passes with active dropout during the testing phase of the input data. Then, the predictive mean can be expressed as:

$$E_{q(y|x)}(y) \approx \frac{1}{T} \sum_{t=1}^{T} \hat{y}\left(x, W_1^t, \dots, W_L^t\right) = p_{MC}(y|x) \quad (4)$$
To obtain uncertainty, mutual information (MI) identifies the information gain of the
outputs derived from Monte-Carlo samples over the predictions. Data points that reside near
the decision boundary are more likely to acquire high mutual information referring to [8].
We can express mutual information as follows, referring to [10]:

$$\hat{I}(y|x, D_{train}) = \hat{H}(y|x, D_{train}) + \frac{1}{T} \sum_{c} \sum_{t=1}^{T} p(y = c|x, w) \log p(y = c|x, w) \quad (5)$$

where $c$ is the class label, and

$$\hat{H}(y|x, D_{train}) = -\sum_{c} p_{MC}(y = c|x, w) \log p_{MC}(y = c|x, w) \quad (6)$$
MC-dropout method can be viewed as an ensemble of multiple decision functions derived
from the multiple stochastic forward passes. Precisely, it is an ensemble of multiple perturbed
decision boundaries. Although this method captures data points between different class distributions,
a noisy point that falls in the wrong class cannot be captured by MC-dropout, since the latter
method influences only the points with weak confidence. This is tackled by the MC-AA method
described in the next part.
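To make Eqs. 4–6 concrete, the following is a minimal PyTorch sketch of MC-dropout mutual information for a classifier that contains dropout layers. The function name, the classifier interface and the default $T$ are illustrative assumptions rather than the authors' implementation; only standard `torch` calls are used.

```python
import torch
import torch.nn.functional as F

def mc_dropout_mutual_information(model, x, T=50, eps=1e-10):
    """Mutual information via T stochastic forward passes with dropout
    kept active at test time (sketch of Eqs. 4-6)."""
    model.train()  # train mode keeps dropout layers stochastic
    with torch.no_grad():
        # probs: (T, batch, classes), one softmax output per pass
        probs = torch.stack([F.softmax(model(x), dim=-1) for _ in range(T)])
    p_mc = probs.mean(dim=0)                                   # Eq. 4: predictive mean
    H = -(p_mc * (p_mc + eps).log()).sum(dim=-1)               # Eq. 6: predictive entropy
    E_H = -(probs * (probs + eps).log()).sum(dim=-1).mean(0)   # mean entropy of the passes
    return H - E_H  # Eq. 5: equals H + (1/T) * sum_c sum_t p log p
```

Points near the decision boundary receive high scores because the stochastic passes disagree, which is exactly the behaviour the acquisition functions of Sect. 4 exploit.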
#### **3.2 MC-AA: Monte-Carlo Based Adversarial Attack**
Initially, adversarial attacks are introduced as crafted perturbations of the input in order to
produce incorrect predictions [26], allowing attackers to compromise the integrity of the model.
These attacks are categorised into white-box and black-box attacks. The former is
when the attacker has access to the model's parameters, whereas the latter treats
the model as a black box. White-box attacks are designed by adding perturbations
to the inputs in the direction of the decision boundary formed by the model. These guided
perturbations are the gradients of the loss function with respect to the input such that the input
is assumed to belong to different class distribution. One of the methods used to compute the
perturbations is known as FGSM (Fast Gradient Sign Method). Primarily, FGSM is proposed
in [27] for attacking deep neural networks. This method is based on maximising a loss function
$J(x, y)$ of a neural network model with respect to a given input $x$ and its label $y$. The aim
of this method is to make the classifier perform as poorly as possible on the perturbed
inputs. The perturbation of an input by FGSM can be formulated as follows:
$$x_{\varepsilon} = x + \delta x_{\varepsilon}, \quad (7)$$

with

$$\delta x_{\varepsilon} = \varepsilon \cdot \mathrm{sign}(\nabla_x J(x, y)),$$

where $x_{\varepsilon}$ is the adversarial example, $\varepsilon$ is a small number and $\nabla_x$ is the gradient with respect
to the input *x* . This method perturbs the given input in the opposite direction of the initial
class towards the decision boundary. MC-AA is based on the idea of FGSM by computing
multiple perturbed versions of an input in a small range [11]. This leads to multiple outputs
that allow obtaining uncertainty. MC-AA can be viewed as ensemble learning of multiple
decisions derived from the perturbed versions of an input in a back-and-forth manner in the
direction of the decision boundary. Thus, any point falling on the decision boundary will
reflect a high uncertainty. In MC-AA, the noisy labels are triggered to move in a small range,
so that they are more likely to escape from their wrong class. Thus, the noisy labels will be
assigned with some uncertainty. Moreover, this will further increase the number of correctly
classified data points to be uncertain, which does not affect the model performance. More
formally, consider a discrete interval *I* that is evenly spaced by *β* and symmetric at zero, then
it can be expressed as follows:
$$I = \left\{ \varepsilon_i \,\middle|\, (\varepsilon_{i+1} - \varepsilon_i = \beta) \wedge \left(\beta = \frac{2\varepsilon_T}{T}\right) \right\}_{t=1}^{T} \quad (8)$$
where $\varepsilon_T = \varepsilon_{max}$ is the maximum value in the interval $I$, a tunable hyper-parameter
used to perturb an input by FGSM. $T$ is a pre-chosen interval size, and it is also the number of
ensembles to be performed via MC-AA. Consider a neural network with weights $w$ and function
approximation $f: x \to \hat{y}$. Let $y$ be the observation associated with $x$. Since the perturbations
by MC-AA over $x$ are applied on a very small range, we can use a first-order Taylor expansion
to make approximations as follows:
$$f(x_{\varepsilon}) = f(x + \delta x_{\varepsilon}) \approx f(x) + \frac{f'(x)\, \delta x_{\varepsilon}}{1!}$$
To compute the predictive mean $p_{MC\text{-}AA}(y|x)$, we find the average of the predictions of a given input as follows:

$$p_{MC\text{-}AA}(y|x) \approx \frac{1}{T} \sum_{t=1}^{T} f(x_{\varepsilon_t}) \approx \frac{1}{T} \sum_{t=1}^{T} \left[ f(x) + \frac{f'(x)\, \delta x_{\varepsilon_t}}{1!} \right] \quad (9)$$

This equation boils down to:

$$p_{MC\text{-}AA}(y|x) \approx f(x) = \hat{y} \quad (10)$$
Hence, we obtain an unbiased predictive mean by MC-AA, whereas several perturbations
can be used to compute mutual information as the predictive uncertainty. Similarly to Eq. 5, we estimate the uncertainty of $x$ using mutual information as follows:

$$\hat{I}(y|x, D_{train}) = \hat{H}(y|x, D_{train}) + \frac{1}{T} \sum_{c} \sum_{t=1}^{T} p(y = c|x_{\varepsilon}) \log p(y = c|x_{\varepsilon}), \quad (11)$$

where $c$ is the class label, and

$$\hat{H}(y|x, D_{train}) = -\sum_{c} p_{MC\text{-}AA}(y = c|x) \log p_{MC\text{-}AA}(y = c|x). \quad (12)$$
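As a complement, here is a hedged PyTorch sketch of MC-AA (Eqs. 7–12): the input is perturbed back and forth along the FGSM direction over $T$ evenly spaced values in $[-\varepsilon_{max}, \varepsilon_{max}]$ (Eq. 8). Using the model's own predictions as the FGSM labels is our assumption; the rest follows the equations above.

```python
import torch
import torch.nn.functional as F

def mc_aa_mutual_information(model, x, eps_max=0.1, T=10, tol=1e-10):
    """MC-AA sketch: T FGSM perturbations symmetric around zero,
    followed by the mutual-information computation of Eqs. 11-12."""
    model.eval()
    x = x.clone().requires_grad_(True)
    logits = model(x)
    y_hat = logits.argmax(dim=-1)            # pseudo-labels (an assumption)
    loss = F.cross_entropy(logits, y_hat)    # J(x, y) of Eq. 7
    grad_sign = torch.autograd.grad(loss, x)[0].sign()

    probs = []
    for eps in torch.linspace(-eps_max, eps_max, T):  # interval I of Eq. 8
        with torch.no_grad():
            probs.append(F.softmax(model(x + eps * grad_sign), dim=-1))
    probs = torch.stack(probs)               # (T, batch, classes)
    p_mean = probs.mean(dim=0)               # Eq. 10: unbiased predictive mean
    H = -(p_mean * (p_mean + tol).log()).sum(dim=-1)          # Eq. 12
    E_H = -(probs * (probs + tol).log()).sum(dim=-1).mean(0)
    return H - E_H                           # Eq. 11
```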
### **4 Acquisition Functions for Active Learning**
Pool-based active learning is a prominent scenario [13, 28] that assumes a set of labelled
data available for initial training *D* *train* and a set of unlabelled pool *D* *pool* in a Bayesian
model M with model parameters *w* ∼ *p* ( *w* | *D* *train* ) and output predictions *p(y* | *w,* *D* *train* *)* for
*y* ∈ {0 *,* 1} in binary classification tasks. Then, the Bayesian model M that is already trained
on *D* *train* queries the labels—from the unlabelled set *D* *pool* —of an informative batch with
size *b* by an oracle in order to obtain an acceptable performance with less training data.
Consider an acquisition function $a(x, \mathrm{M})$ that measures the score of a batch of unlabelled data $\{x_i\}_{i=1}^{b} \in D_{pool}$. Let $\{x^*\}_{i=1}^{b}$ be the informative batch chosen by the acquisition function, which can be expressed as follows [29]:

$$\{x^*\}_{i=1}^{b} = \underset{\{x_i\}_{i=1}^{b} \subseteq D_{pool}}{\mathrm{argmax}}\ a\left(\{x_i\},\, p(w|D_{train})\right) \quad (13)$$
In what follows, we demonstrate various acquisition functions which are detailed in [24].
#### **4.1 BALD: Bayesian Active Learning by Disagreement**
Bayesian Active Learning by Disagreement (BALD) [20] is an acquisition method that utilises
the uncertainty estimates via mutual information between the model predictions and model
parameters. Hence, the learning algorithm queries the data points with the highest mutual
information measurements. The highest mutual information measurements are produced
when the predictions by Monte-Carlo samples are assigned with the highest probabilities
where the samples are associated with different classes.
In this paper, we desire to acquire a batch of size *b* at each sampling iteration.
Using BALD, this can be expressed as:

$$a\left(\{x_i\}_{i=1}^{b},\, p(w|D_{train})\right) \approx \sum_{i=1}^{b} \hat{I}(y_i, w\,|\,x_i, D_{train}),$$

where $\hat{I}$ is derived from Eq. 5 for MC-dropout and Eq. 11 for MC-AA. Furthermore, the optimal batch is the one containing the $b$ highest-scoring data points, which reduces the bottleneck of acquiring a single data point at each acquisition step.
#### **4.2 Entropy**
This acquisition method computes the entropy using the uncertainty estimates from Eqs. 6
and 12. During the active learning process, we choose the batch with the maximum predictive entropy [19], which can be written as:

$$a_{Entropy}\left(\{x_i\}_{i=1}^{b},\, p(w|D_{train})\right) \approx \sum_{i=1}^{b} \hat{H}(y_i; w\,|\,x_i, D_{train})$$
The maximum entropy explains the lack of confidence within the obtained predictions which
are typically near 0.5.
#### **4.3 Variation Ratios**
Similarly, we choose the batch with the maximum variation ratios [21] where the variation
ratio is expressed as:
$$\mathrm{variation\text{-}ratio}[x] = 1 - \max_{y}\, p(y|x, D_{train})$$
The maximum variation ratios correspond to the lack of confidence in the samples’ predictions.
#### **4.4 Mean Standard Deviation**
Likewise, we sample a batch that maximises the mean standard deviation (Mean STD) [22, 23]. The predictive standard deviation can be computed as:

$$\sigma_c = \sqrt{E\left[p(y = c|x, w)^2\right] - E\left[p(y = c|x, w)\right]^2},$$

where $E$ corresponds to the expected mean. The $\sigma_c$ measurement computes the standard deviation between the predictions obtained by Monte-Carlo samples on every data point. Consequently, the mean standard deviation is averaged over all $C$ classes, which can be derived from:

$$\sigma = \frac{1}{C} \sum_{c} \sigma_c$$
#### **4.5 Random Sampling: Baseline Model**
This acquisition function uniformly draws data points from the unlabelled pool at random.
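Since all five strategies reduce the same Monte-Carlo probability tensor in different ways, they can be sketched together. The following hedged Python helper (our naming, not the paper's code) scores a pool given probabilities of shape (T, pool size, classes) from MC-dropout or MC-AA, and returns the $b$ highest-scoring indices of Eq. 13.

```python
import torch

def select_batch(probs, b, strategy="bald", eps=1e-10):
    """Return indices of the b most informative pool points under one
    of the acquisition functions of Sects. 4.1-4.5 (illustrative)."""
    p_mean = probs.mean(dim=0)                                  # predictive mean
    entropy = -(p_mean * (p_mean + eps).log()).sum(dim=-1)
    if strategy == "bald":                 # Sect. 4.1: mutual information
        scores = entropy + (probs * (probs + eps).log()).sum(dim=-1).mean(0)
    elif strategy == "entropy":            # Sect. 4.2: predictive entropy
        scores = entropy
    elif strategy == "variation_ratio":    # Sect. 4.3: 1 - max_y p(y|x)
        scores = 1.0 - p_mean.max(dim=-1).values
    elif strategy == "mean_std":           # Sect. 4.4: mean standard deviation
        std = (probs.pow(2).mean(0) - p_mean.pow(2)).clamp(min=0).sqrt()
        scores = std.mean(dim=-1)
    else:                                  # Sect. 4.5: random baseline
        scores = torch.rand(p_mean.size(0))
    return scores.topk(b).indices          # Eq. 13: b highest-scoring points
```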
**Fig. 1** Representative graph structure of Elliptic data. This dataset incorporates 49 directed acyclic graphs. Each
graph is associated with a time-step $t$. The colours of the nodes denote the labels provided by this dataset
### **5 Methods**
In this section, we provide a detailed description of Elliptic data. Then we demonstrate
temporal-GCN which is the proposed classification model to classify the illicit transactions
in this dataset.
#### **5.1 Dataset Description**
We use the Bitcoin dataset launched by the Elliptic company, which is renowned for detecting illicit
services in cryptocurrencies [3]. This dataset is formed of 49 directed acyclic graphs, each
extracted over a specific period of time represented as a time-step $t$, referring to Fig. 1.
Each graph incorporates nodes as transactions and edges as the
flow of payments. In total, this dataset is formed of 203,769 partially labelled transactions,
where 21% are labelled as licit (e.g., wallet providers, miners) and 2% are labelled as illicit
(e.g. scams, malware, Ponzi schemes, …). Each transaction node carries 166 features, of which
the first 94 are local features and the remaining are global features. Local features are derived from the transaction information on each node (e.g. time-step, number of
output/input addresses, number of unique output/input addresses, …). Meanwhile, global
features are extracted from the graph network structure between each node and its neighbourhood by using the information of the one-hop backward/forward step for each transaction.
In this study, we use the local features, which count 93 features (i.e. excluding the time-step),
without any graph-related features.
#### **5.2 Temporal Modelling**
We refer to the presented model by temporal-GCN. This model is a combination of LSTM
and GCN models which are detailed in what follows.
#### **5.2.1 Long Short-Term Memory (LSTM)**
Initially, LSTM is proposed by [30] as a special category of recurrent neural networks (RNNs)
in order to prevent the vanishing gradient problem. LSTM has proven its efficacy in many
general-purpose sequence modelling applications [31–33].
Given a graph network of Bitcoin transactions $G = (V, E)$ with adjacency matrix $A \in \mathbb{R}^{n \times n}$ and degree matrix $D \in \mathbb{R}^{n \times n}$, where $V$ and $E$ are the sets of nodes (Bitcoin transactions) and edges (payment flows), respectively, with $|V| = n$ being the total number of nodes. Consider $x_t \in \mathbb{R}^{d_x}$ as the node feature vector with $d_x$-dimensional features, the layer output $h_t \in [-1, 1]^{d_h}$, and the cell state $c_t \in \mathbb{R}^{d_h}$ with $d_h$-dimensional embedding features.
Then, the fully connected LSTM layer, referring to [34], can be expressed as:
$$\begin{aligned}
i_t &= \sigma(W_{xi} x_t + W_{hi} h_{t-1} + w_{ci} \odot c_{t-1} + b_i),\\
f_t &= \sigma(W_{xf} x_t + W_{hf} h_{t-1} + w_{cf} \odot c_{t-1} + b_f),\\
c_t &= f_t \odot c_{t-1} + i_t \odot \tanh(W_{xc} x_t + W_{hc} h_{t-1} + b_c),\\
o_t &= \sigma(W_{xo} x_t + W_{ho} h_{t-1} + w_{co} \odot c_t + b_o),\\
h_t &= o_t \odot \tanh(c_t), \quad (14)
\end{aligned}$$
where ⊙ is the Hadamard product, *σ* ( *.* ) is the sigmoid function and tanh( *.* ) is the hyperbolic
tangent function. The remaining notations refer to the LSTM layer parameters as follows: $i_t, f_t, o_t \in [0, 1]^{d_h}$ are the input, forget, and output gates, respectively. The weights $W_{x\cdot} \in \mathbb{R}^{d_h \times d_x}$, $W_{h\cdot} \in \mathbb{R}^{d_h \times d_h}$, $w_{c\cdot} \in \mathbb{R}^{d_h}$ and the biases $b_i, b_f, b_c, b_o$ express the parameters of the LSTM model.
#### **5.2.2 Topology Adaptive Graph Convolutional Network: TAGCN**
In this paper, we use a graph learning algorithm called TAGCN, as introduced in [35],
which stems from the GCN model. Generally, GCNs are neural networks that are fed with
graph-structured data, wherein the node features with a learnable kernel undergo convolutional computation to induce new node embeddings. The kernel can be viewed as a filter
of the graph signal (node), wherein the work in [36] suggested the localisation of kernel
parameters using Chebyshev polynomials to approximate the graph spectra. Also, the study
in [37] has introduced an efficient algorithm for node classification using first-order localised
kernel approximations of the graph convolutions.
$$H^{(l+1)} = \sigma\left(\hat{A} H^{(l)} W^{(l)}\right),$$

where $\hat{A}$ is the normalisation of $A$ defined by:

$$\hat{A} = \tilde{D}^{-\frac{1}{2}} \tilde{A} \tilde{D}^{-\frac{1}{2}}, \quad \tilde{A} = A + I, \quad \tilde{D} = \mathrm{diag}\Big(\sum_j \tilde{A}_{ij}\Big),$$

$\tilde{A}$ is the adjacency matrix of the graph $G$ with added self-loops. $\sigma$ denotes a typical activation function such as $\mathrm{ReLU}(\cdot) = \max(0, \cdot)$. $H^{(l)}$ is the input node embedding matrix at the $l^{th}$ layer. $W^{(l)}$ is a trainable weight matrix used to update the output embeddings $H^{(l+1)}$.
On the other hand, the work in [35] has introduced TAGCN which is based on GCN
but with fixed-size learnable filters and adaptive to the topology of the graph to perform
convolutions in the vertex domain. Consequently, TAGCN can be expressed as follows:
$$H^{(l+1)} = \sum_{k=0}^{K} \left(D^{-\frac{1}{2}} A D^{-\frac{1}{2}}\right)^{k} H^{(l)} \Theta_k, \quad (15)$$

where $\Theta_k$ is a learnable weight matrix at $k$ hops from the node of interest.
#### **5.2.3 Overall Model: Temporal-GCN**
Since TAGCN in [35] requires no approximations in comparison to GCN by [37], we exploit
the performance of TAGCN on Bitcoin data. Motivated by the work in [38], which has suggested
feeding LSTM inputs with GCN node embeddings, temporal-GCN first applies an LSTM
that learns the temporal sequence, whose output is then forwarded non-linearly to two TAGCN layers
to exploit the graph structure of the Bitcoin transaction graph. The temporal-GCN model can be
expressed as:
$$\begin{aligned}
H^{(1)} &= \mathrm{ReLU}(\mathrm{LSTM}(X)),\\
H^{(2)} &= \mathrm{ReLU}\left(\mathrm{TAGCN}\left(H^{(1)}, E\right)\right),\\
H^{(3)} &= \mathrm{Softmax}\left(\mathrm{TAGCN}\left(H^{(2)}, E\right)\right), \quad (16)
\end{aligned}$$

where $X$ is the node feature matrix. $\mathrm{LSTM}(\cdot)$ and $\mathrm{TAGCN}(\cdot,\cdot)$ are layers mapping a given input to an output from Eqs. 14 and 15, respectively. The Softmax function is defined as $\mathrm{Softmax}(x_i) = \frac{1}{Z} \exp(x_i)$, where $Z = \sum_i \exp(x_i)$.
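A hedged PyTorch Geometric sketch of Eq. 16 follows. `TAGConv` is PyG's implementation of the layer in Eq. 15; the hidden width (50), dropout ratio (0.7) and $K = 3$ follow Sect. 6.1, while treating each node's local feature vector as a length-one LSTM sequence and the dropout placement are our assumptions, since the paper does not spell out these details. The model returns log-probabilities so that it pairs with the *NLLLoss* used later.

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import TAGConv

class TemporalGCN(torch.nn.Module):
    """Sketch of Eq. 16: LSTM -> ReLU -> TAGConv -> ReLU -> TAGConv."""
    def __init__(self, in_dim=93, hidden=50, num_classes=2, K=3, p_drop=0.7):
        super().__init__()
        self.lstm = torch.nn.LSTM(in_dim, hidden, batch_first=True)
        self.conv1 = TAGConv(hidden, hidden, K=K)
        self.conv2 = TAGConv(hidden, num_classes, K=K)
        self.dropout = torch.nn.Dropout(p=p_drop)

    def forward(self, x, edge_index):
        # Each node's 93 local features enter the LSTM as a length-1
        # sequence here; the paper's exact sequence handling may differ.
        h, _ = self.lstm(x.unsqueeze(1))
        h = F.relu(h.squeeze(1))                               # H^(1)
        h = F.relu(self.conv1(self.dropout(h), edge_index))    # H^(2)
        return F.log_softmax(self.conv2(h, edge_index), dim=-1)  # H^(3)
```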
### **6 Experiments**
#### **6.1 Experimental Setup**
Using the Elliptic data, we split the data following the temporal split as in [3], so that the
first 35 graphs (i.e., *t* = 1 → *t* = 35) account for the train set and the remaining are kept
for testing. Since this dataset comprises partially labelled nodes, we only use the labelled
nodes, which add up to 29,894/16,670 transactions in the train/test sets, respectively. To train
temporal-GCN, we use Pytorch Geometric (PyG) package [39] which is built on the top of
Pytorch (version 1.11.0) enabled-CUDA (version 11.3) in Python programming language.
At each time-step *t*, we feed the relevant graph network with its node feature matrix (i.e.,
local features excluding timestep) to the temporal-GCN model that is summarised in Eq. 16.
LSTM layer uses only the nodes features without any graph-structural information to provide
the output matrix *H* *[(]* [1] *[)]* *.* This matrix is then forwarded to 2-TAGCN layers (in *H* *[(]* [2] *[)]* and *H* *[(]* [3] *[)]* )
that consider the graph-structured data of the top-K influential nodes in the graph, where *K*
is kept by default equal to 3. Hence, a *Softmax* function provides the final class predictions
as licit/illicit transactions. We choose *NLLLoss* function and Adam optimiser in order to
compute the loss and update the model’s parameters. Using the same hyper-parameters from
[5], the widths of the hidden layers are set to 50, the number of epochs is set to 50 and the
learning rate is fixed at 0.001. Furthermore, we empirically opt for a dropout ratio of 0.7 to
avoid overfitting. The classification results of the temporal-GCN model are provided in Table 1.
**Table 1** Classification results of Elliptic data using local features
| Model | % Accuracy | % Precision | % Recall | % F1 score |
|---|---|---|---|---|
| Temporal-GCN | 97.7 | 92.7 | 71.3 | 80.6 |
| GCN + MLP [5] | 97.4 | 89.9 | 67.8 | 77.3 |
| Evolve-GCN [3] | 96.8 | 85.0 | 62.4 | 72.0 |
| Skip-GCN [3] | 96.6 | 81.2 | 62.3 | 70.5 |
| Random forest [3] | 96.6 | 80.3 | 61.1 | 69.4 |
| GCN [3] | 96.1 | 81.2 | 51.2 | 62.8 |
| MLP [3] | 95.8 | 63.7 | 66.2 | 64.9 |
| Logistic regression [3] | 92.0 | 34.8 | 66.8 | 45.7 |
This table shows a comparison between the presented model "Temporal-GCN" and previous studies using the same
features and train/test split
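The training procedure just described can be condensed into a short loop; this is an illustrative sketch, and the `labelled_mask` attribute on each PyG `Data` object is an assumed name for the boolean mask selecting the labelled nodes.

```python
import torch

def train_temporal_gcn(model, graphs, epochs=50, lr=0.001):
    """Train on the first 35 time-step graphs (temporal split of Sect. 6.1)
    with NLLLoss and Adam; `graphs` holds 49 PyG Data objects in time order."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = torch.nn.NLLLoss()
    for _ in range(epochs):
        model.train()
        for data in graphs[:35]:              # t = 1 .. 35 form the train set
            optimizer.zero_grad()
            out = model(data.x, data.edge_index)
            loss = criterion(out[data.labelled_mask], data.y[data.labelled_mask])
            loss.backward()
            optimizer.step()
    return model
```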
#### **6.2 Active Learning**
Active learning has a significant impact in alleviating the labelling bottleneck, especially with
this type of data. The main goal of active learning is to use less training data while achieving
acceptable performance. Here, we initially treat the train set as a pool of unlabelled data
$D_{pool}$ and we consider $D_{train}$ as an empty set to be appended to after each querying step. First,
the process starts by randomly querying the first batch for manual labelling, which is
arbitrarily set to 2000 instances. Afterwards, we move the selected queries from $D_{pool}$
to $D_{train}$ to train the temporal-GCN model, which is evaluated on the test set at each
iteration. Subsequently, the same process is repeated using the uncertainty sampling strategy;
in our experiments, we continue until all instances in $D_{pool}$ have been queried. The
uncertainty sampling is performed by using one of the acquisition functions demonstrated
earlier. These acquisition functions require as input the uncertainty estimates derived by
the uncertainty estimation methods. To imitate manual labelling, we append the labels to
the queried instances. This experiment is performed using MC-dropout and MC-AA. We
compare the performance of the active learning frameworks that use various acquisition
functions on the two distinct uncertainty estimation methods. Regarding the hyper-parameters
for producing uncertainty estimates, we arbitrarily set *T* = 50 for multiple stochastic forward
passes on the unlabelled pool for MC-dropout. With MC-AA, we arbitrarily choose *ε* *T* = 0 *.* 1
and *T* = 10.
In addition, we perform random sampling as a baseline, which uniformly queries data points
at random from the pool. The process of performing active learning with the temporal-GCN
model is schematised in Fig. 2. The required time to perform the active learning process in
an end-to-end fashion using parallel processing, referring to Fig. 2, is provided in Table 2
using various acquisition functions under the given uncertainty methods.
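The loop below is a hedged condensation of this procedure. `train_and_score` stands in for retraining the temporal-GCN on the current labelled indices and returning one uncertainty score per remaining pool index (for example via a `select_batch`-style scoring); both the callback and its contract are our assumptions, not the paper's code.

```python
import torch

def active_learning_loop(train_and_score, n_pool, batch_size=2000):
    """Pool-based active learning as in Sect. 6.2: a random first batch,
    then repeated uncertainty-based querying until the pool is empty."""
    perm = torch.randperm(n_pool)
    labelled, unlabelled = perm[:batch_size], perm[batch_size:]
    while unlabelled.numel() > 0:
        scores = train_and_score(labelled, unlabelled)  # one score per pool point
        k = min(batch_size, unlabelled.numel())
        top = scores.topk(k).indices
        labelled = torch.cat([labelled, unlabelled[top]])  # oracle labels appended
        keep = torch.ones(unlabelled.numel(), dtype=torch.bool)
        keep[top] = False
        unlabelled = unlabelled[keep]
    return labelled  # the order in which the pool was queried
```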
### **7 Results and Discussion**
We discuss the results of the temporal-GCN model in the light of the previous studies using
the same dataset. Subsequently, we provide and discuss the results provided by various
active learning frameworks. Then we apply a non-parametric statistical method to discuss
**Fig. 2** Schematic representation of the active learning framework using the proposed temporal-GCN model.
This framework is a pool-based scenario where an annotator queries the labels of data points from a pool of unlabelled
instances, $D_{pool}$, using an acquisition function. For each iteration, the queried batch is appended to the train
set $D_{train}$
**Table 2** Time required to perform the active learning process in an end-to-end fashion using the proposed temporal-GCN model

| Acquisition function | Runtime (mins) using MC-AA | Runtime (mins) using MC-dropout |
|---|---|---|
| BALD | 28.07 | 27.1 |
| Entropy | 28.9 | 27.3 |
| Variation ratio | 28.9 | 28.3 |
| Mean STD | 28.68 | 27.09 |

The time is provided for each experiment that uses the corresponding acquisition function, which relies on a given uncertainty estimation method
the significant difference between MC-AA and MC-dropout in performing active learning in
comparison to the random sampling strategy.
#### **7.1 Performance of Temporal-GCN**
Temporal-GCN has outperformed all previous studies on this dataset that use local features
under the same train/test split settings. The presented model has leveraged the temporal sequence
and the graph structure of the Bitcoin transaction graph, wherein the classification model can
detect illicit transactions with accuracy and f1-score equal to 97.77% and 80.60%, respectively. In previous studies, Evolve-GCN has attained an accuracy of 96.8%. The latter model
has exploited the dynamicity of the graph by performing LSTM on the weights of the GCN
layer, which outperformed GCN and skip-GCN without any temporal information. Whereas
**Fig. 3** Illicit transactions distribution in the test set over the time-steps. The blue curve represents the actual illicit
labels, whereas the red curve represents the temporal-GCN predictions that are correctly classified as illicit
in temporal-GCN, LSTM has exploited the temporal sequence of Elliptic data itself before
using any graph information in which the new transformed features are mapped non-linearly
into the graph-based approach to perform graph convolutions in the vertex domain. Thus,
the obtained input features of the TAGCN model are enriched with the relevant temporal information. Similarly to [3], we also observe that the presented model performs poorly after the
black market shutdown at time-step 43, as shown in Fig. 3. Regarding the time complexity,
the complexity of the LSTM is about $O(4nd_h(d_x + 3 + d_h))$, whereas the two TAGCN layers have a
linear complexity of $O(n)$. Consequently, the time complexity of the temporal-GCN
model becomes $O(n(4d_h d_x + 3d_h + d_h^2 + 1))$ per epoch.
#### **7.2 Evaluation of Active Learning Frameworks**
Referring to Fig. 4, we plot the results of various active learning frameworks using various
acquisition functions (BALD, Entropy, Mean STD, Variation Ratio), which in turn utilise the MC-dropout and MC-AA uncertainty estimation methods. Moreover, we plot the performance
of the baseline model using a random sampling strategy. In the first subplot, BALD has
revealed a significant success under both MC-AA and MC-dropout uncertainty estimates, with
active learning performing clearly better than the random sampling model. With the remaining
acquisition functions, MC-dropout has remarkably achieved a significant outperformance
over MC-AA and the random sampling model.
MC-AA, when utilised with the entropy and variation ratio acquisition functions, has not performed better than random sampling. Furthermore, the active learning framework using the
BALD acquisition function in Fig. 4 is capable of matching the performance of a fully supervised model after using 20% of the queried data. This amount of queried data belongs to the
second iteration. In our experiments, MC-AA has been revealed to be a viable method as
an uncertainty sampling strategy in an active learning approach with BALD and Mean STD
acquisition functions. This is reasonable since the latter two methods estimate the uncertainty
based on the severe fluctuations of the model’s predictions on a given input wherein MC-AA
suits this type of uncertainty.
**Fig. 4** Results of active learning using BALD via MC-dropout in comparison to BALD via MC-AA
Referring to Table 2, the BALD acquisition has recorded the shortest time among the acquisition functions using MC-AA, where this framework has been processed in 28.07 min
using parallel processing, whereas the longest time with MC-AA is recorded by the entropy
and variation ratio acquisitions. For MC-dropout, the shortest time is recorded by the Mean STD acquisition
at 27.09 min, whereas the framework using variation ratio has revealed the longest
time at 28.3 min. We also note that the frameworks using MC-AA require more time
than the ones using the MC-dropout method. This is due to the adversarial examples computed by
MC-AA, which require additional time.
#### **7.3 Wilcoxon Hypothesis Test**
To show the statistical significance of the various acquisition functions that appeared in
Fig. 4, we perform the non-parametric Wilcoxon signed-rank test [40]. It is used to test the
null hypothesis between two paired samples based on the difference between their scores.
Given two paired samples $P = \{p_1, \dots, p_m\}$ and $Q = \{q_1, \dots, q_m\}$, the test is based on the absolute values of the differences between the paired samples. This can be expressed as:

$$M = |P - Q| = \{|p_1 - q_1|, \dots, |p_m - q_m|\},$$
where $m$ is the number of samples in each set. In summary, this test accounts for the statistical difference between the sets $P$ and $Q$; the Wilcoxon test compares the test statistic $T$ to its reference distribution. To perform this test, we use the Wilcoxon function from the Python scientific stack [41].
**Table 3** Wilcoxon statistical test between the accuracy derived by MC-AA/MC-dropout with respect to the random sampling model using various acquisition functions

| Acquisition function | Wilcoxon (MC-AA, random sampling) | Wilcoxon (MC-dropout, random sampling) |
|---|---|---|
| BALD | 0.0009 | 0.0217 |
| Entropy | 0.2552 | 0.309 |
| Variation ratio | 0.266 | 0.0071 |
| Mean STD | 0.5507 | 0.0112 |

The values correspond to the $p$ values of the test statistic
Referring to Fig. 4, we apply the Wilcoxon test for each subplot twice. The first one studies the difference
between MC-AA and random sampling curves. Likewise, the second one accounts for the
differences between MC-dropout and random sampling curves. For instance, we will refer
to the Wilcoxon test applied between MC-AA and random sampling pairs as follows:
p - value = Wilcoxon *(* MC − AA *,* Random Sampling *),*
where the $p$ value is a statistical measurement that quantifies how likely the observed difference
of the paired samples is under the null hypothesis. Let $H_0$ be the null hypothesis and $H_1$
be the alternative one. Then they can be written as:
*H* 0 = No difference between the two paired samples
*H* 1 = There is a difference between the two paired samples
We opt for 0.05 as the significance level $\alpha$ against which we test the null hypothesis. The
smaller the $p$ value output by the Wilcoxon function, the stronger the evidence we have against the null
hypothesis. Precisely, the test statistic is statistically significant if the $p$ value is less than the
significance level. Subsequently, we provide the statistical results ( *p* values) of the various
acquisition functions against the baseline random sampling. The results are provided in Table
3. The $p$ values from the BALD acquisition function, which are lower than the significance level $\alpha$,
are statistically significant against the null hypothesis, so there is statistical
evidence of a difference between each of MC-AA and MC-dropout and the
random sampling acquisition. Moreover, MC-dropout against random sampling has shown
statistical significance against the null hypothesis using the variation ratio and Mean STD
acquisitions, where the $p$ values are 0.0071 and 0.0112, respectively. There is no evidence
against the null hypothesis for entropy, where the $p$ values of 0.2552 and 0.309 are
greater than $\alpha$.
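For reference, the paired test of this section can be reproduced with `scipy.stats.wilcoxon`, which implements the Wilcoxon signed-rank test; the accuracy values below are hypothetical placeholders standing in for the per-iteration curves of Fig. 4.

```python
from scipy.stats import wilcoxon

# Hypothetical paired accuracy curves, one value per AL iteration.
mc_aa_acc  = [0.952, 0.961, 0.968, 0.972, 0.975]
random_acc = [0.948, 0.955, 0.963, 0.969, 0.973]

stat, p_value = wilcoxon(mc_aa_acc, random_acc)
print(f"p = {p_value:.4f}")  # reject H0 at alpha = 0.05 when p < 0.05
```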
### **8 Ablation Study**
In this section, we present the ablation studies performed on the proposed temporal-GCN
model. Referring to Table 4, we have studied the importance of using LSTM and TAGCN
layers. Model-0 corresponds to the model used in the experiments. Subsequently,
we have replaced each of the given layers with a linear layer
(Model-1, Model-2, Model-3). We have also studied the case in which we remove one of
the layers from the original model (Model-6, Model-7). Furthermore, we have
shown the performance of models using only the LSTM with linear layers (Model-4) or only linear layers (Model-5). In
Model-2, the replacement of the second layer with a linear layer has attained the highest accuracy among the ablated variants, close to that of Model-0.
**Table 4** Ablation study over the layers of the proposed model
| Model number | Layer-1 | Layer-2 | Layer-3 | % Accuracy |
|---|---|---|---|---|
| Model-0 (ours) | LSTM | TAGConv | TAGConv | 97.77 |
| Model-1 | Linear | TAGConv | TAGConv | 97.66 |
| Model-2 | LSTM | Linear | TAGConv | 97.76 |
| Model-3 | LSTM | TAGConv | Linear | 97.57 |
| Model-4 | LSTM | Linear | Linear | 97.44 |
| Model-5 | Linear | Linear | Linear | 97.51 |
| Model-6 | LSTM | TAGConv | None | 97.63 |
| Model-7 | TAGConv | TAGConv | None | 96.65 |

Each model is specified by its architecture, obtained by changing layers into linear layers or removing one of the layers. The term 'None' in a cell corresponds to the model having that layer removed
**Table 5** Ablation study by changing the number of K-hops in the TAGCN layers of the temporal-GCN model as given in Eq. 16

| TAGCN K-hops | % Accuracy |
|---|---|
| K = 3 | 97.77 |
| K = 2 | 97.71 |
| K = 1 | 97.75 |
The removal of the LSTM has in all cases produced a drop
in the model's performance, especially in Model-7, which reveals the lowest accuracy. On
the other hand, using the LSTM without the graph learning algorithms in Model-4 has recorded
the second-lowest accuracy in this study. We have also tweaked the K hyper-parameter that
appears in TAGCN, referring to Eq. 15. The original model uses, by default, K = 3, which
means that neighbourhood information is aggregated up to 3 hops. Then we have checked the
performance for K ∈ {1, 2} as provided in Table 5. The highest accuracy has been recorded for
$K = 3$ and the second-highest for $K = 1$. Surprisingly, the drop in accuracy is not consistent
across the different K values. This might be due to the informative features derived from
neighbouring nodes up to 1-hop and 3-hops.
### **9 Conclusion**
For anti-money laundering in Bitcoin, we have presented temporal-GCN, as a combination of
LSTM and GCN models, to detect illicit transactions in the Bitcoin transaction graph known as
Elliptic data. Also, we have provided active learning using two promising methods to compute
uncertainties called MC-dropout and MC-AA. For the active learning frameworks, we have
studied various acquisition functions to query the labels from the pool of unlabelled data
points. The main finding is that the proposed model has revealed a significant outperformance
in comparison to the previous studies, with an accuracy of 97.77% under the same experimental
settings. LSTM takes into consideration the temporal sequence of Bitcoin transaction graphs,
whereas TAGCN considers the graph-structured data of the top-K influential nodes in the
graph. Regarding active learning, we are able to achieve an acceptable performance by only
considering 20% of the labelled data with the BALD acquisition function. Moreover, a non-parametric statistical method, the Wilcoxon test, is performed to test whether there
is a difference between the types of uncertainty estimation method used in the active learning
frameworks under the same acquisition function. Furthermore, an ablation study is provided
to highlight the effectiveness of the proposed temporal-GCN.
In future work, we foresee performing different active learning frameworks which utilise
different acquisition functions. Furthermore, we seek to extend the temporal-GCN model to
other graph-structured datasets for anti-money laundering in blockchain.
**Open Access** This article is licensed under a Creative Commons Attribution 4.0 International License, which
permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give
appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence,
and indicate if changes were made. The images or other third party material in this article are included in the
article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is
not included in the article’s Creative Commons licence and your intended use is not permitted by statutory
regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
[To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.](http://creativecommons.org/licenses/by/4.0/)
### **References**
1. Ciphertrace (2020) spring 2020 cryptocurrency crime and anti-money laundering report. https://
ciphertrace.com/spring-2020-cryptocurrency-anti-money-laundering-report/. Accessed 2021-09-01
2. Harlev MA, Sun Yin H, Langenheldt KC, Mukkamala R, Vatrapu R (2018) Breaking bad: de-anonymising
entity types on the bitcoin blockchain using supervised machine learning. In: Proceedings of the 51st
Hawaii International Conference on System Sciences
3. Weber M, Domeniconi G, Chen J, Weidele DKI, Bellei C, Robinson T, Leiserson CE (2019) Anti-money
laundering in bitcoin: experimenting with graph convolutional networks for financial forensics. arXiv
[preprint arXiv:1908.02591](http://arxiv.org/abs/1908.02591)
4. Alarab I, Prakoonwit S, Nacer MI (2020) Comparative analysis using supervised learning methods for
anti-money laundering in bitcoin. In: Proceedings of the 2020 5th international conference on machine
learning technologies, pp 11–17
5. Alarab I, Prakoonwit S, Nacer MI (2020) Competence of graph convolutional networks for anti-money
laundering in bitcoin blockchain. In: Proceedings of the 2020 5th international conference on machine
learning technologies, pp 23–27
6. Pareja A, Domeniconi G, Chen J, Ma J, Suzumura T, Kanezashi H, Kaler T, Schardl T, Leiserson C (2020)
EvolveGCN: evolving graph convolutional networks for dynamic graphs. In: Proceedings of the AAAI
conference on artificial intelligence, vol 34, pp 5363–5370
7. Poursafaei F, Rabbany R, Zilic Z (2021) SigTran: signature vectors for detecting illicit activities in
blockchain transaction networks. In: PAKDD (1). Springer, pp 27–39
8. Alarab I, Prakoonwit S, Nacer MI (2021) Illustrative discussion of MC-dropout in general dataset: uncertainty estimation in bitcoin. Neural Process Lett 53(2):1001–1011
9. Lorenz J, Silva MI, Aparício D, Ascensão JT, Bizarro P (2020) Machine learning methods to detect
money laundering in the bitcoin blockchain in the presence of label scarcity. arXiv preprint arXiv:2005.14635
10. Gal Y, Ghahramani Z (2016) Dropout as a Bayesian approximation: representing model uncertainty in
deep learning. In: International conference on machine learning. PMLR, pp 1050–1059
11. Alarab I, Prakoonwit S (2021) Adversarial attack for uncertainty estimation: identifying critical regions
[in neural networks. arXiv:2107.07618](http://arxiv.org/abs/2107.07618)
12. Farrugia S, Ellul J, Azzopardi G (2020) Detection of illicit accounts over the ethereum blockchain. Expert
Syst Appl 150:113318
13. Settles B (2009) Active learning literature survey
14. Qiu J, Wu Q, Ding G, Xu Y, Feng S (2016) A survey of machine learning for big data processing.
EURASIP J Adv Signal Process 2016(1):1–16
15. Lewis DD, Catlett J (1994) Heterogeneous uncertainty sampling for supervised learning. In: Machine
learning proceedings 1994. Elsevier, pp 148–156
16. Settles B, Craven M, Ray S (2007) Multiple-instance active learning. In: Advances in neural information
processing systems, vol 20
17. Gal Y, Islam R, Ghahramani Z (2017) Deep Bayesian active learning with image data. In: International
conference on machine learning. PMLR, pp 1183–1192
[18. Gal Y, Hron J, Kendall A (2017) Concrete dropout. arXiv preprint arXiv:1705.07832](http://arxiv.org/abs/1705.07832)
19. Shannon CE (1948) A mathematical theory of communication. Bell Syst Tech J 27(3):379–423
20. Houlsby N, Huszár F, Ghahramani Z, Lengyel M (2011) Bayesian active learning for classification and
preference learning. arXiv preprint arXiv:1112.5745
21. Johnson EH (1966) Freeman: elementary applied statistics: for students in behavioral science (book
review). Soc Forces 44(3):455
22. Kendall A, Badrinarayanan V, Cipolla R (2015) Bayesian SegNet: model uncertainty in deep convolutional
encoder-decoder architectures for scene understanding. arXiv preprint arXiv:1511.02680
23. Kampffmeyer M, Salberg A-B, Jenssen R (2016) Semantic segmentation of small objects and modeling
of uncertainty in urban remote sensing images using deep convolutional neural networks. In: Proceedings
of the IEEE conference on computer vision and pattern recognition workshops, pp 1–9
24. Gal Y (2016) Uncertainty in deep learning. Univ Camb 1(3):4
25. Srivastava N, Hinton G, Krizhevsky A, Sutskever I, Salakhutdinov R (2014) Dropout: a simple way to
prevent neural networks from overfitting. J Mach Learn Res 15(1):1929–1958
26. Chakraborty A, Alam M, Dey V, Chattopadhyay A, Mukhopadhyay D (2018) Adversarial attacks and
[defences: a survey. arXiv preprint arXiv:1810.00069](http://arxiv.org/abs/1810.00069)
27. Goodfellow IJ, Shlens J, Szegedy C (2014) Explaining and harnessing adversarial examples. arXiv preprint
arXiv:1412.6572
28. Lewis DD, Gale WA (1994) A sequential algorithm for training text classifiers. CoRR abs/cmp-lg/9407020. arXiv:cmp-lg/9407020
29. Kirsch A, Van Amersfoort J, Gal Y (2019) Batchbald: Efficient and diverse batch acquisition for deep
Bayesian active learning. Adv Neural Inf Process Syst 32:7026–7037
30. Hochreiter S, Schmidhuber J (1997) Long short-term memory. Neural Comput 9(8):1735–1780
31. Graves A, Mohamed A, Hinton G (2013) Speech recognition with deep recurrent neural networks. In:
2013 IEEE international conference on acoustics, speech and signal processing. IEEE, pp 6645–6649
32. Srivastava N, Mansimov E, Salakhudinov R (2015) Unsupervised learning of video representations using
lstms. In: International conference on machine learning. PMLR, pp 843–852
33. Sutskever I, Vinyals O, Le QV (2014) Sequence to sequence learning with neural networks. In: Advances
in neural information processing systems, pp 3104–3112
[34. Graves A (2013) Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850](http://arxiv.org/abs/1308.0850)
35. Du J, Zhang S, Wu G, Moura JM, Kar S (2017) Topology adaptive graph convolutional networks. arXiv
[preprint arXiv:1710.10370](http://arxiv.org/abs/1710.10370)
36. Defferrard M, Bresson X, Vandergheynst P (2016) Convolutional neural networks on graphs with fast
localized spectral filtering. Adv Neural Inf Process Syst 29:3844–3852
37. Kipf TN, Welling M (2016) Semi-supervised classification with graph convolutional networks. arXiv
preprint arXiv:1609.02907
38. Seo Y, Defferrard M, Vandergheynst P, Bresson X (2018) Structured sequence modeling with graph
convolutional recurrent networks. In: International conference on neural information processing. Springer,
pp 362–373
39. Fey M, Lenssen JE (2019) Fast graph representation learning with pytorch geometric. arXiv preprint
[arXiv:1903.02428](http://arxiv.org/abs/1903.02428)
40. Wilcoxon F (1945) Individual comparisons by ranking methods. Biom Bull 1(6):80–83
41. Pedregosa F, Varoquaux G, Gramfort A, Michel V, Thirion B, Grisel O, Blondel M, Prettenhofer P, Weiss
R, Dubourg V et al (2011) Scikit-learn: machine learning in Python. J Mach Learn Res 12:2825–2830
**Publisher’s Note** Springer Nature remains neutral with regard to jurisdictional claims in published maps and
institutional affiliations.
**_International Journal of Advanced Technology and Engineering Exploration,_** **_Vol 6(61)_**
**_ISSN (Print): 2394-5443 ISSN (Online): 2394-7454_**
**Research Article**
**_http://dx.doi.org/10.19101/IJATEE.2019.650071_**
# Performance enhancement of the internet of things with the integrated blockchain technology using RSK sidechain
### Atiur Rahman[1], Md. Selim Hossain[2][*], Ziaur Rahman[3] and SK. A. Shezan[4]
Department of Information and Communication Technology (ICT) at Mawlana Bhashani Science and Technology
University (MBSTU) Santosh, Tangail, Bangladesh[1]
Lecturer, Department of Computer Science and Engineering at Khwaja Yunus Ali University, Enayetpur, Sirajganj,
Bangladesh[2]
Assistant Professor, Department of Information and Communication Technology (ICT) at Mawlana Bhashani
Science and Technology University (MBSTU) Santosh, Tangail, Bangladesh[3]
Department of Electrical and Electronic Engineering, School of Engineering, RMIT University, Melbourne,
Australia[4]
Received: 28-October-2019; Revised: 24-December-2019; Accepted: 27-December-2019
©2019 Atiur Rahman et al. This is an open access article distributed under the Creative Commons Attribution (CC BY) License,
which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
## Abstract
**_In the arrangement of sensor devices, performance has become a pressing issue with the increase of enormous network overhead. As IoT has been evolving rapidly to ease our daily life, communication latency and security can affect its efficient usage across the different socio-economic areas where IoT is necessarily involved. In line with that, blockchain has shown enormous potential to equip IoT devices with enhanced security and performance. It is popular because of its self-administering ability through distributed and consensus-driven behavior, along with transparency, immutability, and cryptographic security strength. Several efforts have been made to upgrade network performance besides ensuring safety and privacy. However, the existing approaches aligned with publicly available blockchains come with certain drawbacks and performance delays. Therefore, it has frequently been asked whether the existing cryptocurrency-driven blockchain technology is directly applicable in areas such as IoT security and privacy. In this work, a two-way peg blockchain system to overcome the performance and overhead issues is proposed. The proposed approach has been justified after successfully integrating it with the considered IoT networks, showing that the proposed rootstock (RSK) sidechain based blockchain has a promising ability to work with IoT networks. The results show a significant improvement in performance in comparison with its peers, such as Ethereum and Monax, for different numbers of sensor nodes employed._**
## Keywords
**_IoT, Blockchain, Sidechain, RSK, Consensus, Transaction._**
## 1. Introduction
Blockchain technology was first built as the framework underlying cryptocurrency; it has now shown immense potential with far-reaching implications in the arenas of smart contract-based financial markets, bitcoin-integrated artificial intelligence and, mostly, the distributed ledger-oriented security mechanism of the IoT [1]. The technology lets end-users communicate and record value and information, called transactions, on a peer-to-peer network of computers and smart devices.
*Author for correspondence
The term Internet of Everything (IoE), the insightful and smart network of humans, processes, data, and things, has been able to hold the principal research trends of recent days; it is a fairly new term, not to be confused with its peer the Internet of Things (IoT) [2]. In essence, the Internet of Everything (IoE) may further advance the power of the Internet to improve socio-economic outcomes by making life easier to live, adding to the progress of IoT [3, 4]. As smart devices have been getting more connected and accumulating huge amounts of information and transactions, privacy and security have likewise become a fundamental concern, with priority in all aspects of IoT data and information. Most of the drawbacks faced by IoE security come from the very architecture of the ecosystem, which is based on a centralized model [5]. A blockchain is a set of connected blocks that are immutable and hold transparent data over the distributed network. A sample blockchain is depicted in Figure 1.
**Figure 1 Blockchain blocks and its working procedure on the distributed ledger**
The objectives of this work are outlined below:
`o` Proposing a rootstock (RSK) based blockchain, which is an improved structure using the concept of a sidechain.
`o` Applying the proposed blockchain to a considered IoT network to monitor its applicability and initial feasibility.
`o` Evaluating the proposed system in comparison with the existing and most relevant works, and concluding that the performance of the proposed approach is indeed better.

This article is organized with the following structure: background, proposed method and materials, evaluation, result discussion and limitations, concluding with a conclusion and future scope section.

## 2. Backgrounds and related works
IoE data privacy is a research challenge because there is not enough standardization in IoT, given the heavy regulation of IoT systems and the central access models for IoT data. Many research contributions have been made to secure data access through client-server based access control processes. IoT service providers also use proprietary authorization procedures, where users of IoE act as central permitting entities. Centralized IoT data management causes scalability problems in IoT and forces users to trust central third-party mediators to manage their information, thus trading away data privacy and control. As a result, the research focus has moved to developing a decentralized model for IoE, using RSK sidechains along with peer-to-peer connection mechanisms. Having proven able to provide good security to large-scale distributed networks such as Bitcoin and other cryptocurrency networks, blockchains have grown in popularity in the IoT domain [6]. It has been described that blockchains are capable of maintaining an unchallengeable record of data transactions and accomplishing access control. The access control element originates from the creation of access policies around the public key infrastructure (PKI) of blockchain systems [7, 8]. Authors in [9] and [10] highlighted the advantages of certifying users' ownership of IoT data via a blockchain, and the potential of blockchains for facilitating an economy for sensor information between devices and users has been discussed [11, 12].

## 3. Proposed method: sidechains for IoT
We can imagine different sidechains for the different applications of IoT in smart cities. There are various sectors that can be partitioned for use as sidechains within the IoT architecture, and there are different dimensions for IoE applications in smart cities [13, 14]. For smart cities, there may be smart homes, smart parking lots, healthcare, climate and water schemes, transportation and vehicular traffic flow, ecological contamination, and monitoring arrangements. There are subcategories within each of these sectors [15].

**3.1 Challenges and threats**
In this section, the challenges for implementing IoE-based smart cities are stated; Figure 2 illustrates them. They include security and reliability, heterogeneity, large scale, and big data. In our proposed system we deal with security and reliability using blockchain technology, namely the RSK sidechain. Social and legal aspects, sensor networks, DR barriers, etc. [16] are also parts of this landscape.
**Figure 2 Challenges for implementing of IoT using blockchain technology**
**3.2 Blockchain materials**
A sidechain is a separate blockchain that is independent but pegged to the bitcoin main net via a two-way peg in the middle. It allows us to hand bitcoins back and forth. From this system we get two advantages: (1) we can use the security that we have in bitcoin, and (2) we can transfer coins to, or use them in, the sidechain under different consensus rules. For example, we can use different block sizes, different block intervals, or different mining algorithms [17]. We can also introduce new op-codes, and hence smart contracts. So the possibilities of this experiment are quite limitless, and we can also utilize the security generated in the bitcoin main network. The mechanism is a two-way peg: it consists of locked boxes on both chains. For example, suppose we want to transfer a bitcoin from the bitcoin network to a sanctioned address. Our transaction first goes to the locked box on the bitcoin side, carrying information about the sidechain address [18]. Once the transaction is received by the locked box, the sidechain releases an equivalent bitcoin, called a secondary bitcoin (sec BC), which is then sent to the address we indicated in our original transaction on the bitcoin side. If we want to reverse the process, we do exactly the opposite: we send a sec BC to the locked box on the sidechain with information about the recipient bitcoin address. Once that is received, the locked box on the bitcoin side releases a bitcoin, which is sent to the address we indicated in our original transaction on the sidechain [19].

**3.3 Working procedures**
For a two-way peg to work, the two lockboxes need to have information about each other and have to be able to release funds simultaneously when the lockbox on the other side is seized. There are a couple of ways to achieve this. The simplest way to implement a two-way peg is via central exchanges; in this case we have a central party that controls the lockboxes on both sides. The advantage of this is simplicity, but the disadvantage is that we are placing trust in a central party who can, if it wants to, maliciously empty a lockbox on a chain and steal all the funds. There is a way to minimize the trust placed in a central exchange, and that is a federation: we can implement the two-way peg via a federated peg where the lockboxes are controlled by a group of entities. To make a transaction across the two chains, the lockbox then requires n of m signatures to release funds, so at least n entities of the federation need to confirm that a given transaction is valid. The advantage is similar to before: it can be implemented with any two types of chains without specific protocol upgrades or special op-codes, but again we have centralized trust placed in a group. There is one more type of two-way peg in which the two chains can interact with each other without a middleman, and this is via simple payment verification (SPV) proofs.
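To make the federated peg concrete, the following is a minimal Python sketch of an n-of-m lockbox. It is our illustration, not the paper's implementation: the class and all names are hypothetical, and real federations verify public-key signatures rather than the shared-secret HMACs simulated here.

```python
# Illustrative federated two-way peg lockbox (hypothetical names, not
# the paper's implementation). Each member's "signature" is an HMAC
# over the release message using that member's secret key.
import hashlib
import hmac

class FederatedLockbox:
    def __init__(self, member_keys: dict, n_required: int):
        self.member_keys = member_keys   # member id -> secret key
        self.n_required = n_required     # the 'n' in the n-of-m rule
        self.locked = {}                 # transfer id -> locked amount

    def lock(self, transfer_id: str, amount: float) -> None:
        """Coins arriving on this chain are held until released."""
        self.locked[transfer_id] = amount

    def sign(self, member_id: str, transfer_id: str) -> str:
        key = self.member_keys[member_id].encode()
        msg = f"release:{transfer_id}".encode()
        return hmac.new(key, msg, hashlib.sha256).hexdigest()

    def release(self, transfer_id: str, signatures: dict) -> float:
        """Release funds only if at least n distinct members signed."""
        valid = {m for m, s in signatures.items()
                 if m in self.member_keys
                 and hmac.compare_digest(s, self.sign(m, transfer_id))}
        if len(valid) < self.n_required:
            raise PermissionError("fewer than n valid federation signatures")
        return self.locked.pop(transfer_id)

# A 2-of-3 federation releasing one locked coin:
box = FederatedLockbox({"fed1": "k1", "fed2": "k2", "fed3": "k3"}, 2)
box.lock("tx42", 1.0)
sigs = {m: box.sign(m, "tx42") for m in ("fed1", "fed3")}
assert box.release("tx42", sigs) == 1.0
```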
**Figure 3 Sample transactions to unlock the BTC with sec BC using proof of last transaction control for SPV**
**3.4 SPV proof**
SPV stands for simplified payment verification. An SPV proof basically shows that a given transaction is included in a valid block and that miners have created a lot of subsequent blocks on top of it. The SPV proof does not actually say that the transaction is consistent with the entire blockchain history; it does not check it for consistency against all previous transactions from the genesis block onwards. Instead, it provides an indirect proof, showing that the transaction is a member of a block, and that many miners trust that the block is correct and have therefore mined on top of it, forming the longest chain. SPV gives two critical factors: a) it ensures the transactions are in a block, and b) it provides attestation (proof of work) that additional blocks are being appended to the chain. By using a two-way peg system with SPV proofs we can ensure more security, reliability and efficiency than a system that uses a single chain.

**3.5 Proof of work**
Miners in proof-of-work (PoW) chains are responsible for growing the chain by repeatedly finding new blocks. The way to discover or "mine" these blocks is to perform nonstop calculation that requires a lot of processing power. The hash of the block is taken and a "nonce", a random hexadecimal value, is affixed to it. The resulting string is then hashed again. That new hashed value must not be equal to or greater than a predetermined value called the "difficulty". The miners must keep repeatedly altering the value of the nonce until they achieve the required result. If miners discover a new block, they then need to present that newly found block to the network along with the nonce. The network can then simply append the two values and hash them to check the validity of the claim. This is the substance of PoW: it is difficult to find a valid nonce, but it is easy to check whether a given nonce is correct or not.

## 4. PeIE: considered IoT system
In our polyethylenimine ethoxylated (PeIE) shaped IoE architecture, as shown in Figure 1, we assume a sidechain called secondary bitcoin (Secoin) running beside the main bitcoin, as demonstrated in Figure 2. The possibilities of this experiment are quite limitless, and we can also utilize the security of the bitcoin network beside the main network. The mechanism is again the two-way peg, consisting of locked boxes on both chains. For example, as assumed and drawn in Figure 3, suppose we want to transfer a bitcoin from the bitcoin network to a sanctioned address: our transaction first gets to the locked box on the bitcoin side, and the transaction carries information about the sidechain address. Once the transaction is received by the locked box, the sidechain releases an equivalent bitcoin called secondary bitcoin, which is then sent to the address we indicated in our original transaction on the bitcoin side. If we want to reverse the process, we do exactly the opposite: we send secoins to the locked box on the sidechain with information about the recipient bitcoin address. Once that is received, the locked box on the bitcoin side releases a bitcoin, which is sent to the address we indicated in our original transaction on the sidechain.
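As a concrete illustration of the proof-of-work loop described in Section 3.5 above, here is a minimal Python sketch (our own, with illustrative parameters; real chains hash full block headers and adjust the difficulty dynamically):

```python
# Minimal proof-of-work sketch for Section 3.5 (parameters are
# illustrative, not the paper's). Mining costs on average about
# 2**difficulty_bits hashes; verifying a claimed nonce is one hash.
import hashlib

def pow_digest(block_hash: str, nonce: int) -> int:
    data = f"{block_hash}{nonce:x}".encode()   # append hex nonce
    return int(hashlib.sha256(data).hexdigest(), 16)

def mine(block_hash: str, difficulty_bits: int) -> int:
    """Search for a nonce whose digest falls below the target."""
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while pow_digest(block_hash, nonce) >= target:
        nonce += 1
    return nonce

def verify(block_hash: str, nonce: int, difficulty_bits: int) -> bool:
    """Anyone can check the claim with a single hash."""
    return pow_digest(block_hash, nonce) < 2 ** (256 - difficulty_bits)

nonce = mine("example-block-header", difficulty_bits=16)
assert verify("example-block-header", nonce, difficulty_bits=16)
```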
**4.1 Core components**
Communications between local devices or overlay nodes are known as transactions. There are various types of transactions in the RSK sidechain-based overlay block manager (OBM). An access transaction is invoked by the smart home owner towards the overlay network. A monitor transaction is produced by the owner or service providers (SPs) to periodically monitor device information. By a genesis transaction an initial block is added to the chain, and with a remove transaction a block is withdrawn from the chain. Figure 4 shows the proposed architecture of the service provider and the RSK sidechain of OBM access.
**Figure 4 Proposed architecture of service provider and RSK sidechain of OBM access**
**Figure 5 Method for creating a two-way peg with BTC and secoins with bitcoin locked and bitcoin unlocked system**
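As a hypothetical encoding of the four overlay transaction types named in Section 4.1 above (the field names are our own illustration, not the paper's wire format):

```python
# Hypothetical encoding of the overlay transaction types from
# Section 4.1 (access, monitor, genesis, remove); names are ours.
from dataclasses import dataclass, field
from enum import Enum
import time

class TxType(Enum):
    ACCESS = "access"    # owner requests entry to the overlay network
    MONITOR = "monitor"  # owner/SP periodically polls device data
    GENESIS = "genesis"  # appends the initial block to the chain
    REMOVE = "remove"    # withdraws a block from the chain

@dataclass
class OverlayTx:
    tx_type: TxType
    sender_pk: str
    payload: dict = field(default_factory=dict)
    timestamp: float = field(default_factory=time.time)

tx = OverlayTx(TxType.MONITOR, sender_pk="ab12",
               payload={"device": "thermostat-3"})
```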
**4.2 Requesters and requestees' responsibility**
Permitting the members of the system access to an immutable ledger of all successful and unsuccessful access requests delivers accountability to both requesters and requestees. Consider the situation where a user sells his sensor information to an advertising company. The user agrees to grant the company access for one month. If the user revokes the access privileges on the sidechain before the month is up, the company's access requests will fail, and, as evidence of misbehavior by the sidechain owner, the company can produce a time-stamped history of the aforementioned failed access requests. The Algorithm below displays the process for authenticating a single transaction X. In the public BC, all multisig transactions generated by each requester are organized in a separate ledger. The outputs of the multisig transactions generate a standing (reputation) metric for the requester. The connection between consecutive transactions is established by including, in the third output field of the present transaction, the hash of the PK that will be used by the requester for its next transaction. Thus, the OBM first settles this by comparing the hash of the requester's PK in X with output 2 of the previous transaction of this requester. Following this, the requester's signature, which is contained within the fourth field of X, is tested (also called redeemed) by means of its PK in X. Originally, the requester sets these outputs (based on its history of transactions) in the multisig transaction. If the requestee accepts the transaction, it increases output 0 by one; otherwise the requestee increments output 1. To defend the chain against nodes that claim a false standing by incrementing their own outputs before sending the transaction to the requestee, in the next stage of transaction verification the OBM checks that only one of X's outputs, i.e., either the number of successful interactions (output 0) or the number of denied interactions (output 1), is increased, and only by one. Following this, the requestee's signature is verified with its PK in X. If all the steps complete successfully, X is confirmed.
**Algorithm: Transaction Confirmation**
**Input: Overlay transaction (X); the requester's previous transaction (X_prev)**
**Output: True or False**
Requester confirmation:
**if hash(X.requester-PK) ≠ X_prev.output 2 then return False;**
**else if X.requester-PK does not redeem X.requester-sign then return False;**
**end if**
Output authentication:
**if (X.output 0 − X_prev.output 0) + (X.output 1 − X_prev.output 1) > 1 then return False;**
**end if**
Requestee confirmation:
**if X.requestee-PK redeems X.requestee-sign then return True;**
**end if**
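The confirmation procedure can be read as three sequential checks. The following Python rendering is our interpretation of the pseudocode above; the field names and the placeholder `redeems` helper are illustrative stand-ins, not the paper's format:

```python
# Our interpretation of the transaction-confirmation pseudocode; all
# field names and the 'redeems' helper are illustrative stand-ins for
# the multisig checks the paper performs with the public key (PK).
import hashlib

def sha256(s: str) -> str:
    return hashlib.sha256(s.encode()).hexdigest()

def redeems(pk: str, sign: str) -> bool:
    # Placeholder for real signature verification against pk.
    return sign == sha256("signed:" + pk)

def confirm(x: dict, prev: dict) -> bool:
    """x: incoming overlay transaction; prev: the same requester's
    previous transaction, whose output 2 committed to hash(next PK)."""
    # 1. Requester confirmation: PK must match the earlier commitment,
    #    and the signature in the fourth field must redeem against it.
    if sha256(x["requester_pk"]) != prev["output2"]:
        return False
    if not redeems(x["requester_pk"], x["requester_sign"]):
        return False
    # 2. Output authentication: only one reputation counter may grow
    #    (successful = output 0, denied = output 1), and only by one.
    delta = (x["output0"] - prev["output0"]) + (x["output1"] - prev["output1"])
    if delta > 1:
        return False
    # 3. Requestee confirmation.
    return redeems(x["requestee_pk"], x["requestee_sign"])
```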
## 5. Evaluation and analysis
The private sidechain keeps a log of all IoT data operations and runs inside an isolated IoT network. The private IoT network consists of IoT devices as well as a single validator node running the sidechain. IoT devices are assigned unique public and private keys (pubkeys and prikeys), through which they can be configured to send encrypted sensor readings to the validator node. The validator node records the encrypted data as data creation events, adds new blocks to the sidechain, and takes on the higher computational and storage load. A smart contract can be placed inside the sidechain to achieve the following functions: storing a dictionary of authorized smart devices' pubkeys together with the hashes of the IPFS file storage locations of their data, and ensuring that only data arriving from authorized smart devices is able to pass through the sidechain validator. The other function is storing a dictionary of the pubkeys of requesters in the system with access rights, along with the pubkeys of the smart devices whose data the requesters may access, and performing access control on incoming access request transactions. For meaningful details about the viability of the proposed construction and the relevant deployment considerations, we performed an analysis of current blockchain application development platforms on both the sidechain and consortium sides. We used Ethereum, whose cryptocurrency, ether, is second only to Bitcoin. We also used Monax, a blockchain development platform for business systems. Ethereum achieves consensus with the PoW algorithm; Monax achieves it by means of the Tendermint [15] consensus apparatus [16], which uses PoS. We constructed our testbed on a consortium of five validator nodes; each node was used to receive information from five smart devices. The performance metrics we used for our investigation were processing overhead, network traffic overhead, and block processing times.

**5.1 Processing overhead**
We conducted a trial of CPU usage when validating new blocks on the sidechain node. We assumed that 5 digital devices are linked to the single validator node; our plan was to run trials with varying numbers of external transactions inside the sidechain system. For all variations in the arriving transactions, the processing overhead remained unaffected on both platforms. We report the processing overhead for the lowest and the highest transaction rates that we tested.
**5.2 Network traffic overhead**
Network traffic overhead in blockchain technology originates from the nodes of the network that participate in the consensus process. We measured the traffic overhead for the sidechain; since the sidechain includes only one validator node, the proportion of access request transactions in this research work is expected to be low compared to the data creation inside the sidechain. Here, we measured the network traffic with varying numbers of nodes in the sidechain system and fluctuating numbers of access request transactions arriving per minute. The observations from this experiment are illustrated in Figures 5 and 6: the sidechain's overhead is smaller than the traffic overhead of Monax. The high network overhead in Monax is due to the fact that the Tendermint consensus engine keeps sending empty blocks at a fixed rate to check whether a peer is up. Monax was built for business applications and was not meant to be used in a scalable pubnet; it is intended to be deployed in a consortium-network fashion. In this experiment, we also measured the network traffic with various numbers of nodes in the consortium network and various volumes of access request transactions arriving per minute. The observations collected from this trial are demonstrated in Figures 5 and 6.
**Figure 6 Traffic of network overhead in Monax for different nodes for showing the potential performance**
**Figure 7 Network traffic overhead in Ethereum blockchain for different nodes showing performance**
**Figure 8 Traffic overhead for the considered sensor network for the proposed blockchain using RSK sidechain**
## 6. Result discussion and limitations
The proposed approach using the two-way peg RSK sidechain shows significantly improved performance compared to other existing approaches such as the Ethereum [3] and Monax [7] blockchains. For example, as per the demonstration shown in Figure 8, the performance of the proposed RSK sidechain is higher than the others, as it has fewer overheads for the different sensor node setups. It also shows that Ethereum and Monax, which are mostly cryptocurrency platforms, have a similar type of performance on the Solidity platform. However, in the case of RSK, for example in case 5 with 50 sensor nodes, the simulation shows less than 200 kbps of overhead, whereas Ethereum and Monax have approximately 300 kbps. It also shows that as the number of nodes increases, the overhead does not rise proportionately; rather, for higher numbers of nodes it performs similarly on average, as calculated.

The result achieved is obtained through Solidity run on MetaMask. For the proposed system evaluation, the initial setup was run on NS2 to measure the network overhead for five use cases. The result of the small network setup looks promising; however, it could have limitations for a heavy network with several thousand sensor nodes. The ongoing work is motivated to overcome those challenges, such as integrating different blockchains for a similar IoT test case. Figure 9 shows the performance comparisons among the proposed RSK sidechain system, Monax, and Ethereum.
**Figure 9 Performance comparisons among the proposed RSK sidechain system, Monax and Ethereum**
## 7. Conclusion and future scope
The IoT will encompass 26 billion devices by 2020, creating millions of new objects and sensors within a short time interval, all generating real-time data that deserves proper security and privacy attention among researchers. Applying blockchain technology to enhance IoT security is not straightforward because of immense challenges such as high resource consumption, scalability, and processing time. A sidechain and RSK integration in the PeIE shaped structure has been proposed. It is helpful in strengthening the security of this technology. It employs a simple architecture that uses the RSK sidechain OBM to reduce the complexity overhead and ensure stronger trust. A performance reputation update strategy is also combined with it for monitoring and enhancing this trust level. We proposed a fast IoT consensus algorithm that eliminates the requirement of computation by the miners before appending a block to the blockchain, as justified in the respective evaluation section. The consensus technique needs further improvement, which will be included in our future work along with other challenges encountered.
**Acknowledgment**
None.
**Conflicts of interest**
The authors have no conflicts of interest to declare.
**References**
[1] Ferrag MA, Derdour M, Mukherjee M, Derhab A,
Maglaras L, Janicke H. Blockchain technologies for
the internet of things: research issues and challenges.
IEEE Internet of Things Journal. 2018; 6(2):2188-204.
[2] Aitzhan NZ, Svetinovic D. Security and privacy in
decentralized energy trading through multi-signatures,
blockchain and anonymous messaging streams. IEEE
Transactions on Dependable and Secure Computing.
2016; 15(5):840-52.
[3] Eckhoff D, Wagner I. Privacy in the smart city—
applications, technologies, challenges, and solutions.
IEEE Communications Surveys & Tutorials. 2017;
20(1):489-516.
[4] Truong NB, Sun K, Lee GM, Guo Y. GDPR
compliant personal data management: a blockchain-based solution. arXiv preprint arXiv:1904.03038.
2019.
[5] Da Xu L, Viriyasitavat W. Application of blockchain
in collaborative internet-of-things services. IEEE
Transactions on Computational Social Systems. 2019;
6(6):1295-305.
[6] Taylor PJ, Dargahi T, Dehghantanha A, Parizi RM,
Choo KK. A systematic literature review of
blockchain cyber security. Digital Communications
and Networks. 2019.
[7] Jones M, Johnson M, Shervey M, Dudley JT,
Zimmerman N. Privacy-preserving methods for
feature engineering using blockchain: review,
evaluation, and proof of concept. Journal of Medical
Internet Research. 2019; 21(8):1-18.
[8] Gharakheili HH, Sivanathan A, Hamza A, Sivaraman
V. Network-level security for the internet of things:
opportunities and challenges. Computer. 2019;
52(8):58-62.
[9] Zyskind G, Nathan O, Pentland A. Enigma:
decentralized computation platform with guaranteed
privacy. arXiv preprint arXiv:1506.03471. 2015.
[10] Axon LM, Goldsmith M. PB-PKI: a privacy-aware
blockchain-based PKI. 14th International joint
conference on e-business and telecommunications.
2017(pp. 311-8).
[11] Zhang Y, Wen J. An IoT electric business model
based on the protocol of bitcoin. In international
conference on intelligence in next generation networks
2015 (pp. 184-91). IEEE.
[12] Zhang Y, Wen J. The IoT electric business model:
using blockchain technology for the internet of things.
Peer-to-Peer Networking and Applications. 2017;
10(4):983-94.
[13] Shafagh H, Burkhalter L, Hithnawi A, Duquennoy
S. Towards blockchain-based auditable storage and
sharing of IoT data. In proceedings of the on cloud
computing security workshop 2017 (pp. 45-50).
ACM.
[14] Zyskind G, Nathan O. Decentralizing privacy:
using blockchain to protect personal data. In
security and privacy workshops 2015 (pp. 180-4).
IEEE.
[15] Ouaddah A, Elkalam AA, Ouahman AA. Towards a
novel privacy-preserving access control model
based on blockchain technology in IoT. In Europe
and MENA cooperation advances in information
and communication technologies 2017 (pp. 523-33). Springer, Cham.
[16] Barber S, Boyen X, Shi E, Uzun E. Bitter to
better—how to make bitcoin a better currency. In
international conference on financial cryptography
and data security 2012 (pp. 399-414). Springer,
Berlin, Heidelberg.
[17] Dorri A, Kanhere SS, Jurdak R. Towards an
optimized blockchain for IoT. In proceedings of the
second international conference on internet-of-things design and implementation 2017 (pp. 173-8).
ACM.
[18] Jacobs IS. Fine particles, thin films and exchange
anisotropy. Magnetism. 1963:271-350.
[19] Yorozu T, Hirano M, Oka K, Tagawa Y. Electron
spectroscopy studies on magneto-optical media and
plastic substrate interface. IEEE Translation
Journal on Magnetics in Japan. 1987; 2(8):740-1.
**Atiur Rahman is currently working as**
SQA Engineer at Samsung R&D
Institute, Bangladesh. He was a student
of Department of Information and
Communication Technology of
Mawlana Bhashani Science and
Technology University, Tangail,
Bangladesh. He received his Bachelor
of Engineering degree in Information and Communication
Technology at Department of Mawlana Bhashani Science
and Technology University, Tangail, Bangladesh. His
research interests are IoT and blockchain.
Email: [email protected]
**Md. Selim Hossain has been working**
as a Lecturer in Department of
Computer Science and Engineering at
Khwaja Yunus Ali University,
Sirajganj, Bangladesh. He completed
his B.Sc. degree on Telecommunication
and Electronic Engineering from Hajee
Mohammad Danesh Science and
Technology University, Dinajpur, Bangladesh and M.Sc.
(Engg.) on Information and Communication Technology
from Mawlana Bhashani Science and Technology
University, Tangail, Bangladesh. His main research interest
is based on IoT, Blockchain, Cryptography and Network
Security, Antenna, Algorithm and Software Engineering.
**Ziaur Rahman is currently a PhD**
Candidate at RMIT University,
Melbourne, and an Assistant Professor
(currently on study leave) of the
Department of ICT, MBSTU,
Bangladesh. He was graduated from
Shenyang University of Chemical
Technology, China, in 2012 and
completed Masters from IUT, OIC in 2015. His articles
received the best paper award and the nomination at IEEE
conferences and published in reputed journals. His research
includes Blockchain aligned IoT, Cybersecurity and
Software Engineering.
**SK. A. Shezan is currently pursuing**
his PhD degree in Electrical and
Electronic Engineering from RMIT
University, Melbourne, Australia. He
was a lecturer of Electrical and
Electronic Engineering Department of
Uttara University, Dhaka, Bangladesh.
He received his Master of Engineering
degree from the University of Malaya, in 2016. Moreover,
he received his Bachelor of Engineering degree in
Electrical Engineering and Automation from Shenyang
University of Chemical Technology, China, in 2013. His
research interests are Microgrid, HRES, Solar Energy and
Wind Energy.
| 8,017
|
{
"disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.19101/ijatee.2019.650071?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.19101/ijatee.2019.650071, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": null,
"status": "GOLD",
"url": "https://www.accentsjournals.org/PaperDirectory/Journal/IJATEE/2019/12/1.pdf"
}
| 2,019
|
[] | true
| 2019-12-31T00:00:00
|
[] | 8,017
|
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Engineering",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/0054baba895f36cedab702d36c99dc4a3b7d3363
|
[
"Computer Science"
] | 0.917574
|
A Survey on the Integration of Blockchain With IoT to Enhance Performance and Eliminate Challenges
|
0054baba895f36cedab702d36c99dc4a3b7d3363
|
IEEE Access
|
[
{
"authorId": "72558091",
"name": "Alia Al Sadawi"
},
{
"authorId": "145065425",
"name": "Mohamed S. Hassan"
},
{
"authorId": "5985158",
"name": "M. Ndiaye"
}
] |
{
"alternate_issns": null,
"alternate_names": null,
"alternate_urls": [
"http://ieeexplore.ieee.org/servlet/opac?punumber=6287639"
],
"id": "2633f5b2-c15c-49fe-80f5-07523e770c26",
"issn": "2169-3536",
"name": "IEEE Access",
"type": "journal",
"url": "http://www.ieee.org/publications_standards/publications/ieee_access.html"
}
|
Internet of things IoT is playing a remarkable role in the advancement of many fields such as healthcare, smart grids, supply chain management, etc. It also eases people’s daily lives and enhances their interaction with each other as well as with their surroundings and the environment in a broader scope. IoT performs this role utilizing devices and sensors of different shapes and sizes ranging from small embedded sensors and wearable devices all the way to automated systems. However, IoT networks are growing in size, complexity, and number of connected devices. As a result, many challenges and problems arise such as security, authenticity, reliability, and scalability. Based on that and taking into account the anticipated evolution of the IoT, it is extremely vital not only to maintain but to increase confidence in and reliance on IoT systems by tackling the aforementioned issues. The emergence of blockchain opened the door to solve some challenges related to IoT networks. Blockchain characteristics such as security, transparency, reliability, and traceability make it the perfect candidate to improve IoT systems, solve their problems, and support their future expansion. This paper demonstrates the major challenges facing IoT systems and blockchain’s proposed role in solving them. It also evaluates the position of current researches in the field of merging blockchain with IoT networks and the latest implementation stages. Additionally, it discusses the issues related to the IoT-blockchain integration itself. Finally, this research proposes an architectural design to integrate IoT with blockchain in two layers using dew and cloudlet computing. Our aim is to benefit from blockchain features and services to guarantee a decentralized data storage and processing and address security and anonymity challenges and achieve transparency and efficient authentication service.
|
Received January 19, 2021, accepted March 22, 2021, date of publication April 2, 2021, date of current version April 14, 2021.
_Digital Object Identifier 10.1109/ACCESS.2021.3070555_
# A Survey on the Integration of Blockchain With IoT to Enhance Performance and Eliminate Challenges
ALIA AL SADAWI 1, MOHAMED S. HASSAN 2, AND MALICK NDIAYE 1
1Department of Engineering Systems Management, American University of Sharjah, Sharjah, United Arab Emirates
2Department of Electrical Engineering, American University of Sharjah, Sharjah, United Arab Emirates
Corresponding author: Mohamed S. Hassan ([email protected])
**ABSTRACT Internet of things IoT is playing a remarkable role in the advancement of many fields such as**
healthcare, smart grids, supply chain management, etc. It also eases people’s daily lives and enhances their
interaction with each other as well as with their surroundings and the environment in a broader scope. IoT
performs this role utilizing devices and sensors of different shapes and sizes ranging from small embedded
sensors and wearable devices all the way to automated systems. However, IoT networks are growing in
size, complexity, and number of connected devices. As a result, many challenges and problems arise such
as security, authenticity, reliability, and scalability. Based on that and taking into account the anticipated
evolution of the IoT, it is extremely vital not only to maintain but to increase confidence in and reliance
on IoT systems by tackling the aforementioned issues. The emergence of blockchain opened the door to
solve some challenges related to IoT networks. Blockchain characteristics such as security, transparency,
reliability, and traceability make it the perfect candidate to improve IoT systems, solve their problems,
and support their future expansion. This paper demonstrates the major challenges facing IoT systems and
blockchain’s proposed role in solving them. It also evaluates the position of current researches in the field of
merging blockchain with IoT networks and the latest implementation stages. Additionally, it discusses the
issues related to the IoT-blockchain integration itself. Finally, this research proposes an architectural design
to integrate IoT with blockchain in two layers using dew and cloudlet computing. Our aim is to benefit
from blockchain features and services to guarantee a decentralized data storage and processing and address
security and anonymity challenges and achieve transparency and efficient authentication service.
**INDEX TERMS Blockchain, IoT, smart contract, trust, IoT challenges, IoT security, decentralized IoT,**
cloudlet computing, dew computing, cloudlet-dew architecture.
**I. INTRODUCTION**
In today’s digital world, advances and transformation in
electronics, wireless communications, and networking technologies are not only rapid but also remarkable. While this
led to a distinguishable hype in the performance of wireless devices and sensors, leading to the emergence of the
Internet of things (IoT), it resulted in a significant increase
in the complexity of cloud services and structures, as well.
IoT was facilitated by the capabilities of Wireless Sensors
Networks (WSN), Radio Frequency Identification (RFID),
in addition to advances in other devices to sense, communicate and actuate through existing network infrastructure [1].
IoT allows for a digitally connected real world, whereby
The associate editor coordinating the review of this manuscript and
approving it for publication was Alessandra De Benedictis.
connected devices can exchange collected data, interact with
each other, and remotely control objects across the Internet,
possibly without human intervention. Basically, IoT is where
the Internet meets the physical world [2] such that societies
and industries can benefit from IoT to achieve a quantum
shift towards a smart digitally controlled world. Therefore,
the ways with which people interact with one another and
with their surroundings as well as with the environment have
been improved and reshaped due to the implementation of
the IoT technologies. Consequently, one can say that people
have reached a better understanding of the world while the
IoT enables more efficient interaction with it.
Moreover, the IoT does not only enable a huge range of
applications but covers a wide span of societies and industrial needs, as well. Specifically, IoT is expected to play a
major role in transforming ordinary cities into smart ones,
houses into smart homes, electrical grids into smart grids, and
so on. Additionally, IoT has diverse applications including
healthcare, sports, entertainment, as well as environmental
applications and many more. On another front, IoT can be
thought of as the backbone of digitizing the industrial sector
by enabling optimized production and manufacturing processes in addition to cost reduction. Additionally, IoT has
the ability to connect a huge number of devices to the extent
that the number of connected IoT devices and sensors was
estimated to reach 20 to 50 billion by 2020 [3]. It is also
expected that IoT could be more complex in the future leading
to a Network of Plentiful Things (NPT) [4].
Relevantly, due to the successful implementation of IoT in
different fields, the number of newly established IoT networks is increasing around the world. As a result, IoT is
becoming increasingly popular for consumers, industries, and
organizations of different natures. Therefore, the need to
develop and elevate the domain becomes essential bearing in
mind the number of challenges posed by such an exponential
evolution.
The significant proliferation of IoT applications in various
sectors places some serious challenges that could limit the
successful deployment of IoT, on one hand, and could possibly degrade the performance of existing systems, on the
other hand. Unfortunately, these challenges could strongly
be interrelated, therefore, a comprehensive system study is
essential to understand these challenges and overcome them.
It is also important to note that IoT is not a stand-alone
technology but rather an integration of multiple technologies including communication and information technologies,
electronic sensors and actuators in addition to computing
and data analytic, all collaborating towards achieving the
desired smartness [5], [6]. Unfortunately, the integration of
those technologies increases the complexity of IoT systems,
especially when implemented on large scales. Therefore,
to address any arising issues when integrating scattered patterns of IoT devices using networks’ interconnection, a central server structure was proposed, which all connected
devices use for authentication. Such a structure can clearly
call for unreliable interconnection of the integrated devices
permitting sharing data with falsified authentication, which
in turn can result in an insecure data flow [7]. Thus, centralized architectures of IoT networks could suffer from the
difficulty of fulfilling the trust factor. In a related context,
information trustworthiness is vital for the efficient operation
of IoT networks [8] since connected devices would interact
and operate based on this information. The challenge here is
how far the data in IoT systems can be trusted. Usually, people
trust the information provided by governments and financial institutions, but the question now is how to make sure
that this information is not falsified or tampered with? The
same applies to companies providing IoT services. Clearly,
information fed by certain entities to IoT servers could be
modified according to their interests, therefore, when this
falsified information is communicated through the network
to act upon, the performance of the whole network gets
disturbed accordingly [9]. This is just another reason the
centralized model of most IoT platforms could raise an issue
of impracticality. Therefore, in many cases, devices need
to perform data exchange directly and autonomously. Thus,
many efforts have been made towards deploying decentralized IoT platforms [10]. Moreover, it is well known that a
distinct attribute of IoT is generating an enormous amount
of data [7] that requires energy and connectivity to communicate, process, and possibly store over long periods of
time [8]. This problem could be inflated if the underlying IoT
employs a centralized structure in which data communication
is entirely done through a central storage hub. The situation
is aggravated if data processing is also carried out at central
servers, which requires increasing the processing capabilities
for the existing infrastructure especially for large-scale IoT
generating an enormous amount of data [11].
Also, the ability of IoT to connect devices of different
natures ranging from small wearable gadgets to massive
industrial systems has opened the door for a diversity of IoTbased applications. Such applications use different frameworks in which the ecosystem characteristics, mainly security
mechanisms, determine the success of their deployment [2].
Clearly, the wider the range of IoT applications, the higher
the expectation to reveal more related challenges to network
security and privacy. Therefore, security issues should be
investigated and tackled because threats, ranging from simple
manipulation of data to the more serious problem of unauthorized control of IoT nodes and actuators [2] can jeopardize the
reliability of the IoT network.
It is important to note that the privacy and security of
exchanged data and its computations are equally important [12]. Privacy and security issues become more crucial
with regards to the current trend of Internet-of-Everything
(IoE), which comprises application-specific IoTs such as
the Internet of Vehicles (IoV), Internet of Medical Things
(IoMT), Internet of Battlefield Things (IoBT), and so on.
Some of these IoT networks such as IoMT and IoBT are
data-sensitive, therefore, it is essential to ensure security at
the data, systems, and devices’ levels. It is worth noting
that threats could also be a result of a blunder of security
measures, especially for application-specific IoT systems.
For instance, it is known that IT team members have full
control over IoT devices, endpoints, and the overall network
in general, however, they are not necessarily fully acquainted
with the specificity and detailed functionalities of every single device. This could cause chaotic situations resulting in
security breaches simply due to performing what seemingly
looks as routine operations [12].
Last but not least, a broader view of IoT systems characterizes a growing extensive adoption of cloud computing. While
cloud-based centralized IoT platforms provide upgraded and
powerful analytical capabilities, they augment the security
and privacy challenges and heighten the difficulty of building
a trusted functioning environment compared to constrained
IoT devices, which might have some form of imperfect
security solutions. Based on the above, security and trust
issues constitute a serious problem for the reliability of IoT
systems. As a result, this brings up the need to verify data
to ensure that it has never been altered [9]. Here comes the
role of ‘‘blockchain’’, which was proposed as a solution to
those challenges. Therefore, it is necessary to explore and
understand blockchain in order to derive value from it that
would be an addition to IoT systems.
Recently, it was argued that integrating the novel
‘‘blockchain’’ technology with IoT can alleviate some of
the challenges facing the deployment of IoT applications.
However, surveying related work in the literature, it was clear
that integration of blockchain with IoT is a relatively new
topic where most of the conducted studies were dated only
a few years back highlighting the fact that blockchain as an
emerging technology is yet to be further explored. Also, from
analyzing existing researches that cover the integration of
blockchain with IoT, it was evident that those works only
discussed some of the challenges facing IoT and presented
blockchain as a solution without proposing any practical
architectures, schemes, frameworks, nor analysis to help in
integrating blockchain with IoT. Not only that, such works did
not address all major challenges posed by IoT applications.
Therefore, this survey intends to bridge such a gap and provides a comprehensive study that covers the important aspects
of the topic. Thus, the main contributions of this work can be
summarized as follows:
- Demonstrate the different challenges facing IoT especially with the growing complexity and size in contrast
to other reviews in the literature, which focused only on
challenges mostly related to security.
- Introduce blockchain concepts and shed light on its
important architecture as a promising technology with
a vital role in enhancing the performance of IoT-based
applications by taking care of the major challenges facing them.
- Then, summarize and compare existing work in the literature, which suggested integrating blockchain in IoT
deployments. Specifically, this study provides a screening survey of the main proposed architectural designs,
schemes, and frameworks in the literature with the focus
of integrating blockchain with IoT. In this survey, how
far the integration process has gone and what are the
successful steps taken in existing related research are
also addressed.
- Highlight the challenges and limitations of IoT and
blockchain integration process, which provides guidance for new integration designs.
- Provide the most suitable and comprehensive IoT–
blockchain integrated architecture that addresses the
challenges facing IoT systems and overcomes the challenges facing the integration process as well as IoT
devices constraints, and smart contract implementation.
The rest of this paper is organized as follows. Section II introduces blockchain and its classification, while Section III demonstrates blockchain structure. Section IV provides a briefing about smart contracts and their potential for IoT-blockchain integration. Blockchain’s main characteristics are explained in Section V, while Section VI discusses blockchain for IoT. The research survey is presented in Section VII and the issues facing the integration of IoT and blockchain are explained in Section VIII. A literature survey conclusion is provided in Section IX. Section X explains the design requirements and Section XI proposes a decentralized architecture for the integration of IoT and blockchain. Finally, the article is concluded in Section XII.
**II. BLOCKCHAIN**
The revolutionary blockchain technology is a distributed
peer to peer network. Blockchain facilitates exchanging
transactions and information between non-trusting entities
without intermediary or centralized third party. It consists of time-stamped, append-only records of data stored
immutably, securely, nevertheless privately [13]. Blockchain
is defined as ‘‘a ledger of transactions, or blocks, that form
to make a systematic, linear chain of all transactions ever
made. While the blocks themselves are highly encrypted
and anonymized, the transaction headers are made public and not owned or mediated by any specific person or
entity.’’ [14].
In 2008, an unknown person or group by the pseudonym
Satoshi Nakamoto presented the blockchain technology as
the backbone of the cryptocurrency Bitcoin. However, since
then, blockchain has established a reliable and efficient performance and found its way to many other applications such
as supply chain management, digital identity, voting, healthcare services, insurance, digital assets management, IoT, artificial intelligence, big data [13] and many other applications
where trust needs to be established between entities, whether
human or machine, who do not fully trust each other and
operate in a decentralized environment [15]. There are three
types of blockchain identified as per the mechanism regulating nodes access privileges, which are public, hybrid, and
private blockchain [16].
1) Public blockchain: used in cryptocurrencies network.
It is a permissionless blockchain where transactions
are visible by all participants in the network, however, the identity of nodes initiating those transactions
are kept anonymous [16]. It is entirely decentralized,
peer to peer network and is not owned by a single
entity. [17].
2) Private blockchain: is a permissioned blockchain,
which specifies a list of permissioned participants with
particular characteristics to operate within the network [13], [16]. This type’s ownership belongs to a single entity that controls the block creation [18]. A private
blockchain is usually used by organizations to record
transactions or assets transfer data on a limited user
base [18].
3) Federated or consortium or hybrid blockchain: This
is a semi-private blockchain, which is a combination
of a public and a private blockchain [17]. It could be
considered a scaled-down public blockchain available to a specific privileged group of nodes.
**FIGURE 1. Blockchain structure.**
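The distinction between the three types comes down to who is permitted to write to the ledger. The following toy Python sketch (our simplification, not any production blockchain's permissioning code) illustrates the idea:

```python
# Toy illustration of the access models above (our simplification).
class Ledger:
    def __init__(self, kind, permitted=None):
        self.kind = kind                  # "public" | "private" | "consortium"
        self.permitted = permitted or set()
        self.blocks = []

    def may_write(self, node_id):
        if self.kind == "public":
            return True                   # permissionless: anyone may join
        return node_id in self.permitted  # permissioned: whitelist only

    def submit(self, node_id, payload):
        if not self.may_write(node_id):
            raise PermissionError(f"{node_id} not permitted on {self.kind} chain")
        self.blocks.append({"from": node_id, "data": payload})

consortium = Ledger("consortium", permitted={"utility-A", "utility-B"})
consortium.submit("utility-A", "meter reading 42 kWh")   # accepted
try:
    consortium.submit("random-node", "spam")             # rejected
except PermissionError as e:
    print(e)
```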
As per the characteristics of IoT networks and based on
the above classification of blockchain, it is foreseen that
private and federated blockchains are the most suitable types
to be integrated with IoT and add value to it. As per public
blockchain, which has been so far used in cryptocurrency
since it is the only network where all people might have
the interest to join to trade bitcoins. However, IoT networks
are designed for special purpose applications where certain
groups or parties are interested in joining rather than the
whole public.
**III. BLOCKCHAIN STRUCTURE**
Blockchain is a distributed public database of all executed digital events shared among participants. Public events
records are verified by a mechanism that requires consensus
of the majority of participants in the network [10]. This is
called a consensus algorithm and it takes many forms such as
Proof of Work (POW), Proof of Stake (POS), and others [19].
Blockchain can utilize any of them based on the requirements of the design. Figure 1 demonstrates the structure of
blockchain. Basically, when information is contained in a
block, it needs to be authenticated before being added to
the chain. This is the role of specified nodes in the network
called miners, which have to solve a mathematical puzzle
of certain difficulty in order to verify the block and get
rewarded for their effort. When a block is verified and chronically added to the blockchain, the contained data become
immutable and can never be altered or erased. Accordingly,
the identical database copies possessed by each participant
get updated [20]. It is vital to know that the emergence of
blockchain facilitated smart contracts implementation and
made them one of the most popular technologies that add
high levels of customization to traditional transactions [15].
In essence, a smart contract is an application that resides on
blockchain and provides the service of linking entities that
do not fully trust each other to achieve a pre-set goal or perform a prespecified function in case certain conditions occur.
Many proposed IoT-Blockchain integrated architectures utilized smart contracts in the integration process in a way
that serves the goal of the integration itself or resolve more
challenges facing IoT. To understand smart contracts’ role in
the evolved IoT-Blockchain integrated design, the structure
and characteristic of a smart contract should be explored first.
This is demonstrated in the following section.
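To make the hash-linked structure of Figure 1 concrete, the following minimal Python sketch (a generic illustration, not any particular blockchain's implementation; mining and consensus are omitted) shows how each block commits to its predecessor's hash, so that altering any historical block invalidates every later link:

```python
# Minimal hash-linked chain illustrating the structure of Section III.
import hashlib
import json
import time

def block_hash(block: dict) -> str:
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

class Chain:
    def __init__(self):
        genesis = {"index": 0, "time": time.time(),
                   "data": "genesis", "prev_hash": "0" * 64}
        self.blocks = [genesis]

    def append(self, data) -> None:
        prev = self.blocks[-1]
        self.blocks.append({"index": prev["index"] + 1,
                            "time": time.time(), "data": data,
                            "prev_hash": block_hash(prev)})

    def valid(self) -> bool:
        """Tampering with any block breaks every later prev_hash link."""
        return all(b["prev_hash"] == block_hash(a)
                   for a, b in zip(self.blocks, self.blocks[1:]))

chain = Chain()
chain.append({"from": "sensor-17", "reading": 22.4})
chain.append({"from": "sensor-18", "reading": 21.9})
assert chain.valid()
chain.blocks[1]["data"] = "tampered"   # breaks the link to block 2
assert not chain.valid()
```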
**IV. SMART CONTRACT AND ITS POTENTIAL FOR**
**IOT-BLOCKCHAIN INTEGRATION**
In [21] smart contracts are referred to as ‘‘self-executing
codes that enable the system to enforce the clauses of a
contract through certain trigger events’’ while smart contract utility is viewed by [22] as a computerized process
performed on a blockchain that is automatically triggered
when a pre-set agreed on data gets recorded as a transaction in a block. In this context, and as per [10], one of the
important characteristics of operating in a digital environment
is the ability to create programs and algorithms that could
be executed to perform a specific action without human
intervention whenever certain pre-set terms agreed to by all involved parties occur. Smart contracts are programs or coded
scripts that have unique addresses and are embedded in the
blockchain network. An IoT device representing a node can
operate a smart contract by just sending a transaction to
its address. Every smart contract automatically and independently gets executed on every node in the blockchain.
Therefore, every node will run as a virtual machine (VM),
and the blockchain network will act as a distributed VM [21]
while the system, as a whole, operates as a single ‘‘world
computer’’ [23]. The execution of the contract is enforced by
the blockchain consensus protocol.
When a smart contract is executed, each node updates its
state based on the outcomes obtained after running the smart
contract. Such a replication process provides great potential
for decentralized network control [24]. Consequently, tasks
and actions usually managed or performed by a central third
party authority are transferred to the blockchain [19].
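As a rough illustration of this replicated execution (the contract logic and state layout below are invented for the example and do not follow any specific platform's API), every node applies the same deterministic contract code to the same transaction, so all copies of the state remain identical:

```python
# Minimal sketch: every replica applies the same deterministic
# contract code to the same transaction, so all states stay equal.
def transfer_contract(state: dict, tx: dict) -> dict:
    # A toy "contract": move `amount` between accounts if funds allow.
    new_state = dict(state)
    if new_state.get(tx["from"], 0) >= tx["amount"]:
        new_state[tx["from"]] -= tx["amount"]
        new_state[tx["to"]] = new_state.get(tx["to"], 0) + tx["amount"]
    return new_state

replicas = [{"alice": 10, "bob": 0} for _ in range(4)]   # 4 network nodes
tx = {"from": "alice", "to": "bob", "amount": 3}

replicas = [transfer_contract(state, tx) for state in replicas]
# Every node independently computed the same resulting state.
assert all(state == {"alice": 7, "bob": 3} for state in replicas)
```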
Smart contracts are supported by many blockchains; however, Ethereum was the first blockchain to adopt smart
contracts. It is a public, distributed, blockchain-based computing platform and operating system, and the second-largest
cryptocurrency after bitcoin [25]. Ethereum was launched
in the year 2015 as the world’s programmable blockchain,
which means that it could be used by developers to build
brand new types of decentralized applications or ‘‘dapps’’.
Ethereum decentralized applications are predictable, reliable,
and combine the benefits of blockchain technology and cryptocurrency. Ethereum’s digital money is called Ether or ETH
and can be used in many Ethereum-based applications. It is
worth mentioning that no company or centralized organization controls Ethereum. It is maintained by diverse global
contributors who work on the core protocol and consumer
applications.
Once smart contracts are uploaded to Ethereum, they
will automatically run as programmed every time they get
triggered [23]. The node that initiated the smart contract
pays an execution fee called ‘‘Gas’’ to perform the function
of the program. Gas is the incentive for nodes to perform
the contract and ensure that it is obliged by the blockchain
network. It is scaled according to the amount of computational power needed to perform the contract functions [26].
Smart contracts have associated code and data storage. The
code is written in a high-level language called ‘‘Solidity’’,
which is explicitly used to write smart contracts and supports
their execution in the Ethereum world computer decentralized
environment. However, the code must be compiled to low-level bytecode in order to run in the EVM (Ethereum Virtual Machine), which is similar to a computer's CPU running machine code such as x86-64 [23].
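The Gas mechanism can be sketched as a simple per-operation meter. A hedged Python illustration follows; the opcode names and costs below are made up for the example (real EVM opcode costs differ):

```python
class OutOfGas(Exception):
    pass

# Hypothetical per-operation costs, for illustration only.
GAS_COST = {"ADD": 3, "LOAD": 50, "STORE": 100}

def execute(ops: list, gas_limit: int) -> int:
    """Run a list of opcodes, charging gas; halt if the limit is hit."""
    gas_left = gas_limit
    for op in ops:
        cost = GAS_COST[op]
        if cost > gas_left:
            raise OutOfGas(f"halted at {op}: need {cost}, have {gas_left}")
        gas_left -= cost
    return gas_limit - gas_left   # gas consumed, the basis of the fee

print(execute(["LOAD", "ADD", "STORE"], gas_limit=200))   # -> 153
```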
Smart contracts run only when called by a transaction.
However, a contract can call another one, which in turn may
call another contract and so on. It is important to note that
smart contracts cannot run in the background or by themselves. Also, they cannot be executed in parallel, therefore,
Ethereum world computer is considered a single-threaded
machine [23]. Smart contracts are Turing-complete [26], meaning that they can solve any computable problem. This is an extremely important feature added to blockchain, especially as it allows most existing verifiable programs to be transferred to and operated on blockchain [26].
Moreover, smart contracts have many advantages that add
automation and therefore strengthen blockchain. One is that they are superior to traditional agreements
due to the security they provide since they are stored and
executed in blockchain. Also, the self-executed events and
actions are easily traceable in blockchain and are irreversible.
Furthermore, those contracts are updated in real-time and are
capable of executing actions and trades. Lastly, the above
features of smart contracts not only significantly reduce network-performance costs [21] but also lower anticipated risks [13], errors, and disruptions.
Smart contracts were proposed as a cornerstone in comprehensive systems combining IoTs and blockchains. The result
is an autonomous system aiming to pay for consumed and
provided IoT resources [27]. Also, smart contracts manage
and record all IoT interactions while providing a reliable and
secured processing tool resulting in trusted actions. Therefore, smart contracts can securely model the logic supporting
IoT applications [28].
Since a smart contract consists of functional codes and
data with a specific address on a blockchain, any device
can call the functional code. Consequently, functions can
trigger events resulting in applications, which can listen to
events and react to them [28]. An outstanding example is a
system adopted by Kouvola Innovation in Finland in which
pallets were equipped with RFIDs and provided with shipping
tasks and willing carriers. RFIDs communicate pallets’ needs
to potential carriers using a blockchain. When an offer is
provided by a carrier, the blockchain aligns it with pre-set
conditions, price, and service. If the offer matches the prespecified conditions, the smart contract gets executed automatically on blockchain, and pallets are moved as per the
contract. Every move is visible and traceable on blockchain
thanks to RFIDs and sensors [29]. It is worth mentioning that
the majority of IoT applications either use Ethereum or at
least are compatible with it. Basically, smart contracts define
the application logic and the IoT devices connected to it send
their measurements and data whenever a transaction calls for
that particular smart contract [30]–[32].
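A hedged sketch of this pattern, loosely modeled on the pallet example above, is given below; the contract address, condition values, and event names are all hypothetical:

```python
# Sketch: a device sends a transaction to a contract's address; the
# contract's logic emits an event when the offer matches pre-set
# conditions, and listening applications react to that event.
CONTRACT_CONDITIONS = {"max_price": 120, "service": "refrigerated"}

def shipping_contract(tx: dict) -> list:
    events = []
    offer = tx["offer"]
    if (offer["price"] <= CONTRACT_CONDITIONS["max_price"]
            and offer["service"] == CONTRACT_CONDITIONS["service"]):
        events.append({"event": "ContractExecuted",
                       "carrier": offer["carrier"]})
    return events

# An RFID-equipped pallet (a node) "calls" the contract via a transaction.
tx = {"to": "0xSHIPPING_CONTRACT",   # hypothetical contract address
      "offer": {"carrier": "carrier-7", "price": 110,
                "service": "refrigerated"}}
for event in shipping_contract(tx):
    print(event)   # applications listening for events would react here
```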
**V. BLOCKCHAIN CHARACTERISTICS**
As demonstrated, blockchain is characterized by a robust
structure that grants it many valuable features. The following
are the main distinguishing features, which add value to any
sector implementing blockchain technology [13], [16], [33]:
1) Decentralization: network participants have access to
data records without the control of a central authority.
2) Distribution: each node possesses a copy of the data records, which are continuously updated.
3) Security: the blockchain structure of linking blocks using a
hash algorithm ensures that generated blocks cannot be
erased or modified.
4) Transparency: data encapsulated in blocks are visible
to all participants in the blockchain.
5) Automation: fulfilled by the concept of smart contract
in which certain action could be automatically triggered
by a specific smart contract program whenever a set of
prespecified conditions are met.
6) Traceability: blockchain holds a historical record of all
data from the date it was established. Such a record can
be traced back to the original action.
7) Privacy: although blockchain is transparent, participants' information is kept anonymous using private/public key pairs.
8) Reliability: blockchain has been successfully implemented by various organizations due to its features and robust structure.
**VI. BLOCKCHAIN FOR IOT**
Today’s large-scale IoT systems consist of a considerably
huge number of interacting devices using central servers to
store, authenticate, and analyze data. Unfortunately, such
architecture is not an effective one, as discussed in Section I.
In addition, there are other challenges that arise with the
IoT centralized structure or at least inflate as a result of it.
Blockchain, as an emerging technology, would provide an
essential solution to the problems facing IoT, especially when
utilizing smart contracts, which shall play an important role
in managing and securing IoT devices. Blockchain solves IoT
issues as explained in what follows.
Elimination of central authority: Blockchain as a decentralized network eliminates the concept of central
servers, which not only removes central points of failure and bottlenecks [34] but also improves fault tolerance and scalability. In blockchain, data
is stored in a decentralized manner where each network participant would have a copy of all transactions. Consequently, identical copies of data that is
continuously updated will be stored in network nodes
rather than being stored in central servers. Therefore,
when blockchain is integrated with any layer of the
IoT paradigm such as cloud or edge servers, it builds
a distributed data storage. This shall provide redundancy and make disruption extremely difficult [35].
Also, the data authentication process will be carried
on by blockchain’s consensus mechanism without the
need for central servers. Blockchain provides trusted,
unique, and distributed authentication of IoT devices
where participants can identify every single device.
As for data analysis, it can be executed with the aid
of the smart contract facility provided by blockchain.
Those advantages are extremely important, especially
for large scale IoT systems.
Peer to peer accelerated direct messaging: The peer
to peer structure of blockchain not only makes direct messaging between nodes possible but also
makes peer messaging faster compared to the present
centralized IoT structure. Additionally, IoT applications can take advantage of this feature by providing
device-agnostic and decoupled-applications [30]. This
is possible thanks to the distributed ledger characteristics of blockchains, which not only eliminates the
need for a central authority but also enables coordinating
the processing of transmitted data between devices [4]
and stores devices interaction, state, and exchanged
data immutably in blockchain’s ledger. Also, data flow
in the centralized IoT system differs from that in the
decentralized IoT-blockchain integrated system, especially that the integration takes different forms and
designs.
Automation and resource utilization: Blockchain enables
direct and automated interaction between IoT devices
using smart contracts. Also, blockchain’s smart
contract facilitates resource usage by running an
on-demand code or smart algorithm to manage resource
utilization and automate payments when the requested
service is completed. This process shall be performed
automatically and without human intervention [35].
Additionally, blockchain empowers next-generation
applications and enables the development of smart
autonomous assets services. Furthermore, smart contracts can automate IoT software and hardware update
and upgrade rights in addition to resetting IoT devices,
initiating their repair request, and changing their ownership. Finally, smart contracts can support decentralized IoT devices authentication using specific rules
embedded in their logic.
Secure code deployment: Since blockchain provides
immutable and secured transaction storage, codes
could also be pushed into the IoT devices in a
secure manner [36]. Also, IoT devices’ status could be
checked and updates could be performed safely [30].
Built-in trust: Blockchain peer to peer structure based
on a consensus mechanism grants higher trust to IoT data since all participants are in possession of a tamper-proof copy of all transactions. If all nodes have the
data and the means to verify that it has not been
altered or tampered with then trustworthiness could be
achieved [37], [38].
Security: Blockchain cryptographic structure is based on
hashing each block and including it in the successive
block. This process of block hashing forms the virtual
chain that connects them and grants blockchain its
name. There is no way to modify/change data in any
block unless the hashes of that block along with all
successive blocks were recalculated, which is almost
an impossible task. Besides, hypothetically speaking,
even if all the previously mentioned hashes were recalculated, the structure of a blockchain as a distributed
data record does not allow any falsified data authentications because the consensus of the majority of nodes
is required before updating data records [18]. Therefore, it is claimed that security and immutability are
always guaranteed. This structure enhances the security
of IoT systems since blockchain can store exchanged
messages of the IoT devices as transactions and validate them with the aid of smart contracts. Therefore, IoT communications and generated data will be securely stored as encrypted and digitally-signed
blockchain transactions [9], [28]. Also, integrating IoT
systems with blockchain can utilize smart contracts
to automatically update devices' firmware to deal with vulnerabilities and breaches and consequently enhance
the total security of the underlying IoT system [28].
Furthermore, implementing blockchain can optimize
current IoT secure standard protocols [9]. For instance,
the Internet Protocol version 6 (IPv6) has a 128-bit
address space while blockchain has a 160-bit address
space [39]. Blockchain uses the Elliptic Curve Digital
Signature Algorithm (ECDSA) to generate a 160-bit hash of the public key as an address [40] for around 1.46 × 10^48
IoT devices, which drastically reduces the address collision probability and hence is secure enough to provide
a Global Unique Identifier (GUID). Also, assigning an
address to an IoT device using blockchain does not
require any registration or uniqueness verification [9].
In addition to enhancing security, blockchain eliminates the need for a central authority, therefore, it will
eliminate the need for the Internet Assigned Numbers
Authority (IANA) in charge of global allocation of
IPv6 and IPv4 addresses. Lastly, blockchain enhances
scalability in securing IoT devices since its 160-bit address space provides 2^32 (around 4.3 billion) times more addresses than IPv6, making it a more scalable solution for IoT [9].
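The address-space figure can be checked directly: a 160-bit identifier space holds 2^160 ≈ 1.46 × 10^48 addresses. The short Python sketch below verifies this and derives a 160-bit address from a placeholder public key; note that real schemes (e.g. Bitcoin's) apply RIPEMD-160 over SHA-256, while a truncated SHA-256 digest is used here only to keep the sketch dependency-free:

```python
import hashlib

# A 160-bit address space holds 2**160 identifiers.
print(f"{2**160:.2e}")   # -> 1.46e+48, matching the figure above

# Illustrative derivation: a 160-bit (20-byte) digest of a public key
# serves as the device's globally unique address.
public_key = b"\x04" + bytes(64)                  # placeholder key bytes
address = hashlib.sha256(public_key).digest()[:20].hex()
print(address, len(address) * 4, "bits")          # 160-bit identifier
```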
Data privacy: The other part of the cryptographic structure
of blockchain is based on private/public key pair, which
ensures that only the specified recipient or the node that
owns and manages the private key is able to access data.
Therefore, privacy is achieved where no entity other
than the one having the private key can access or control
the data. Also, data privacy could be achieved and
maintained using smart contracts where a set of access
rules are specified in the logic of the code to allow
certain users or entities to access, control, or own the
data, whether in transit or at rest.
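A minimal sketch of such rule-based access control encoded in contract logic might look as follows (the caller identifiers, permissions, and record contents are purely illustrative):

```python
# Illustrative access-control "contract": rules embedded in the
# contract logic decide which callers may read or update a record.
PERMISSIONS = {
    "0xOWNER":  {"read", "write"},
    "0xDOCTOR": {"read"},        # e.g. read access granted by the owner
}

def access_data(caller: str, action: str, record: dict, value=None):
    if action not in PERMISSIONS.get(caller, set()):
        raise PermissionError(f"{caller} may not {action}")
    if action == "read":
        return record["data"]
    record["data"] = value       # "write"
    return record["data"]

record = {"data": "blood pressure: 120/80"}
print(access_data("0xDOCTOR", "read", record))     # allowed
# access_data("0xDOCTOR", "write", record, "x")    # would raise
```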
Historical action records: Data records of all transactions
are stored immutably in blocks and can be traced back
by any node to the very first transaction. To clarify the importance of this characteristic, we refer the
readers to the work in [41] where the authors presented a blockchain-based traceability system. This
system provides traceability services to suppliers and
retailers by inspecting and verifying the provenance of products and confirming their quality. As per IoT
devices, all transactions made to or by IoT are stored
in blockchain and can be traced back by any network
participant [9]. The traceability feature provided by
blockchain enhances the quality of service for IoT
devices since it enables tracing resources and verifying the
service level agreement established between clients and
IoT service providers [35].
Cost reduction in developing huge internet infrastructure: Large scale IoT requires upgrading the underlying network infrastructure to increase its capability
to provide IoT connectivity, whereas the decentralized blockchain eliminates this need and saves its cost.
Transparency: The latest developments in technology have
led to cloud computing concepts, which increased the
IoT ability to analyze and process data and consequently take real-time actions. Therefore, it is without any doubt that cloud computing contributed to the
development of IoT systems [42]. However, it acts as
a black box when it comes to data transparency. Participants usually do not have any clear vision of where and
how the data they provide is going to be used [30].
Enhance IoT systems interoperability: which is the ability of IoT systems to interact with physical systems
and exchange the generated data between IoT systems
themselves. Blockchain is capable of enhancing the
interoperability of IoT systems by transforming and
storing IoT data into blocks. This process converts,
compresses, and stores heterogeneous IoT data into an
integrated blockchain where it provides uniform access
to different IoT systems connected as peers in it [43].
Governance of access and identities: Identity and access
management (IAM) of IoT devices is facing multiple
challenges such as the change of ownership during the
lifetime of IoT devices from manufacturer to supplier
then to retailer, until they end up in the hands of their
consumers [44], [45]. Also, consumer ownership may
change in case the IoT device is compromised, decommissioned, or resold. Another issue facing IAM is managing the attributes of the IoT devices such as serial
number, manufacturer, make and type, location, and deployment GPS coordinates. Another challenge related to
IoT identity and access management is the IoT relationships, which may take the form of device-to-device,
device-to-human, or device-to-service. Also, the types
of IoT’ relationships could vary from deployed by to
use by or sold by, shipped by, upgraded by, repaired
by, and so on [9]. Blockchain is capable of addressing
the above challenges securely and effectively since it
has been utilized to provide authorized and trusted
identity registration and management, ownership tracking, and assets monitoring. Blockchain can register
and provide identities to IoT devices with different
attributes that are connected in a complex relationship
and store all this information securely and immutably
in a distributed manner. Therefore, blockchain supports
a trusted and decentralized IoT identity governance and
tracking throughout the life-cycle of the device [9].
Reliability and robustness: Blockchain eliminates central
servers which increases privacy and security in IoT
paradigm; therefore, the integration of blockchain with IoT systems would result in a reliable, robust system.
It is well known that IoT can facilitate information digitization, however, the reliability of such information is
still a challenge [30]. Blockchain solves this issue by increasing the reliability of the integrated system. Blockchain reliability, along with the long history of its flawless implementation in many fields, ensures high robustness [4].

**FIGURE 2. Types of blockchain–IoT integration.**
From the above, it is clear that employing blockchain could
complement IoT with secured and trusted information to
solve the issues related to transparency, latency, and Internet
infrastructure. Moreover, IoT was recently integrated with
some computing infrastructures to overcome a few of its
limitations related to storage and processing. One of these
is cloud computing, which played a vital role in solving
many issues. However, it established a centralized network
architecture, which complicates reliable data sharing among
other impracticalities [42]. Blockchain, in contrast, addresses
IoT problems and maintains a decentralized structure to solve
further issues and add more value. Similarly, fog computing
was also integrated with IoT to enhance its performance
by minimizing existing limitations. Fog computing uses end
devices to perform a substantial amount of computation,
storage, and communication locally and route it over the
Internet. If fog computing follows the distributed structure of blockchain, it could utilize more powerful devices such as
gateways and edge nodes, which could then be reused as
blockchain components. Therefore, Fog computing, which
restructured IoT by including a new layer between cloud computing and IoT devices is expected to facilitate the integration
of IoT and blockchain [30].
**VII. RESEARCH SURVEY**
Recently, integrating blockchain with IoT was addressed
in the literature offering a diversity of contributions. Some
work proposed an overview of challenges facing IoT and
blockchain’s integration by conducting a systematic literature
review [2], [46], [47], while others investigated certain challenges in the IoT paradigm and demonstrated a framework
to face those challenges or at least a few of them [12], [48].
Other studies created evolved IoT system architecture by integrating blockchain in various configurations and explained
its reflected benefits on IoT’s performance and the eliminated
challenges [35], [49]. In relation to the last type of research, it is important to note that different works proposed different IoT–blockchain paradigms. Specifically, when integrating
blockchain with IoT, the communication between systems’
layers was clarified and accounted for. Therefore, devices and
IoT infrastructure interactions were taking different forms,
whether to be inside the IoT, through blockchain, or by creating a hybrid design that involves both [30]. Different integration schemes will typically result in various levels of acquired
benefits. Figure 2 demonstrates the types of blockchain–IoT
integration. Many review papers were found in the literature, such as [2], [4], [12], [30], [46], in which the authors demonstrated the benefits and challenges of integrating IoT with blockchain. However, none of them reviewed the available blockchain–IoT integration frameworks and architectures as we do in this research.
In [50], the authors introduced a new IoT architecture called ‘‘EdgeABC’’. This model consists of three
layers: an IoT smart device layer, a distributed agent controller architecture based on blockchain, and hierarchical edge computing servers. The architecture in [50] utilized
blockchain in the middle layer to ensure resource transaction data integrity. The study implemented a developed task
offloading and resource allocation algorithm on blockchain
in the form of a smart contract. The proposed model could
be implemented in any typical application such as smart
healthcare, home, building or factory. Another security model
and protocol was proposed by [51] to provide decentralized
cryptographic keys and trust information storage for Wireless Sensor Networks using blockchain technology. The aim
of the blockchain authentication and trust module (BATM)
in [51] was to allow each network component to authenticate
information about every node within their networks.
The authors in [35] proposed a distributed blockchain-based cloud architecture model using fog computing and software-defined networking (SDN). The model aimed to efficiently
manage raw IoT data streams at the edge of the network and
the distributed cloud. The model consists of three layers: IoT
devices, SDN controller network based on blockchain for fog
nodes, and distributed cloud based on blockchain.
The authors in [52] proposed architecture for Blockchain
of Things (BCoT), where a blockchain-composite layer forms
a middleware between IoT and industrial applications to
hide the heterogeneity of the lower layers while providing
blockchain-based services to facilitate different industrial
applications. The researchers also discussed the potential of blockchain for beyond-5G networks.
Blockchain was integrated into more than one layer in
the architectural model presented by [53]. A hierarchical
authentication architecture comprising a physical network
layer, blockchain edge layer, and blockchain network layer
was demonstrated to improve authentication efficiency and
data sharing among various IoT platforms. The study evaluated the authentication mechanism using MATLAB and
Hyperledger Fabric. In a related context, the problem of a
single point of failure at gateway nodes was tackled by [54].
This study proposed a decentralized blockchain-based IoT
management system to solve the gateway node censorship
problem that utilizes a gossip-based diffusion protocol. The
designed protocol aimed to deliver all messages from sensors to all full nodes and improve blockchain-based IoT
management systems security. Another P2P network architecture was designed by [55], which integrated blockchain
and edge computing for IoT applications to achieve secured
data storage and high system performance. The architecture
design consisted of three layers: a cloud layer, an edge layer,
and a device layer. The resources in the cloud could be
configured as nodes on the blockchain, which is separated
from the application layer. Also, a Proof-of-Space solution
based on smart contracts was adopted to authenticate information. Another flexible blockchain architecture in edge
computing was demonstrated by [49]. This study proposed
a blockchain-based data management scheme (BlockTDM),
which supports matrix-based multichannel data isolation to
protect sensitive information by utilizing smart contracts.
Internet of Drones (IoD) could also benefit from blockchain’s
specific features to face its challenges as well. This was
implied by [48] in their design of a blockchain-based access
control scheme for an IoD environment. Their scheme was
used to support access control between any two neighbor
drones and between a drone and its associated ground station
server (GSS). Testing and simulation proved that the proposed scheme could help to resist various attacks and increase
communications security.
The integration of IoT and blockchain is applied in power
systems as well. The work in [56] proposed structural applications incorporating IoT and blockchain in distributed generation systems, smart buildings, energy hubs, and management
of residential electric vehicles. The study aimed to benefit
from blockchain features in solving issues related to the huge
amount of generated information that needs to be securely
transferred, stored, and analyzed to enhance grids’ performance and reliability. Also, an article by [57] demonstrated
the integration of blockchain with IoT ecosystems trading
platforms and provided practical scenarios and a case study
to establish end-to-end trust for trading IoT devices and
corresponding data. Trust and authentication also were the
core issues tackled in [58]. The authors in [58] designed a
secondary authentication scheme for IoT devices to access a
Wi-Fi network using three smart contracts. The scheme aimed
to identify IoT devices located within a legal range. The cost
of IoT-blockchain integration was discussed in [59] which
analyzed the cost of storing data from several IoT sensors on
Ethereum blockchain via smart contracts under two options:
Appending new data or overwriting on existing data. The conducted cost analysis aimed at enabling practical applications
of blockchain and smart contracts in IoT applications.
In related research, [60] designed, developed, and tested
a blockchain tokenizer device that connects any industrial
machine to blockchain platforms. The study aimed to build
an enabling technology to diffuse blockchain in industrial
applications and act as a bridge between Industrial IoT, and
blockchain world by tokenizing industrial assets. Devices
were tested at the hardware and software levels on two
industrial supply chain use cases. Researchers used Ethereum
programming language to develop a smart contract that can
be used to enable the creation of a digital twin (building a
virtual model of a product to simulate systems) by producing
a blockchain token. Also, research by [61] explored how integrating IoT and blockchain would benefit shared economy
applications focusing on security and decentralization features. The researchers proposed shared economy application
scenarios enabled by integrating IoT and blockchain. The
integration of blockchain with industrial IoT was the focus of
another research conducted by [62]. The study introduced a
blockchain-enabled IoT framework where components interactions, data processing, and storing were done through a
smart contract. Further research in the same context was carried on where a decentralized self-organized trading platform
for IoT devices using blockchain was designed by [63]. The
authors of this work modeled the resource management and
pricing problem between the cloud provider and blockchain
miners using game theory. Nash equilibrium of the proposed
Stackelberg game was achieved by introducing a multiagent reinforcement learning algorithm. Furthermore, some
studies aimed at improving and optimizing
IoT-blockchain integration architecture such as [64]. This
research addressed blockchain consensuses dynamic management needed to deal with the high dynamics of IoT applications. Researchers designed application-aware consensus
management for software-defined intelligent blockchain
and an intelligent scheme to analyze packets at the IoT
application-layer. Also, [65] aimed at quantifying the performance of constrained IoT devices in terms of reducing
transaction delay and cost. These researchers proposed models based on inter-ledger mechanisms and smart contracts to
provide decentralized authorization for IoT devices. Another
study by [66] presented an optimization policy for IoT sensors sampling rate using blockchain and Tangle technologies. The proposed model aimed to minimize the age of
information (AoI) experienced by end-users taking into consideration resource networking and processing constraints.
Table 1 summarizes the surveyed studies, pointing out their contributions, application areas, and the challenges they addressed.
It is noticed from the surveyed research works that
blockchain has many forms in which it could be integrated
with IoT networks based on the required outcome performance and the addressed challenges. In addition, researchers
agreed on the conclusion that integrated IoT-blockchain systems demonstrate better performance compared to standard
benchmark IoT systems prior to blockchain integration.
**VIII. ISSUES FACING THE INTEGRATION OF IOT**
**AND BLOCKCHAIN**
The integration of IoT with blockchain came as a rescue
for the IoT paradigm where it provides valuable opportunities and resolves many of the challenges facing IoT.
However, limitations do exist due to the challenges facing
the integration itself in the form of newly created obstacles, which clearly opens doors for contemporary research
ideas. Currently, the literature mainly focuses on the features offered by blockchain that would elevate IoT architecture and widen its application in a much more effective manner
[52], [67], [68]. Issues such as security, traceability, transparency, efficiency, and trust will be enhanced in the presence
of blockchain in IoT systems. However, researchers need to
tackle the issues that appeared due to the integration and
eliminate them before the potentials of the integration could
be fully revealed. Remember that blockchain technology was
designed for powerful computers in an Internet paradigm in
the first place and this is not the exact case for IoT as will
be explained later. In this section, several major challenges
accompanying IoT-blockchain integration are identified and
discussed as follows.
_A. IOT RESOURCES CONSTRAINTS_
Many IoT devices such as sensors, RFID tags, and smart
meters are resource-constrained. Usually, these devices suffer from inferior computing capabilities, poor network connection capability, limited storage space, and low battery
power [9]. On the other hand, blockchains have their own
special requirements. Firstly, the consensus algorithm needs
extensive computing power, which consumes energy, therefore, not practical for low-power IoT devices [9]. Secondly,
the size of blockchain data is bulky so it is infeasible to store
the whole blockchain in each IoT device, especially with
the fact that IoT generates massive data in real-time, which
makes the situation even worse [46]. Thirdly, blockchain is
designed assuming stable network connections [69], which
may not be feasible for IoT, which commonly suffers from poor device connections or unstable networks due to the
failure of nodes (e.g. battery depletion) [70]. In most cases,
the situation of the IoT devices cannot be detected until it is
tested, while in many other cases the devices work perfectly
fine for a period of time then the situation changes for many
reasons such as disconnection, short circuit, and program
obsolescence [30].
_B. SECURITY SUSCEPTIBILITY_
Many industries increasingly deploy wireless networks for their
applications due to their scalability and feasibility. However,
the wireless medium suffers from many security breaches
such as passive eavesdropping, jamming, denial of service,
and others [71]. Furthermore, due to IoT devices’ resource
constraints, it is difficult to manage the public/private keys
encryption algorithms [46], especially in a distributed environment. Besides, many IoT systems contain different types
of devices that vary in computational capabilities meaning
that not all devices can carry out, for example, the encryption
algorithm at the same speed [72]. Meanwhile, blockchain
has its vulnerabilities such as malicious nodes hijacking
blockchain’s messages with the purpose of delaying block
broadcasting.
_C. POSSIBLE PRIVACY BREACHING_
Blockchain utilizes private/public key pairs as a mechanism
to preserve data privacy. However, this encryption method
might not be robust enough in some cases. It was found that
user identity could be revealed using learning and inferring
multiple transactions performed by one common user [73].
Furthermore, storing all data on a blockchain could make any privacy leakage more serious [74].
_D. INCENTIVE MECHANISM CHOICE_
Blockchain networks have different incentive mechanisms
that are used to mine blocks. Some use Proof of Work (POW)
while others use Proof of Stake (POS). However, there are
many more algorithms. In general, there are two types of incentive mechanisms in blockchains:
1) the reward for mining a block, and
2) the compensation for processing a contract.
Choosing the proper incentive for the blockchain application
is a sensitive issue that affects the continuous effort provided
by nodes in general and miners in particular [32]. To illustrate
the issue, for Bitcoin blockchain, the first miner that solves
the POW puzzle will be rewarded a certain amount of bitcoins. However, rewards are halved every 210,000 blocks.
This decreasing incentive structure may discourage miners
and make them shift to another blockchain especially knowing that POW consumes a huge amount of energy. This is an
important point that should be considered when designing a
consensus algorithm for the integrated network.
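As a worked example of this decreasing schedule, Bitcoin's block subsidy started at 50 BTC and halves every 210,000 blocks, which the sketch below reproduces (real Bitcoin implementations perform this with integer satoshi arithmetic; floating point is used here only for brevity):

```python
def block_subsidy(height: int) -> float:
    """Bitcoin-style reward: 50 BTC, halved every 210,000 blocks."""
    return 50 / 2 ** (height // 210_000)

for h in (0, 210_000, 420_000, 630_000):
    print(f"block {h:>7}: {block_subsidy(h)} BTC")
# block       0: 50.0 BTC
# block  210000: 25.0 BTC
# block  420000: 12.5 BTC
# block  630000: 6.25 BTC
```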
_E. PERFORMING BIG DATA ANALYTICS_
There is a growing trend for analysis of IoT real-time generated data. This type of data is of a massive volume and usually heterogeneous, however, it has high business value [75].
Big data analysis of IoT generated data could reveal hidden
valuable and meaningful information that aids in making
intelligent decisions. However, applying conventional big
data analysis for the integrated IoT-blockchain system is
challenging due to the following:
1) IoT devices suffer from resource limitations and
inferior computing capabilities. These issues prevent
deploying complicated big data analytics methods
directly at IoT devices. Uploading the data to clouds
for computation and performing big data analysis is
a proposed solution, however, it could lead to long
latency and privacy concerns [42].
2) Blockchain technology protects privacy via public/
private key digital signature. On one hand, performing big data analysis of anonymous data is difficult, while on the other hand decrypting data is a
time-consuming process that results in inefficient data
analytics [76].
_F. SCALABILITY OF THE INTEGRATED SYSTEM_
Blockchain scalability is measured by the throughput of
transactions per second against the number of IoT nodes and
the number of concurrent workloads [43]. The scalability
of current blockchains limits their implementation in large
scale IoT applications [46]. Specifically, IoT devices generate
gigabytes of real-time data while blockchain is not designed to
store that huge amount of data [30]. For example, Bitcoin
blockchains may not be suitable for IoT due to their poor
scalability. Some blockchains can process only a few transactions per second. This clearly is a bottleneck for the IoT
systems [30]. Such a situation can be mitigated by implementing a consortium or private blockchain. There are many platforms
for consortium blockchain such as Hyperledger [77].
_G. IOT DEVICES MOBILITY AND NAMING_
Blockchain network structure differs from that of IoT in
the sense that nodes were not meant to find each other in
the network. For illustration, looking at Bitcoin blockchain,
the IP address for senders is included in the transaction and
is used to build the network topology by other nodes. This
topology is not practical for IoT networks because many IoT
devices are mobile all the time [78].
_H. SMART CONTRACT IMPLEMENTATION_
Any instability of IoT devices could compromise the validation of smart contracts. Furthermore, smart contracts could
be overloaded in cases that require accessing multiple data
sources. It is known that smart contracts, being one of
blockchain’s features, are decentralized and distributed, however, they do not share resources or distribute performing
functions in order to run a huge amount of computational
tasks. In other words, each smart contract is simultaneously
executed over multiple nodes where the distribution is only
for contracts’ validation and not for performing functions and
codes [30].
_I. BLOCKCHAIN STANDARDIZATION_
IoT developers consider standardization of blockchain as
a vital issue that shall decide the future of the integration between them because it is expected to provide the
required guidance for developers and customers as well [79].
It is worth mentioning that setting blockchain standards
should take into account the relevant industry standards that
are currently being followed, especially the ones related to
IoT. Therefore, many European countries established standards for blockchain financial transactions to increase confidence in the market [80]. Also, the ISO approved the new
standard for blockchain and distributed ledger technology
(ISO/TC 307) [81]. Besides, legislation related to cybersecurity should be considered in the integrated IoT-blockchain
systems such as the EU Network and Information Security
(NIS) directive, which was adopted by the European Commission in 2016 to enhance cybersecurity across the EU [82]
and the general data protection regulation (GDPR) proposed
by the EU in 2018 to harmonize data protection and privacy laws
for individuals [83]. The integrated system has to consider the
above laws in addition to some other rules and notifications
regarding personal data breach in cases of applications that
grant access to or edit personal and enterprise data. Furthermore, blockchain is structured around connecting people
from different countries, where so far no global legal compliance code exists, and that represents an issue for manufacturers and service providers [46].
**IX. LITERATURE SURVEY CONCLUSION**
From reviewing related work in literature, it was concluded
that integrating blockchain with IoT could take various forms
and designs depending on the required outcome, application,
and addressed challenges, as demonstrated in Section VII.
Besides, it is argued in the literature that integrated systems demonstrated better performance compared to standard
benchmark IoT systems with no blockchain integration [7].
Additionally, the surveyed studies did not only agree on the
feasibility of the integration but proposed a variety of designs
to achieve it, as well. While some have focused on the general
architectural prospectives required for the integration; others
concentrated on mitigating specific issues by introducing the
blockchain. Moreover, some other researchers have utilized
the integration as a platform to deploy certain applications.
However, many issues and challenges have not been tackled
by researchers, such as the constraints of IoT devices and the analysis of big data, in addition to other previously demonstrated challenges regarding the integration of IoT and blockchain. This
research is based on integrating blockchain in two out of the
three layers; namely, the dew and cloudlet layers, forming the
final architectural design. Our aim is to benefit from features
and services provided by blockchain to guarantee decentralized data storage while addressing security and anonymity challenges and achieving transparency and an efficient authentication
service.
Despite the continuous effort to design suitable IoT–
blockchain integrated architecture, many issues limit proper
implementation as well as the application range of the integrated system, preventing its optimal usage.
Therefore, there is an increase in the demand for an efficient
design that takes into consideration the challenges facing the
integration process, mainly IoT device constraints, big data analytics, security, and privacy. Also, the appropriate method
should be investigated to facilitate proper smart contract
implementation.
**X. DESIGN REQUIREMENT**
To design a high-performance distributed and scalable IoT
network architecture with the goal of successfully integrating blockchain with dew and cloudlet computing to meet
current and future challenges while offering support for new
service requirements, the following design principles must be
fulfilled:
- Efficiency: The integrated system should operate at optimal performance even though its nodes consist of heterogeneous devices.
- Resilience: In case any node fails, computational tasks
should not be affected and the system should continue to work through the rest of the operational
nodes.
- Decentralized data storage: The integrated architecture
should extend the storage capacities of IoT devices
by employing the storage capacities of blockchain
technology.
- Scalability: This is a vital principle in designing an IoT
network with the ability to manage future growth in
terms of the number of devices and amount of information they generate.
- Ease of deployment: All nodes even the ones located at
the edge of the Internet should be allowed to join the
network without complicated configurations.
- Data integrity: The integrated system must have reliable built-in data verification mechanisms to ensure the
accuracy and consistency of data in the decentralized
environment.
- Security: Securing the IoT network is one of the main
objectives of introducing a new design architecture.
Therefore, to ensure a holistic design of the integrated
system, data confidentiality and security must be adequately addressed.
- Data authenticity: Data transactions should be authenticated and validated in a heterogeneous and decentralized
dew computing environment.
- Privacy: Users’ data privacy should be guaranteed by
blockchain. This will ensure network participants that
their transferred information is not being tracked or
altered.
- Offloaded computation: The processing tasks outsourced to other servers, such as dew servers in our
proposed design, by IoT end devices should be verified
in order to produce accurate results.
- Low latency: The integrated system design should consider delays incurred during computation processes as
well as data transmission from one node to another.
To ensure low latency, it is important to identify what
computation tasks are involved and, as for our architecture, to
decide whether they should be performed at the end
devices, dew servers, or at the cloudlet layer.
- Access control: It is fundamental to enforce access policies in the network to regulate the viewing and sharing
of users’ data.
- Adaptability: The architecture must be flexible enough
to adapt to the changing environments, expanded customer pools along with their demands, and increased
complexities in possible future applications while maintaining acceptable levels of system throughput, delays,
and security.
**XI. PROPOSED DECENTRALIZED ARCHITECTURE FOR**
**INTEGRATION IOT AND BLOCKCHAIN**
The proposed blockchain-based architecture is built to mitigate the multiple challenges facing the integration of IoT
and blockchain. This proposed architecture consists of
three layers: a device layer, a dew-blockchain layer, and a
cloudlet-blockchain layer. Integrating blockchain with dew
and cloudlet computing is intended to provide authentication efficiency, processing, and data storage services. Dew
computing is a contemporary computing model that emerged
after the wide success of cloud computing. However, cloud
computing uses centralized servers to provide its services,
while dew computing uses on-premises computers to provide cloud-friendly and collaborative microservices to end-users [84]. As a matter of fact, dew computing goes beyond
the concept of a network-storage and network-service, to a
distributed sub-platform computing hierarchy [85]. Some
researchers suggested an extension to the Open Systems
Interconnection (OSI) model by adding a new (i.e. eighth)
layer called the context layer on top of the application layer.
As defined in [86], Dew computing is ‘‘an on-premises computer software-hardware organization paradigm in the cloud
computing environment where the on-premises computer
provides functionality that is independent of cloud services
and is also collaborative with cloud services. The goal of dew
computing is to fully realize the potentials of on-premises
computers and cloud services’’. From this definition, the main
features of dew computing are independence and collaboration. Dew computers provide substantial functionalities
independently from the cloud layer, however, they collaborate
with it. Dew computing is the closest layer in the network
hierarchy to the IoT devices as demonstrated in Figure 3.
Also, it is not only applicable in cases of powerful local
computers and applications; even simple applications that may not be rich enough are still considered dew computing applications [86].

**FIGURE 3. Five-tier network layer hierarchical structure.**

As previously mentioned, one of the major issues
facing the integration process is IoT resource constraints
in terms of computational capabilities, storage space, and
power supply. This was solved by introducing a Dew layer
in the design. Dew on-premises computers could contain
a duplicated fraction of the World Wide Web or serve as
files storage that automatically synchronizes with its cloud
copy (such as Dropbox). Additionally, dew computing hosts
an on-premises database synchronized in real time with a cloud database, each serving as a backup for the other. This facilitates big data analysis, which represented a challenge for
integrating blockchain with IoT. Furthermore, dew computers may host software or serve as a platform supporting
development applications [86]. Our proposed dew-cloudlet
architecture can be considered an extension of the client-server architecture, in which two servers are located at both
ends of a communication link [87]. Although fog and edge
computing are still viewed as useful technologies,
they heavily rely on connectivity. Dew servers, on the other
hand, grant users more flexibility and control over their data
even at the absence of an Internet connection. Primarily,
the dew server stores a local copy of the data and synchronizes
it with a master copy upon restoring the Internet connection [87]. This feature is not the only valuable characteristic
that distinguishes dew computing from other technologies and makes it a strong candidate, most suitable to be integrated with blockchain technology: dew computing also has the significant advantages of self-healing, autonomic self-augmentation, self-adaptation, user-programmability, extreme scalability, and the capability of performing tasks in a highly heterogeneous IoT device environment [87]. Clearly, and
after reviewing the issues facing the integration of IoT and
blockchain, dew computing features appear to be tailor-made to address the integration process challenges.
This is not the first time dew servers are integrated with
blockchain. Research by [88] introduced dew computing as a
blockchain client forming a new kind of blockchain called
Dewblock. This system solved the issue of clients having
to keep a huge amount of blockchain data in order to act
as a full node in a blockchain. The proposed system brings
in a new approach in which the data size of a client is
reduced while the features of a full node are still maintained.
This enables clients to enjoy the features of full nodes in
blockchain without needing to store the growing blockchain
data. The study approach was inspired by dew computing
principles to develop Dewblock based on cloud-dew architecture. In the system, a dew client operates independently
to perform blockchain activities while it collaborates with
the cloud server to maintain the integrity of the blockchain
network. Therefore, every blockchain user has to deploy a
cloud server. This system clearly demonstrated the two main
features of dew computing: independence and collaboration.
The other layer in the integration architecture is the
cloudlet layer, which is a resource-rich, trusted, small-scale
cloud data center located at the edge of the Internet [84]. The
proposed design provides solutions to many challenges and upgrades the performance of the IoT paradigm.
_A. AN OVERVIEW OF THE PROPOSED ARCHITECTURE_
A three-layer architecture is proposed in this study to solve
the problems of devices’ constraints, big data analysis, data
privacy, and security in IoT systems as well as other challenges facing the IoT paradigm. Additionally, our design shall
increase authentication efficiency and enhance data storage
and processing capabilities. The architectural design consists
of a perception or sensing layer, a dew layer, and a cloudlet layer
as shown in Figure 4. Blockchain is integrated into two
of those layers, precisely the dew layer, and the cloudlet
layer. In general, blockchain usage comes in three types: as a
decentralized storage database, as a distributed ledger, or as a
support for distributed services provided by smart contracts.
Blockchain is integrated with dew and cloudlet computing to
provide fundamental requirements of IoT, which are: computation offloading, outsourced data storage, and management
of network traffic. In what follows, we introduce these three
layers.
1) The device layer: Located at the edge of the network,
the device layer consists of IoT sensing devices and
actuators used to monitor and control various smart
applications and send the locally generated data to
the dew layer to utilize its resources in performing
requested services and other tasks. The participation
of IoT devices in the blockchain network is facilitated
by capable servers in the upper dew and cloudlet layers. Thus, heavier operations are performed by those
servers while end devices carry out lighter tasks such
as accepting firmware updates.
2) The dew Layer: The IoT device layer transmits the
generated raw data to the dew layer, which consists of higher-performance controllers connected in a
distributed manner using the blockchain technology.
Each dew controller represents a node in a consortium blockchain and covers a small associated device
community. The dew layer is responsible for timely
service delivery, data analysis, data processing and
reporting of results to the cloudlet and device layers
whenever needed. Specifically, the dew layer provides
localization, while the cloudlet layer provides wide-area monitoring and control. Dew computing is
characterized by its high scalability, which is ‘‘the ability of a computer system, network or application to
handle a growing amount of work, both in terms of
processing power as well as storage resources or its
potential to be easily enlarged in order to accommodate that growth'' [85]. Also, dew computing equipment is capable of performing complex tasks and
running a large variety of applications effectively.
To provide such functionality, devices at this layer
are self-adaptive and ad hoc programmable. Thus,
by integrating them with consortium blockchain, they
become more capable of running applications in a distributed manner without a central communication
point or central device. This powerful characteristic
of the dew layer enables it to support a large number of heterogeneous devices connected in a peer-to-peer environment while avoiding the risk of a single point of failure. Additionally, the dew layer peer-to-peer
servers provide decentralized and distributed storage
facilities used for additional data storage, real-time
data analytics, and handling of different data communications.
Furthermore, the dew layer brings services closer to end
devices which shall improve overall performance and
lower latency.
Moreover, dew servers can transfer messages between
themselves, which shall assist in coordinating data processing and save cost and time. This became possible due
to the deployment of blockchain that serves as a distributed platform supporting secured data transmission
across the network. Besides the ability to convey peer-to-peer messages in the network, dew nodes perform
light processing and analysis for their data as well
as for peer nodes. This facilitates self-organization in
a dynamic environment where dew nodes could be
added and removed at any time. Equally important, dew
servers forward real-time data analytics either to the
distributed cloudlet layer for long term storage or further processing and analysis or back to end devices
depending on the network requirements.
From the above, it is clear that this layer’s distributed
blockchain architecture creates a pool of mobilized
resources that provide extra data storage and speed up computations and data analysis. In case of substantial or intensified computational requirements that the dew layer cannot handle, servers request services from the cloudlet layer and offload the workload to it. Not only does blockchain provide decentralized services of storing, processing, and analyzing terminal information, but it
also supports creating smart contracts that further lower
latency and increase throughput for dew servers and
distributed resources on the cloudlet layer. Smart contracts are utilized to define the authentication mechanism and integrate different protocols of heterogeneous
IoT platforms.

**FIGURE 4. The proposed IoT-blockchain integrated architecture.**

Dew nodes can access any smart contract by sending a transaction to its address and therefore invoke its function. Meanwhile, terminal identity
anonymity and communication security are maintained
by the cryptography algorithm and public/private key
pair.
3) The cloudlet layer: The cloudlet layer consists of more
powerful resources to provide long-term data processing, analytics and storage, in addition to a higher level
reporting and communication. Such cloudlet resources
are configured as blockchain nodes capable of participating in the mining process to ensure data privacy and
integrity. We propose a distributed cloudlet layer based
on blockchain techniques to provide secure, scalable, reliable, low-cost, high-availability services and on-demand access to computing infrastructures. Cloudlet
layer hosts massive storage and computational facilities
that, when used with blockchain, maintain a complete replication of all records shared among them.
The flowchart in Figure 5 further explains the message
flow between layers in the proposed architecture.
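A condensed sketch of this layered routing logic is given below; the capacity threshold and cost units are invented for illustration and are not part of the proposed design's specification:

```python
# Illustrative routing of a task through the three layers: a device
# handles trivial work itself, forwards the rest to its dew server,
# and the dew server offloads to the cloudlet layer when overloaded.
DEW_CAPACITY = 100   # hypothetical capacity units per dew server

def handle_task(task: dict) -> str:
    if task["cost"] <= 1:
        return "device layer: handled locally (light task)"
    if task["cost"] <= DEW_CAPACITY:
        return "dew layer: processed, result reported back to the device"
    return "cloudlet layer: offloaded for long-term storage/analytics"

for cost in (1, 40, 5000):
    print(handle_task({"cost": cost}))
```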
_B. CONSENSUS MECHANISM_
The consensus mechanism in blockchains is crucial for both
the dew and cloudlet layers to provide secure and timely
access, consequently, offering quality computing services.
**FIGURE 5. The message flow between layers in the proposed IoT-blockchain integrated architecture.**
The adopted mechanism in both layers is Practical Byzantine
Fault Tolerance (PBFT). Byzantine Fault Tolerance enables
distributed computer networks to reach sufficient and valid
consensus even though malicious nodes might exist in the
network performing malicious acts such as failing to send
information or sending incorrect ones. Here, the role of BFT
is to protect the system from catastrophic failure by decreasing the effect of those malicious nodes [89].
BFT stemmed from the Byzantine Generals’ Problem. It is
a computer science term describing the situation that involves
multiple parties who should agree on a single strategy to
prevent network failure bearing in mind that some nodes
might be unreliable or malicious [89]. BFT has been utilized
in nuclear power plants, airplane engine systems, and almost
in any system that depends on many sensors to take a decision or action. Moreover, it is used in blockchain networks
where trust needs to be established between nodes who do
not fully trust each other [90]. In 1999, a published paper
introduced the Practical Byzantine Fault Tolerance (pBFT)
algorithm [89]. The reason behind choosing pBFT in our
architecture is its distinguished high-performance Byzantine state machine replication and its capability of processing thousands of requests per second with sub-millisecond
increased latency. Also, pBFT is effective in providing highthroughput transactions [90]. In order to further increase the
throughput of the network, we suggest using a consensus
round every specific number of mined blocks and perform
blockchain sharding. Here, miners are split into smaller
groups called shards capable of processing transactions
simultaneously resulting in higher throughput [90].
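As an illustration of the vote counting at the heart of pBFT, the sketch below (a deliberate simplification under the usual n = 3f + 1 assumption) shows how a replica decides that a block is prepared and then finalized; it omits the pre-prepare phase, view changes, and message authentication.

```python
from collections import defaultdict

class PBFTReplica:
    """Minimal pBFT vote counting: a block is prepared once 2f matching
    PREPARE messages arrive, and finalized immediately (no extra
    confirmations) once 2f + 1 matching COMMIT messages arrive."""

    def __init__(self, n_replicas: int):
        self.f = (n_replicas - 1) // 3    # tolerated Byzantine replicas
        self.prepares = defaultdict(set)  # block hash -> voter ids
        self.commits = defaultdict(set)

    def on_prepare(self, block_hash: str, sender: int) -> bool:
        self.prepares[block_hash].add(sender)
        return len(self.prepares[block_hash]) >= 2 * self.f

    def on_commit(self, block_hash: str, sender: int) -> bool:
        self.commits[block_hash].add(sender)
        return len(self.commits[block_hash]) >= 2 * self.f + 1

replica = PBFTReplica(n_replicas=4)  # tolerates f = 1 faulty replica
```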
_C. STRENGTHS OF PBFT_
The Practical Byzantine Fault Tolerance (pBFT) algorithm has many strong points that support our choice to adopt it in our architecture. The main strengths are the following:
1) Quick transaction finalization: the structure of pBFT implies that transactions can be validated and finalized without the need for multiple confirmations. Also, there is no waiting period after a block is included in the chain to ensure that a transaction is secured [91].
2) Energy efficiency: pBFT does not require intensive energy consumption like POW, as described in Section VIII. Even if the system runs POW roughly every 100 mined blocks to prevent Sybil attacks, the increase in energy consumption is not significant [91].
3) Low reward variation: miner incentives are one of the issues facing the integration of IoT with blockchain, which was discussed previously in Section VIII. pBFT solves this issue because it requires a collective decision, reached by voting on records through signed messages, unlike POW, in which only the miner who adds the next block gets rewarded. In a pBFT network, every node can be incentivized; therefore, there is no fear of nodes or miners leaving the network due to unacceptable rewards [91].
_D. THE WEAKNESS OF PBFT_
Although pBFT has proved to be reliable and strong, the following explains its main weakness.

1) Sybil attacks: pBFT consensus can be affected by Sybil attacks, in which a single party controls or manipulates a large number of nodes, enabling it to control and modify the blockchain and thus compromise security. This threat is lower in large networks. However, considering the scalability problem of pBFT, the solution is to use sharding (a minimal shard-assignment sketch follows) or to combine pBFT with another type of consensus algorithm, as suggested above [91].
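The shard assignment mentioned above can be as simple as deterministically hashing each miner’s identifier into one of the shard groups; the sketch below assumes SHA-256-based assignment purely for illustration.

```python
import hashlib

def assign_shard(miner_id: str, n_shards: int) -> int:
    """Deterministically map a miner to one of n_shards groups so the
    groups can process transactions in parallel."""
    digest = hashlib.sha256(miner_id.encode()).hexdigest()
    return int(digest, 16) % n_shards

miners = [f"miner-{i}" for i in range(12)]
shards = {m: assign_shard(m, n_shards=3) for m in miners}
```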
_E. INTEGRATION CHALLENGES AND FULFILLMENT OF_
_DESIGN REQUIREMENTS IN THE PROPOSED_
_IOT-BLOCKCHAIN ARCHITECTURE_

In this section, we discuss how the previously specified design principles are satisfied and present solutions to many of the integration challenges.
- Information created by clients’ smart devices and sensors, such as videos and photos, GPS data, health data from wearable devices, and smart-home statuses detected by sensors, usually contains gigantic amounts of valuable data that, when analyzed, will benefit individuals and societies as a whole. Big-data analysis was one of the discussed issues facing blockchain. We propose dew computing as a solution to this problem: dew servers shall be able to store and participate in big-data analysis, which can be performed neither on IoT devices, due to their constrained resources, nor in blockchain alone, due to the encryption dilemma.
- A computation-offloading service was included in our architecture to relieve the less capable IoT devices of intensive and heavy computation tasks by shifting them to more powerful dew servers. This solves the problem of the computationally and power-demanding POW consensus algorithm, meaning that the consensus mechanism will be deployed in the dew-blockchain layer. The same problem is further tackled by adopting the pBFT algorithm, which consumes less power.
- Also, resource-constrained mobile devices that communicate their data over wireless links represent a security vulnerability, as discussed earlier in Section VIII, and shall benefit from the deployed computation-offloading service. With dew servers deployed at the edge of the network, closer to end devices, dew-blockchain-layer resources can take the processing load off the devices. Tasks such as hash computations, encryption and decryption, and the consensus mechanism are offloaded from the devices and outsourced to dew servers for execution (a minimal offloading sketch follows this list). Blockchain safeguards the security aspects of this module in case a computation operation requires assignment to multiple dew nodes. Relieved of such operations, end devices’ battery lifetimes increase and task execution speeds up, with increased efficiency and security.
- Outsourcing decentralized data storage, which outweighs the centralized storage of conventional cloud computing. The decentralized data storage provided by the integration of dew computing and blockchain exploits the benefits of both technologies to increase storage sizes, heighten the security of stored data, and keep data closer to the end-devices layer. Storing data on dew servers close to consumers shall decrease communication latency and elevate system availability and performance. The large storage capacity offered by dew computing complements the validated security of blockchain to ensure decentralized storage management in a peer-to-peer environment, without entrusting the data to a centralized authority. The same applies to the cloudlet-blockchain layer, which, although not close to consumers, shares with the dew-blockchain layer the capability to provide access to decentralized and secured data-storage facilities.
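As referenced in the computation-offloading item above, the following sketch illustrates the offloading pattern in its simplest form: a constrained device ships a hash computation to a dew server instead of running it locally. The class and task names are hypothetical; a deployment would add transport, authentication, and result verification.

```python
import hashlib

class DewServer:
    """Stand-in for a dew-layer node that executes offloaded tasks."""
    def execute(self, task: dict) -> str:
        if task["op"] == "sha256":
            return hashlib.sha256(task["data"]).hexdigest()
        raise ValueError(f"unsupported op: {task['op']}")

class IoTDevice:
    """Resource-constrained device that offloads heavy work."""
    def __init__(self, dew_server: DewServer):
        self.dew = dew_server

    def hash_reading(self, reading: bytes) -> str:
        # Ship the hashing task to the dew layer instead of computing
        # it locally, saving battery and CPU on the device.
        return self.dew.execute({"op": "sha256", "data": reading})

device = IoTDevice(DewServer())
digest = device.hash_reading(b"temperature=21.5C")
```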
**XII. CONCLUSION**
The IoT network is growing tremendously in terms of types of applications and number of devices. This has created many challenges that need urgent solutions to enable exploiting the full potential of IoT in the future. On the other hand, blockchain technology has appeared as a distributed, immutable, transparent, decentralized, and secured technology that has a promising role in many sectors. The characteristics and structure of blockchain make it a strong candidate to solve IoT system issues through integration. The integration process has captured the attention of many researchers, who have come up with different IoT-blockchain integrated architectures and designs. However, none of the proposed studies was capable of solving most of the challenges or exploring the full potential of blockchain to benefit from it in the IoT paradigm. This research proposes a new architecture based on a three-layer system consisting of a devices layer, a dew-blockchain layer, and a cloudlet-blockchain layer. It is the only architecture that utilizes dew computing in the integration between IoT and blockchain. The novelty of including dew and cloudlet computing serves the final design by bringing computing resources as close as possible to the IoT devices, so that traffic in the core network can be secured with the minimum end-to-end delay between the IoT devices and the computing resources. In addition to adopting cloudlet computing as a means of bringing servers closer to IoT devices, the proposed architecture reduces the end-to-end delay by utilizing private and consortium blockchains, which require a transaction-verification time on the order of milliseconds, as opposed to public blockchains, which need a transaction-approval time on the order of minutes [13]. In addition, it is the only design that does not include a cloud layer and instead depends on distributed cloudlets for higher-level computational tasks and ultimate decentralization, in addition to reducing the end-to-end message delay. Our architectural design solves many problems facing IoT systems, such as the constraints of IoT devices, big-data analysis, data privacy and security in IoT systems, data storage, and intensive computational and analytical requirements, as well as core-network traffic.
**REFERENCES**
[1] M. Díaz, C. Martín, and B. Rubio, ‘‘State-of-the-art, challenges, and open
issues in the integration of Internet of Things and cloud computing,’’
_J. Netw. Comput. Appl., vol. 67, pp. 99–117, May 2016._
[2] M. Ammar, G. Russello, and B. Crispo, ‘‘Internet of Things: A survey on
the security of IoT frameworks,’’ J. Inf. Secur. Appl., vol. 38, pp. 8–27,
Feb. 2018.
[3] J. Rivera and R. van der Meulen, ‘‘Forecast alert: Internet of Things—
Endpoints and associated services, worldwide,’’ Tech. Rep., 2016.
[4] N. M. Kumar and P. K. Mallick, ‘‘Blockchain technology for security issues
and challenges in IoT,’’ Procedia Comput. Sci., vol. 132, pp. 1815–1823,
2018.
[5] A. Zanella, N. Bui, A. Castellani, L. Vangelista, and M. Zorzi, ‘‘Internet of
Things for smart cities,’’ IEEE Internet Things J., vol. 1, no. 1, pp. 22–32,
Feb. 2014.
[6] J. Gubbi, R. Buyya, S. Marusic, and M. Palaniswami, ‘‘Internet of Things
(IoT): A vision, architectural elements, and future directions,’’ Future
_Gener. Comput. Syst., vol. 29, no. 7, pp. 1645–1660, Sep. 2013._
[7] N. Kshetri, ‘‘Can blockchain strengthen the Internet of Things?’’ IT Prof.,
vol. 19, no. 4, pp. 68–72, 2017.
[8] M. Amadeo, C. Campolo, J. Quevedo, D. Corujo, A. Molinaro, A. Iera,
R. L. Aguiar, and A. V. Vasilakos, ‘‘Information-centric networking for
the Internet of Things: Challenges and opportunities,’’ IEEE Netw., vol. 30,
no. 2, pp. 92–100, Mar. 2016.
[9] M. A. Khan and K. Salah, ‘‘IoT security: Review, blockchain solutions,
and open challenges,’’ Future Gener. Comput. Syst., vol. 82, pp. 395–411,
May 2018.
[10] M. Crosby, P. Pattanayak, S. Verma, and V. Kalyanaraman, ‘‘Blockchain
technology: Beyond bitcoin,’’ Appl. Innov., vol. 2, nos. 6–10, p. 71, 2016.
[11] D. Puthal, S. Nepal, R. Ranjan, and J. Chen, ‘‘Threats to networking cloud
and edge datacenters in the Internet of Things,’’ IEEE Cloud Comput.,
vol. 3, no. 3, pp. 64–71, May 2016.
[12] M. Banerjee, J. Lee, and K.-K.-R. Choo, ‘‘A blockchain future for Internet
of Things security: A position paper,’’ Digit. Commun. Netw., vol. 4, no. 3,
pp. 149–160, Aug. 2018.
[13] F. Casino, T. K. Dasaklis, and C. Patsakis, ‘‘A systematic literature review
of blockchain-based applications: Current status, classification and open
issues,’’ Telematics Informat., vol. 36, pp. 55–81, Mar. 2019.
[14] A. Hughes, A. Park, J. Kietzmann, and C. Archer-Brown, ‘‘Beyond bitcoin:
What blockchain and distributed ledger technologies mean for firms,’’ Bus.
_Horizons, vol. 62, no. 3, pp. 273–281, May 2019._
[15] D. Macrinici, C. Cartofeanu, and S. Gao, ‘‘Smart contract applications
within blockchain technology: A systematic mapping study,’’ Telematics
_Informat., vol. 35, no. 8, pp. 2337–2354, Dec. 2018._
[16] Y. Wang, M. Singgih, J. Wang, and M. Rit, ‘‘Making sense of blockchain
technology: How will it transform supply chains?’’ Int. J. Prod. Econ.,
vol. 211, pp. 221–236, May 2019.
[17] W. Mougayar, The Business Blockchain: Promise, Practice, and Applica_tion of the Next Internet Technology. Hoboken, NJ, USA: Wiley, 2016._
[18] M. Swan, Blockchain: Blueprint for a New Economy. Newton, MA, USA:
O’Reilly Media, 2015.
[19] K. Sultan, U. Ruhi, and R. Lakhani, ‘‘Conceptualizing blockchains: Characteristics & applications,’’ 2018, arXiv:1806.03693. [Online]. Available:
http://arxiv.org/abs/1806.03693
[20] S. Nakamoto, ‘‘Bitcoin: A peer-to-peer electronic cash system,’’ Manubot,
White Paper, 2019. [Online]. Available: https://git.dhimmel.com/bitcoinwhitepaper/
[21] Q. Tang and L. M. Tang, ‘‘Toward a distributed carbon ledger for carbon
emissions trading and accounting for corporate carbon management,’’
_J. Emerg. Technol. Accounting, vol. 16, no. 1, pp. 37–46, Mar. 2019._
[22] Hotspot for Blockchain Innovation, I. A. Deloitte, Israel, 2016.
[23] Learn _About_ _Ethereum,_ Ethereum, 2020. [Online]. Available:
https://ethereum.org/en/about/
[24] B. Fu, Z. Shu, and X. Liu, ‘‘Blockchain enhanced emission trading framework in fashion apparel manufacturing industry,’’ Sustainability, vol. 10,
no. 4, p. 1105, Apr. 2018.
[25] V. Buterin, ‘‘A next-generation smart contract and decentralized application platform,’’ White Paper 37, 2014, vol. 3.
[26] W. Shao, Z. Wang, X. Wang, K. Qiu, C. Jia, and C. Jiang, ‘‘LSC: Online
auto-update smart contracts for fortifying blockchain-based log systems,’’
_Inf. Sci., vol. 512, pp. 506–517, Feb. 2020._
[27] K. Wüst and A. Gervais, ‘‘Do you need a blockchain?’’ in Proc. Crypto
_Valley Conf. Blockchain Technol. (CVCBT), Jun. 2018, pp. 45–54._
[28] K. Christidis and M. Devetsikiotis, ‘‘Blockchains and smart contracts for
the Internet of Things,’’ IEEE Access, vol. 4, pp. 2292–2303, 2016.
[29] A. Kawa and A. Maryniak, SMART Supply Network. Berlin, Germany:
Springer, 2019.
[30] A. Reyna, C. Martín, J. Chen, E. Soler, and M. Díaz, ‘‘On blockchain
and its integration with IoT. Challenges and opportunities,’’ Future Gener.
_Comput. Syst., vol. 88, pp. 173–190, Nov. 2018._
[31] H. Sun, S. Hua, E. Zhou, B. Pi, J. Sun, and K. Yamashita, ‘‘Using Ethereum
blockchain in Internet of Things: A solution for electric vehicle battery
refueling,’’ in Proc. Int. Conf. Blockchain. Berlin, Germany: Springer,
2018, pp. 3–17.
[32] I. Makhdoom, M. Abolhasan, H. Abbas, and W. Ni, ‘‘Blockchain’s adoption in IoT: The challenges, and a way forward,’’ J. Netw. Comput. Appl.,
vol. 125, pp. 251–279, Jan. 2019.
[33] J. F. Galvez, J. C. Mejuto, and J. Simal-Gandara, ‘‘Future challenges on
the use of blockchain for food traceability analysis,’’ TrAC Trends Anal.
_Chem., vol. 107, pp. 222–232, Oct. 2018._
[34] P. Veena, S. Panikkar, S. Nair, and P. Brody, ‘‘Empowering the edge
practical insights on a decentralized Internet of Things,’’ IBM Inst. Bus.
Value, 2015, vol. 17.
[35] P. K. Sharma, M.-Y. Chen, and J. H. Park, ‘‘A software defined fog node
based distributed blockchain cloud architecture for IoT,’’ IEEE Access,
vol. 6, pp. 115–124, 2018.
[36] M. Samaniego and R. Deters, ‘‘Hosting virtual IoT resources on edge-hosts
with blockchain,’’ in Proc. IEEE Int. Conf. Comput. Inf. Technol. (CIT),
Dec. 2016, pp. 116–119.
[37] D. Kundu, ‘‘Blockchain and trust in a smart city,’’ Environ. Urbanization
_ASIA, vol. 10, no. 1, pp. 31–43, Mar. 2019._
[38] M. J. Casey and P. Vigna, ‘‘In blockchain we trust,’’ MIT Technol. Rev.,
vol. 121, no. 3, pp. 10–16, 2018.
[39] A. M. Antonopoulos, Mastering Bitcoin: Unlocking Digital Cryptocurren_cies. Newton, MA, USA: O’Reilly Media, 2014._
[40] N. Taleb, ‘‘Prospective applications of blockchain and bitcoin cryptocurrency technology,’’ TEM J., vol. 8, no. 1, pp. 48–55, 2019.
[41] Q. Lu and X. Xu, ‘‘Adaptable blockchain-based systems: A case study for
product traceability,’’ IEEE Softw., vol. 34, no. 6, pp. 21–27, Nov. 2017.
[42] P. Wang, R. X. Gao, and Z. Fan, ‘‘Cloud computing for cloud manufacturing: Benefits and limitations,’’ J. Manuf. Sci. Eng., vol. 137, no. 4, pp. 1–9,
Aug. 2015.
[43] Z. Zheng, S. Xie, H.-N. Dai, X. Chen, and H. Wang, ‘‘Blockchain challenges and opportunities: A survey,’’ Int. J. Web Grid Services, vol. 14,
no. 4, pp. 352–375, 2018.
[44] I. Friese, J. Heuer, and N. Kong, ‘‘Challenges from the identities of things:
Introduction of the identities of things discussion group within kantara initiative,’’ in Proc. IEEE World Forum Internet Things (WF-IoT), Mar. 2014,
pp. 1–4.
[45] P. N. Mahalle, B. Anggorojati, N. R. Prasad, and R. Prasad, ‘‘Identity
authentication and capability based access control (IACAC) for the Internet
of Things,’’ J. Cyber Secur. Mobility, vol. 1, no. 4, pp. 309–348, 2013.
[46] H. F. Atlam, A. Alenezi, M. O. Alassafi, and G. B. Wills, ‘‘Blockchain
with Internet of Things: Benefits, challenges, and future directions,’’ Int.
_J. Intell. Syst. Appl., vol. 10, no. 6, pp. 40–48, Jun. 2018._
[47] A. Panarello, N. Tapas, G. Merlino, F. Longo, and A. Puliafito,
‘‘Blockchain and IoT integration: A systematic survey,’’ Sensors, vol. 18,
no. 8, p. 2575, Aug. 2018.
[48] B. Bera, D. Chattaraj, and A. K. Das, ‘‘Designing secure blockchain-based
access control scheme in IoT-enabled Internet of drones deployment,’’
_Comput. Commun., vol. 153, pp. 229–249, Mar. 2020._
[49] M. Zhaofeng, W. Xiaochang, D. K. Jain, H. Khan, G. Hongmin, and
W. Zhen, ‘‘A blockchain-based trusted data management scheme in edge
computing,’’ IEEE Trans. Ind. Informat., vol. 16, no. 3, pp. 2013–2021,
Mar. 2020.
[50] K. Xiao, Z. Gao, W. Shi, X. Qiu, Y. Yang, and L. Rui, ‘‘EdgeABC: An
architecture for task offloading and resource allocation in the Internet of
Things,’’ Future Gener. Comput. Syst., vol. 107, pp. 498–508, Jun. 2020.
[51] A. Moinet, B. Darties, and J.-L. Baril, ‘‘Blockchain based trust & authentication for decentralized sensor networks,’’ 2017, arXiv:1706.01730.
[Online]. Available: http://arxiv.org/abs/1706.01730
[52] H.-N. Dai, Z. Zheng, and Y. Zhang, ‘‘Blockchain for Internet of Things: A
survey,’’ IEEE Internet Things J., vol. 6, no. 5, pp. 8076–8094, Oct. 2019.
[53] S. Guo, Y. Dai, S. Guo, X. Qiu, and F. Qi, ‘‘Blockchain meets edge computing: Stackelberg game and double auction based task offloading for mobile
blockchain,’’ IEEE Trans. Veh. Technol., vol. 69, no. 5, pp. 5549–5561,
May 2020.
[54] S. He, Q. Tang, C. Q. Wu, and X. Shen, ‘‘Decentralizing IoT management
systems using blockchain for censorship resistance,’’ IEEE Trans. Ind.
_Informat., vol. 16, no. 1, pp. 715–727, Jan. 2020._
[55] Nyamtiga, Sicato, Rathore, Sung, and Park, ‘‘Blockchain-based secure
storage management with edge computing for IoT,’’ Electronics, vol. 8,
no. 8, p. 828, Jul. 2019.
[56] H. Hosseinian, H. Shahinzadeh, G. B. Gharehpetian, Z. Azani, and
M. Shaneh, ‘‘Blockchain outlook for deployment of IoT in distribution
networks and smart homes,’’ Int. J. Electr. Comput. Eng. (IJECE), vol. 10,
no. 3, p. 2787, Jun. 2020.
[57] B. Yu, J. Wright, S. Nepal, L. Zhu, J. Liu, and R. Ranjan, ‘‘IoTChain:
Establishing trust in the Internet of Things ecosystem using blockchain,’’
_IEEE Cloud Comput., vol. 5, no. 4, pp. 12–23, Jul. 2018._
[58] Y. Chen, X. Wang, Y. Yang, and H. Li, ‘‘Location-aware Wi-Fi authentication scheme using smart contract,’’ Sensors, vol. 20, no. 4, p. 1062,
Feb. 2020.
[59] Y. Kurt Peker, X. Rodriguez, J. Ericsson, S. J. Lee, and A. J. Perez, ‘‘A cost
analysis of Internet of Things sensor data storage on blockchain via smart
contracts,’’ Electronics, vol. 9, no. 2, p. 244, Feb. 2020.
[60] D. Mazzei, G. Baldi, G. Fantoni, G. Montelisciani, A. Pitasi, L. Ricci, and
L. Rizzello, ‘‘A blockchain tokenizer for industrial IOT trustless applications,’’ Future Gener. Comput. Syst., vol. 105, pp. 432–445, Apr. 2020.
[61] S. Huckle, R. Bhattacharya, M. White, and N. Beloff, ‘‘Internet of Things,
blockchain and shared economy applications,’’ Procedia Comput. Sci.,
vol. 98, pp. 461–466, 2016.
[62] S. Zhao, S. Li, and Y. Yao, ‘‘Blockchain enabled industrial Internet of
Things technology,’’ IEEE Trans. Comput. Social Syst., vol. 6, no. 6,
pp. 1442–1453, Dec. 2019.
[63] H. Yao, T. Mai, J. Wang, Z. Ji, C. Jiang, and Y. Qian, ‘‘Resource trading
in blockchain-based industrial Internet of Things,’’ IEEE Trans. Ind. Infor_mat., vol. 15, no. 6, pp. 3602–3609, Jun. 2019._
[64] J. Wu, M. Dong, K. Ota, J. Li, and W. Yang, ‘‘Application-aware consensus
management for software-defined intelligent blockchain in IoT,’’ IEEE
_Netw., vol. 34, no. 1, pp. 69–75, Jan. 2020._
[65] V. A. Siris, D. Dimopoulos, N. Fotiou, S. Voulgaris, and G. C. Polyzos,
‘‘Decentralized authorization in constrained IoT environments exploiting interledger mechanisms,’’ Comput. Commun., vol. 152, pp. 243–251,
Feb. 2020.
[66] A. Rovira-Sugranes and A. Razi, ‘‘Optimizing the age of information for
blockchain technology with applications to IoT sensors,’’ IEEE Commun.
_Lett., vol. 24, no. 1, pp. 183–187, Jan. 2020._
[67] S. Aich, S. Chakraborty, M. Sain, H.-I. Lee, and H.-C. Kim, ‘‘A review
on benefits of IoT integrated blockchain based supply chain management
implementations across different sectors with case study,’’ in Proc. 21st
_Int. Conf. Adv. Commun. Technol. (ICACT), Feb. 2019, pp. 138–141._
[68] J. Lin, Z. Shen, A. Zhang, and Y. Chai, ‘‘Blockchain and IoT based food
traceability for smart agriculture,’’ in Proc. 3rd Int. Conf. Crowd Sci.
_Eng. (ICCSE), 2018, pp. 1–6._
[69] F. Knirsch, A. Unterweger, and D. Engel, ‘‘Implementing a blockchain
from scratch: Why, how, and what we learned,’’ EURASIP J. Inf. Secur.,
vol. 2019, no. 1, p. 2, Dec. 2019.
[70] F. Samie, V. Tsoutsouras, L. Bauer, S. Xydis, D. Soudris, and J. Henkel,
‘‘Computation offloading and resource allocation for low-power IoT edge
devices,’’ in Proc. IEEE 3rd World Forum Internet Things (WF-IoT),
Dec. 2016, pp. 7–12.
[71] J. Lin, W. Yu, N. Zhang, X. Yang, H. Zhang, and W. Zhao, ‘‘A survey on Internet of Things: Architecture, enabling technologies, security
and privacy, and applications,’’ IEEE Internet Things J., vol. 4, no. 5,
pp. 1125–1142, Oct. 2017.
[72] A. Torkaman and M. A. Seyyedi, ‘‘Analyzing IoT reference architecture
models,’’ Int. J. Comput. Sci. Softw. Eng., vol. 5, no. 8, p. 154, 2016.
[73] M. Conti, E. Sandeep Kumar, C. Lal, and S. Ruj, ‘‘A survey on security
and privacy issues of bitcoin,’’ IEEE Commun. Surveys Tuts., vol. 20, no. 4,
pp. 3416–3452, 4th Quart., 2018.
[74] A. Dorri, S. S. Kanhere, and R. Jurdak, ‘‘MOF-BC: A memory optimized
and flexible blockchain for large scale networks,’’ Future Gener. Comput.
_Syst., vol. 92, pp. 357–373, Mar. 2019._
[75] V. Grover, R. H. L. Chiang, T.-P. Liang, and D. Zhang, ‘‘Creating strategic
business value from big data analytics: A research framework,’’ J. Manage.
_Inf. Syst., vol. 35, no. 2, pp. 388–423, Apr. 2018._
[76] H.-N. Dai, H. Wang, G. Xu, J. Wan, and M. Imran, ‘‘Big data analytics for
manufacturing Internet of Things: Opportunities, challenges and enabling
technologies,’’ Enterprise Inf. Syst., vol. 14, nos. 9–10, pp. 1279–1303,
2019.
[77] Hyperledger, T. L. Found., 2020.
[78] V. Daza, R. Di Pietro, I. Klimek, and M. Signorini, ‘‘CONNECT: CONtextual NamE disCovery for blockchain-based services in the IoT,’’ in Proc.
_IEEE Int. Conf. Commun. (ICC), May 2017, pp. 1–6._
[79] B. Carson, G. Romanelli, P. Walsh, and A. Zhumaev, ‘‘Blockchain beyond
the hype: What is the strategic business value,’’ McKinsey Company, Tech.
Rep., 2018, pp. 1–13.
[80] A. Deshpande, K. Stewart, L. Lepetit, and S. Gunashekar, ‘‘Distributed ledger technologies/blockchain: Challenges, opportunities and the
prospects for standards,’’ Overview Rep. Brit. Standards Inst. (BSI), vol. 40,
p. 40, May 2017.
[81] Blockchain and Distributed Ledger Technologies, ISO, Geneva, Switzerland, 2016.
[82] E. U. A. for Cybersecurity, Eur. Commission, 2020. [Online]. Available:
https://digital-strategy.ec.europa.eu/en/policies/nis-directive
[83] The General Data Protection Regulation (GDPR), European Patent Office,
Munich, Germany, 2020.
[84] Y. Pan, P. Thulasiraman, and Y. Wang, ‘‘Overview of cloudlet, fog computing, edge computing, and dew computing,’’ in Proc. 3rd Int. Workshop
_Dew Comput., 2018, pp. 20–23._
[85] K. Skala, D. Davidovic, E. Afgan, I. Sovic, and Z. Sojat, ‘‘Scalable
distributed computing hierarchy: Cloud, fog and dew computing,’’ Open
_J. Cloud Comput., vol. 2, no. 1, pp. 16–24, 2015._
[86] Y. Wang, ‘‘Definition and categorization of dew computing,’’ Open
_J. Cloud Comput., vol. 3, no. 1, pp. 1–7, 2016._
[87] P. P. Ray, ‘‘An introduction to dew computing: Definition, concept and
implications,’’ IEEE Access, vol. 6, pp. 723–737, 2018.
[88] Y. Wang, ‘‘A blockchain system with lightweight full node based on dew
computing,’’ Internet Things, vol. 11, Sep. 2020, Art. no. 100184.
[89] What is Consensus Algorithm in Blockchain & Different Types of Consen_sus Models, Medium, BangBit Technol., Bengaluru, India, 2018._
[90] M. Vukolić, ‘‘The quest for scalable blockchain fabric: Proof-of-work vs.
BFT replication,’’ in Proc. Int. Workshop Open Problems Netw. Secur.
Berlin, Germany: Springer, 2015, pp. 112–125.
[91] What is Practical Byzantine Fault Tolerance (pBFT), Crush Crypto,
Vancouver, BC, Canada, 2020.
ALIA AL SADAWI received the B.Sc. degree
in electrical and electronics engineering, and the
M.Sc. degree in engineering systems management
from the American University of Sharjah, United
Arab Emirates, in 2016, where she is currently
pursuing the Ph.D. degree. She is also a Graduate
Teaching Assistant working with the American
University of Sharjah. She has been involved in multiple projects related to decision making, sustainability in smart cities, and blockchain applications in supply chains, logistics, and carbon trading. Her research interests include
blockchain integration with the IoT and their applications in the smart
industrial sector.
MOHAMED S. HASSAN received the M.Sc.
degree in electrical engineering from the University of Pennsylvania, Philadelphia, PA, USA,
in 2000, and the Ph.D. degree in electrical and
computer engineering from the University of Arizona, USA, in 2005. He is currently a Full Professor of electrical engineering with the American
University of Sharjah. He was involved in multiple
projects related to free space optical communications, electromagnetic shielding, demand response
and smart grids, anti-static flooring and fiber optic sensors for infrastructure
health monitoring applications in addition to EV wireless charging systems.
His research interests include multimedia communications and networking,
wireless communications, cognitive radios, resource allocation and performance evaluation of wired networks, and next generation wireless systems.
MALICK NDIAYE received the M.S. degree in
quantitative methods in economics, optimization
and strategic analysis from the University of Paris
1 Sorbonne, France, and the Ph.D. degree in operations research from the University of Burgundy,
France. He has worked with the University of
Birmingham, U.K., and the King Fahd University
of Petroleum and Minerals, Saudi Arabia, before
joining the American University of Sharjah. His
recent scholarly work focuses on developing lastmile delivery routing solutions, vehicle routing optimization in cold supply
chain, and the use of emerging technology to improve logistics systems. His
research interests include operations research, supply chain, and logistics
systems management. He is a Certified Supply Chain Professional from the
American Association for Operations Management (APICS).
| 23,467
|
{
"disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.1109/ACCESS.2021.3070555?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1109/ACCESS.2021.3070555, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "CCBY",
"status": "GOLD",
"url": "https://ieeexplore.ieee.org/ielx7/6287639/9312710/09393912.pdf"
}
| 2,021
|
[
"JournalArticle",
"Review"
] | true
| null |
[
{
"paperId": "104196bdb638606edfeca400091d95e75642359b",
"title": "Blockchain Technology for Security Issues and Challenges in IOT"
},
{
"paperId": "88e384fdc412ccf098812dc01e8cce0200c3f998",
"title": "A Blockchain System with Lightweight Full Node Based on Dew Computing"
},
{
"paperId": "fd4df295b1dad849490b78efe9a1671436df6d09",
"title": "Blockchain outlook for deployment of IoT in distribution networks and smart homes"
},
{
"paperId": "16f9a06880b3689160bb8f7f9943f9d182678ce5",
"title": "EdgeABC: An architecture for task offloading and resource allocation in the Internet of Things"
},
{
"paperId": "bb0ed76f0cbc5696f6930d036072cff1b5fb4cdc",
"title": "A Blockchain Tokenizer for Industrial IOT trustless applications"
},
{
"paperId": "d18c27c186bf78a4f98bc031c2cce7a5bd20b5cb",
"title": "Blockchain Meets Edge Computing: Stackelberg Game and Double Auction Based Task Offloading for Mobile Blockchain"
},
{
"paperId": "3170feb1a329dbcc416b15264e74772184ff247b",
"title": "Designing secure blockchain-based access control scheme in IoT-enabled Internet of Drones deployment"
},
{
"paperId": "1ff76ab0fcf22110df62337d462e15d79a2a2593",
"title": "A Blockchain-Based Trusted Data Management Scheme in Edge Computing"
},
{
"paperId": "375125029b085e70a109491656b69aa01bc2a166",
"title": "A Cost Analysis of Internet of Things Sensor Data Storage on Blockchain via Smart Contracts"
},
{
"paperId": "6a5d75b229c391477413f2e18b66a8228290617f",
"title": "LSC: Online auto-update smart contracts for fortifying blockchain-based log systems"
},
{
"paperId": "708f60698e348625ad6c3e829fd6f5064103a52c",
"title": "Decentralized authorization in constrained IoT environments exploiting interledger mechanisms"
},
{
"paperId": "4aeb86d8a680f23d8daad15d0c5f897fe4ec5b53",
"title": "Location-Aware Wi-Fi Authentication Scheme Using Smart Contract"
},
{
"paperId": "2f3889aee4eb5ea4265126404b6d73984240b1fa",
"title": "Application-Aware Consensus Management for Software-Defined Intelligent Blockchain in IoT"
},
{
"paperId": "44403eeb9b14c1bef010024029a1eff0f2559083",
"title": "Optimizing the Age of Information for Blockchain Technology With Applications to IoT Sensors"
},
{
"paperId": "b562d09ef46f29b5c27ca5cf200f8c0f85054a19",
"title": "Decentralizing IoT Management Systems Using Blockchain for Censorship Resistance"
},
{
"paperId": "da37f5acd3ac40af1d7643f0e1c7c392a975a2a8",
"title": "Blockchain-Based Secure Storage Management with Edge Computing for IoT"
},
{
"paperId": "816656b1b911425b08e8a0b2cde1dee46df07dc9",
"title": "Blockchain Enabled Industrial Internet of Things Technology"
},
{
"paperId": "3bc9bb1f2218dcbd15c3b7cdfcb43077a3f30779",
"title": "Big data analytics for manufacturing internet of things: opportunities, challenges and enabling technologies"
},
{
"paperId": "7fe4bbce603abe533f688b888a51d597db600609",
"title": "Blockchain for Internet of Things: A Survey"
},
{
"paperId": "a27da4a7af6ded1cbddab51dd0a7ce90adc1d80e",
"title": "Making sense of blockchain technology: How will it transform supply chains?"
},
{
"paperId": "c4a6bc47faf27e35df3753476fb9ba111d6d7380",
"title": "Beyond Bitcoin: What blockchain and distributed ledger technologies mean for firms"
},
{
"paperId": "ccdffddf290486c3399f116189c4227f45cf95ab",
"title": "Resource Trading in Blockchain-Based Industrial Internet of Things"
},
{
"paperId": "4c0945cb52d0734b25ecea49e3ae1c1b243fca66",
"title": "A systematic literature review of blockchain-based applications: Current status, classification and open issues"
},
{
"paperId": "cc1fcb2a0bfbe5ce103310cf1482a41102954b07",
"title": "Toward a Distributed Carbon Ledger for Carbon Emissions Trading and Accounting for Corporate Carbon Management"
},
{
"paperId": "bf42a7ba4af9585d3b34abed099f4823edc67343",
"title": "Blockchain and Trust in a Smart City"
},
{
"paperId": "4f9298f2d43f22f33f44df3f7e816b88bbc7a7a2",
"title": "Implementing a blockchain from scratch: why, how, and what we learned"
},
{
"paperId": "40f5f2b05042d5dd81f1d22689534b453bfb46ff",
"title": "Prospective Applications of Blockchain and Bitcoin Cryptocurrency Technology"
},
{
"paperId": "f6eaacfeb2d3d35c35f3ec80f760c14f82b190cf",
"title": "A Review on Benefits of IoT Integrated Blockchain based Supply Chain Management Implementations across Different Sectors with Case Study"
},
{
"paperId": "1cccd591e7a72501ff4e418e148d3e0848479a35",
"title": "Blockchain's adoption in IoT: The challenges, and a way forward"
},
{
"paperId": "f5a21fb87d88b4510dd4d42fbdc52a674a592ea6",
"title": "Smart contract applications within blockchain technology: A systematic mapping study"
},
{
"paperId": "305edd92f237f8e0c583a809504dcec7e204d632",
"title": "Blockchain challenges and opportunities: a survey"
},
{
"paperId": "420d8eb0d8f9829506a7fef6140b87e045cfb402",
"title": "Future challenges on the use of blockchain for food traceability analysis"
},
{
"paperId": "383057f972b11b99cbc8c0d3e6c47170e9d95c1c",
"title": "Blockchain and IoT Integration: A Systematic Survey"
},
{
"paperId": "6d5eff0f4dcaf531a6129134119bc9613ce1b657",
"title": "Blockchain and IoT based Food Traceability for Smart Agriculture"
},
{
"paperId": "e98f86cba70ac1703f4a4808fa7f32467101dded",
"title": "IoTChain: Establishing Trust in the Internet of Things Ecosystem Using Blockchain"
},
{
"paperId": "5aced96a7dca18a3affe7f6a5260d81069eb3a34",
"title": "Using Ethereum Blockchain in Internet of Things: A Solution for Electric Vehicle Battery Refueling"
},
{
"paperId": "259568226fc10d2b82c4202359e732561a5a7461",
"title": "Conceptualizing Blockchains: Characteristics & Applications"
},
{
"paperId": "01157f7c700e92323a5933e00c71cf001a8bac88",
"title": "Blockchain with Internet of Things: Benefits, Challenges, and Future Directions"
},
{
"paperId": "ad9760ea1568263d4f670edc52e8d91875c95e42",
"title": "Do you Need a Blockchain?"
},
{
"paperId": "133000fa446dc445ba0a2930f9629ee19a62fb35",
"title": "General Data Protection Regulation (GDPR)"
},
{
"paperId": "e489fb6d194f0f3e5c1cbbebde898a178b38e454",
"title": "Blockchain Enhanced Emission Trading Framework in Fashion Apparel Manufacturing Industry"
},
{
"paperId": "e7c8fcbc24c73a576339e5f34f9f23f5ea732b3b",
"title": "Creating Strategic Business Value from Big Data Analytics: A Research Framework"
},
{
"paperId": "4ce13dcc8e5497755bbedcea39c4b9b7e14fe64f",
"title": "The internet of things"
},
{
"paperId": "051a8fae323f26a9bd2ca551940b4ba52b99c1be",
"title": "A Software Defined Fog Node Based Distributed Blockchain Cloud Architecture for IoT"
},
{
"paperId": "59e0611465824e84fab94b2bd332e8aec97a3d59",
"title": "MOF-BC: A Memory Optimized and Flexible BlockChain for Large Scale Networks"
},
{
"paperId": "7d49d03e62907c45ea3f84cdc626dcbd75dc03f0",
"title": "Adaptable Blockchain-Based Systems: A Case Study for Product Traceability"
},
{
"paperId": "81f6442e50890b990598e637a44b2d8d10329710",
"title": "IoT security: Review, blockchain solutions, and open challenges"
},
{
"paperId": "fa33053af899e6ca0da59ce04802c5437e43ee5a",
"title": "A blockchain future for internet of things security: a position paper"
},
{
"paperId": "e8709e2906361ade9064cc605b9c7637bec474a0",
"title": "Can Blockchain Strengthen the Internet of Things?"
},
{
"paperId": "999d34c323e1f49635eaf5ec434dda7dd1895edb",
"title": "Blockchain based trust & authentication for decentralized sensor networks"
},
{
"paperId": "1973f0f8815ec63abd21b0e191846efa2041125a",
"title": "A Survey on Security and Privacy Issues of Bitcoin"
},
{
"paperId": "33d0d59b44a93a985140a746a9300ebcb843d4a9",
"title": "CONNECT: CONtextual NamE disCovery for blockchain-based services in the IoT"
},
{
"paperId": "dd074ff6d9b58f614de69b03411875185d13a976",
"title": "Berlin"
},
{
"paperId": "cd3d4459ff0ff590a3e3258b0f774d6963cd4c90",
"title": "A Survey on Internet of Things: Architecture, Enabling Technologies, Security and Privacy, and Applications"
},
{
"paperId": "30c8135481e0e8d963bad9e55ae07ca5f6a7917e",
"title": "Hosting Virtual IoT Resources on Edge-Hosts with Blockchain"
},
{
"paperId": "55c3f95cd69c2f71663ab4e10d961a4f498ba388",
"title": "Computation offloading and resource allocation for low-power IoT edge devices"
},
{
"paperId": "ac013d1d21a659da4873164c43d005416e1bce7a",
"title": "Internet of Things, Blockchain and Shared Economy Applications"
},
{
"paperId": "7dcc27cf1886ddaad912a90f7efd525872fd5d2a",
"title": "Threats to Networking Cloud and Edge Datacenters in the Internet of Things"
},
{
"paperId": "c998aeb12b78122ec4143b608b517aef0aa2c821",
"title": "Blockchains and Smart Contracts for the Internet of Things"
},
{
"paperId": "e2e4ff38cb02c325a606d415705d6b7d7011b07b",
"title": "State-of-the-art, challenges, and open issues in the integration of Internet of things and cloud computing"
},
{
"paperId": "dd4f470f31169a77fb913f0365f63020cc45bee0",
"title": "Information-centric networking for the internet of things: challenges and opportunities"
},
{
"paperId": "6ff24a605b92a550cde2bd7afc34c4443728ff17",
"title": "Scalable Distributed Computing Hierarchy: Cloud, Fog and Dew Computing"
},
{
"paperId": "efb1a85cf540fd4f901a78100a2e450d484aebac",
"title": "The Quest for Scalable Blockchain Fabric: Proof-of-Work vs. BFT Replication"
},
{
"paperId": "5a279e92a6269eab4d73c5ebb03b86cfe6e6a5bd",
"title": "Cloud Computing for Cloud Manufacturing: Benefits and Limitations"
},
{
"paperId": "97fddbbfd681bce9eeb8e0a013353b4d5b2ba0db",
"title": "Blockchain: Blueprint for a New Economy"
},
{
"paperId": "148f044225ce7433e5fcf2c214b3bb48d94f37ef",
"title": "Mastering Bitcoin: Unlocking Digital Crypto-Currencies"
},
{
"paperId": "7c2549ec60d412ee7fc7f51104f45f3b48325b26",
"title": "Challenges from the Identities of Things: Introduction of the Identities of Things discussion group within Kantara initiative"
},
{
"paperId": "52f168c6c4f42294c4c9f9305bc88b6d25ffec9a",
"title": "Internet of Things for Smart Cities"
},
{
"paperId": "8fff1d6597b7c3163731ed00740dbb0c1eb6cf26",
"title": "Identity Authentication and Capability Based Access Control (IACAC) for the Internet of Things"
},
{
"paperId": "72c4d8b64a9959ea45677ca1955d3491ef0f1c62",
"title": "Internet of Things (IoT): A vision, architectural elements, and future directions"
},
{
"paperId": "75aa1f1a04b5f2bb6bf9afb662711121edde9eda",
"title": "A and V"
},
{
"paperId": null,
"title": "Transparency: data encapsulated in blocks are visible to all participants in the blockchain"
},
{
"paperId": null,
"title": "Learn About Ethereum"
},
{
"paperId": null,
"title": "E. U. A. for Cybersecurity , Eur. Commission"
},
{
"paperId": null,
"title": "What is Practical Byzantine Fault Tolerance (pBFT), Crush Crypto"
},
{
"paperId": null,
"title": "TheGeneralDataProtectionRegulation(GDPR) ,EuropeanPatentOffice, Munich, Germany"
},
{
"paperId": "7cd316505f52aa337ef8a2aff10bc6bf1df561d0",
"title": "and s"
},
{
"paperId": "c445bde08d08fc7567f2501def43acfd54c59d3f",
"title": "Designs"
},
{
"paperId": null,
"title": "SMART Supply Network . Berlin, Germany"
},
{
"paperId": null,
"title": "‘Overview of cloudlet"
},
{
"paperId": null,
"title": "What is Consensus Algorithm in Blockchain & Different Types of Consensus Models , Medium"
},
{
"paperId": "bb0ee456b747bf642e8a5da1e536f45e8ea643e3",
"title": "Blockchain technology for security issues and challenges in IoT"
},
{
"paperId": "7c249c8d49c17eb625ee207cd377d9b619aad1c1",
"title": "An Introduction to Dew Computing: Definition, Concept and Implications"
},
{
"paperId": null,
"title": "‘‘In blockchain we trust,’’"
},
{
"paperId": null,
"title": "‘‘Blockchain beyond thehype:Whatisthestrategicbusinessvalue"
},
{
"paperId": null,
"title": "Gunashekar, ‘‘Distributed ledger technologies/blockchain: Challenges, opportunities and the prospects for standards,’’OverviewRep"
},
{
"paperId": "942492c729e84a01a3ef85d63fec117feb9e4362",
"title": "Definition and Categorization of Dew Computing"
},
{
"paperId": "7e07327ce9bdd65791968f59b508e2047fb7d1b9",
"title": "Analyzing IoT Reference Architecture Models"
},
{
"paperId": null,
"title": "‘‘Forecast alert: Internet of Things— Endpoints and associated services, worldwide,’’"
},
{
"paperId": null,
"title": "‘‘Blockchain technology: Beyond bitcoin,’’"
},
{
"paperId": null,
"title": "Deters,"
},
{
"paperId": null,
"title": "The Business Blockchain: Promise"
},
{
"paperId": null,
"title": "Hotspot for Blockchain Innovation"
},
{
"paperId": "0dbb8a54ca5066b82fa086bbf5db4c54b947719a",
"title": "A NEXT GENERATION SMART CONTRACT & DECENTRALIZED APPLICATION PLATFORM"
},
{
"paperId": "3d2218b17e7898a222e5fc2079a3f1531990708f",
"title": "I and J"
},
{
"paperId": "4e9ec92a90c5d571d2f1d496f8df01f0a8f38596",
"title": "Bitcoin: A Peer-to-Peer Electronic Cash System"
},
{
"paperId": "ca5cb4e0826b424adb81cbb4f2e3c88c391a4075",
"title": "Influence of cultivation temperature on the ligninolytic activity of selected fungal strains"
},
{
"paperId": "73f36ff3a6d340606e09d2d0091da27a13af7a6f",
"title": "and as an in"
},
{
"paperId": null,
"title": "Automation: fulfilled by the concept of smart contract in which certain action could be automatically triggered by a specific smart contract program whenever a set of prespecified conditions are met"
},
{
"paperId": null,
"title": "Secure code deployment: Since blockchain provides immutable and secured transaction storage, codes could also be pushed into the IoT devices in a secure manner [36]"
},
{
"paperId": null,
"title": "Security: blockchian structure of linking blocks using hash algorithm ensures that generated blocks cannot be erased or modified"
},
{
"paperId": null,
"title": "The cloudlet layer"
},
{
"paperId": null,
"title": "blockchain integration with the IoT and their applications in the smart industrial sector"
},
{
"paperId": null,
"title": "Privacy: although blockchain is transparent, participants’ information is kept anonymous using private/ public key"
},
{
"paperId": null,
"title": "Traceability: blockchain holds a historical record of all data from the date it was established. Such a record can be traced back to the original action"
}
] | 23,467
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Engineering",
"source": "external"
},
{
"category": "Environmental Science",
"source": "s2-fos-model"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Engineering",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/00576c3890e9c6d312bc3eb36201bce83fc4284f
|
[
"Computer Science",
"Engineering"
] | 0.862148
|
Defending Water Treatment Networks: Exploiting Spatio-temporal Effects for Cyber Attack Detection
|
00576c3890e9c6d312bc3eb36201bce83fc4284f
|
Industrial Conference on Data Mining
|
[
{
"authorId": "1669829502",
"name": "Dongjie Wang"
},
{
"authorId": "35629977",
"name": "Pengyang Wang"
},
{
"authorId": "2336606",
"name": "Jingbo Zhou"
},
{
"authorId": "1845784191",
"name": "Leilei Sun"
},
{
"authorId": "2525530",
"name": "Bowen Du"
},
{
"authorId": "2274395",
"name": "Yanjie Fu"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"Ind Conf Data Min",
"ICDM"
],
"alternate_urls": null,
"id": "67d15a94-d523-4b5f-be58-03fe2ef9dcfb",
"issn": null,
"name": "Industrial Conference on Data Mining",
"type": "conference",
"url": "http://www.data-mining-forum.de/"
}
|
While Water Treatment Networks (WTNs) are critical infrastructures for local communities and public health, WTNs are vulnerable to cyber attacks. Effective detection of attacks can defend WTNs against discharging contaminated water, denying access, destroying equipment, and causing public fear. While there are extensive studies in WTNs attack detection, they only exploit the data characteristics partially to detect cyber attacks. After preliminary exploring the sensing data of WTNs, we find that integrating spatio-temporal knowledge, representation learning, and detection algorithms can improve attack detection accuracy. To this end, we propose a structured anomaly detection framework to defend WTNs by modeling the spatiotemporal characteristics of cyber attacks in WTNs. In particular, we propose a spatio-temporal representation framework specially tailored to cyber attacks after separating the sensing data of WTNs into a sequence of time segments. This framework has two key components. The first component is a temporal embedding module to preserve temporal patterns within a time segment by projecting the time segment of a sensor into a temporal embedding vector. We then construct Spatio-Temporal Graphs (STGs), where a node is a sensor and an attribute is the temporal embedding vector of the sensor, to describe the state of the WTNs. The second component is a spatial embedding module, which learns the final fused embedding of the WTNs from STGs. In addition, we devise an improved one class-SVM model that utilizes a new designed pairwise kernel to detect cyber attacks. The devised pairwise kernel augments the distance between normal and attack patterns in the fused embedding space. Finally, we conducted extensive experimental evaluations with real-world data to demonstrate the effectiveness of our framework: it achieves an accuracy of 91.65%, with average improvement ratios of 82.78% and 22.96% with respect to F1 and AUC, compared with baseline methods.
|
# Defending Water Treatment Networks: Exploiting Spatio-temporal Effects for Cyber Attack Detection
### 1st Dongjie Wang
_Department of Computer Science_
_University of Central Florida_
Orlando, United States
[email protected]
### 2nd Pengyang Wang
_Department of Computer Science_
_University of Central Florida_
Orlando, United States
[email protected]
### 3rd Jingbo Zhou
_Baidu Research_
_Baidu Inc._
Beijing, China
[email protected]
### 4th Leilei Sun
_Department of Computer Science_
_Beihang University_
Beijing, China
[email protected]
### 5th Bowen Du
_Department of Computer Science_
_Beihang University_
Beijing, China
[email protected]
### 6th Yanjie Fu†
_Department of Computer Science_
_University of Central Florida_
Orlando, United States
[email protected]
**_Abstract—While Water Treatment Networks (WTNs) are crit-_**
**ical infrastructures for local communities and public health,**
**WTNs are vulnerable to cyber attacks. Effective detection of**
**attacks can defend WTNs against discharging contaminated**
**water, denying access, destroying equipment, and causing public**
**fear. While there are extensive studies in WTNs attack detection,**
**they only exploit the data characteristics partially to detect cyber**
**attacks. After preliminary exploring the sensing data of WTNs,**
**we find that integrating spatio-temporal knowledge, represen-**
**tation learning, and detection algorithms can improve attack**
**detection accuracy. To this end, we propose a structured anomaly**
**detection framework to defend WTNs by modeling the spatio-**
**temporal characteristics of cyber attacks in WTNs. In particular,**
**we propose a spatio-temporal representation framework specially**
**tailored to cyber attacks after separating the sensing data of**
**WTNs into a sequence of time segments. This framework has two**
**key components. The first component is a temporal embedding**
**module to preserve temporal patterns within a time segment**
**by projecting the time segment of a sensor into a temporal**
**embedding vector. We then construct Spatio-Temporal Graphs**
**(STGs), where a node is a sensor and an attribute is the temporal**
**embedding vector of the sensor, to describe the state of the**
**WTNs. The second component is a spatial embedding module,**
**which learns the final fused embedding of the WTNs from STGs.**
**In addition, we devise an improved one class-SVM model that**
**utilizes a new designed pairwise kernel to detect cyber attacks.**
**The devised pairwise kernel augments the distance between**
**normal and attack patterns in the fused embedding space. Finally,**
**we conducted extensive experimental evaluations with real-world**
**data to demonstrate the effectiveness of our framework: it achieves**
**an accuracy of 91.65%, with average improvement ratios of**
82.78% and 22.96% with respect to F1 and AUC, compared
**with baseline methods.**
I. INTRODUCTION
Water Treatment Networks (WTNs) are critical infrastructures that utilize industrial control systems, sensors, and communication technologies to control water purification processes, improving water quality and distribution for drinking, irrigation, or industrial uses. Although they are critical infrastructures, WTNs are vulnerable to cyber attacks. For example, the water sector reported the fourth largest number of incidents in 2016 [1]. What does a cyber attack on WTNs look like? Figure 1 shows that a water treatment procedure includes six stages (i.e., P1-P6), each of which is monitored by sensors; a cyber attack compromises the RO Feed Pump sensor of P4 to change the levels of chemicals being used to treat tap water. As a result, there is a compelling need for an effective solution to attack detection in WTNs.

Fig. 1. Cyber attack example: one cyber attack happens at the RO Feed Pump of P4; the attack's effect then spreads to other devices in P4.
In the literature, there are a number of studies on cyber attack detection in WTNs [1]–[4]. However, most of these studies exploit only traditional spatio-temporal data preprocessing and pattern-extraction methods to distinguish attack patterns. Our preliminary explorations find that tremendous opportunities exist in solving the problem by teaching a machine to augment the differences between normal and attack patterns. To this end, in this paper, we aim to effectively solve
1https://www.osti.gov/servlets/purl/1372266
the attack detection problem by augmenting the difference
between normal and attack patterns in WTNs.
However, it is challenging to mine the spatio-temporal
graph stream data of WTNs and identify the strategy to
maximize the pattern divergence between normal and attack
behaviors. By carefully examining the sensing data of WTNs,
we identify three types of characteristics of cyber attacks:
(1) delayed effect, meaning that many attacks do not take effect immediately, but usually exhibit symptoms after a
while; (2) continued effect, meaning that the effects of attacks
will sustain for a while, not disappear rapidly; (3) cascading
_effect, meaning that the effects of attacks propagate to other_
sensors across the whole WTNs. Specifically, the delayed and
_continued effects are both temporal, and the cascading effect_
is spatial. More importantly, these three effects are mutually
coupled, impacted, and boosted in WTNs. A new framework is
required to address the margin maximization between normal
and attack pattern learning, under the three coupled effects.
Along this line, we propose a structured detection framework. This framework has two main phases: (1) spatio-temporal representation learning, which includes two modules that incorporate temporal effects and spatial effects; and (2) improved unsupervised one-class detection, which utilizes a newly designed pairwise kernel to make detection more accurate. Next, we briefly introduce our structured spatio-temporal detection framework, named STDO.
**Phase 1: Spatio-temporal representation learning. This**
phase aims to learn a good spatio-temporal representation
over the sensing data of WTNs with two modules. The
first module of this part is to integrate temporal effects.
Cyber attacks in WTNs exhibit temporally-dependent attack
behaviors, sequentially-varying attack purposes over time, and
delayed negative impacts. Traditional methods (e.g., AR, MA, ARMA, ARIMA, arrival density of point processes, change-point detection) mostly model the patterns of data points at each timestamp. However, we identify that partitioning the sensing data into a sequence of time segments can better describe the delayed and continued effects of attacks. Therefore, we propose to segment the sensing data into a sequence of time segments. We then exploit a sequence-to-sequence (seq2seq) embedding method to characterize the temporal dependencies within each time segment. To improve the seq2seq method, we develop a new neural reconstruction structure that reconstructs not just a time segment, but also the first and second derivatives of momentum of the time segment. In this way, the improved seq2seq method is aware of the values, acceleration, and jerk (second-order derivatives) of sensor measurements. Through this module, we obtain the temporal embedding of each time segment of each sensor.
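A minimal sketch of such a seq2seq embedder is given below, assuming a GRU encoder-decoder in PyTorch; the paper's exact reconstruction architecture is not specified here, so the layer sizes and zero-input decoding are illustrative choices. Feeding the augmented segment (values plus derivatives) as input means the reconstruction loss also covers the first and second derivatives.

```python
import torch
import torch.nn as nn

class Seq2SeqEmbedder(nn.Module):
    """GRU encoder-decoder: the encoder compresses one (augmented)
    time segment into a temporal embedding; the decoder reconstructs
    the segment from that embedding."""

    def __init__(self, n_features: int, emb_dim: int):
        super().__init__()
        self.encoder = nn.GRU(n_features, emb_dim, batch_first=True)
        self.decoder = nn.GRU(n_features, emb_dim, batch_first=True)
        self.out = nn.Linear(emb_dim, n_features)

    def forward(self, x):                  # x: (batch, seq_len, n_features)
        _, h = self.encoder(x)             # h: (1, batch, emb_dim)
        dec_out, _ = self.decoder(torch.zeros_like(x), h)
        return self.out(dec_out), h.squeeze(0)

model = Seq2SeqEmbedder(n_features=2, emb_dim=16)
segment = torch.rand(1, 6, 2)              # one toy augmented segment
recon, embedding = model(segment)          # embedding: (1, 16)
loss = nn.functional.mse_loss(recon, segment)
```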
The second module is to integrate spatial effects. The
effects of cyber attacks in WTNs will spatially diffuse and
propagate to other sensors over time. Therefore, exploring the
propagation structure can significantly model attack patterns
and improve detection accuracy. However, how can we capture
the spatial structures of propagation? The topology of WTNs
is a graph of interconnected sensors. We map the temporal
embedding of one time segment of a sensor to the graph of
WTNs as node attributes. We construct the Spatio-Temporal
Graphs (STGs), where a node is a sensor and an attribute is the
temporal embedding of the sensor, to describe the state of the
WTNs. In this way, the STGs not only contain the spatial connectivity among different sensors but also include temporal patterns via the mapped temporal embeddings. We develop a graph embedding model to jointly learn the state representations of the WTNs from the relational STGs.
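Since the framework learns over STGs with a Graph Convolutional Network (see Section II-B), a single NumPy-only GCN propagation step over an STG might look as follows; the three-sensor adjacency matrix and dimensions are toy values, not the paper's configuration.

```python
import numpy as np

def gcn_layer(A: np.ndarray, H: np.ndarray, W: np.ndarray) -> np.ndarray:
    """One graph-convolution step over an STG: A is the sensor adjacency
    matrix, H the node attributes (temporal embeddings), W a weight
    matrix; returns ReLU(D^-1/2 (A + I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])               # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(d ** -0.5)
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])  # 3 connected sensors
H = np.random.rand(3, 8)                         # temporal embeddings
W = np.random.rand(8, 4)
H_next = gcn_layer(A, H, W)                      # updated node states
```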
**Phase 2: Improving Unsupervised One-Class Detection**
**Model.** In reality, WTNs are mostly normal yet experience only a small number of cyber attack events, so attack data samples are quite rare. This trait makes the data distributions of normal and attack samples extremely imbalanced. How can we overcome the imbalanced data distribution to accurately detect cyber attacks? One-class detection fits this problem well. In particular, one-class SVM (OC-SVM) is a popular detection model, in which a hyperplane is identified to divide normal and abnormal data samples after they are mapped to a high-dimensional space by kernel functions. While the vanilla OC-SVM achieves promising results, the kernel functions can be improved by exploiting the similarities between data samples. Specifically, we propose a new pairwise kernel function to augment the distance between normal and attack patterns by preserving similarity across different data samples. Consequently, normal data samples are grouped into a cluster, while abnormal data samples are pushed away from the normal data. In our framework, we feed the learned state representations of the WTN into the improved OC-SVM, which uses the pairwise kernel to detect attacks.
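Such a detector can be prototyped with scikit-learn's OneClassSVM and a precomputed Gram matrix, as sketched below. The RBF similarity here is only a stand-in for the paper's pairwise kernel: any kernel built from segment-to-segment similarities can be supplied in the same precomputed form.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import OneClassSVM

# Fused WTN embeddings for training/test segments (values illustrative).
Z_train = np.random.rand(200, 32)
Z_test = np.random.rand(50, 32)

# Stand-in pairwise kernel: a Gram matrix of segment similarities.
K_train = rbf_kernel(Z_train, Z_train, gamma=0.5)   # (200, 200)
K_test = rbf_kernel(Z_test, Z_train, gamma=0.5)     # (50, 200)

detector = OneClassSVM(kernel="precomputed", nu=0.05)
detector.fit(K_train)                 # fit on (mostly) normal segments
labels = detector.predict(K_test)     # +1 = normal, -1 = attack
```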
In summary, we develop a structured detection framework
for cyber attack detection in WTNs. Our contributions are as
follows: (1) we investigate the important problem of defending critical graph-structured infrastructures via cyber attack detection, which is essential for building resilient and safe communities; (2) we develop a structured detection framework
to maximize the margin between normal and attack patterns,
by integrating spatio-temporal knowledge, deep representation
learning, and pairwise one-class detection. (3) we implement
and test our proposed framework with real-world water treatment network data and demonstrate the enhanced performance
of our method. Specifically, our detection method achieves
an accuracy of 91.65%, with average improvement ratios of
82.78% and 22.96% with respect to F1 and AUC, compared
with baseline methods.
II. PROBLEM STATEMENT AND FRAMEWORK OVERVIEW
We first introduce the statement of the problem, and then
present an overview of the framework.
_A. Problem Statement_
We aim to study the problem of cyber attack detection using
the sensing data of WTNs. We observe that cyberattacks in
WTNs exhibit not just spatial diffusion patterns, but also two
temporal effects (i.e., delayed and continued). As a result, we
partition the sensing data streams into non-overlapping time segments and investigate detection at the time-segment level.
Fig. 2. The overview of the cyber attack detection framework in water treatment networks. (a) P1: Embedding time segment sequential patterns. (b) P1: Modeling spatio-temporal patterns over STGs. (c) P2: Anomaly detection with data similarity.
_Definition 2.1: The WTN Attack Detection Problem._ Formally, assuming a WTN consists of N sensors, given the sensing data streams of the WTN, we evenly divide the streams into m non-overlapping segments of K sensory records each. Consequently, we obtain a segment sequence S = [S_1, S_2, ..., S_i, ..., S_m], where the matrix S_i ∈ R^{N×K} is the i-th segment. Each segment is associated with a cyber attack status label: if a cyber attack happens within S_i, the label of this segment is marked as y_i = 1; otherwise, y_i = 0. The objective is to develop a framework that takes the segment sequence S as input and outputs the corresponding cyber attack label for each segment so as to maximize detection accuracy.
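Under this definition, the segmentation itself is straightforward; a minimal NumPy sketch (with illustrative sensor and record counts) is:

```python
import numpy as np

def segment_stream(stream: np.ndarray, K: int) -> list:
    """Split an N x T sensing stream into m non-overlapping N x K
    segments S_1, ..., S_m; trailing records that do not fill a whole
    segment are dropped."""
    N, T = stream.shape
    m = T // K
    return [stream[:, i * K:(i + 1) * K] for i in range(m)]

stream = np.random.rand(24, 1000)         # e.g., 24 sensors, 1000 records
segments = segment_stream(stream, K=100)  # m = 10 segments of shape (24, 100)
```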
_B. Framework Overview_
Figure 2 shows that our framework includes two phases: (1) spatio-temporal representation learning (P1); and (2) improving unsupervised one-class detection (P2). Specifically, there are two modules in Phase 1: (a) embedding time segment sequential patterns, in which a seq2seq model is leveraged to capture the temporal dependencies within a segment. The learned representations are then attached to each node (sensor) in the WTN as node attributes to construct STGs. Notice that temporal patterns are integrated by attaching the temporal embeddings as attributes, while spatial connectivity is integrated via the graph structure of sensors, which is introduced next. (b) modeling spatio-temporal patterns over STGs, in which the fused embedding is learned through an encode-decode paradigm integrated with a Graph Convolutional Network (GCN). The fused embedding summarizes the information of the STGs to profile the spatio-temporal characteristics of the WTN. Finally, Phase 2 exploits the fused embedding as input to detect attacks. Phase 2 has one module, namely anomaly detection with pairwise segment-similarity awareness: the fused embedding is fed into an improved one-class anomaly detector, in which the similarities between two different segment embedding vectors are introduced into the pairwise kernel of the detector to augment the distance between normal and attack patterns.
III. PROPOSED METHOD
We first introduce time segment embedding, then illustrate the construction of spatio-temporal graphs from the temporal embeddings and the sensor network, present spatio-temporal graph-based representation learning, and finally discuss the integration with one-class detection.
_A. Embedding Time Segments Sequential Patterns_
We first model the sequential patterns of time segments. The sequential patterns of a WTN involve two essential measurements: (1) the rate of change and (2) the trend of the rate of change, which correspond to the first and second derivatives, respectively. Therefore, in addition to the raw data points in a segment, we introduce both the first and second derivatives to quantify the sequential patterns, resulting in an augmented segment. Formally, below we define the augmented segment.
_Definition 3.1: Augmented Segment._ Given a sensory data segment S_i = [v_i^1, v_i^2, ..., v_i^k, ..., v_i^K], where v_i^k ∈ R^{N×1} denotes the sensory measurements of all the sensors of the i-th segment at the k-th record, the corresponding first-order derivative segment is S'_i = [∂S_i/∂v_i^2, ∂S_i/∂v_i^3, ..., ∂S_i/∂v_i^K], and the corresponding second-order derivative segment is S''_i = [∂S'_i/∂v_i^3, ∂S'_i/∂v_i^4, ..., ∂S'_i/∂v_i^K]. The augmented segment S̃_i is then defined as the concatenation of the raw segment, the first-order derivative segment, and the second-order derivative segment: S̃_i = [S_i, S'_i, S''_i]. For convenience, S̃_i can also be denoted as S̃_i = [r_i^1, r_i^2, ..., r_i^{3K−3}], where the elements of [r_i^1, r_i^2, ..., r_i^K] correspond to the elements of S_i, the elements of [r_i^{K+1}, r_i^{K+2}, ..., r_i^{2K−1}] correspond to the elements of S'_i, and the elements of [r_i^{2K}, r_i^{2K+1}, ..., r_i^{3K−3}] correspond to the elements of S''_i, respectively.
We here provide an example of constructing an augmented segment. Suppose there are two sensors in the WTN and three measurement records in each time segment; in such a WTN, N is 2 and K is 3. Considering the i-th segment S_i = [[1, 3, 4], [2, 8, 5]], the size of S_i is 2 × 3. We then calculate S'_i by row: S'_i = [[2, 1], [6, −3]], and S''_i by row: S''_i = [[−1], [−9]]. Finally, we concatenate these three segments by row: S̃_i = [[1, 3, 4, 2, 1, −1], [2, 8, 5, 6, −3, −9]].
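For concreteness, the augmentation above amounts to row-wise first and second discrete differences; a minimal NumPy sketch (the function name `augment_segment` is ours) reproduces the two-sensor example:

```python
import numpy as np

def augment_segment(S):
    """Concatenate a segment with its first- and second-order
    row-wise differences, yielding an N x (3K-3) augmented segment."""
    S = np.asarray(S)
    d1 = np.diff(S, n=1, axis=1)   # first-order derivative segment, N x (K-1)
    d2 = np.diff(S, n=2, axis=1)   # second-order derivative segment, N x (K-2)
    return np.concatenate([S, d1, d2], axis=1)

# Two sensors (N=2), three records per segment (K=3):
S_i = [[1, 3, 4], [2, 8, 5]]
print(augment_segment(S_i))
# [[ 1  3  4  2  1 -1]
#  [ 2  8  5  6 -3 -9]]
```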
Figure 2(a) shows the process of temporal embedding. The temporal embedding process develops a seq2seq model based on the encoder-decoder paradigm that takes a non-augmented segment as input and reconstructs the corresponding augmented segment. The objective is to minimize the loss between the original augmented segment and the reconstructed one. Next, we describe how our model operates on the i-th segment S_i.
The encoding step feeds the segment S_i into a seq2seq encoder and outputs the latent representation of the segment, U_i. Formally, as shown in Equation 1, given the segment data S_i = [v_i^1, v_i^2, ..., v_i^K], the first hidden state h^1 is calculated from the first time step value. Then, recursively, the hidden state of the previous time step, h^{t−1}, and the current time step value, v_i^t, are fed into an LSTM model to produce the current hidden state h^t. Finally, we concatenate all of the hidden states by row (sensor) to obtain the latent feature matrix U_i:

h^1 = σ(W_e v_i^1 + b_e),
h^t = LSTM([v_i^t, h^{t−1}]), ∀t ∈ {2, ..., K},   (1)
U_i = CONCAT(h^1, h^2, ..., h^K),

where W_e and b_e are the weight and bias of the encoding step, respectively.
In the decoding step, the decoder takes U_i as input and generates a reconstructed augmented segment [r̂_i^1, r̂_i^2, ..., r̂_i^{3K−3}]. Formally, as shown in Equation 2, the first hidden state ĥ^1 of the decoder is copied from the last hidden state of the encoder, h^K. Then, the previous hidden state ĥ^{t−1}, the previously reconstructed element r̂_i^{t−1}, and the latent feature matrix U_i are input into the LSTM model to produce the current hidden state ĥ^t. Finally, the reconstructed value of the current time step, r̂_i^t, is produced from ĥ^t through the sigmoid activation σ:

ĥ^1 = h^K,
r̂_i^1 = σ(W_d ĥ^1 + b_d),
ĥ^t = LSTM([r̂_i^{t−1}, ĥ^{t−1}, U_i]), ∀t ∈ {2, ..., K},   (2)
r̂_i^t = σ(W_d ĥ^t + b_d), ∀t ∈ {2, ..., K},

where W_d and b_d are the weight and bias of the decoding step, respectively.
After the decoding step, we obtain the reconstructed augmented segment sequence [r̂_i^1, r̂_i^2, ..., r̂_i^{3K−3}]. The objective is to minimize the reconstruction loss between the original and the reconstructed augmented segment sequences. The overall loss is denoted as

min Σ_{i=1}^{m} Σ_{k=1}^{3K−3} ||r_i^k − r̂_i^k||².   (3)

Along this line, we obtain the latent temporal embedding of the i-th time segment, denoted by U_i.

_B. Temporal Embedding as Node Attributes: Constructing Spatio-temporal Graphs_

Fig. 3. The illustration of constructing spatio-temporal graphs: the temporal embedding U_i is mapped onto the water treatment network to form G_i.

The temporal embedding obtained in Section III-A describes and models the temporal effects of cyber attacks. To further incorporate the spatial effects of the WTN, we map the temporal embedding onto the WTN as node attributes. Take the temporal embedding of the i-th segment, U_i, as an example. Since each row of U_i is the temporal embedding of the segment of one sensor (node), we map each row of U_i to the corresponding node (sensor) as its attributes, resulting in an attributed WTN G_i, which we call a Spatio-temporal Graph (STG), that preserves both the spatial and temporal effects.

_C. Learning Representations of STGs_

Figure 2(b) shows that we develop a spatio-temporal graph representation learning framework to preserve not just temporal patterns, but also spatial patterns, in a latent embedding space. We take the STG of the i-th time segment, G_i, as an example to explain the representation process. Formally, we denote G_i by G_i = (U_i, A_i), where A_i is an adjacency matrix that describes the connectivity among different sensors, and U_i is a feature matrix formed by the temporal embeddings of all the sensors of the i-th time segment. The representation learning process is formulated as follows: given the i-th STG G_i, the objective is to minimize the reconstruction loss between the input G_i and the reconstructed graph Ĝ_i through an encoder-decoder framework, in order to learn a latent embedding z_i.

The neural architecture of the encoder includes two Graph Convolutional Network (GCN) layers. The first GCN layer takes A_i and U_i as inputs and outputs the lower-dimensional feature matrix Û_i. Specifically, the encoding process of the first GCN layer is given by

Û_i = RELU(GCN(U_i, A_i)) = RELU(D̂_i^{−1/2} A_i D̂_i^{−1/2} U_i W_0),   (4)

where D̂_i is the diagonal degree matrix of G_i, and W_0 is the weight matrix of the first GCN layer.

Since the latent embedding z_i of the graph is sampled from a prior normal distribution, the purpose of the second GCN layer is to estimate the parameters of that distribution. This layer takes A_i and Û_i as inputs and produces the mean µ and the variance δ² of the distribution as outputs. Thus, the encoding process of the second GCN layer can be formulated as

µ, log(δ²) = GCN(Û_i, A_i) = D̂_i^{−1/2} A_i D̂_i^{−1/2} Û_i W_1,   (5)
where W_1 is the weight matrix of the second GCN layer. We then utilize the reparameterization trick to mimic the sampling operation and construct the latent representation z_i:

z_i = µ + δ × ϵ,   (6)

where ϵ ∼ N(0, 1).

The decoding step takes the latent representation z_i as input and outputs the reconstructed adjacency matrix Â_i. The decoding process is denoted as

Â_i = σ(z_i z_i^T).   (7)

The core calculation of the decoding step can be written as z_i z_i^T = ∥z_i∥ ∥z_i^T∥ cos θ. Since z_i is a node-level representation, this inner product helps capture the correlations among different sensors.
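The two-layer GCN encoder, reparameterization, and inner-product decoder (Eqs. 4-7) can be sketched in NumPy as follows; the weights are random stand-ins, the graph is synthetic, and the symmetric normalization follows the equations above. This is an illustrative sketch, not the authors' code.

```python
import numpy as np

def normalize_adj(A):
    """Symmetrically normalize an adjacency matrix: D^-1/2 A D^-1/2."""
    d = A.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    return d_inv_sqrt @ A @ d_inv_sqrt

def vgae_forward(A, U, W0, W1_mu, W1_logvar, rng):
    A_hat = normalize_adj(A)
    H = np.maximum(A_hat @ U @ W0, 0)            # Eq. 4: first GCN layer + ReLU
    mu = A_hat @ H @ W1_mu                       # Eq. 5: mean
    logvar = A_hat @ H @ W1_logvar               # Eq. 5: log-variance
    eps = rng.standard_normal(mu.shape)
    z = mu + np.exp(0.5 * logvar) * eps          # Eq. 6: reparameterization
    A_rec = 1.0 / (1.0 + np.exp(-(z @ z.T)))     # Eq. 7: inner-product decoder
    return z, A_rec

rng = np.random.default_rng(0)
N, F, H, D = 51, 16, 8, 4                        # sensors, feature/hidden/latent dims
A = (rng.random((N, N)) < 0.1).astype(float)
A = np.maximum(A, A.T); np.fill_diagonal(A, 1)   # undirected graph with self-loops
U = rng.random((N, F))
z, A_rec = vgae_forward(A, U, rng.random((F, H)), rng.random((H, D)),
                        rng.random((H, D)), rng)
graph_embedding = z.mean(axis=0)                 # global average aggregation
```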
We minimize the joint loss function L_g during the training phase, formulated in Equation 8. L_g includes two parts: the first part is the Kullback-Leibler divergence between the distribution of z_i and the prior standard normal distribution N(0, 1); the second part is the squared error between A_i and Â_i. The training objective is to make Â_i as similar as possible to A_i, and to keep the distribution of z_i as close as possible to N(0, 1). The total loss is denoted as

L_g = Σ_{i=1}^{m} ( KL[q(z_i | X_i, A_i) || p(z_i)] + ||A_i − Â_i||² ),   (8)

where the first term is the KL divergence between q(·) and p(·), and the second term is the reconstruction loss between A_i and Â_i.
When the model converges, we apply global average aggregation to z_i. Then z_i becomes the graph-level representation of the WTN, which contains the spatio-temporal information of the whole system at the i-th time segment.
_D. One-Class Detection with Data Similarity Awareness_
In reality, most sensor data are normal, while attack-related data are scarce and expensive. This results in the problem of unbalanced training data. How can we solve this problem? Can we develop a solution that only uses normal data for attack detection? This is the key algorithmic challenge of this phase. One-class classification is a promising solution that aims to find a hyperplane distinguishing normal from attack patterns using only normal data. OC-SVM is a classical one-class classification model that includes two steps: (1) mapping low-dimensional data into a high-dimensional feature space via a kernel function; (2) learning the parameters of the hyperplane that separates normal and abnormal data via optimization.
Intuitively, in the feature space induced by the OC-SVM kernel, normal (or abnormal) data are expected to be close together, while there should be a large distance between normal and abnormal data. In other words, similar data points should be closer to each other than dissimilar ones. However, traditional kernel functions (e.g., linear, polynomial, radial basis function (RBF), sigmoid) cannot preserve this characteristic well. How can we make data samples well separated so as to achieve it? To address this question, we propose a new pairwise kernel function that is capable of reducing the distances between similar data points while maximizing the distances between dissimilar ones. Formally, given the representation matrix Z = [z_1, ..., z_i, ..., z_m], the pairwise kernel function is given by

Kernel = tanh( (1 / D(Z)) (Z Z^T + sim(Z, Z^T) + c) ),   (9)

where Z^T is the transpose of Z, D(Z) is the covariance matrix of Z, and sim(Z, Z^T) ∈ R^{m×m} is the pairwise similarity matrix between segments. Compared with the vanilla sigmoid kernel function, we add the term sim(Z, Z^T), whose entries range over [−1, 1]. If two segments are more similar, the corresponding value in sim(Z, Z^T) is closer to 1; otherwise, the value is closer to −1. Therefore, when two segments are similar (e.g., both are normal or both are abnormal samples), the proposed pairwise kernel function pushes the two segments closer; otherwise, the two segments are pushed away from each other.
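A sketch of Equation 9 follows, with cosine similarity standing in for sim(·,·) and a scalar variance normalizer standing in for the covariance term D(Z); both substitutions are our assumptions, since the paper does not pin these choices down.

```python
import numpy as np

def pairwise_kernel(Z, c=1.0):
    """Sketch of Eq. 9: tanh((Z Z^T + sim(Z, Z^T) + c) / D(Z)).
    sim is taken as cosine similarity (range [-1, 1]) and D(Z) as a
    scalar variance normalizer -- both our own assumptions."""
    norms = np.maximum(np.linalg.norm(Z, axis=1, keepdims=True), 1e-12)
    sim = (Z / norms) @ (Z / norms).T
    d = Z.var(axis=0).mean() + 1e-12              # scalar stand-in for D(Z)
    return np.tanh((Z @ Z.T + sim + c) / d)

Z = np.random.rand(100, 8)                         # m=100 segment embeddings
K = pairwise_kernel(Z)                             # m x m kernel matrix
```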
Fig. 4. Illustration of the pairwise kernel. Given a normal sample x1, where x2 is normal and x3 is an attack, we have sim(x1, x2) > sim(x1, x3), and the two similarity terms act in opposite directions; the pairwise kernel thus increases the distance between x2 and x3.
The pairwise kernel is able to enlarge the distance between samples of different categories in the feature space, which makes the OC-SVM converge more easily and detect cyber attacks more accurately. Figure 2(c) shows the detection process: the spatio-temporal embedding z_i is fed into the integrated OC-SVM, which utilizes the pairwise kernel function and outputs the corresponding status label of the WTN, indicating whether a cyber attack happens at the i-th time segment.
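Such a custom kernel can be plugged into a standard one-class SVM via a precomputed Gram matrix; a usage sketch with scikit-learn follows (training on normal-only embeddings, as in the paper, and reusing `pairwise_kernel` from the sketch above; the data arrays are placeholders).

```python
import numpy as np
from sklearn.svm import OneClassSVM

Z_train = np.random.rand(500, 8)        # embeddings of normal segments only
Z_test = np.random.rand(50, 8)

ocsvm = OneClassSVM(kernel="precomputed", nu=0.1)
ocsvm.fit(pairwise_kernel(Z_train))     # m x m Gram matrix of training data

# Test-time Gram matrix: kernel values between test and training embeddings.
def cross_kernel(Za, Zb, c=1.0):
    na = np.maximum(np.linalg.norm(Za, axis=1, keepdims=True), 1e-12)
    nb = np.maximum(np.linalg.norm(Zb, axis=1, keepdims=True), 1e-12)
    sim = (Za / na) @ (Zb / nb).T
    d = Zb.var(axis=0).mean() + 1e-12
    return np.tanh((Za @ Zb.T + sim + c) / d)

pred = ocsvm.predict(cross_kernel(Z_test, Z_train))  # +1 normal, -1 attack
```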
_E. Comparison with Related Work_
Recently, many attempts have been made to detect cyber attacks in WTNs. For instance, Lin et al. utilized a probabilistic graphical model to preserve the spatial dependency among sensors in WTNs and a one-class classifier to detect cyber attacks [5]. Li et al. used LSTMs and RNNs as the base models of a GAN framework to develop an anomaly detection algorithm for cyber attacks in WTNs [3]. Raciti et al. constructed a real-time anomaly detection system based on a clustering model [6]. However, these models exhibit several limitations when detecting cyber attacks: (i) the changing trend of sensing data within a time segment is not preserved; (ii) the spatial patterns among sensors are only partially captured; (iii) the similarity between different data samples is not fully utilized.
To overcome these limitations, we propose a new spatio-temporal graph (STG) to preserve and fuse the spatio-temporal effects of WTNs simultaneously. Moreover, we propose a new pairwise kernel that utilizes data similarity to augment the distance between different patterns, improving the accuracy of cyber attack detection.
IV. EXPERIMENTAL RESULTS
We conduct experiments to answer the following research questions:
(1) Does our proposed outlier detection framework (STOD) outperform existing methods?
(2) Is the spatio-temporal representation learning component of STOD necessary for improving detection performance?
(3) Is the proposed pairwise kernel better than traditional kernels for cyber attack detection?
(4) What are the time costs of our method and the baseline methods?
_A. Data Description_
We used the secure water treatment system (SWAT) dataset from the Singapore University of Technology and Design for our study. The SWAT project built a water treatment system together with a sensor network to monitor and track the system's state, and constructed an attack model to mimic realistic cyber attacks on such systems. The launched cyber attacks and the sensory data of the system were collected to form the SWAT dataset. Table I shows some important statistics of the SWAT dataset. Specifically, the SWAT dataset includes a normal set (no cyber attacks) and an attack set (with cyber attacks). The time period of the normal data is from 22 December 2015 to 28 December 2015; the time period of the attack data is from 28 December 2015 to 01 January 2016, plus 01 February 2016. There is no time overlap between the normal data and the attack data on 28 December 2015. It is difficult to identify other public water treatment network datasets; in this study, we therefore focus on validating our method on this dataset.
TABLE I
STATISTICS OF THE SWAT DATASET

| Data Type | Sensor Count | Total Items | Attack Items | Pos/Neg |
|---|---|---|---|---|
| Normal | 51 | 496,800 | 0 | - |
| Attack | 51 | 449,919 | 53,900 | 7:1 |
_B. Evaluation Metrics_
We evaluate the performance of our method in terms of four metrics. Given a test set, a detection model predicts a set of binary labels (1: attack; 0: normal). Comparing the predicted labels with the ground-truth labels, we let tp, tn, fp, fn be the sizes of the true positive, true negative, false positive, and false negative sets, respectively.
(1) Accuracy:
Accuracy = (tp + tn) / (tp + tn + fp + fn)   (10)
(2) Precision:
Precision = tp / (tp + fp)   (11)
(3) F-measure: the harmonic mean of precision and recall:
F-measure = (2 × Precision × Recall) / (Precision + Recall)   (12)
(4) AUC: the area under the ROC curve, which reflects the capability of a model to distinguish between the two classes.
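These metrics can be computed directly with scikit-learn; a small sketch with made-up labels (1 = attack, 0 = normal) and continuous anomaly scores for the AUC:

```python
from sklearn.metrics import (accuracy_score, precision_score,
                             f1_score, roc_auc_score)

y_true = [0, 0, 1, 1, 0, 1]
y_pred = [0, 1, 1, 1, 0, 0]
scores = [0.1, 0.6, 0.9, 0.8, 0.2, 0.4]    # anomaly scores for AUC

print(accuracy_score(y_true, y_pred))       # Eq. 10
print(precision_score(y_true, y_pred))      # Eq. 11
print(f1_score(y_true, y_pred))             # Eq. 12
print(roc_auc_score(y_true, scores))        # AUC
```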
_C. Baseline Algorithms_
We compare the performances of our method (STOD)
against the following ten baseline algorithms.
(1) DeepSVDD [7]: expands the classic SVDD algorithm into a deep learning version. It utilizes a neural network to find the hyper-sphere of minimum volume that encloses the normal data. If a data sample falls inside the hyper-sphere, DeepSVDD classifies it as normal, and as attack otherwise. In the experiments, we set the dimensions of the spatio-temporal embedding z_i to 28 × 28.
(2) GANomaly [8]: is based on the GAN framework. It develops a new generator with an encoder-decoder-encoder structure. The algorithm regards the difference between the embedding of the first encoder and the embedding of the second encoder as the anomaly score to distinguish normal from abnormal samples. In the experiments, we set the dimensions of the spatio-temporal embedding z_i to 28 × 28.
(3) LODA [9]: is an ensemble outlier detection model. It combines a series of weak anomaly detectors to produce a strong detector. In addition, the model fits real-time data streams and is resistant to missing values. In the experiments, we fed the learned representations into LODA for detection.
(4) Isolation-Forest [10]: isolates observations by randomly selecting a feature and then randomly selecting a split value between the maximum and minimum values of the selected feature. In the experiments, we input the spatio-temporal embedding z_i into Isolation-Forest and set the number of estimators to 100 and the maximum number of samples to 256.
(5) LOF [11]: measures the local density of data samples. If a sample has low local density, it is an outlier; otherwise, it is normal. In the experiments, we input the spatio-temporal embedding z_i into LOF, set the number of neighbors to 20, and used Euclidean distance as the metric for finding neighbors.
(6) KNN [12]: selects the k nearest neighbors of a data sample based on a distance metric and calculates the anomaly score of the sample according to the anomaly status of those k neighbors. In the experiments, we input the spatio-temporal embedding z_i into KNN, set the number of neighbors to 5, and adopted Euclidean distance as the metric.
(7) ABOD [13]: uses angles as a more robust measure to detect outliers. If most neighbors of a sample lie in the same direction relative to it, the sample is an outlier; otherwise, it is normal. In the experiments, we input the spatio-temporal embedding z_i into ABOD with k = 10 and the cosine value as the angle metric.
(8) STODP1. We partition the sensing data into non-overlapped segments. Global mean pooling is then applied to fuse the segments of different sensors into an averaged feature vector. We feed the fused feature vector into OC-SVM for outlier detection. The kernel of the OC-SVM is defined in Equation 9.

Fig. 5. Comparison of different models in terms of Accuracy, Precision, F-measure, and AUC.

Fig. 6. Comparison of different phases of the representation learning module in terms of Accuracy, Precision, F-measure, and AUC.
(9) STODP2. We applied global mean pooling to the temporal embedding vectors generated in Section III-A to obtain a global feature vector of the WTN, which was fed into OC-SVM for outlier detection. The kernel of the OC-SVM is our proposed kernel function defined in Equation 9.
(10) STODP3. To study the effect of the seq2seq module, we remove it from our framework pipeline. The temporal segments of different sensors are organized into a graph set, which is input into the graph embedding module to obtain the final embedding. Finally, the embedding is fed into OC-SVM for outlier detection. The kernel of the OC-SVM is defined in Equation 9.
In the experiments, the spatio-temporal representation learning phase of our framework is used to encode the spatio-temporal patterns and data characteristics into learned features, while the one-class outlier detection phase is used to detect the cyber attack status of the water treatment system based on the spatio-temporal representations. We only use normal data to train our model; after training, the model can detect the status of a test set that contains both normal and attack data. All evaluations are performed on an x64 machine with an Intel i9-9920X 3.50 GHz CPU and 128 GB RAM, running Ubuntu 18.04.
_D. Overall Performances_

We compare our method with the baseline models in terms of accuracy, precision, F-measure, and AUC. Figure 5 shows that the average performance of our method (STOD) is the best in terms of accuracy, F-measure, and AUC, and that our method ranks second in terms of precision, compared with the baseline models. A potential interpretation of this observation is that STOD captures the temporal effects (delayed, continued) and the spatial effect (cascading) of cyber attacks through its spatio-temporal representation learning phase in a balanced way. Because STOD captures more intrinsic features of cyber attacks, the model not only finds more attack samples but also makes fewer mistakes on normal samples, so its distinguishing ability is greatly improved. On a single evaluation metric, STOD may be poorer than a particular baseline; overall, however, STOD outperforms the baseline models with respect to accuracy, F-measure, and AUC, which signifies that our detection framework has the best attack detection ability.

Another observation is that the performances of LOF, ABOD, and KNN are much worse than those of the other models. A possible reason is that these models exploit distance- or angle-based assessment strategies. Such geometrical measurements become vulnerable after projecting data into a high-dimensional space due to the "curse of dimensionality"; thus, these models cannot achieve excellent performance.

_E. Study of Representation Learning_

The representation learning phase of our framework includes: (1) partitioning sensor data streams into segments; (2) modeling the temporal dependencies with seq2seq; (3) modeling the spatial dependencies with graph embedding. What role does each of the three steps play in our framework? We iteratively remove each of the three steps to obtain three variants, namely STODP1, STODP2, and STODP3. We then compare the three variants with our original framework to examine the importance of each removed step for detection performance.
Figure 6 shows the experimental results of STOD, STODP1, STODP2, and STODP3, which clearly show that STOD outperforms STODP1, STODP2, and STODP3 in terms of accuracy, precision, F-measure, and AUC by a large margin. A reasonable explanation of this phenomenon is that attack patterns are spatially and temporally structured; thus, when more buried spatio-temporal patterns are modeled, the method becomes more discriminative. The results validate that the three steps (segmentation, temporal, spatial) of the representation learning phase are critical for attack pattern characterization.

Fig. 7. Comparison of different kernels with respect to Accuracy, Precision, F-measure, and AUC.
_F. Study of Pairwise Kernel Function_
The kernel function is vital for the SVM-based algorithm family. An effective kernel function maps challenging data samples into a high-dimensional space and makes them more separable for the detection task. We design experiments to validate the improved performance of our pairwise kernel function by comparing it with the following baseline kernel functions:
(1) linear: a linear function with a limited number of parameters, so its calculation is quick; the dimension of the new feature space is similar to that of the original space.
(2) poly: a polynomial function with more parameters than the linear kernel; it maps data samples into a high-dimensional space.
(3) rbf: a Gaussian (nonlinear) function that exhibits excellent performance in many common situations.
(4) sigmoid: a sigmoid function; when the SVM uses this kernel to model data samples, the effect is similar to using a multi-layer perceptron.
Figure 7 shows a comparison between our kernel and the baseline kernels with respect to all evaluation metrics. We observe that our kernel shows significant improvement over the baseline kernels in terms of Accuracy, Precision, F-measure, and AUC. This indicates that our kernel can effectively amplify the attack patterns in the original data and maximize the difference between normal and attack patterns by mapping the original data samples into a high-dimensional feature space. This experiment validates the superiority of our pairwise kernel function.
_G. Study of Time Costs_

We aim to study the training and testing time costs of the different models. Specifically, we divided the dataset into six non-overlapping folds and then used cross-validation to evaluate the time costs of the different models.

Fig. 8. Comparison of different models based on training time cost.
Figure 8 shows the comparison of training time costs among the different models. We find that the training time cost of each model is relatively stable. An obvious observation is that GANomaly has the largest training time cost among all models, because its encoder-decoder-encoder network architecture is time-consuming. In addition, the training time of STOD is slightly longer than that of OC-SVM. This can be explained by the fact that the similarity calculation of the pairwise kernel function increases the time cost, since we need to calculate the similarity between the representation vectors of every pair of training data samples.
Fig. 9. Comparison of different models based on testing time cost.
Figure 9 shows the comparison of testing time costs among the different models. The testing time cost of each model is relatively stable as well, and most models complete the testing task within one second, except GANomaly. Comparing Figure 8 and Figure 9, we find that the testing time of our method is shorter than its training time. This can be explained by a strategy of our method: once model training is completed, our method stores the kernel mapping parameters to save computation time. In addition, GANomaly again shows the largest testing time cost, because its testing phase needs to calculate two representation vectors for each testing data sample; thus, its testing time is not much lower than its training time.
_H. Case Study: Visualization for Spatio-temporal Embedding_
The spatio-temporal representation learning phase is an important step in our structured detection framework. An effective representation learning method should be able to preserve the patterns of normal and attack behaviors and maximize the distance between normal and attack samples in the detection task. We visualize the spatio-temporal embeddings in a 2-dimensional space in order to validate the discriminative capability of our learned representations. Specifically, we first select 3000 normal and 3000 attack spatio-temporal embeddings, respectively. We then utilize the t-SNE manifold method to visualize the embeddings. Figure 10 shows the visualization results for the normal and attack data samples. We find that our representation learning result is discriminative in the transformed 2-dimensional space: the learned normal and attack representation vectors are clustered together to form dense areas. This observation also suggests that non-linear models are more appropriate than linear methods for distinguishing normal and attack behaviors.

Fig. 10. Visualization results for the spatio-temporal embeddings.
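This visualization step can be reproduced with scikit-learn's t-SNE; a sketch with the sample counts from the paper (the embedding arrays themselves are random placeholders):

```python
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

Z_normal = np.random.rand(3000, 8)       # placeholder normal embeddings
Z_attack = np.random.rand(3000, 8)       # placeholder attack embeddings

Z = np.vstack([Z_normal, Z_attack])
Z2d = TSNE(n_components=2).fit_transform(Z)

plt.scatter(Z2d[:3000, 0], Z2d[:3000, 1], s=2, label="normal")
plt.scatter(Z2d[3000:, 0], Z2d[3000:, 1], s=2, label="attack")
plt.legend(); plt.show()
```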
V. RELATED WORK
**Representation Learning.** Representation learning aims to learn a low-dimensional vector to represent the given data of an object. Representation learning approaches are threefold: (1) probabilistic graphical models; (2) manifold learning approaches; (3) auto-encoders and their variants. The main idea of probabilistic graphical models is to learn an uncertain knowledge representation via a Bayesian network [14], [15]; the key challenge of such methods is to find the topological relationships among the nodes of the graphical model. Manifold learning methods use non-parametric approaches to find a manifold and embedding vectors in a low-dimensional space based on neighborhood information [16], [17]; the discriminative ability of such methods is high in many applications, but manifold learning is usually time-costly. Recently, deep learning models have been introduced for representation learning. The auto-encoder is a classical neural network framework that embeds non-linear relationships in the feature space by minimizing the reconstruction loss between the original and reconstructed data [18]–[20]. When representation learning meets spatial data, auto-encoders can integrate spatio-temporal statistical correlations to learn more effective embedding vectors [21]–[23]. For instance, Singh et al. use the auto-encoder framework to learn spatio-temporal representations of traffic videos to help detect road accidents [24]. Wang et al. utilize spatio-temporal representation learning to learn intrinsic features of GPS trajectory data to help analyze driving behavior [25].
**Deep Outlier Detection.** Outlier detection is a classical problem with important applications such as fraud detection and cyber attack detection. Recently, deep learning has been introduced into outlier detection. According to the availability of outlier labels, deep anomaly detection can be classified into three categories: (1) supervised deep outlier detection; (2) semi-supervised deep outlier detection; (3) unsupervised deep outlier detection. First, supervised deep outlier detection models usually train a deep classification model to distinguish whether a data sample is normal or not [26], [27]. These models are not widely applicable in reality, because it is difficult to obtain outlier labels; meanwhile, data imbalance seriously degrades the performance of supervised models. Second, semi-supervised outlier detection methods usually train a deep auto-encoder model to learn the latent embeddings of normal data [28]–[30]; the learned embedding vectors are then used to accomplish the outlier detection task. Within deep semi-supervised outlier detection, one-class classification is an important research direction; for instance, Liu et al. proposed to detect anomalies in uncertain data with the SVDD algorithm [31], and many experiments have shown the adaptability of one-class SVMs. Third, unsupervised outlier detection models do not need any label information; they detect outliers depending on intrinsic properties (e.g., scores, distances, similarities) of the data [32]–[34]. Such methods are appropriate for scenarios where it is hard to collect label information.
**Cyber Attack Detection in Water Treatment Network.**
Water purification plants are critical infrastructures in our local
communities. Such infrastructures are usually vulnerable to
cyber attacks. Early detection of cyber attacks in water treatment networks is significant for defending our infrastructure
safety and public health. There are many existing studies
about outlier detection in water treatment networks [2], [4],
[5], [35], [36]. For instance, Adepu et al. studied the impact
of cyber attacks on water distribution systems [37]. Goh et
al. designed an unsupervised learning approach that regards
Recurrent Neural Networks as a temporal predictor to detect
attacks [1]. Inoue et al. compared the performances of Deep
Neural Network and OC-SVM on outlier detection in water
treatment networks [38]. Raciti et al. developed a real-time outlier detection system based on clustering algorithms and deployed it in a water treatment network [6]. However, there are few studies that integrate deep graph representation learning, spatio-temporal patterns, and one-class detection to address cyber attack problems more effectively.
VI. CONCLUDING REMARKS
We studied the problem of cyber attack detection in water treatment networks. To this end, we proposed a structured detection framework that integrates spatio-temporal patterns, deep representation learning, and one-class detection. Specifically, we first partitioned the sensing data of the WTN into a sequence of fixed-size time segments. We then built a deep spatio-temporal representation learning approach to preserve the spatio-temporal patterns of attack and normal behaviors. The representation learning approach includes two modules: (1) a temporal embedding module, which preserves the temporal dependencies within a time segment; we then constructed spatio-temporal graphs by mapping the temporal embeddings to the WTN as node attributes; (2) a spatial embedding module, which learns the fused spatio-temporal embedding from the spatio-temporal graphs. In addition, we developed an integrated one-class detection method with an improved pairwise kernel. The new kernel augments the difference between normal and attack patterns via the pairwise similarity among the deep embedding vectors of system behaviors. Finally, we conducted extensive experiments to illustrate the effectiveness of our method: STOD achieves an accuracy of 91.65%, with average improvement ratios of 82.78% and 22.96% with respect to F1 and AUC, respectively, compared with the baseline methods.
REFERENCES
[1] J. Goh, S. Adepu, M. Tan, and Z. S. Lee, “Anomaly detection in
cyber physical systems using recurrent neural networks,” in 2017 IEEE
_18th International Symposium on High Assurance Systems Engineering_
_(HASE)._ IEEE, 2017, pp. 140–145.
[2] M. Romano, Z. Kapelan, and D. Savić, “Real-time leak detection in
water distribution systems,” in Water Distribution Systems Analysis
_2010, 2010, pp. 1074–1082._
[3] D. Li, D. Chen, B. Jin, L. Shi, J. Goh, and S.-K. Ng, “Mad-gan:
Multivariate anomaly detection for time series data with generative
adversarial networks,” in International Conference on Artificial Neural
_Networks._ Springer, 2019, pp. 703–716.
[4] C. Feng, T. Li, and D. Chana, “Multi-level anomaly detection in
industrial control systems via package signatures and lstm networks,” in
_2017 47th Annual IEEE/IFIP International Conference on Dependable_
_Systems and Networks (DSN)._ IEEE, 2017, pp. 261–272.
[5] Q. Lin, S. Adepu, S. Verwer, and A. Mathur, “Tabor: A graphical modelbased approach for anomaly detection in industrial control systems,”
in Proceedings of the 2018 on Asia Conference on Computer and
_Communications Security, 2018, pp. 525–536._
[6] M. Raciti, J. Cucurull, and S. Nadjm-Tehrani, “Anomaly detection
in water management systems,” in Critical infrastructure protection.
Springer, 2012, pp. 98–119.
[7] L. Ruff, R. Vandermeulen, N. Goernitz, L. Deecke, S. A. Siddiqui,
A. Binder, E. Müller, and M. Kloft, “Deep one-class classification,”
in International conference on machine learning, 2018, pp. 4393–4402.
[8] S. Akcay, A. Atapour-Abarghouei, and T. P. Breckon, “Ganomaly:
Semi-supervised anomaly detection via adversarial training,” in Asian
_Conference on Computer Vision._ Springer, 2018, pp. 622–637.
[9] T. Pevný, “Loda: Lightweight on-line detector of anomalies,” Machine
_Learning, vol. 102, no. 2, pp. 275–304, 2016._
[10] F. T. Liu, K. M. Ting, and Z.-H. Zhou, “Isolation forest,” in 2008 Eighth
_IEEE International Conference on Data Mining._ IEEE, 2008, pp. 413–
422.
[11] M. M. Breunig, H.-P. Kriegel, R. T. Ng, and J. Sander, “Lof: identifying
density-based local outliers,” in Proceedings of the 2000 ACM SIGMOD
_international conference on Management of data, 2000, pp. 93–104._
[12] P. Soucy and G. W. Mineau, “A simple knn algorithm for text categorization,” in Proceedings 2001 IEEE International Conference on Data
_Mining._ IEEE, 2001, pp. 647–648.
[13] H.-P. Kriegel, M. Schubert, and A. Zimek, “Angle-based outlier detection in high-dimensional data,” in Proceedings of the 14th ACM
_SIGKDD international conference on Knowledge discovery and data_
_mining, 2008, pp. 444–452._
[14] N. Friedman, “Inferring cellular networks using probabilistic graphical
models,” Science, vol. 303, no. 5659, pp. 799–805, 2004.
[15] M. J. Johnson, D. K. Duvenaud, A. Wiltschko, R. P. Adams, and S. R.
Datta, “Composing graphical models with neural networks for structured
representations and fast inference,” in Advances in neural information
_processing systems, 2016, pp. 2946–2954._
[16] B. Zhu, J. Z. Liu, S. F. Cauley, B. R. Rosen, and M. S. Rosen, “Image
reconstruction by domain-transform manifold learning,” Nature, vol.
555, no. 7697, pp. 487–492, 2018.
[17] W. Wang, Y. Yan, F. Nie, S. Yan, and N. Sebe, “Flexible manifold
learning with optimal graph for image and video representation,” IEEE
_Transactions on Image Processing, vol. 27, no. 6, pp. 2664–2675, 2018._
[18] Y. Wang, H. Yao, and S. Zhao, “Auto-encoder based dimensionality
reduction,” Neurocomputing, vol. 184, pp. 232–242, 2016.
[19] H.-I. Suk, S.-W. Lee, D. Shen, A. D. N. Initiative et al., “Latent feature
representation with stacked auto-encoder for ad/mci diagnosis,” Brain
_Structure and Function, vol. 220, no. 2, pp. 841–859, 2015._
[20] J. Calvo-Zaragoza and A.-J. Gallego, “A selectional auto-encoder approach for document image binarization,” Pattern Recognition, vol. 86,
pp. 37–47, 2019.
[21] L. Cedolin and B. Delgutte, “Spatiotemporal representation of the
pitch of harmonic complex tones in the auditory nerve,” Journal of
_Neuroscience, vol. 30, no. 38, pp. 12 712–12 724, 2010._
[22] C.-Y. Ma, M.-H. Chen, Z. Kira, and G. AlRegib, “Ts-lstm and temporalinception: Exploiting spatiotemporal dynamics for activity recognition,”
_Signal Processing: Image Communication, vol. 71, pp. 76–87, 2019._
[23] Z. Pan, Y. Liang, W. Wang, Y. Yu, Y. Zheng, and J. Zhang, “Urban
traffic prediction from spatio-temporal data using deep meta learning,”
in Proceedings of the 25th ACM SIGKDD International Conference on
_Knowledge Discovery & Data Mining, 2019, pp. 1720–1730._
[24] D. Singh and C. K. Mohan, “Deep spatio-temporal representation for
detection of road accidents using stacked autoencoder,” IEEE Transac_tions on Intelligent Transportation Systems, vol. 20, no. 3, pp. 879–887,_
2018.
[25] P. Wang, X. Li, Y. Zheng, C. Aggarwal, and Y. Fu, “Spatiotemporal
representation learning for driving behavior analysis: A joint perspective
of peer and temporal dependencies,” IEEE Transactions on Knowledge
_and Data Engineering, 2019._
[26] Y. Yamanaka, T. Iwata, H. Takahashi, M. Yamada, and S. Kanai, “Autoencoding binary classifiers for supervised anomaly detection,” in Pa_cific Rim International Conference on Artificial Intelligence._ Springer,
2019, pp. 647–659.
[27] Y. Kawachi, Y. Koizumi, S. Murata, and N. Harada, “A two-class hyperspherical autoencoder for supervised anomaly detection,” in ICASSP
_2019-2019 IEEE International Conference on Acoustics, Speech and_
_Signal Processing (ICASSP)._ IEEE, 2019, pp. 3047–3051.
[28] L. Ruff, R. A. Vandermeulen, N. Görnitz, L. Deecke, S. A. Siddiqui, A. Binder, E. Müller, and M. Kloft, “Deep one-class classification,” in
_Proceedings of the 35th International Conference on Machine Learning,_
vol. 80, 2018, pp. 4393–4402.
[29] R. Chalapathy, A. K. Menon, and S. Chawla, “Anomaly detection using
one-class neural networks,” arXiv preprint arXiv:1802.06360, 2018.
[30] M. Zhao, L. Jiao, W. Ma, H. Liu, and S. Yang, “Classification and
saliency detection by semi-supervised low-rank representation,” Pattern
_Recognition, vol. 51, pp. 281–294, 2016._
[31] B. Liu, Y. Xiao, L. Cao, Z. Hao, and F. Deng, “Svdd-based outlier detection on uncertain data,” Knowledge and information systems, vol. 34,
no. 3, pp. 597–618, 2013.
[32] Y. Liu, Z. Li, C. Zhou, Y. Jiang, J. Sun, M. Wang, and X. He, “Generative
adversarial active learning for unsupervised outlier detection,” IEEE
_Transactions on Knowledge and Data Engineering, 2019._
[33] S. Wang, Y. Zeng, X. Liu, E. Zhu, J. Yin, C. Xu, and M. Kloft,
“Effective end-to-end unsupervised outlier detection via inlier priority of
discriminative network,” in Advances in Neural Information Processing
_Systems, 2019, pp. 5960–5973._
[34] W. Lu, Y. Cheng, C. Xiao, S. Chang, S. Huang, B. Liang, and T. Huang,
“Unsupervised sequential outlier detection with deep architectures,”
_IEEE Transactions on Image Processing, vol. 26, no. 9, pp. 4321–4330,_
2017.
[35] D. T. Ramotsoela, G. P. Hancke, and A. M. Abu-Mahfouz, “Attack
detection in water distribution systems using machine learning,” Human_centric Computing and Information Sciences, vol. 9, no. 1, p. 13, 2019._
[36] S. Adepu and A. Mathur, “Using process invariants to detect cyber
attacks on a water treatment system,” in IFIP International Conference
_on ICT Systems Security and Privacy Protection._ Springer, 2016, pp.
91–104.
[37] S. Adepu, V. R. Palleti, G. Mishra, and A. Mathur, “Investigation
of cyber attacks on a water distribution system,” arXiv preprint
_arXiv:1906.02279, 2019._
[38] J. Inoue, Y. Yamagata, Y. Chen, C. M. Poskitt, and J. Sun, “Anomaly
detection for a water treatment system using unsupervised machine
learning,” in 2017 IEEE International Conference on Data Mining
_Workshops (ICDMW)._ IEEE, 2017, pp. 1058–1065.
# XOX Fabric: A hybrid approach to blockchain transaction execution
### Srinivasan Keshav
University of Cambridge
Cambridge, UK
Email: [email protected]
### Christian Gorenflo
University of Waterloo
Waterloo, Canada
Email: [email protected]
### Lukasz Golab
University of Waterloo
Waterloo, Canada
Email: [email protected]
Abstract—Performance and scalability are major concerns for blockchains: permissionless systems are typically limited by slow proof of X consensus algorithms and sequential post-order transaction execution on every node of the network.
By introducing a small amount of trust in their participants,
permissioned blockchain systems such as Hyperledger Fabric can
benefit from more efficient consensus algorithms and make use
of parallel pre-order execution on a subset of network nodes.
Fabric, in particular, has been shown to handle tens of thousands
of transactions per second. However, this performance is only
achievable for contention-free transaction workloads. If many
transactions compete for a small set of hot keys in the world state,
the effective throughput drops drastically. We therefore propose
XOX: a novel two-pronged transaction execution approach that
both minimizes invalid transactions in the Fabric blockchain
and maximizes concurrent execution. Our approach additionally
prevents unintentional denial of service attacks by clients resubmitting conflicting transactions. Even under fully contentious
workloads, XOX can handle more than 3000 transactions per
second, all of which would be discarded by regular Fabric.
I. INTRODUCTION
Blockchain systems have substantially evolved from their
beginnings as tamper-evident append-only logs. With the addition of smart contracts, complex computations based on the
blockchain’s state become possible. In fact, several permissionless and permissioned systems such as Ethereum [14] and
Hyperledger Fabric [3] allow Turing-complete computations.
However, uncoordinated execution of smart contracts in a
decentralized network can result in inconsistent blockchains,
a fatal flaw. Fundamentally, blockchain systems have two
options to resolve such conflicts. They can either coordinate,
i.e., execute contracts after establishing consensus on a linear
ordering, or they can deterministically resolve inconsistencies
after parallel execution.
Most existing blockchain systems implement smart contract
execution after ordering transactions, giving this pattern the
name order-execute (OX). In these systems, smart contract
execution happens sequentially. This allows each execution
to act on the result of the previous execution, but restricts
the computation to a single thread, limiting performance.
Blockchains using this pattern must additionally guarantee
that the smart contract execution reaches the same result on
every node in the network that replicates the chain, typically
by requiring smart contracts to be written in a domain-specific deterministic programming language. This restricts
programmability. Moreover, this makes the use of external data
sources, so-called oracles, difficult, because they cannot be
directly controlled and may deliver different data to different
nodes in the network.
Other blockchain systems, most notably Hyperledger Fabric,
use an execute-order (XO) pattern. Here, smart contracts referred to by transactions are executed in parallel in a container
before the ordering phase. Subsequently, only the results of
these computations are ordered and put into the blockchain.
Parallelized smart contract execution allows, among other
benefits, a nominal transaction throughput orders of magnitude
higher than that of other blockchains [5]. However, a model
that executes each transaction in parallel is inherently unable
to detect transaction conflicts[1] during execution.
Prior work on contentious workloads in Fabric focuses on
detecting conflicting transactions during ordering and aborting them early. However, this tightly couples architecturally
distinct parts of the Fabric network, breaking its modular
structure. Furthermore, early abort only treats a symptom and
not the cause in that it only filters out conflicting transactions
instead of preventing their execution in the first place. This
approach will not help if many transactions try to modify
a small number of ‘hot’ keys. For example, suppose the
system supports a throughput of 1000 transactions per second.
Additionally, suppose 20 transactions try to access the same
key in each block of 100 transactions. Then, only one of the
20 transactions will be valid and the rest must be aborted early.
Subsequently, all 19 aborted clients will attempt to re-execute
their transactions, adding to the 20 new conflicting transactions
in the next block. This leads to 38 aborted transactions
in the next round, and so on. Clearly, with cumulative re-execution, the number of aborted transactions grows linearly
until it surpasses the throughput of the system. Thus, if clients
re-execute aborted transactions, their default behaviour, this
effectively becomes an unintentional denial of service attack
on the blockchain!
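The arithmetic behind this cascade is easy to reproduce. The following minimal Go sketch (our own illustration; the figure of 20 new hot-key transactions per block is taken from the example above) simulates how the backlog grows when only one conflicting transaction can commit per block:

```go
package main

import "fmt"

func main() {
	const newPerBlock = 20 // fresh transactions touching the hot key per block
	backlog := 0           // aborted transactions that clients will re-submit

	for block := 1; block <= 5; block++ {
		contenders := newPerBlock + backlog // all transactions racing for the hot key
		committed := 1                      // only the first conflicting transaction stays valid
		backlog = contenders - committed    // everyone else re-submits in the next block
		fmt.Printf("block %d: %d contenders, %d aborted\n", block, contenders, backlog)
	}
}
```

Running this prints 19, 38, 57, ... aborted transactions: the backlog grows by 19 per block, exactly the linear growth described above.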
This inherent problem with the XO pattern greatly reduces
the performance of uncoordinated marketplaces or auctions.
For example, conflicting transactions cannot be avoided in use
cases such as payroll, where an employer transfers credits to
1Two transactions are said to conflict if either one reads or writes to a key
that is written to by the other.
a large number of employees periodically, or energy trading,
where a small number of producers offer fungible units of
energy to a large group of consumers.
We therefore propose XOX Fabric that essentially adds
a second deterministic re-execution phase to Fabric. This
phase executes ‘patch-up code’ that must be added to a smart
contract. We show that, in many cases, this eliminates the
need to re-submit and re-execute conflicting transactions. Our
approach can deal with highly skewed contentious workloads
with a handful of hot keys, while still retaining the decoupled,
modular structure of Fabric. Our contributions are as follows:
- Hybrid execution model: Our execute-order-execute (XOX) model allows us to choose an optimal
trade-off between concurrent high-performance execution
and consistent linear execution, preventing the cumulative
re-execution problem.
- Compatibility with external oracles: To allow the use
of external oracles in the deterministic second execution
phase, we gather and save oracle inputs in the pre-order
execution step.
- Concurrent transaction processing: By computing a DAG
of transaction dependencies in the post-order phase, Fabric peers can maximize parallel transaction processing.
Specifically, they not only parallelize transaction validation and commitment, making full use of modern multicore CPUs, but also re-execute transactions in parallel
as long as these transactions are not dependent on each
other. This alleviates the execution bottleneck of OX
blockchains.
We achieve these contributions while being fully legacy-compatible and without affecting Fabric's modularity. In fact,
XOX can replace an existing Fabric network setup by changing
only the Fabric binaries[2].
II. BACKGROUND
A. State machine replication and invalid state transitions
We can model blockchain systems as state machine replication mechanisms [11]. Each node in the network stores a
replica of a state machine, with the genesis block as its START
state. Smart contracts then become state transition functions.
They take client requests (transactions) as input and compute
a state transition which can be subsequently committed to the
world state. This world state is either implicitly created by the
data on the blockchain or explicitly tracked in a data store,
most commonly a key-value store. Because of blockchain’s
inherently decentralized nature, keeping the world state consistent on all nodes is not trivial. A node’s stale state, a client’s
incomplete information, parallel smart contract execution, or
malicious behaviour can produce conflicting state transitions.
Therefore, a blockchain’s execution model must prevent such
transactions from modifying the world state. There are two
possibilities to accomplish this, the OX and XO approaches,
that we have already outlined. In the next two subsections, we
explore some subtleties of each approach.
2Source code available at https://github.com/cgorenflo/fabric/tree/xox-1.4
B. The OX model
The order-execute (OX) approach guarantees consensus on
linearization of transactions in a block. However, it requires
certain restrictions on the execution engine to guarantee that
each node makes identical state transitions. First, the output of
the execution engine must be deterministic. This requires the
use of a deterministic contract language, such as Ethereum’s
Solidity, which must be learned by the application developer
community. It also means that external oracles cannot easily be incorporated because different nodes in the network
may receive different information from the oracle. Second,
depending on the complexity of smart contracts, there needs
to be a mechanism to deal with the halting problem, i.e., the
inherent a priori unknowability of contract execution duration.
A common solution to this problem is the inclusion of an
execution fee like Ethereum’s gas, which aborts long-running
contracts.
C. The XO model
The execute-order (XO) model approach allows transactions
to be executed in arbitrary order: the resulting state transitions
are then ordered and aggregated into blocks. This allows
transactions to be executed in parallel, increasing throughput.
However, the world state computed at the time of state
transition commitment is known to execution engines only
after some delay, and all transactions are inevitably executed
on a stale view of the world state. This makes it possible for
transactions to result in invalid state transitions even though
they were executed successfully before ordering. It necessitates
a validation step after ordering so transitions can be invalidated
deterministically based on detected conflicts. Consequently,
for a transaction workload with a set of frequently updated
keys, the effective throughput of an XO system can be
significantly lower than the nominal throughput (we formalize
this as the Hot Key Theorem below).
D. Hyperledger Fabric
Hyperledger Fabric has been described in detail by Androulaki et al [3]. Below, we describe those parts of the Fabric
architecture that are relevant to this work.
A Fabric network consists of peer nodes replicating the
blockchain and the world state, and a set of nodes called the
ordering service whose purpose is to order transactions into
blocks. The world state is a key-value view of the state created
by executing transactions. The nodes can belong to different
organizations collaborating on the same blockchain. Because
of the strict separation of concerns, Fabric’s blockchain model
is independent of the consensus algorithm in use. In fact,
release 1.4.1 supports three plug-in algorithms, solo, Kafka
and Raft, out of the box. As we will show in Section V, we
preserve Fabric’s modularity completely.
Apart from replication and ordering, Fabric needs a way
to execute its equivalent of smart contracts, called chaincode.
Endorsers, a subset of peers, fill this role. Each transaction
proposal received by an endorser is simulated in isolation. A
successful simulation of arbitrarily complex chaincode results
in a read and write set (RW set) of {key, value, version}
tuples. They act as instructions for transitions of the world
state. The endorser then appends the RW set to the initial
proposal, signs the response, sends it back to the requesting
client, and discards the world state effect of the simulated
transaction before executing the next one.
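For illustration, the outcome of such a simulation can be pictured with structures like the following Go sketch; the field names are simplified stand-ins and not Fabric's exact protobuf definitions:

```go
// Version pins a key to the state it had when the endorser read it.
type Version struct {
	BlockNum uint64 // block that last wrote the key
	TxNum    uint64 // position of that write within the block
}

// KVRead records a key read during simulation together with the observed version.
type KVRead struct {
	Key     string
	Version Version
}

// KVWrite records a key and the value the transaction intends to commit.
type KVWrite struct {
	Key   string
	Value []byte
}

// RWSet summarizes a simulated transaction; it is signed by the endorser and
// later validated against the committed world state.
type RWSet struct {
	Reads  []KVRead
	Writes []KVWrite
}
```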
To combat non-determinism and malicious behaviour during
chaincode execution, endorsement policies can be set up. For
example, a client may be required to collect identical responses
from three endorsers across two different organizations before
sending the transaction to the ordering service.
After transactions have been ordered into blocks, they are
disseminated to all peers in the network. These peers first
independently perform a syntactic validation of the blocks and
verify the endorsement policies. Lastly, they sequentially compare each transaction’s RW set to the current view of the world
state. If the version number of any key in the set disagrees
with the world state, the transaction is discarded. Thus, any
RW set overlap across transactions in the same block leads to
an invalidation of all but the first conflicting transaction. As a
consequence of this execution model, Fabric’s blockchain also
contains invalid transactions, which every peer independently
flags as such during validation and ignores during commitment
to world state. In the worst case, all transactions in a block
might be invalid. This can drastically reduce the effective
transaction throughput of the system.
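Reusing the RW set types sketched above, the sequential validation rule can be summarized as follows (a simplification of Fabric's multi-version check; real Fabric also handles range queries and batches its state updates):

```go
// validateBlock marks a transaction valid only if every key it read is still
// at the version observed during simulation. Committing the writes of a valid
// transaction bumps those versions, so any later transaction in the block
// that read one of the same keys is invalidated.
func validateBlock(txs []RWSet, state map[string]Version, blockNum uint64) []bool {
	valid := make([]bool, len(txs))
	for i, tx := range txs {
		ok := true
		for _, r := range tx.Reads {
			if state[r.Key] != r.Version {
				ok = false // stale read: an earlier transaction already changed this key
				break
			}
		}
		valid[i] = ok
		if ok {
			for _, w := range tx.Writes {
				state[w.Key] = Version{BlockNum: blockNum, TxNum: uint64(i)}
			}
		}
	}
	return valid
}
```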
III. RELATED WORK
Performance is an important issue for blockchain systems
since they are still slower than traditional database systems [4],
[5]. While most research focuses on consensus algorithms, less
work has been done to optimize other aspects of the transaction
flow, especially transaction execution.
We base this work on FastFabric, our previous optimization
of Hyperledger Fabric [9]. We introduced efficient data structures, caching, and increased parallelization in the transaction
validation pipeline to increase Fabric's throughput for conflict-free transaction workloads by a factor of six to seven. In this
paper, we address the issue of conflicting transactions.
To the best of our knowledge, a document from the Fabric
community [13] is the first to propose a secondary post-order
execution step for Fabric. However, the allowed commands
were restricted to addition, subtraction, and checking if a number is within a certain interval. Furthermore, this secondary
execution step is always triggered regardless of the workload,
and is not parallelized. This diminishes the value of retaining
the first pre-order execution step and introduces the same
bottleneck that OX models have to deal with.
Nasirifard et al [10] take this idea one step further. By
introducing conflict-free replicated data types (CRDTs), which
allow conflicting transactions to be merged during the validation step, they follow a similar path to our work. However,
their solution has several limitations. It can only process
transactions sequentially, one block at a time. When conflicts
are discovered, they use the inherent functionality of CRDTs to
resolve them. While this enables efficient computation, it also
restricts the kind of execution that can be done. For example,
it is not possible to check a condition like negative account
balance before merging two transaction results.
Amiri et al [2] introduce ParBlockchain using a similar architecture to Fabric’s but with an OX model. Here, the ordering
service also generates a dependency graph of the transactions
in a block. Subsequently, transactions in the new block are
distributed to nodes in the network to be executed, taking the
dependencies into account. Only a subset of nodes executes
any given transaction and shares the result with the rest of
the network. This approach has two drawbacks. First, the
ordering service must determine the transaction dependencies
before they are executed. This requires the orderers to have
complete knowledge of all installed smart contracts, and, as a
result, restricts the complexity of allowed contracts. Even if a
single conditional statement relies on a state value, for example
Read the value of key k, where k is the value to be read from key k′, reasoning about the result becomes impossible.
Second, depending on the workload, all nodes may have to
communicate the current world state after every transaction
execution to resolve execution deadlocks. This leads to a
significant networking overhead.
CAPER [1] extends ParBlockchain by modelling the
blockchain as a directed acyclic graph (DAG). This enables
sharding of transactions. Each shard maintains an internal
chain of transactions that is intertwined with a global cross-shard chain. Both internal chains and the global chain are
totally ordered. This approach works well for scenarios with
tightly siloed data pockets that can easily be sharded and
where cross-shard transactions are rare. In this case, internal
transactions of different shards can be executed in parallel.
However, if the workload does not have clear boundaries to
separate the shards, then most transactions will use the global
chain, negating the benefit of CAPER.
Sharma et al [12] approach blockchains from a database
point of view and incorporate concepts such as early abort
and transaction reordering into Hyperledger Fabric. However,
they do not follow its modular design and closely couple the
different building blocks. For both early abort and transaction
reordering, the ordering service needs to understand transaction contents to unpack and analyze RW sets. Furthermore,
transaction reordering only works in pathological cases. Whenever a key appears both in the read and write set, which is
the case for any application that transfers any kind of asset,
reordering will not eliminate RW set conflicts. While early
transaction abort might increase overall throughput slightly,
it cannot solve the problem of hot keys and only skews the
transaction workload away from those keys.
Zhang et al [15] present a solution for a client-side early
abort mechanism for Fabric. They introduce a transaction
cache on the client that analyzes endorsed transactions to detect RW set conflicts and only sends conflict-free transactions
to the ordering service. Transactions that have dependencies
are held in the cache until the conflict is resolved and then
they are sent back to the endorsers for re-execution. This
approach prevents invalid transactions from a single client, but
cannot deal with conflicts between multiple clients. Moreover,
it cannot deal with hot key workloads.
Lastly, Escobar et al [6] investigate parallel state machine
replication. They focus on efficient data structures to keep
track of parallelizable, i.e., independent state transitions. While
this might be interesting to incorporate into Fabric in the
future, we show in section VIII that the overhead of our
relatively simple implementation of a dependency tracker is
negligible compared to the transaction execution overhead.
IV. THE HOT KEY THEOREM
We now state and prove a theorem that limits the performance of any XO system.
Hot Key Theorem. Let l be the average time between a
transaction’s execution and its state transition commitment.
Then the average effective throughput for all transactions
operating on the same key is at most 1/l.
Proof. The proof is by induction. Let i denote the number of
changes to an arbitrary but fixed key k.
i = 0 (just before the first change):
For k to exist, there must be exactly one transaction tx0
which takes time l0 from execution to commitment and creates
k with version v1 at time t1.
i → i + 1 (just before the (i+1)-th change):
Let k’s current version be vi at time ti. Let txi be the
first transaction in a block which updates k to a new version
vi+1. The version of k during txi’s execution must have
been vi, otherwise Fabric would invalidate txi and prevent
commitment. Let txi be committed at time ti+1 and li be the
time between txi’s execution and commitment. Therefore,
ti ≤ ti+1 − li.
Likewise, no transaction tx′i which is ordered after txi can commit an update vi → v′i+1, because txi already changed the state and tx′i would therefore be invalid. Consequently, txi must be the only transaction able to update k from vi to a newer version.
This means that N updates to k take time tN with

$$t_N \;\ge\; \sum_{i=0}^{N-1} l_i.$$

A lower bound on the average update time is then given by

$$\frac{1}{N}\, t_N \;\ge\; \frac{1}{N} \sum_{i=0}^{N-1} l_i \;=\; l,$$

so we get 1/l as an upper bound on throughput, the inverse of the average update latency. This completes the proof.

This theorem has a crucial consequence. For example, FastFabric can achieve a nominal throughput of up to 20,000 transactions per second [9], yet even an unreasonably fast transaction life cycle of 50 ms from execution to commitment would result in a maximum of 20 updates per second to the same key, or once every ten blocks with a block size of 100 transactions. Worse yet, transactions are not only invalidated if their RW set overlaps completely, but also if there is a single key overlap with a previous transaction. This means that workloads with hot keys can easily reduce effective throughput by several orders of magnitude.

While early abort schemes can discard invalid transactions before they become part of a block, they cannot break the theorem. Assuming they result in blocks without invalid transactions, they can only fill up the slots in a new block with transactions using different key spaces. Thus, they skew the processed transaction distribution. Furthermore, aborted transactions need to be re-submitted and re-executed, flooding the network with even more attempts to modify hot keys. Eventually, endorsers will be completely saturated by clients repeatedly trying to get their invalid transactions re-executed.
V. THE XOX HYBRID MODEL
To deal with the drawbacks of both the OX and XO patterns,
we now present the execute-order-execute (XOX) pattern which
adds a secondary post-order execution step to execute the
patch-up code added to smart contracts. XOX minimizes transaction conflicts while preserving concurrent block processing
and without the introduction of any centralized elements. In
this section, we first describe the necessary changes to the
endorsers’ pre-order execution step to allow the inclusion of
external oracles in the post-order execution step. Then, we
describe changes to the critical transaction flow path on the
peers after they receive blocks from the ordering service.
The details of the crucial steps we introduce are described
in sections VI and VII. Notably, our changes do not affect the
ordering service, preserving Fabric’s modular structure.
A. Pre-order endorser execution
The pre-order execution step leverages concurrent transaction execution and uses general-purpose programming languages like Go. Depending on the endorsement policy, clients
request multiple endorsers to execute their transaction and
the returned execution results must be identical. This makes
a deterministic execution environment unnecessary because
deviations are discarded and a unanimous result from all endorsers becomes ground truth for the whole network. Notably,
this also allows external oracles like weather or financial data.
If these oracle data lead to non-deterministic RW sets, the
client will not receive identical endorser responses and the
transaction will never reach the Fabric network.
External oracles are a powerful tool. If they are supported
by the pre-order execution step, they must also be supported by
the post-order execution step. To achieve this, we must make
the oracle deterministic. We leverage the same mechanism
that ensures deterministic transaction results for pre-order
execution: We extend the transaction response by an additional
oracle set. Any external data are recorded in the form of key-value pairs and are added to the response to the client. Now, if
the oracle sets for the same transaction executed by different
endorsers differ, the client has to discard the transaction.
Otherwise, the external data effectively becomes part of the
deterministic world state so that it can be used by
the post-order execution step. Analogous to existing calls to
GetState and PutState that record the read and write set
key-value pairs, respectively, we add a new call PutOracle
to the chaincode API.
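A chaincode using this extension might look like the following sketch. The trimmed-down ChaincodeStub interface and the fetchRate helper are our own illustrations; only PutOracle corresponds to the new call described above:

```go
import "strconv"

// ChaincodeStub is a minimal stand-in for Fabric's shim interface,
// extended with the PutOracle call introduced in this section.
type ChaincodeStub interface {
	GetState(key string) ([]byte, error)  // recorded in the read set
	PutState(key string, v []byte) error  // recorded in the write set
	PutOracle(key string, v []byte) error // recorded in the new oracle set
}

// fetchRate stands in for a real HTTP call to an external data source.
func fetchRate(url string) (float64, error) { return 1.08, nil }

// convert queries an external oracle and records its answer, so the
// post-order step can replay the same value deterministically. If endorsers
// observe different rates, their oracle sets differ and the client discards
// the transaction.
func convert(stub ChaincodeStub) error {
	raw, err := stub.GetState("balanceEUR")
	if err != nil {
		return err
	}
	eur, err := strconv.ParseFloat(string(raw), 64)
	if err != nil {
		return err
	}
	rate, err := fetchRate("https://example.com/eur-usd")
	if err != nil {
		return err
	}
	if err := stub.PutOracle("rateEURUSD", []byte(strconv.FormatFloat(rate, 'f', -1, 64))); err != nil {
		return err
	}
	return stub.PutState("balanceUSD", []byte(strconv.FormatFloat(eur*rate, 'f', -1, 64)))
}
```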
B. Critical transaction flow path
Our previous work on FastFabric [9] showed how to improve performance by pipelining the syntactic block verification and endorsement policy validation (EP validation) so that
it can be done for multiple blocks at the same time. However,
the RW set validation to check for invalid state transitions
and the final commitment had to be done sequentially in a
single thread. While the XOX model is an orthogonal
optimization, its second execution step needs to be placed
between RW set validation and commitment. Since this step
is relatively slow, we must expand our concurrency efforts
to pipelining RW set validation, post-order executions, and
commitment. Two vital pieces for this effort, a transaction dependency analyzer and the execution step itself, are described
in later sections in detail, so we will only give a brief overview
here. This allows us to concentrate on the pipeline integration
in this section.
1) Dependency analyzer: For concurrent transaction processing, we rely on the ability to isolate transactions from
each other. However, the sequential order of transactions in
a block matters when their RW sets are validated and they
are committed. A dependency exists when two transactions
overlap in some keys of their RW sets (read-only transactions
do not even enter the orderer). In that case, we cannot process
them independently. Therefore, we need to keep track of
dependencies between transactions so we know which subsets
of transactions can be processed concurrently.
2) Execution step: Transactions for which the dependency
analyzer has found a dependency on an earlier transaction
would be invalidated during Fabric’s RW set validation. We
introduce a step which re-executes transactions with such an
RW set conflict based on the most up-to-date world state. It
can resolve conflicts due to a lack of knowledge of concurrent
transactions during pre-order execution. However, it still invalidates transactions that attempt something the smart contract
does not allow, such as creating a negative account balance.
In FastFabric, peers receive blocks as fast as the ordering
service can deliver them. If the syntactic verification of a block
fails, the whole block is discarded. Thus, it is reasonable to
keep this as a first step in the pipeline. Next up is the EP
validation step. Each transaction can be validated in parallel
because the validations are independent of each other. The next
step is the intertwined RW set validation and commitment:
Each transaction is validated, and, if successful, added to an
update batch that is subsequently committed to the world state.
XOX Fabric separates RW set validation from the commitment decision. Therefore, this step is no longer dependent on
the result of the EP validation and can be done in parallel.
However, in order to validate transactions concurrently, we
need to know their dependencies, so the dependency analyzer
goes first and releases transactions to the RW set validation as
their dependencies are resolved.
Subsequently, the results from the EP validation and RW set
validation are collected, and if they are marked as valid, they
can be committed concurrently. If an RW set conflict arises, the affected transactions are sent to the new execution step to be re-executed based on the current world state. Finally, successfully
re-executed transactions are committed and all others are
discarded.
Our design allows dependency analysis to work in parallel
to endorsement policy validation and transactions can proceed
as soon as all previous dependencies are known. Specifically,
independent sets of transactions can pass through RW set
validation, post-order execution, and commitment steps concurrently. The modified pipeline is shown in Fig. 1.

Fig. 1. The modified XOX Fabric validation and commitment pipeline. Stacks and branched paths show parallel execution.
VI. DEPENDENCY ANALYZER
We now discuss the details of the dependency analyzer. Note
that the only way for a transaction to have a dependency on
another is an overlap in its RW set with a previous transaction.
More precisely, one of the conflicting transactions must write
to the overlapping key. Reads do not change the version nor
the value of a key, so they do not impede each other. However,
we must consider a write a blocking operation for that key.
If transaction a with a write is ordered before transaction b
with a read from the same key, then this must always happen
in this order lest we lose deterministic behaviour of the peer
because of the changing key value. The reverse case of read-before-write has the same constraints. In the write-write case,
neither transaction actually relies on the version or the value
of that key. Nevertheless, they must remain in the same order,
otherwise transaction a’s value might win out, even though
transaction b should overwrite it.
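In other words, two accesses to the same key may be reordered only if both are reads; a one-line predicate captures the rule (our own formulation):

```go
// Access records how a transaction touches a key.
type Access struct{ Write bool }

// mustKeepOrder reports whether two accesses to the same key have to stay in
// blockchain order: read-read is the only pair that may be freely reordered.
func mustKeepOrder(earlier, later Access) bool {
	return earlier.Write || later.Write
}
```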
To detect such conflicts, we keep track of read and write
accesses to all keys across transactions. For each key, we create
a doubly-linked skip list that acts as a dependency queue,
recording all transactions that need to access it. Entries in
this queue are sorted by the blockchain transaction order. As
described before, consecutive reads of the same key do not
affect each other and can be collapsed into a single node in
the linked list so they will be freed together. For faster traversal
during insertion, nodes can skip to the start of the next block
in the list. This data structure is illustrated in Fig. 2. After
the analysis of a transaction is complete, it will not continue
to the next step in the pipeline until all previous transactions
have also been analyzed, lest an existing dependency might
be missed.

Fig. 2. Dependency analyzer data structure: example of a state database key mapped to a doubly-linked skip list of dependent transactions.
Dependencies may change in two situations: when new
transactions are added or existing transactions have completed the commitment pipeline. In either case, we update
the dependency lists accordingly and check the first node
of lists that have been changed. If any of these transactions
have no dependency in any key anymore, they are released
into the validation pipeline. However, we can only remove
a transaction from the dependency lists once it is either
committed or discarded, lest dependent transactions get freed
up prematurely.
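A minimal version of such a per-key queue, with plain linked nodes and without the skip pointers and block markers of Fig. 2, could look like the following sketch:

```go
// txNode is one entry in a key's dependency queue. Consecutive reads are
// collapsed into a single node so that they are freed together.
type txNode struct {
	txIDs   []int // transactions sharing this node (several for collapsed reads)
	isWrite bool
	next    *txNode
}

// depQueue tracks, for one key, the transactions waiting to access it in
// blockchain order. A transaction may enter the validation pipeline only
// when it sits at the head of the queue of every key it touches.
type depQueue struct {
	head, tail *txNode
}

func (q *depQueue) enqueue(txID int, isWrite bool) {
	// Collapse a read onto a trailing read node; a write always starts a new node.
	if !isWrite && q.tail != nil && !q.tail.isWrite {
		q.tail.txIDs = append(q.tail.txIDs, txID)
		return
	}
	n := &txNode{txIDs: []int{txID}, isWrite: isWrite}
	if q.tail == nil {
		q.head = n
	} else {
		q.tail.next = n
	}
	q.tail = n
}

// release drops the head node once all of its transactions have been
// committed or discarded, and returns the transactions freed for this key.
func (q *depQueue) release() []int {
	if q.head == nil {
		return nil
	}
	q.head = q.head.next
	if q.head == nil {
		q.tail = nil
		return nil
	}
	return q.head.txIDs
}
```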
VII. POST-ORDER EXECUTION STEP
The post-order execution step executes additional patch-up
code added to a smart contract. We discuss it in more detail
in this section.
When the RW validation finds a conflict between a transaction’s RW set and the world state, that transaction will be
re-executed and possibly salvaged using the patch-up code.
However, the post-order execution stage needs to adhere to
some constraints. First, the new RW set must be a subset of the
original RW set so the dependency analyzer can reason properly. Without this restriction, new dependencies could emerge
and transactions scheduled for parallel processing would now
create an invalid world state. Second, the blockchain network
also needs consistency among peers. Therefore, the post-order
execution must be deterministic so there is no need for further
consensus between peers. Lastly, this new execution step is
part of the critical path and thus should be as fast as possible.
For easier adoption of smart contracts from other blockchain
systems, we use a modified version of Ethereum’s EVM [14]
as the re-execution engine for patch-up code[3]. Patch-up code
take a transaction’s read set and oracle set as input. The read
set is used to get the current key values from the latest version
of the world state. Based on this and the oracle set, the smart
contract then performs the necessary computations to generate
3We note that forays have been made to build WebAssembly-based
execution engines [7], which would allow for a variety of programming
languages to build smart contracts for the post-order execution step.
a new write set. If the transaction is not allowed by the logic of
the smart contract based on the updated values, it is discarded.
Finally, in case of success, it generates an updated RW set,
which is then compared to the old one. If all the keys are
a subset of the old RW set, the result is valid and can be
committed to the world state and blockchain.
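The subset constraint itself is a straightforward check; a sketch, reusing the RWSet type from Section II-D:

```go
// withinOriginal enforces the post-order constraint: the re-executed
// transaction may drop keys from its RW set but must not add any, otherwise
// the schedule computed by the dependency analyzer would no longer be sound.
func withinOriginal(newSet, oldSet RWSet) bool {
	reads := make(map[string]bool)
	writes := make(map[string]bool)
	for _, r := range oldSet.Reads {
		reads[r.Key] = true
	}
	for _, w := range oldSet.Writes {
		writes[w.Key] = true
	}
	for _, r := range newSet.Reads {
		if !reads[r.Key] {
			return false
		}
	}
	for _, w := range newSet.Writes {
		if !writes[w.Key] {
			return false
		}
	}
	return true
}
```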
For example, suppose client A wants to add 70 digital
coins to an account with a current balance of 20 coins.
Simultaneously, client B wants to add 50 coins to the same
account. They both have to read the key of the account, update
its value, and write the new value back, so the account’s
key is in both transactions’ RW set. Even if both clients are
honest, only the transaction which is ordered earlier would
be committed. Without loss of generality, assume that A’s
transaction updates the balance to 90 coins because it won
the race. In XOX Fabric, B’s transaction would wait for A to
finish due to its dependency and then would find a key version
conflict in the RW validation step. Therefore, it is sent to the
post-order execution step. In the step, B’s patch-up code can
read the updated value from the database and add its own value
for a total of 140 coins, which is recorded in its write set. After
successful execution, the RW set comparison is performed and
the new total will be committed. Thus, the re-execution of the
patch-up code salvages conflicting transactions.
However, if we start with an account balance of 100 coins
and A tries to subtract 50 coins and B tries to subtract 60
coins, we get a different result. Again, B’s transaction would
be sent to be re-executed. But this time, its patch-up code tries
to subtract 60 coins from the updated 50 coins and the smart
contract does not allow a negative balance. Therefore, B’s
transaction will be discarded, even though it was re-executed
based on the current world state.
Thus, our hybrid XOX approach can correct transactions
which would have been discarded because they were executed
based on a stale world state. However, transactions that do not
satisfy the smart contract logic are still rejected.
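For the account example, the patch-up code reduces to a function of roughly this shape (a sketch; the names are ours):

```go
import "errors"

// patchUpTransfer re-applies a credit or debit of delta coins on top of the
// latest committed balance, mirroring the intent of the original transaction.
// It fails when the contract's invariant (no negative balance) is violated.
func patchUpTransfer(latestBalance, delta int64) (int64, error) {
	newBalance := latestBalance + delta
	if newBalance < 0 {
		// Re-execution cannot salvage a transaction the contract forbids,
		// e.g. subtracting 60 coins when only 50 are left.
		return 0, errors.New("insufficient balance: transaction discarded")
	}
	// E.g. A committed 90 coins first; B's patch-up reads 90 and adds 50 for 140.
	return newBalance, nil
}
```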
Lastly, if we do not put any restrictions on the execution,
we risk expensive computations, low throughput, and even
non-terminating smart contracts. Ethereum deals with this by
introducing gas. If a smart contract runs out of gas, it is aborted
and the transaction is discarded. As of yet, Fabric does not
include such a concept.
As a solution, we introduce virtual gas as a tuning parameter
for system performance. Instead of originating from a bid
by the client that proposes the transaction, it can be set by
a system administrator. If the post-order step runs out of gas for a transaction, the transaction is immediately invalidated; in case of success, no fee is actually charged. A larger
value allows for more complex computation at the cost of
throughput. While the gas parameter should generally be as
small as possible, large values could make sense for workloads
with very infrequent transaction conflicts and high importance
of conflict resolution.
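Inside an interpreter, virtual gas reduces to an operation budget that is checked on every step; this is how we emulate it in the Python VM used in Section VIII. A minimal sketch:

```go
import "errors"

var errOutOfGas = errors.New("virtual gas exhausted: transaction invalidated")

// runWithGas steps through an instruction sequence and aborts once the
// administrator-configured budget is spent; on success no fee is charged.
func runWithGas(ops []func() error, gasBudget int) error {
	for i, op := range ops {
		if i >= gasBudget {
			return errOutOfGas
		}
		if err := op(); err != nil {
			return err
		}
	}
	return nil
}
```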
VIII. EXPERIMENTS
We now evaluate the performance of XOX Fabric. We used
11 local servers connected by a 1 Gbit/s switch. Each is
equipped with two Intel® Xeon® CPU E5-2620 v2 processors
at 2.10 GHz, for a total of 24 hardware threads and 64 GB of
RAM. We compare three systems with different capabilities.
Fabric 1.4 is the baseline. Next, FastFabric [9] adds efficient
data structures, improved parallelization, and decoupled endorsement and storage servers. Finally, our implementation
of an XOX model based on FastFabric adds transaction
dependency analysis, concurrent key version validation, and
transaction re-execution.
For comparable results, we match the network setup of
all three systems as closely as possible. We use a single
orderer in solo mode, ensuring that throughput is bound by
the peer performance. A single anchor peer receives blocks
from the orderer and broadcasts them to four endorsing peers.
In the case of FastFabric and XOX, the broadcast includes
the complete transaction validation metadata so endorsers
can skip their own validation steps. FastFabric and XOX
run an additional persistent storage server because in these
cases the peers store their internal state database in-memory.
The remaining four servers are used as clients[4]. Spawning a
total of 200 concurrent threads, they use the Fabric node.js
SDK to send transaction proposals to the endorsing peers and
consecutively submit them to the orderer. Each block created
by the orderer contains 100 transactions.

4We do not use Caliper because it is not sufficiently high-performance to fully load our system.
All experiments run the same chaincode: A money transfer
from one account to another is simulated, reading from and
writing to two keys in the state database, e.g. deducting 1
coin from account0 and adding 1 coin to account1. We use
the default endorsement policy of accepting a single endorser
signature. XOX’s second execution phase integrates a Python
virtual stack machine (VM) implemented in Go [8]. We
added a parameter to the VM to stop the stack machine
after executing a certain number of operations, emulating a
gas equivalent. We load a Python implementation of the Go
chaincode into the VM and extract the call parameters from the
transaction so that the logic between pre-order and post-order
execution remains the same. Therefore, the only semantic
difference between XO and OX is that OX operates on up-to-date state.
For each tested system, clients generate a randomized load
with a specific contention factor by flipping a loaded coin
for each transaction. Depending on the outcome, they either choose a previously unused account pair or the pair account0–account1 to create an RW set conflict. We scale the transaction contention factor from 0% to 100% in 10% steps and run the experiment for each of the three systems. Each time, clients generate a total of 1.5 million transactions. In the following, we discuss XOX's throughput improvements under contention over both FastFabric and Fabric 1.4, as well as its overhead compared to FastFabric.
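The load generator thus reduces to a weighted coin flip per transaction; a sketch (the account naming follows the experiment description; nextFree starts at 2 so that fresh pairs never collide with the hot pair):

```go
import (
	"fmt"
	"math/rand"
)

// nextTransfer picks the accounts for the next transaction: with probability
// contention it reuses the hot pair account0–account1, otherwise it draws a
// previously unused pair, so those transactions never conflict by accident.
func nextTransfer(contention float64, nextFree *int) (from, to string) {
	if rand.Float64() < contention { // the loaded coin
		return "account0", "account1"
	}
	from = fmt.Sprintf("account%d", *nextFree)
	to = fmt.Sprintf("account%d", *nextFree+1)
	*nextFree += 2
	return from, to
}
```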
A. Throughput

We start by examining the nominal throughput of each system in Fig. 3. We measured the throughput of all transactions
regardless of their validity. The effectively single-threaded
validation/commitment pipeline of Fabric 1.4 creates results
with little variance over time. The throughput increases slightly, from about 2200 tx/s to 3000 tx/s, as the transaction contention grows, because Fabric discards invalid transactions, so their changes are not committed to the world state
database. FastFabric follows the same trend, going from 13600
tx/s up to 14700 tx/s, although the relative throughput increase
is not as pronounced because the database commit is cheaper,
and there is higher variance due to many parallel threads vying
for resources at times.
We ran the experiments for XOX in two configurations to
understand the effects of different parts of our implementation
on the overall throughput. First, we only included changes
to the validation pipeline and the addition of the dependency
analyzer but disabled transaction re-execution. Subsequently,
we ran it again with all features enabled. The first configuration shows roughly the same behaviour as FastFabric,
albeit with a small hit to overall throughput, ranging from
12000 tx/s up to 12700 tx/s. For higher contention ratios,
the fully-featured configuration’s throughput drops from 12800
tx/s to about 3600 tx/s, a third of its initial value. However,
this is expected as more transactions need to be re-executed
sequentially. Importantly, even under full contention, XOX
performs better than Fabric 1.4.
Fig. 3. Impact of transaction conflicts on nominal throughput, counting both valid and invalid transactions.
Note that the nominal throughput is meaningless if blocks
contain mostly invalid transactions. Therefore, we now discuss
the effective throughput. In Fig. 4, we have eliminated all
invalid transactions from the throughput measurements. Naturally, this means there is no change for the full XOX implementation, because it already produces only valid transactions.
For better comparison of the three systems under contention,
we normalized the projections of their plots. FastFabric and
XOX follow the left y-axis while Fabric follows the right one.
For up to medium transaction contention, all systems roughly
follow the same slope. However, while both FastFabric and
Fabric tend towards 0 tx/s in the limit of 100% contention,
XOX still reaches a throughput of about 3600 tx/s. At this
point, all transactions in a submitted block have to be re-executed. This means that, starting at 70% contention, XOX surpasses all other systems in terms of effective throughput while
maintaining comparable throughput before that threshold.

Fig. 4. Impact of transaction conflicts on effective throughput, counting only valid transactions. Fabric 1.4 scaled up for slope comparison (right y-axis).
Even though it might seem like a corner case, this is a
significant improvement. All experiments were run with a
synthetic static workload where the level of contention stayed
constant. However, in a real-world scenario, users have two options when their transaction fails. They can abandon the transaction, or, more likely, submit the same transaction again. In a system with some amount of contention, conflicting transactions can accumulate over time by users resubmitting them repeatedly. This results in an unintended denial of service attack. In contrast, XOX guarantees liveness in every scenario.

B. Overhead
We now explore the overhead of XOX compared to FastFabric’s nominal performance in Fig. 5. We isolate the overhead
introduced by adding the dependency analyzer and modifying
the validation pipeline so that it can handle single transactions
instead of complete blocks, as well as the overhead of the
transaction re-execution by the Python VM.
The blue dashed line shows that the dependency analyzer
overhead is almost constant regardless of contention level.
By minimizing spots in the validation/commitment pipeline
that require a sequential processing order of transactions,
we achieve an overhead of less than 15% even when the
dependency analyzer only releases one transaction at a time.
In contrast, the overhead of the re-execution step is more
noticeable. For high contention, this step generates over 60%
of additional load. Yet, this also means that replacing the
highly inefficient Python VM used in our proof of concept with a faster deterministic execution environment could
dramatically increase XOX’s throughput for high-contention
workloads. This would push the threshold when XOX beats
the other systems to lower fractions of conflicting transactions.
Furthermore, the contention load used for these experiments
presents the absolute worst case, where every conflicting
transaction is touching the same state keys, resulting in a
fully sequential re-execution of all transactions. However, if
instead of a single account pair account0–account1 used for
contentious transactions there was a second pair account2–
account3, the OX step would run in two concurrent threads
instead of one. Even with this simple relaxation, the overhead
would roughly be cut in half.

Fig. 5. Relative load overhead of separate XOX parts over FastFabric.
IX. CONCLUSION AND FUTURE WORK
In this work, we propose a novel hybrid execution model for
Hyperledger Fabric consisting of a pre-order and a post-order
execution step. This allows a trade-off between parallel transaction execution and minimal invalidation due to conflicting
results. In particular, our solution can deal with highly skewed
workloads where most transactions use only a small set of
hot keys. Contrary to other post-order execution models, we
support the use of external oracles in our secondary execution
step. We show that the throughput of our implementation
scales comparably to Fabric and FastFabric for low contention
workloads, and surpasses them when transaction conflicts
increase in frequency.
Now that all parts of the validation and commitment pipeline
are decoupled and highly scalable, it remains to be seen in
future work if the pipeline steps can be scaled across multiple
servers to improve the throughput further.
REFERENCES
[1] Mohammad Javad Amiri, Divyakant Agrawal, and Amr El Abbadi.
CAPER: a cross-application permissioned blockchain. Proceedings of
the VLDB Endowment, 12(11):1385–1398, 2019.
[2] Mohammad Javad Amiri, Divyakant Agrawal, and Amr El Abbadi.
ParBlockchain: Leveraging Transaction Parallelism in Permissioned
Blockchain Systems. In Proceedings of the International Conference on Distributed Computing Systems (ICDCS), pages 1337–1347, 2019.
[3] Elli Androulaki, Artem Barger, Vita Bortnikov, Christian Cachin, Konstantinos Christidis, Angelo De Caro, David Enyeart, Christopher Ferris,
Gennady Laventman, Yacov Manevich, Srinivasan Muralidharan, Chet
Murthy, Binh Nguyen, Manish Sethi, Gari Singh, Keith Smith, Alessandro Sorniotti, Chrysoula Stathakopoulou, Marko Vukoli´c, Sharon Weed
Cocco, and Jason Yellick. Hyperledger Fabric: A Distributed Operating
System for Permissioned Blockchains. In Proceedings of the Thirteenth EuroSys Conference (EuroSys ’18), pages 1–15, 2018.
[4] Si Chen, Jinyu Zhang, Rui Shi, Jiaqi Yan, and Qing Ke. A comparative
testing on performance of blockchain and relational database: Foundation for applying smart technology into current business systems.
In International Conference on Distributed, Ambient, and Pervasive
Interactions, pages 21–34. Springer Verlag, 2018.
[5] Tien Tuan Anh Dinh, Ji Wang, Gang Chen, Rui Liu, Beng Chin Ooi, and
Kian-Lee Tan. BLOCKBENCH: A Framework for Analyzing Private
Blockchains. In Proceedings of the 2017 ACM International Conference on Management of Data (SIGMOD ’17), pages 1085–1100, 2017.
[6] Ian Aragon Escobar, Eduardo E. P. Alchieri, Fernando Luís Dotti, and
Fernando Pedone. Boosting concurrency in Parallel State Machine
Replication. In Proceedings of the 20th International Middleware
Conference, pages 228–240. Association for Computing Machinery
(ACM), 2019.
[7] Ethereum Community. EWASM, 2018.
[8] go-python. Python 3.4 interpreter implementation for Golang.
[9] Christian Gorenflo, Stephen Lee, Lukasz Golab, and Srinivasan Keshav.
FastFabric: Scaling Hyperledger Fabric to 20,000 Transactions per
Second. International Journal of Network Management, 2020.
[10] Pezhman Nasirifard, Ruben Mayer, and Hans-Arno Jacobsen. FabricCRDT: A Conflict-Free Replicated Datatypes Approach to Permissioned
Blockchains. In Proceedings of the 20th International Middleware
Conference, pages 110–122. Association for Computing Machinery
(ACM), 2019.
[11] Fred B. Schneider. Implementing Fault-Tolerant Services Using the
State Machine Approach: A Tutorial. ACM Computing Surveys (CSUR),
22(4):299–319, 1990.
[12] Ankur Sharma, Felix Martin Schuhknecht, Divyakant Agrawal, and Jens
Dittrich. Blurring the Lines between Blockchains and Database Systems.
In Proceedings of the 2019 International Conference on Management
of Data, pages 105–122, 2019.
[13] Alessandro Sorniotti, Angelo De Caro, Baohua Yang, Binh Nguyen,
Manish Sethi, Marko Vukolić, Sheehan Anderson, Srinivasan Muralidharan, and Parth Thakkar. Fabric Proposal: Enhanced Concurrency Control,
2017.
[14] Gavin Wood. Ethereum: a Secure Decentralised Generalised Transaction
Ledger, 2014.
[15] Shenbin Zhang, Ence Zhou, Bingfeng Pi, Jun Sun, Kazuhiro Yamashita,
and Yoshihide Nomura. A Solution for the Risk of Non-deterministic
Transactions in Hyperledger Fabric. IEEE International Conference on
Blockchain and Cryptocurrency (ICBC), pages 253–261, 2019.
| ERROR: type should be string, got "https://doi.org/10.1007/s12063 022 00262 y\n\n# Drivers, barriers and supply chain variables influencing the adoption of the blockchain to support traceability along fashion supply chains\n\n**Antonella Moretto[1] · Laura Macchion[2]**\n\n\nReceived: 18 February 2021 / Revised: 4 February 2022 / Accepted: 25 February 2022\n© The Author(s) 2022, corrected publication 2022\n\n\n/ Published online: 16 March 2022\n\n\n**Abstract**\nThe critical role of blockchain technology in ensuring a proper level of traceability and visibility along supply chains is\nincreasingly being explored in the literature. This critical examination must focus on the factors that either encourage or\nhinder (i.e. the drivers or barriers) the implementation of this technology in extended supply chains. On the assumption that\nthe blockchain will need to be adopted at the supply chain level, the enabling factors and the contingent variables of different supply chains must be identified and analysed. The appropriate identification of supply chain partners is becoming a\ncritical factor of success since the globalization of supply chains makes their management and control increasingly difficult.\nThis is particularly true of the fashion industry. Five blockchain providers and seven focal companies working in the fashion\nindustry were interviewed to compare their different viewpoints on this topic. The results highlight which drivers, barriers,\nand supply chain variables impact the implementation of the blockchain and specific research propositions are formulated.\n\n**Keywords Traceability · Blockchain · Fashion**\n\n\n### 1 Introduction\n\nSupply chains today are incredibly complex, comprising\nmulti-echelon and geographically dispersed companies.\nGlobalization, different international regulations, and varied cultural and human behaviors worldwide are all challenges to managing companies through their supply chains.\nThese evolutionary phenomena have made it arduous to\nacquire relevant and trustworthy information within supply\nchains and have dramatically increased the potential for\ninefficient transactions, fraud, pilferage, or simply a deterioration in supply chain performance (Hastig and Sodhi\n2020).\nThe urgent need for traceability of both product and process in supply chains has been documented in several industries, including the agri-food sector (Sun and Wang 2019;\nYadav et al. 2020; Mukherjee et al. 2021), pharmaceutical\n\n- Laura Macchion\[email protected]\n\n1 Department of Management, Economics and Industrial\nEngineering, Politecnico Di Milano, Piazza Leonardo da\nVinci, 32 ‑ 20133 Milano, Italy\n\n2 Department of Engineering and Management, University\nof Padova, Stradella San Nicola, 3 ‑ 36100 Vicenza, Italy\n\n## 1 3\n\n\nand medical products (Chen et al. 2019) and luxury products (Choi 2019). The lack of transparency and visibility in\nall processes of the supply chain prevents customers from\nverifying the origin of the raw materials and the processes\nthat the product underwent before reaching the store shelves,\nwith a high risk of fraud and counterfeiting of products. The\ncosts involved in verifying supply chains’ intermediaries, in\nassessing their reliability and transparency in the production\nprocesses further complicates managing traceability in supply chains (Ahluwalia et al. 2020; Choi 2020). 
Strategic and\ncompetitive reputational issues arise from these risks and the\nlack of supply chain transparency.\nIn response to these concerns, the technological advancements of the digital era are providing companies with many\nopportunities that can be exploited in the supply chain\n(Xiong et al. 2021). The term digital supply chain refers to\ndata exchanges occurring between actors involved in a supply chain and also to how the supply chain process may be\nmanaged through a wide variety of innovative technologies\n(Büyüközkan and Göçer 2018) such as the Internet of Things\n(IoT), Big Data Analytics, cloud computing and the blockchain itself. Blockchain technology is particularly relevant\n(Casey and Wong 2017; Tapscott and Tapscott 2017; Samson\n2020) in overcoming the difficulties mentioned above due\nto its centralized database in which all the information of\n\n\n-----\n\nthe supply chain partners is recorded immutably. The literature on the use of blockchain technology in supply chains\nis quite recent (e.g. Chang et al. 2019) but has experienced\nsignificant growth in recent years thanks to the evidence\nthat emerged on the potential of this technology applied to\nsupply chains of different sectors such as food supply chains\n(Katsikouli et al. 2021; Bechtsis et al. 2021; Sharma et al.\n2021; Mukherjee et al. 2021), humanitarian supply chains\n(Baharmand et al. 2021) or pharmaceutical chains (Hosseini\nBamakan et al. 2021; Hastig and Sodhi 2020). Existing\npapers are focusing on illustrating the potential value of the\nblockchain and its interoperability with existing technology,\nsuch as IoT, and in particular, for the fashion industry, this\ntechnology has enormous potential in improving the information flows of supply chains (Agrawal et al. 2021; Wang\net al. 2020; Bullón Pérez et al. 2020; Agrawal et al. 2021;\nChoi and Luo 2019). The fashion industry is characterized\nby a multitude of international suppliers collaborating in\nthe creation of collections, and nowadays the development\nof complete traceability is certainly a relevant issue for all\ncompanies in the sector. The blockchain is characterized by\nthe possibility of ensuring traceable information and represents a technology that in the future will be massively used\nby fashion companies, even if currently there are few cases\nof application of this technology in the fashion industry\n(Ahmed and MacCarthy 2021). The fashion sector, however,\nstill presents little empirical evidence as many companies\nare still studying and evaluating blockchain technology and\nhave not yet moved on to the next phase of implementing\nthe technology. Further studies on the adoption of blockchain technology in the fashion industry are encouraged to\nevaluate the factors that may contribute to (or hinder) the\nimplementation of the blockchain system in extended fashion supply chains (Caldarelli et al. 2021). At present, there\nare still few blockchain applications, so any new studies that\ndelve into the feasibility of this tool are very useful in helping to understand the contexts in which the blockchain can\nachieve positive results for fashion companies and their supply chains (Chang et al. 2019; Queiroz and Wamba 2019).\nBearing in mind these gaps, this paper aims to investigate the adoption of the blockchain to enhance traceability\nalong supply chains. In particular, the drivers and barriers\nthat favor or hinder the introduction of blockchain technology among supply chain actors will be investigated for the\nfashion industry. 
The first research question (RQ1) will be:\n_Why do fashion companies adopt, or not adopt, blockchain_\n_technology as a system to improve traceability along supply_\n_chains in the fashion industry? What are the drivers and_\n_barriers to the implementation of blockchain in fashion sup-_\n_ply chains?_\nTraceability cannot be implemented at the level of a\nsingle node in the supply chain, but it affects entire fashion supply chains (Ahmed and MacCarthy 2021). For this\n\n\nreason, the implementation of blockchain technology should\nembrace the perspective of the whole supply chain by further\ninvestigating the variables that may enable or influence the\nadoption of blockchain technology at the supply chain level\nin the fashion sector. For this reason, the second research\nquestion (RQ2) is, therefore: How do supply chain variables\n_impact the adoption of blockchain technology as a system for_\n_improving traceability along fashion supply chains?_\nThese questions are tackled through the analysis of 12\ncase studies of the fashion industry, which describe fashion\ncompanies that are considering the use of blockchain technology to track their supply chain processes. The sample\nincludes both providers (five) and focal companies (seven)\nto compare their different viewpoints on the topic.\nThe paper is organized as follows. Section 2 reviews\nprevious studies focusing on blockchains and the relationship between the blockchain and traceability practices\nwithin extended supply chains. Section 3 is dedicated to the\nresearch aims, and Sect. 4 presents the methodology. Sections 5 and 6 provide a comprehensive analysis of results,\nwhile Sect. 7 highlights the concluding remarks.\n\n### 2 \u0007Literature review\n\n#### 2.1 \u0007The revolution of using blockchain technology for supply chains\n\nThe blockchain concept was proposed by the developer\nSatoshi Nakamoto and since 2009, has been fully validated\nthrough the bitcoin system implementation (Nakamoto\n2008). A blockchain refers to an open, shared, and distributed ledger that enables information disclosure and responsibility attribution and is suitable for dealing with valuable\ninformation (Pazaitis et al. 2017).\nAs stated by Fu et al. (2018), ‘The blockchain entries\n_could represent transactions, contracts, assets, identities,_\n_or practically anything else that can be digitally expressed_\n_using smart devices. New versions of blockchain technol-_\n_ogy implementation offer support for the implementation of_\n_smart contracts encoded in ledger blocks, which implement_\n_different business rules that need to be verified and agreed_\n_upon by all peer nodes from the network. When a transac-_\n_tion arrives, each node updates its state based on the results_\n_obtained after running the smart contract. Such replication_\n_process offers a great potential for control decentralization’._\nBased on a structure composed of nodes, blockchain technology can support digital integration in complex supply\nchains. The blockchain can address the limitations of traditional supply chains thanks to the features (Kouhizadeh et al.\n2021) described below.\nFirst, a distributed ledger of transactions is replicated\nto every node of the blockchain network. 
As already mentioned, the distributed ledger is open to all nodes, which may have restrictions depending on their permission level. Transactions create new blocks that are chained to the previous blocks, and everyone who has read permission can verify the validity of the transactions: for instance, a seller can notify a buyer about a transaction, and the existence of this transaction will be verified directly from the ledger. In this way, all the actors in a digital supply chain can be verified (Pazaitis et al. 2017; Raval 2016).

Moreover, the blockchain offers the possibility of developing smart contracts for automating business transactions and document exchanges between parties within the supply chain. Smart contracts can be developed on blockchains and used to automate supply chain transactions at a very detailed level (Savelyev 2017). For instance, smart contracts can enable automated transactions of pre-determined agreements between parties. The blockchain can make the transactions transparent and reliable, thus generating safe financial transactions.

Finally, public-key cryptography is used to sign transactions and to encrypt and decrypt the data they carry. This feature ensures a high level of security while sustaining the whole architecture of the digital supply chain. As a result, the blockchain can enable the quick, reliable, and efficient execution of transactions and document exchanges, securely and at a low cost (Pazaitis et al. 2017).

From the operational point of view, the adoption of a blockchain system can simplify supply chain processes by reducing, for instance, disputes over invoices. The results of an IBM study indicate that, worldwide, invoices for over 100 million dollars are subject to dispute annually (IBM 2019). According to IBM's estimates, the blockchain could avoid this kind of dispute in 90–95% of cases. Purchase orders and purchase agreements, which are formalized among supply chain partners, can be registered in digital formats on a blockchain and made available only to the intended parties through their private keys. This drastically reduces the need for emails or other means of communication. With the blockchain, messages and documents are transferred between supply chain members via blockchain nodes, with confidential data stored and made accessible with a private key. If records are correctly uploaded on a blockchain platform, it becomes a single source of truth, and supply chain partners can access relevant information in real time.

#### 2.2 Blockchain and supply chain traceability

The identification of all transactions and information exchanged within a supply chain, as well as of all the suppliers collaborating in the chain, is becoming a key competitive weapon: by giving evidence of (and therefore enabling the tracing of) origins, supply chains are assuming a key role for consumers, who are increasingly interested in knowing the details of the products they purchase (Morkunas et al. 2019). Authors have debated the interoperability of blockchains with IoT devices (such as RFID), verifying the benefits of interconnecting blockchains and IoT identification to track products and processes. The first evidence in this sense comes from food supply chains: for example, the multinationals Nestlé and Walmart have successfully implemented the blockchain developed by IBM (Zelbst et al. 2019).
More generally, in the food sector the blockchain has demonstrated its important role in ensuring product safety traceability (Rogerson and Parry 2020). The logistics sector has also experimented with the potential of blockchain technology; distribution companies such as Maersk, UPS, and FedEx have successfully implemented it (Kshetri 2018). The implementation of blockchain technology has also proved useful in the pharmaceutical sector, in particular for products that must be stored and distributed at a controlled temperature (Bamakan et al. 2021). Significant results have also been achieved in the humanitarian sector, in which blockchain technology has been used to enhance swift trust, collaboration, and resilience within a humanitarian supply chain setting (Dubey et al. 2020; Baharmand et al. 2021).

Real cases of blockchain adoption have made it possible to verify and validate the identities of individuals, resources, and products in extended supply chains. Nevertheless, establishing traceability for a network is still an open challenge for many companies and sectors, due to the difficulty of structuring traceability practices across company boundaries to identify suppliers located internationally (Moretto et al. 2018). In structuring traceability systems, companies must define tools and mechanisms to transmit information, focusing not only on their internal processes but also on complete inter-organizational traceability that can align different supply chain actors and ensure that data are exchanged in a standardized way. In most cases, traceability practices along the supply chain have been supported by tags, labels, barcodes, microchips, or radio-frequency identification (RFID) applied to each product (or to each batch), but nowadays digital tracking technologies are opening new horizons and new possibilities. Blockchains widely enable the tracking of product and service flows among enterprises, thanks to the access control and activity logging that occur in all nodes of the supply chain (Chang et al. 2019). Based on this structure composed of nodes, the blockchain represents a weapon that can protect every company involved from fraud and misleading information. Each partner in a supply chain, and every action it performs, are identified and tracked, since the blockchain's architecture ensures the truthfulness of the data stored in it. Not only that, but the blockchain also allows consumers to be protected from commercial fraud by allowing quick identification of original pieces, thus fighting the so-called grey market (i.e. the parallel sales market outside the official circuits of the brand). In this way, the blockchain avoids, or at least reduces, the phenomenon of counterfeits by allowing consumers to verify information (Kshetri 2018).

Blockchain technology also allows companies to strengthen the communication actions and advertising campaigns through which they aim to tell the consumer the story of their products. The blockchain makes it possible to check the history of a product along the entire supply chain, and its use is strongly supported by the greater consumer demand for tracked products; a sketch of such a history query is given below.
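Purely as an illustration (not from the study), a product-history check can be expressed as a scan of the hash-linked ledger sketched in Sect. 2.1, collecting every record that carries the product's identifier (e.g. an RFID or QR-code value). The function name and record fields are hypothetical.

```python
# Illustrative provenance query over the ledger sketched earlier.
def product_history(chain: list, product_id: str) -> list:
    """Return the recorded events for one product, oldest first."""
    return [record
            for block in chain              # blocks are already in time order
            for record in block["records"]
            if record.get("product_id") == product_id]
```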
According to a recent PricewaterhouseCoopers (PwC) report (2019), customers are willing to pay 5 to 10% more than the list price to buy traced products.

However, although many contributions detail the potential of the blockchain to support traceability systems in specific contexts (notably the food, pharmaceutical, humanitarian, and logistics sectors), empirical evidence in the fashion industry is still fragmentary. Many fashion companies are currently assessing the benefits of this technology for their business and have not yet moved on to the next operational phase, which involves the real implementation of blockchain technology (Caldarelli et al. 2021). What emerges from the literature review is the potential of this technology in various sectors and, in the face of these positive results, the fashion industry is working to understand its advantages and limitations for the specific fashion business (Ahmed and MacCarthy 2021). The first results from the evaluation of blockchain technology in the context of fashion help to underline how this technology can lead to better control of fashion supply chains, which are characterized by high levels of internationalization of production and distribution (Agrawal et al. 2021; Ahmed and MacCarthy 2021; Bullón Pérez et al. 2020). These studies show how the blockchain theme in the fashion sector is closely linked to the goal of improving traceability in all the procurement, production, and distribution of fashion products. This goal is of primary importance for companies in this sector, not only to know the movements of physical products, the real-time stocks in points of sale and distribution warehouses, and the progress of subcontractors' activities, but also to verify the sustainability of the entire supply chain, composed of many actors that, with different roles and tasks, cooperate in the creation of collections (Choi and Luo 2019; Wang et al. 2020).

The fashion context has yet to be guided towards identifying the benefits and difficulties related to the use of blockchain technology in the sector. Further evidence from the fashion industry is encouraged, to analyze the factors that favor (or hinder) the implementation of blockchain technology in extended and complex fashion supply chains (Caldarelli et al. 2021).

### 3 Research aims

Blockchain technology is not yet widespread among companies, and research is still open to evaluating the new possibilities that blockchains can offer to various industrial sectors (Pólvora et al. 2020). Further research contributions are encouraged to identify the factors that could contribute to, or that may hinder, the implementation of the blockchain within supply chains (Chang et al. 2019; Queiroz and Wamba 2019), in particular in the fashion industry (Choi et al. 2019; Caldarelli et al. 2021; Ahmed and MacCarthy 2021; Agrawal et al. 2021).

The overall goal of this research is to address the potential for using blockchain technology in fashion supply chains by considering the specific company variables (i.e. the drivers and the barriers) that would affect its implementation. In particular, the current literature does not clarify which factors a company considers to be facilitators, and which to be obstacles, in its adoption of blockchain technology (Chang et al. 2019; Pólvora et al. 2020; Queiroz and Wamba 2019).
Fashion companies today are at the stage of evaluating the relevance of blockchain technology for their business: their initial step will focus on the identification of the main drivers and barriers in the adoption of blockchain technology. Current blockchain literature mainly takes a technological perspective; a more managerial point of view that captures the drivers and barriers in the adoption of blockchain technology is still missing. Recognizing this research gap, the first research question is formulated as follows.

_RQ1: Why do fashion companies adopt, or not adopt, blockchain technology as a system to improve traceability along supply chains in the fashion industry? What are the drivers and barriers to the implementation of blockchain in fashion supply chains?_

The literature also makes little contribution to addressing the supply chain variables that would support the implementation of the blockchain in the specific fashion context. Further studies are needed to support an understanding of how to make the implementation of blockchain technology effective and successful among fashion supply chain partners (Wang et al. 2019). There is a need to study in depth the main variables that enable a proper and successful implementation of blockchain technology within fashion supply chains (SCs). Industries differ in terms of their SC relationships, setting the path for a contingency foundation for blockchain implementation choices within supply chains (Caniato et al. 2009; Pólvora et al. 2020). The contingency approach emphasizes that SCs can have different structures and that these may be related to several contingencies, such as the environment, technology, organizational goals, or the characteristics of the members of the SC, such as skills, knowledge, and size (Caniato et al. 2009). In line with the approach suggested by contingency theory, the study of blockchain technology in the fashion context has to take into account the characteristics of the fashion supply chain itself. Recognizing this research gap, the second research question was formulated for an in-depth investigation of the specific fashion supply chain variables (i.e. contingent variables and enablers) impacting the implementation of blockchain technology.

_RQ2: How do supply chain variables impact the adoption of blockchain technology as a system for improving traceability along supply chains of the fashion industry?_

### 4 Research methodology

Given the exploratory nature of the topic under investigation, we decided to adopt a multiple case study methodology to anchor our results in the real world. The case study methodology is appropriate when research is exploratory and the phenomenon under investigation is still poorly studied, as it offers the opportunity to achieve in-depth results through direct experience (Voss et al. 2002). Multiple case studies are conducted to achieve depth of information and to increase the external validity of the results (Voss et al. 2002).
Although research studies are available regarding the implementation of the blockchain in the financial context, a perspective that considers the implementation of the blockchain in manufacturing supply chains, and more specifically in the fashion industry, is still lacking.

#### 4.1 Sample selection

The goal of the study is to investigate how company variables (drivers and barriers) and supply chain variables (enablers and contingent variables) impact the adoption of blockchain technology to improve traceability in the fashion supply chain. The literature suggests that the adoption of blockchain technology might differ strongly across industries (van Hoek 2019) and that the nature of the industry is one of the most impactful variables for supply chains (Treiblmaier 2018).

For this reason, the sample used in this paper is homogeneous in terms of industry, and the fashion industry was selected as it is consistently working on the improvement of product traceability at the supply chain level (Choi 2019). The reasons for this attention are several. First, the phenomenon of counterfeiting heavily afflicts this industry. In addition, companies are increasingly interested in verifying their supply chain partners for purposes of social and environmental sustainability (Moretto et al. 2018; Mukherjee et al. 2021). Furthermore, this industry is already investigating the possible contribution of blockchain technology to achieving these goals. The blockchain is, therefore, becoming a tool for protecting companies in this context (Choi and Luo 2019; Fu et al. 2018). To mention a few examples, companies such as Levi's, Tommy Hilfiger, and LVMH are already evaluating or implementing blockchain technologies. For these reasons, the fashion supply chain is an interesting context in which to study the potential of blockchain technology (Agrawal et al. 2018).

Simultaneously, the sample is heterogeneous in terms of the types of actors included, as both focal companies and providers of blockchain technology were included. The former were all interested in the adoption of a blockchain system within their supply chain. In particular, focal companies were included to get the perspective of supply chain decision-makers. Within the fashion supply chain, the important changes and investments will be driven by the focal company, which will push the rest of the chain in the same direction. For this research, seven focal companies were interviewed to discuss the roles and responsibilities involved in the blockchain project in their company. This part of the sample was homogeneous in terms of size, as it is generally only large companies that are evaluating blockchain projects and have the financial resources to afford this kind of project. Furthermore, these companies are strong enough to influence the rest of the supply chain. Only brand owners were included in the sample. All the companies in the sample were either implementing or evaluating the implementation of blockchain technology to meet their traceability goals; we decided to include companies that are both implementing and evaluating the technology because the former are potentially more aware of the enablers and contingent variables, whereas the latter are more aware of the drivers and barriers.
The companies are nevertheless considered comparable: implementing companies are mainly at an early stage of their projects, whereas evaluating companies have been working on these proposals for some time, so their data and perceptions are comparable. This choice of sample makes it possible to achieve a full understanding of the drivers and barriers, and also of the supply chain variables that influence the adoption of blockchain technology in the fashion industry.

In addition to representatives from the fashion industry, blockchain providers were included in the sample to introduce the perspective of actors who are in a position to talk with several companies, and who have a breadth of perspective on the main drivers, barriers, enablers, and contingent variables addressed by their customers. The providers were asked to present their understanding of the viewpoints of their fashion customers. For the providers to be eligible for the research, they needed to work explicitly with fashion companies. This part of the sample is heterogeneous in terms of company size, as both large companies and small startups are emerging to support fashion companies in their adoption of blockchain technology. Five blockchain providers were interviewed for the study, and they spoke from the position of the technology expert and also from the perspective of sales and commercial managers who are in contact with customers in the fashion industry.

A total of 12 case studies were thus included in the research (Tables 1 and 2): five technology providers who support companies in blockchain implementation and seven focal companies that are evaluating blockchain implementation in their respective supply chains. This number of case studies is considered sufficient to reach saturation (Yin 2003).

#### 4.2 Data collection

To collect the data, semi-structured interviews were conducted, and for this purpose a semi-structured interview protocol was developed. A research protocol increases research reliability and validates the research by guiding data collection. Furthermore, a protocol provides essential information on how to carry out case studies by standardizing the procedures used to collect the data (Yin 2003). Due to the exploratory purpose of this study, open questions were asked, and the protocol did not follow a rigid pattern but allowed the conversation to be natural, so that the characteristics of the framework would be shaped by the answers given in the interviews. The protocol was revised in the course of the interviews to incorporate the insights gathered.

Two separate interview protocols were designed, one for the focal companies and one for the providers.
The former was composed of (1) an introduction to the company (e.g. company name, role of the person interviewed, number of employees, turnover, description of the supply chain in terms of sourcing, making, and delivery, and the global scope of the SC for the focal company); (2) a description of the traceability system already in place at the focal company (e.g. reasons for adoption of a traceability system, technologies adopted, impact on processes, main drawbacks, etc.); (3) an evaluation of the main drivers of and barriers to the adoption of blockchain technology; (4) the characteristics of the supply chain and how these variables influence the implementation of the blockchain. The interview protocol for the providers included (1) an introduction to the company (name and role of the person interviewed, number of employees, turnover, description of the services offered to companies); (2) a description of the blockchain technology that they are selling to their customers; (3) an analysis of the main reasons for fashion customers implementing blockchain technology, including an investigation of drivers and barriers; (4) an analysis of how the individual supply chain features impact companies' adoption of blockchain technology.

**Table 1 Sample composition – Providers**

| Company | Location | Revenue |
|---|---|---|
| Provider 1 | Italy | $39 million |
| Provider 2 | Italy | Around €100,000 |
| Provider 3 | Italy | $46 billion |
| Provider 4 | Italy | €4 million |
| Provider 5 | Italy | €2 million |

The data collection stage involved multiple investigators and interviewers, and all the interviews were recorded and transcribed (Eisenhardt 1989). Trick questions were included to verify the information and to identify any bias. The whole data collection process was conducted in 2019. Data collected through direct interviews were then combined with secondary data, such as white papers, company websites, documents provided by the company, case studies presented at conferences or specific workshops, etc.

After the interviews, each case was analyzed on its own. The data collected through the direct interviews were categorized in a spreadsheet, then analyzed and triangulated with secondary data, such as the companies' documents, newspapers, and reports on both the focal companies and the providers. In empirical studies, a combination of different sources makes it possible to understand all facets of the complex phenomenon studied (Harris 2001).

#### 4.3 Data analysis

The data analysis involved three stages: a within-case analysis, a cross-case analysis, and a theory-building stage. For this data analysis, the research team met many times after the initial site visits to develop a strategy for synthesizing the data. In cases where some data were missing or unclear, the respondents were contacted again by phone for clarification.

To maintain the narrative of the findings, a within-case analysis was conducted to identify each company's peculiarities (its drivers and barriers), while the main supply chain variables (enablers and contingent variables) for each case were highlighted. Several quotations from informants have been included in the within-case analysis, reported along with the description of the results in the paper. In particular, open coding was adopted for the within-case analysis, and labels and codes were identified based on the transcripts of the interviews.
The within-case analysis involved several steps: the transcripts of the interviews were read twice to take notes and grasp the general meaning of each interview. Through this process, the most frequent words used in each case were identified, and these were used to create the coding labels. Finally, data interpretation was performed, in which each case was taken individually and its variables were described and interpreted. This included examining the final results to conclude the within-case analysis.

**Table 2 Sample composition – Focal companies**

| Company | Location | Revenue | Number of employees | Degree of globalization |
|---|---|---|---|---|
| Focal Company (FC) 1 | Italy | €54 billion | 150,000 | Stores in more than 150 countries; global supply network |
| Focal Company (FC) 2 | Italy | €60 million | 260 | Global customers; mainly local suppliers |
| Focal Company (FC) 3 | Italy | €150 million | 1,400 | Global customers; local and global suppliers equally important |
| Focal Company (FC) 4 | Italy | €3 billion | 6,500 | Stores in more than 150 countries; global supply network |
| Focal Company (FC) 5 | Italy | €1 billion | 3,800 | Stores in more than 150 countries; global supply network |
| Focal Company (FC) 6 | Italy | €1.5 billion | 4,000 | Stores in more than 100 countries; global supply network |
| Focal Company (FC) 7 | Italy | €1.5 billion | 6,500 | Stores in more than 150 countries; global supply network |

These coding labels were then used to perform the cross-case analysis (Annex A). The cross-case analysis was initially performed jointly for the focal companies and the providers, to combine their different points of view and to surface differences during the discussion. The purpose of the cross-case analysis was to identify both commonalities and differences among the cases; the cross-case comparisons helped to extract the common patterns. The cross-case analysis was performed independently by two researchers, and the results were then compared to find similarities and differences and to increase the descriptive validity. In the case of any misalignment, a revision of the results was performed to arrive at a common classification for each case.

Finally, the theory-building stage was completed, in which interpretation and abstraction were performed. This involved iterating between data and theory to design a new framework characterizing the adoption of blockchain technology in fashion supply chains. The results of this step are provided in the tables reported in the Results sections.

### 5 Drivers and barriers for blockchain technology

#### 5.1 Drivers for blockchain technology

The within-case analysis first of all allowed us to identify two main groups of drivers for blockchain technology: internal and external. In terms of the internal drivers, companies presented decisions taken within the company to improve internal performance metrics such as efficiency and effectiveness. In terms of the external drivers, companies presented the incentives or requests received from external actors, which could be either the supply chain or the customers. This distinction was made particularly clear by the providers, who illustrated the different requests received from some of their customers, as indicated in a quote from Provider 2: _'For us, it is particularly important to understand why a customer is approaching the blockchain._
_Some of them are mainly interested in the possibility to exploit traceability at a lower cost or through the automation of some steps, so mainly with an internal perspective. Some others are, actually, more focused on the external perspectives: either for specific requests of the customers or retailers or for the willingness to onboard on the project the overall supply chain. But this is an important distinction, guiding potentially different approaches'._

Based on these insights, the cross-case analysis considered three different variables, i.e. the internal drivers, the external drivers (supply chain), and the external drivers (customers), as reported in Annex A. We noticed that almost all of the companies listed some elements in all three groups of drivers. Internal drivers are mentioned strongly by providers, whereas focal companies stress more the importance of external drivers, especially supply chain ones. This difference could depend on the fact that providers also consider the perspective of companies that ultimately decided not to move forward with the adoption of blockchain technology; focal companies, on the contrary, clearly understand the importance of generating value along the supply chain or of meeting the requests of customers.

Having compared the different cases, their commonalities and differences were considered and are combined in Table 3.

The first group concerns internal drivers, meaning the reasons that push the individual company to implement blockchain technology. In particular, companies presented either efficiency- or effectiveness-oriented reasons for their adoption of the blockchain. These companies strongly highlighted the benefits expected in terms of cost reduction, to be achieved through greater business efficiency (in terms of the reduction of insurance or bureaucracy costs), generally through an extensive process of automation. Several companies also emphasized the need to reduce the cost of compliance. This was expressed by the manager of Provider 2, who reported: _'In Castel Goffredo there is a district where 60% of European socks are produced. One of the most interesting topics that came up with them is the management of compliance. Each of these companies, of which many are subcontractors for other brands like Zara, have a series of certificates that [they] must produce. But they come to need 15 different certificates for each company, so every 2/3 days they have an audit, which involves dedicating people and wasting time. This is a big problem for them because the certifications are different, but they also have many common points. Maybe they have to produce one for a brand and a similar one for another brand. Thanks to a blockchain and a smart contract, they could reduce these kinds of costs'_ (a minimal illustration of this idea is sketched below). The cost of compliance was probably the most frequently cited driver for the blockchain, both in our cases and in the literature. This driver was cited by all the providers, illustrating that this is the main point emphasized by the providers in terms of what matters to their customers.
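The sketch below is hypothetical and only illustrates the idea in the quote above: if audited certificates are recorded once on a shared ledger, a smart-contract-style rule agreed by all peer nodes can let each brand check the requirements it cares about without commissioning a new audit. All names, certificate labels, and validity rules are invented for illustration.

```python
# Hypothetical smart-contract-style compliance check (illustrative only).
from datetime import date

# On-chain registry: one audited certificate per (supplier, scheme) pair.
CERTIFICATES = {
    ("supplier_1", "OEKO-TEX"): {"issued": date(2019, 1, 10), "valid_days": 365},
    ("supplier_1", "SA8000"):   {"issued": date(2019, 3, 2),  "valid_days": 365},
}

def is_compliant(supplier: str, required: list, today: date) -> bool:
    # Business rule verified identically by every peer node:
    # each required certificate must exist and still be valid.
    for scheme in required:
        entry = CERTIFICATES.get((supplier, scheme))
        if entry is None or (today - entry["issued"]).days > entry["valid_days"]:
            return False
    return True

# Two brands reuse the same recorded certificates instead of separate audits.
print(is_compliant("supplier_1", ["OEKO-TEX"], date(2019, 6, 1)))            # True
print(is_compliant("supplier_1", ["OEKO-TEX", "SA8000"], date(2020, 6, 1)))  # False (expired)
```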
This point could represent an important element especially for smaller fashion companies, which serve several customers and have many requests to satisfy.

Although this driver was strongly presented in the case studies, and especially by the providers, it is interesting that several other drivers were also emphasized. In terms of the internal drivers, several case studies spoke of the importance of using blockchain technology to increase effectiveness, in particular through improvements in the decision-making process, as information is always required immediately and must be easily available. This was supported by an additional driver linked to data integrity and data safety, as companies need to be sure of the validity of the data they use for decision-making. This driver is, however, not specific to the industry; it is also presented in the literature as one of the main advantages of blockchain technology, independently of the area of application.

The most recurrent driver specific to fashion products, however, is the possibility of reducing counterfeit products. This was highlighted by almost all the focal companies, all of whom are potentially strongly impacted by this issue. Provider 2 gave an example of this when they reported that one of their customers had suffered damage from counterfeit products equal to 10% of their total revenue. FC3 reported: _'We are part of a blockchain project sponsored by the government. The main reason why the government pushed this project was a willingness to protect Made in Italy'._ This is a relevant driver for the industry, which has also been mentioned, for example, for food products in other domains.

The second group of drivers pertains to external drivers and includes the supply chain drivers, where other supply chain actors play an important role. This is a perspective only partially investigated in the existing literature, for example with respect to the logistics industry. The first group of supply chain drivers concerns the willingness to increase visibility along the overall chain, thanks to the trust demonstrated in the sharing of data among different actors. This was expressed by Provider 1: _'I think generally, a blockchain is solving a problem of trust. It is solving a problem in which multiple different actors, within a specific kind of system, whether it is a supply chain system, or whether it is a government, like a political system, or a different kind of social system, where different actors have incentives to participate in the system and some of the actors have incentives to cheat, not be transparent, maybe gain more out of the system. Blockchain essentially enforces trust onto a system so individual actors can't take advantage or manipulate the system for their advantage'._ What the blockchain does is create controlled data shared by multiple companies: each company keeps its own information system, while the shared ledger makes unauthorized data modifications impossible. The blockchain makes possible a process in which multiple organizations interact with each other and, at the same time, it ensures that only correct data are exchanged through this interaction. Data are stored on the blockchain in a way that makes them non-falsifiable and tamper-proof. The reason the blockchain increases trust is not that data are automatically true, but that accountability for what is reported is clear; a minimal sketch of this signing mechanism follows.
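The following sketch is not from the study; it illustrates, under simplified assumptions, the accountability mechanism just described: a supplier signs each claim with a private key, so the claim's content is not validated, but its authorship is undeniable. It uses the third-party Python `cryptography` package purely for illustration.

```python
# Illustrative signing sketch: data are not validated, but attributable.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

supplier_key = Ed25519PrivateKey.generate()
claim = b'{"garment": "G42", "material": "merino wool"}'
signature = supplier_key.sign(claim)       # the supplier takes responsibility

public_key = supplier_key.public_key()     # shared with the rest of the network
try:
    public_key.verify(signature, claim)    # any node can attribute the claim
    print("claim is attributable to this supplier")
except InvalidSignature:
    print("claim was not signed by this supplier")
```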
A good example of this was reported by Provider 3: _'I can also write false information because the blockchain does not validate the data per se, so if I write the temperature that a sensor detects while I have a warehouse full of sushi and the temperature is at 40 degrees but I write 0, the blockchain records 0. However, the fact remains that I digitally sign cryptographically what I am writing and I also take responsibility for what I am writing. So if a garment is made of merino wool and I declare that it is made of merino wool, this remains written, and therefore, there is this kind of advantage'._

Some of the other companies also reported drivers that are consistent with the features of the blockchain itself: the blockchain is agnostic, or interoperable in terms of data, and so it makes it possible to achieve benefits such as having common communication layers among all levels of the chain and obtaining disintermediation of the network. These drivers are valid for the fashion industry but are aligned with the main drivers of the technology itself, as presented in the literature streams on blockchain technology.

Another group of supply chain drivers concerns the use of the blockchain as an extension of best practices along the chain. Several companies stated that they are studying this new technology because their main competitors are doing the same: this point was highlighted by several focal companies, whereas it was largely neglected by the providers. If this should become the standard, late joiners might suffer some damage, either because they are late or simply because they are perceived as not being innovative. The difference between focal companies and providers is worth highlighting and makes this variable particularly critical for the industry under investigation, where innovation definitely represents a critical success factor. Particularly interesting is what was mentioned by companies such as FC3, which said they want to use the blockchain to encourage more ethical behavior along the entire chain.

Companies also expressed their willingness to adopt the blockchain because of the requests of their customers. This created the third group of drivers. The customers of the fashion industry can be divided into end consumers and retailers. This distinction is a peculiarity of this industry, where retailers and end consumers may play relevant but different roles. In terms of the end consumers, the companies want to become increasingly transparent towards them. In particular, some consumers are especially interested in buying from open companies, and so the companies are willing to demonstrate the validity of what they offer in terms of the quality of the product, its authenticity, the features of the products, etc. This topic emerges as particularly critical in this industry, due to the major scandals that have occurred in the past.
On the one hand, the application of the blockchain to the production portion of the supply chain will make it possible to verify exactly which actors collaborate in the production of a product, with evident benefits in terms of product authenticity and also the protection of social and environmental sustainability (for instance, by ensuring the origin of raw materials purchased at the international level). It enables suppliers to be controlled more precisely with regard to the stringent environmental laws and the guarantees that must be provided about child labor and, more generally, about the safety and contracts of their workers.

On the other hand, the blockchain will make it possible to follow products through all their distribution steps across the world. This will guarantee the authenticity of the products available in shops, and it will also work as a certification for consumers. Focal companies, in particular, are reinforcing the importance of using technology to support the story of their products and the validity of their history. This perspective is comparable to what is presented in the literature about food products.

In terms of the retailers, they may push companies towards a more transparent approach, and so the focal companies will need to respond to these requests. This is mainly achieved through accountability towards the end consumer. A good example was reported by Provider 1: _'I think that money is the main driver for the economic sustainability. And so, it might not be the customers like you and me, but it might be the customer like the big department stores. Maybe these department stores don't want to work more with you. Creating more transparency, people can make better decisions on where they source'._

Thirdly, several companies noted the coherence of this approach with typical critical success factors (CSFs) of fashion companies, especially high-end ones, such as telling the story, increasing brand awareness, and presenting the company as innovative and open towards its consumers. Proof of the products' authenticity will add further security to the claims made by the brands: it will assure consumers that information on the final product and its certifications is verified by the company and its suppliers. This helps in the prevention of false claims, including in the field of sustainability, where the risk of 'greenwashing' is always present (concerning both environmental aspects and social sustainability).
This point is strongly stressed especially by focal companies, which are looking for new levers to distinguish genuine sustainability efforts from merely cosmetic ones.

These results are summarized in the following research proposition:

_RP1: The implementation of blockchain technology to improve traceability along the fashion supply chain is driven by three main groups of factors: to increase internal efficiency and effectiveness at the process level, to be aligned with the requests emerging at the fashion supply chain level, and to increase the level of trust communicated to end consumers and fashion retailers._

#### 5.2 Barriers to blockchain technology

Bridging the digital and physical worlds by making the products' path accessible to customers through a blockchain system is not easy in any situation, which is why some of the barriers are discussed here.

The within-case analysis enabled two main groups of barriers to be identified: those strongly linked to the technology, and those more oriented to cultural approaches and to the readiness of the industry to accept this new way of working. The former were mainly described by the providers, who saw the technology as the critical element, whereas the focal companies were more focused on industry-specific elements. This result could depend on the sample composition: focal companies are already implementing or in the late stage of evaluating the technology, and are thereby quite sure of their willingness to introduce it. Technology providers, on the contrary, have the perspective of both adopters and non-adopters, and for them the technological barriers appear more relevant and complicated to overcome.

The cross-case analysis was performed considering these different approaches, and it is summarized in Table 4.

**Table 4 Barriers to blockchain technology**

| Technology-specific | Industry-specific |
|---|---|
| Difficult to understand how the technology works | Low level of digitalization in the supply chain |
| High cost of the technology | Missing a shared technological standard in the industry |
| | Missing a technological culture in the industry |
| | Collaboration among different SC partners |

The first group of barriers is technology-specific. First was the theme of the investments needed to support the development of a blockchain system, as the blockchain is still perceived as an expensive technology. This was particularly regarded as an issue due to the risk that it would increase the costs of the final product. For example, FC5 said, _'The reason why blockchain is deeply discussed within my company is that the cost is still particularly high, especially in comparison to other traceability systems. If we need to transfer this cost in the prices of the products, marketing, and salespeople are not aligned and not willing to accept this additional point whether they are not able to see the value for the customers'._ Moreover, the blockchain is seen as a complex technology, difficult to understand and motivate; for example, FC3 mentioned, _'For me, it was not easy to understand how the technology works and so to trust the technology. Now I got it but the problem is still not completely solved as now it is a matter of understanding which are the data to properly share.'_ This barrier is not industry-specific but connected to the technology itself.
In this vein, solutions identified in other industries could also become a lever for overcoming this barrier in the fashion domain.

The second group of barriers is called industry-specific, as they relate to specific features of the fashion industry, such as the generally low level of digitalization in the supply chain (thereby requiring a big jump, especially for small companies), which is also related to a generally low technological culture in the industry. Moreover, at present there is no technological standard, and several companies are worried about this. For example, FC1 reported, _'Today, the biggest problem is not so much to use the blockchain, but to use it in the same way because if everyone makes his [own] blockchain fragment there is also a big race for who will be the winner-take-all'._ Finally, to use the blockchain it is necessary to have strong collaboration among the supply chain partners, but the overall level of collaboration in the fashion industry is often poor, and this could reduce the feasibility of adopting blockchain technology. This was presented as particularly critical by focal companies, especially those in the evaluating phase. Overcoming this barrier is relevant to expanding the adoption of blockchain technology in this domain.

These results are summarized in the following research proposition:

_RP2: The implementation of blockchain technology to improve traceability along the fashion supply chain is halted by two main groups of factors: a low understanding of the newly emerging technology in the fashion industry and the perception that the fashion industry is not yet ready from either a technological or a cultural point of view._

### 6 Supply chain variables and the impact on blockchain technology

Exploratory case studies were used to understand if and how the characteristics of the supply chain might impact the blockchain. What the cases suggest is that two different groups of supply chain variables could influence the adoption of blockchain technology. First, there are the enablers, considered to be elements existing within the supply chain that could support the adoption of blockchain technology. Second, there are contingent variables, described as the contextual factors of the supply chain, which could impact the potential benefits achievable through blockchain technology as well as the possibility of implementing it. These two groups of variables were used to perform the cross-case analysis reported in Annex A and summarized in Table 5. In analyzing the data reported in Annex A, we noticed quite a good consensus about the enablers identified in the different cases; these enablers are broadly in line with the main barriers previously identified, suggesting that these variables could reduce the risks and the uncertainty generated by the technology. On the other hand, reading the data of the cross-case analysis, some differences among the cases could be highlighted in terms of contingent variables.
Providers focus more on fixed parameters, such as supply chain complexity and the features of the industry, whereas focal companies put greater emphasis on the existing relationships. This dichotomy again provides evidence of which elements influence adoption from the beginning and which points are most relevant during implementation, from a more practical business perspective.

**Table 5 Supply chain enablers and contingent variables of blockchain technology**

_Contingent variables_

- Supply chain complexity:
  - the size of the companies (easier to use with big suppliers, more relevant with small ones)
  - the number of nodes involved (the higher the number of nodes, the higher the safety of the system)
  - the globalization of the supply chain (the more global the supply chain, the greater the need to bring information to the consumers)
  - the level of vertical integration (less relevant when production activities are owned)
- Type of relationship:
  - the duration of the relationships with suppliers (best used with stable suppliers)
  - supplier commitment towards the company (adoptable with committed suppliers)
- Industry:
  - the level of regulation (less valuable when regulations are already very strong and monitor everything, but proper regulations might be an enabling factor)
  - positioning (best suited to high-end products)

_Enablers_

- a proper supply chain traceability system already in place (with the appropriate unit of analysis, single product or container)
- the need to integrate the blockchain with other technologies, such as IoT
- the willingness to collaborate with other actors in the chain

#### 6.1 Supply chain contingent variables for blockchain technology

The case studies highlight several contingent variables that could influence the adoption as well as the success of blockchain technology. The cases are quite aligned in the identification of the variables to consider but have different perspectives about the possible positive or negative influence of these variables. This is very specific to the industry under investigation and has not been examined in the current literature. The most frequently mentioned, and also most controversial, element pertained to **supply chain complexity**. This result highlights the complexity of the supply chain as an important element fostering or reducing the effectiveness of the adoption of the new technology. Discussion on this point varies widely, as some companies regard supply chain complexity as the greatest difficulty in introducing blockchain technology, with related costs and risk of failure (e.g., FC5). In contrast, other companies say that it is precisely because of the high level of supply chain complexity that it is so important to exploit the traceability of the supply chain, and in this way the potential value of blockchain technology is boosted.

In this group, four main elements could be identified, which are consistent with the literature about supply chain complexity. First, the size of the company matters, but the impact of this factor is controversial from the companies' points of view. On the one hand, the blockchain may offer its strongest contribution when small suppliers are involved, as their inclusion is critical to providing reliable and trustworthy data.
On the other hand, these companies are also those where the industry-specific barriers discussed above are stronger, and so the possibility of involving them is more challenging.

The second element concerns the number of nodes involved: some companies indicated that the higher the number of nodes involved, the higher the safety of the blockchain system. This is confirmed by the fact that it is easier to verify the validity of the data provided when the number of actors involved is low, as alternative methods can easily be used. At the same time, other companies pointed out that when the number of nodes to be involved is high, the complexity of implementing the technology, and therefore the related costs, increase, thereby reducing the feasibility of the project.

Thirdly, the globalization of the supply chain was considered and discussed. Here again, contrasting opinions were given, as some companies said that the more global the supply chain, the more difficult but also the more necessary it becomes to provide reliable information to consumers. This is very peculiar to this industry and to the sample analyzed, considering that all the focal companies present a high level of upstream and downstream globalization, as illustrated in Table 2. As with the number of nodes involved, the more global the supply chain, the higher the costs of the technology.

Finally, the level of vertical integration was mentioned. In keeping with the opinions reported regarding the number of nodes involved, the contribution of the blockchain is higher if the level of vertical integration is low, as within a single company other methods, such as the more traditional centralized database, are sufficient.

According to these insights, the following research proposition was formulated:

_RP3: Supply chain complexity influences the implementation of blockchain technology to increase traceability: the higher the supply chain complexity (in terms of the size of the companies involved, the number of nodes, the globalization of the fashion supply chain, and the level of vertical integration), the higher the relevance of traceability along the fashion supply chain, but also the higher the difficulty of implementing the blockchain technology._

The second contingent variable relates to the type of **relationship** existing between the supply chain partners. Blockchain technology is most effective with suppliers retained over a long period, whereas in the case of a spot relationship, the cost and time required to integrate a new supplier into the blockchain would be greater than the value to be obtained. This is a decisive and critical point for the fashion industry, as most of its products last for no more than one season. Suppliers will likely be extensively revised for each collection, thereby reducing the number of actors that can be meaningfully involved in the blockchain. At the same time, suppliers must be committed to the relationship. The combination of these two elements was illustrated by FC4: _'There are big companies with fixed and stable suppliers and therefore they can contractually manage this integration. When you have so many suppliers, even small ones that go in rotation, [it] is much more difficult. We are perhaps big names, but we have volumes that are not comparable to someone else. And so the difficulty lies in keeping the supplier bound and performing what you ask him._
_We have productions in Asia where we are very small and we have to get in line with the others. In sneakers, if you talk about Adidas, Puma, or Nike, we are 0. The volume, in that case, is king.'_

According to these insights, the following research proposition was formulated:

_RP4: Blockchain technology is easier to implement in the fashion supply chain with long-lasting relationships, where there is a high level of collaboration and trust._

Finally, some contingent factors are specific to the **industry**. From this perspective, two main contingent variables were highlighted by the interviews: the level of regulation and the product positioning. Regulations can play a role in driving the adoption of the blockchain, but at the same time they can render the technology useless. For example, Provider 2 gave the example of the pharma industry, which is already strongly regulated in terms of traceability, so it is less valuable for it to use blockchain technology, as the achievable benefits would be little different. In this respect, the fashion industry has good potential, considering the still limited level of regulation on the topic but its growing relevance and perceived urgency.

For the latter variable, product positioning, the cost of the investment and the level of data to be shared are the same regardless of the type of product considered. To mitigate the barriers related to the cost of the technology while exploiting the drivers related to customers, there is therefore greater potential when the technology is adopted for high-end products. This is a typically relevant variable for the industry when discriminating among several strategic decisions.

According to these insights, the following research proposition was formulated:

_RP5: Blockchain technology is easier to implement in a regulated industry, such as the fashion one, where there is a strong need for traceability, which is not yet achieved, and for high-end products._

#### 6.2 Supply chain enablers for blockchain technology

In terms of the enablers, the cases highlighted that some elements can strengthen or ease the impact of both drivers and barriers on the implementation of blockchain technology. In particular, the case studies highlighted how essential it is for fashion companies to evaluate the application of blockchains first of all in guaranteeing the **traceability** of their products. Knowing where products come from and what paths they have taken before arriving in the stores is useful both for brands, to check their supply chain, and for customers, who get additional information on the product purchased. The major goal for the application of the blockchain in the field of fashion, therefore, becomes to trace and retrace every single passage of a product, from the raw materials to the final store. The blockchain is not only a tool that facilitates traceability; it also enables the sharing of data. Most of the companies agreed that a proper supply chain traceability system should already be in place if companies want to exploit the benefits of blockchain technology. This was a point of agreement between the providers and the focal companies, and it differed from the initial expectation that the blockchain would be used to foster traceability along the supply chain.
This result is not always completely aligned with the insights of the literature, where the relevance of the blockchain in fostering visibility is often presented. It is interesting to consider what FC7 reported: _'We already have in place a traceability system that was developed several years ago. This is fundamental, as without a proper system it is irrelevant. Our driver is to increase visibility along the supply chain.'_

The second element highlighted concerns the **possibilities offered by other technologies on the market**. In particular, correct verification requires a critical review of the other technologies available on the market that allow information sharing (for example, QR codes, NFC, and RFID systems), to understand whether they can meet the goals of brand transparency. A relevant question that companies will have to ask themselves is whether smart labels, such as NFC tags or custom plug-ins for e-commerce, could convey sufficient information to consumers for their business purposes. Also, if the existing technologies are insufficient and the blockchain might provide a real contribution, it is necessary to understand how to integrate the blockchain with other existing technologies so as to include existing data in ensuring reliable information.

The third and last enabler is the **collaboration** among all supply chain partners. Blockchain development inevitably requires that content and data be collected from multiple sources and suppliers and that information be constantly updated. This means involving each participant along the supply chain in a long-term collaboration project, which must be grounded on mutual trust. The development of a blockchain project must foresee, at least initially, the creation of support for the companies in the network that will co-participate in the transparency project promoted by the brand, without forgetting that the hostility or reticence of suppliers who may not want to collaborate with the other suppliers will also have to be managed.

According to these insights, the following research proposition was formulated:

_RP6: The impact of the drivers that foster the implementation of blockchain technology, and of the barriers that interfere with it, along the fashion supply chain depends on an already existing traceability system, on the possibility of integration with other technologies, and on collaboration between supply chain partners._

#### 6.3 Detailed research framework

The results of the paper are summarized in a research framework, as depicted in Fig. 1.

**Fig. 1 Detailed research framework**

The evidence from the case studies and the detailed research framework summarized above also offer some guidance about the steps and phases that companies should follow to introduce blockchain technology in the fashion supply chain.

The driver of traceability along the supply chain, which is pushing companies towards blockchain projects, reveals how strongly companies need to develop common databases to collect accurate supply chain information about traceability and sustainability. This first need becomes the first question that companies must answer in the process of defining the technology that supports such information sharing: 'Does a company need a database to collect and share data with Supply Chain partners?'
If companies respond negatively to this question, blockchain technology cannot and must not be taken into consideration. A negative answer can be justified, for example, by companies that are not very advanced concerning the issues of traceability and sustainability and that still manage their supply chains in “watertight compartments” among the different SC partners.

On the contrary, if the response is positive, the company will have to understand how relevant this point is for the other actors of the supply chain and consider how many partners will have to participate in information-sharing activities. In particular, if the technology is not relevant for external partners and the exchange of information will be limited to a dyad of partners, a blockchain would be a superstructure, which would entail considerable costs and a considerable development commitment. In this case, a centralized database, managed directly by the focal company and accessible to the partners, could be a more streamlined solution.

After identifying the number of participants in the data-sharing project, the company should also analyze the type of relationship to be established and the kind of relationship it is willing to maintain. Considering that in a blockchain the partners will have to exchange sensitive data, it is necessary to understand the level of trust to be established. If the relationship with the identified partners is not one of full confidence, the blockchain project must be discarded; an alternative is to multiply the copies of centralized databases in such a way that partners can access, but not have full control over, all data. Blockchain technology contemplates that a partner can change data for all connected partners, but if this is not supported by trust, the blockchain project cannot continue.

Subsequently, the operative aspects at the production level must be analyzed. Which transactions will have to be connected, and which production processes must be linked in the eventual blockchain? In other words, which production processes must be traceable, and “traceable” itself must be defined precisely, to comply with the traceability drivers that encouraged the evaluation of a blockchain project in the first place. If the need for traceability is not so strong, the blockchain project does not make sense: for these companies, the traceability of the supply chain is probably not strong enough to justify investments in the new technology, and other, less expensive processes are sufficient. Instead, if the traceability of the production processes along the entire supply chain is a very strong need of the company, then the blockchain will be the ideal solution.

### 7 Conclusion

‘Blockchain’ is one of the keywords for the future. When it was born, more than ten years ago, it was linked only to the bitcoin economy. Today, the decentralized database where transactions between users are recorded is not only linked to banks’ transactions; it is also playing a significant role within supply chains. International competition and the advent of innovative technologies are just some of the critical challenges that the fashion industry faces today. These challenges require new ways of operating and, accordingly, changes in supply chain processes.

Although explored in other industries, the literature is still quite preliminary in presenting what fashion companies specifically can do to implement blockchain technologies.
For this reason, this paper aims to understand the main drivers, barriers, enablers, and contingent variables that explain the adoption of blockchain technology in the fashion industry. To tackle this goal, the research was based on multiple case studies, conducted through interviews with five blockchain providers and seven fashion focal companies. Through the analysis of the case studies, the main groups of drivers (i.e., internal drivers, supply chain drivers, and customer drivers), barriers (i.e., technological and industry-specific), enablers, and contingent variables (i.e., supply chain complexity, industry, and type of relationships with suppliers) were identified.

Although exploratory, from an academic point of view this work contributes to the schematization of the discussion on the blockchain, identifying drivers and barriers for the fashion context and illustrating how the main features of the industry may influence technology adoption. This industry has enough peculiarities, and enough relevance, to justify a dedicated focus in the existing literature and an attempt to understand which principles valid in other industries could be replicated in fashion. Moreover, the current literature only partially considers how supply chain variables could influence the adoption of blockchain technology to increase visibility along the supply chain; this paper, with a specific focus on the fashion industry, tries to identify these areas of influence, thereby contributing to the literature. Moreover, the results hint at additional areas for investigation. The technology appears to offer a potentially valuable tool in the field of sustainability, where companies previously developed control and audit systems based on internal protocols. These were developed ad hoc by each brand or, in more advanced cases, supported by certifications of environmental and social sustainability. The blockchain will unquestionably make it possible to see, in real time, which actors in the supply chain process the final products and, more generally, it will make it possible to provide guarantees on the sub-working activities through which these products have passed. In the fashion sector, it is common practice for suppliers to make use of sub-suppliers for production processes that require highly specialized skills. The blockchain is increasingly available for all sectors that need to certify the quality and origin of their products and raw materials. The potential of this technology lies in its ability to obtain greater consumer confidence and to provide guarantees on products in terms of sustainability and all that happens along the fashion supply chain. This will allow brands to provide verified information on the materials, processes, and people behind their products. This topic is particularly relevant for fashion companies, and further research on it will be necessary.

From the managerial point of view, this perspective is a hot issue. This guide can be a useful tool for directing discussion on the feasibility of a blockchain project. This research offers valuable and original contributions to practitioners who are thinking about the drivers of and barriers to new blockchain projects, while it also identifies concrete questions that managers can use to check whether blockchain technology meets the needs of their particular production context.

However, the paper does have some limitations, which open opportunities for further investigation.
First, the paper considers both providers and focal companies, but there is no proper discussion of the differences between the two groups of actors. Additional research might also include the viewpoint of the suppliers and compare the perspectives reported by the different actors in the chain. Second, the paper illustrates the main drivers of and barriers to the adoption of the blockchain. The benefits and the costs to the companies are not discussed: a further study might involve an action research project to assess the impacts in terms of performance.

### Annex A: Drivers, Barriers, Enablers, and Contingent variables

**Provider 1**
- Internal drivers: business efficiency through breaking down data silos; reduction of the costs of compliance; improving internal decision making
- External drivers (supply chain): trust (reduction of opportunistic behaviors in the supply chain); reduction of information asymmetries at different stages of the supply chain; reduction of bounded rationality; authenticity and consistency of data; increase in efficiency at the supply chain level
- External drivers (customers): providing customers with data to understand whether the price is representative of the value of the products; allowing retailers to decide to source from reliable suppliers
- Barriers: low level of digitalization in the supply chain; definition of the governance and the central authority
- Enablers: proper supply chain traceability system already in place (with appropriate units of analysis, a single product or container)
- Contingent variables: supply chain complexity (globalization, number of actors involved, size of companies); level of regulation (less valuable when the regulation is already very strong and monitors everything)

**Provider 2**
- Internal drivers: data safety; reduction of counterfeit products; reduction in the cost of compliance
- External drivers (supply chain): shared communication layers (blockchain is agnostic in terms of the format of data); adoption by main competitors; decentralization and disintermediation in the network
- External drivers (customers): stronger communication with customers for reasons of brand awareness; desire to assure the authenticity and the ownership of products to end consumers
- Barriers: difficulty in understanding which data are appropriate to share through the blockchain, to avoid the risk of data overflow; missing technological culture
- Enablers: proper supply chain traceability system already in place
- Contingent variables: number of nodes involved (the higher the number of nodes, the higher the safety of the system); market globalization

**Provider 3**
- Internal drivers: process automation (e.g., through smart contracts); business efficiency and reduction of internal costs
- External drivers (supply chain): trust (sharing of data among different actors of the supply chain); accountability for what is reported by different actors in the chain; shared communication layers (blockchain is agnostic in terms of the format of data)
- External drivers (customers): providing customers with data to understand whether the price is representative of the value of the product; marketing desire (present the company as innovative and willing to share data with customers); desire to assure traceability of the supply chain to assure sustainable and ethical behaviors
- Barriers: difficulty in understanding how the technology works; the cost of blockchain is going to impact the cost to customers
- Enablers: willingness to collaborate with other actors in the chain
- Contingent variables: level of regulation (proper regulations might be an enabling factor)

**Provider 4**
- Internal drivers: reduction of counterfeit products
- External drivers (supply chain): trust of data provided by other supply chain actors; accountability for what different actors are responsible for doing

**Provider 5**
- External drivers (supply chain): trust of data provided by other supply chain actors; accountability for what different actors are responsible for doing
- External drivers (customers): desire to assure the authenticity and the ownership of products to end consumers; marketing desire (present the company as innovative and willing to share data with customers)
- Barriers: the high cost of the technology
- Enablers: need to integrate blockchain with other technologies, such as IoT; willingness to collaborate with other actors in the chain
- Contingent variables: global supply chains (the more the supply chain is global, the greater the need to bring information to the consumers); SC complexity; duration of relationships with suppliers (best used with stable suppliers); positioning (the method is better suited to luxury products, as a product cannot cost 5$, and it is also necessary to share all the data)

**FC 1**
- Internal drivers: business efficiency and reduction of internal costs (e.g., reduction of insurance costs and of bureaucracy costs)
- External drivers (supply chain): trust of data provided by other supply chain actors; accountability for what is reported by different actors in the chain
- External drivers (customers): the desire of new consumers to have more open companies; providing customers with reliable data about the product and the company
- Barriers: missing a technological standard
- Enablers: willingness to collaborate with other actors in the chain
- Contingent variables: global supply chains (to insert data of global markets such as North Korea, China, or Bangladesh)

**FC 2**
- Internal drivers: simplify the internal processes of data traceability
- External drivers (supply chain): trust of data provided by other supply chain actors; accountability for what is reported by different actors in the chain
- External drivers (customers): providing customers with data to understand whether the price is representative of the value of the product; providing end customers with reliable data about the product and the company
- Barriers: difficulty in understanding which data are appropriate to share through the blockchain, to avoid the risk of data overflow
- Enablers: proper supply chain traceability system already in place
- Contingent variables: supply chain complexity (difficult to implement when there is high SC complexity)

**FC 3**
- Internal drivers: reduction of counterfeit products; process automation and business efficiency and reduction of internal costs (e.g., reduction of insurance costs and of bureaucracy costs); reduction of logistics risks; reduction of the cost of compliance
- External drivers (supply chain): accountability for what is reported by different actors in the chain; sharing of ethical principles along the supply chain
- External drivers (customers): providing end customers with reliable data about the product and the company
- Barriers: the high cost of the technology; missing a shared technological standard in the industry
- Enablers: willingness to collaborate with other actors in the supply chain

**FC 4**
- Internal drivers: reduction of counterfeit products; reduction of the cost of compliance; process automation and business efficiency and reduction of internal costs
- External drivers (supply chain): accountability for what is reported by different actors in the chain
- External drivers (customers): providing customers with data to understand whether the price is representative of the value of the product
- Barriers: missing a shared technological standard in the industry; the high cost of the technology
- Enablers: proper supply chain traceability system already in place; willingness to collaborate with other actors of the supply chain
- Contingent variables: duration of the relationships with suppliers (best used with stable suppliers); supplier commitment towards the company

**FC 5**
- Internal drivers: reduction of counterfeit products
- External drivers (supply chain): trust of data provided by other supply chain actors; sharing of ethical principles along the supply chain (verify the origin of raw materials and production activities; verify the social and environmental sustainability of the upstream supply chain); the main competitors are evaluating the blockchain (great debate in the fashion sector)
- External drivers (customers): confirm to customers the history of products (such as the origin of raw materials and production activities)
- Barriers: a collaboration among different SC partners (identify the partners who are willing to collaborate in this project)
- Enablers: proper supply chain traceability system already in place; need to integrate blockchain with other technologies, such as IoT
- Contingent variables: level of vertical integration (less relevant when production activities are owned); duration of the relationships with suppliers

**FC 6**
- Internal drivers: data safety; reduction of counterfeit products
- External drivers (supply chain): trust of data provided by other supply chain actors; sharing of ethical principles along the supply chain (verify the origin of raw materials and production activities; verify the social and environmental sustainability of the upstream supply chain); the main competitors are evaluating the blockchain (great debate in the fashion sector)
- External drivers (customers): confirm to customers the history of products (such as the origin of raw materials and production activities); storytelling about the product for the consumer
- Barriers: a collaboration among different SC partners
- Enablers: proper supply chain traceability system already in place
- Contingent variables: global supply chains (more relevant but more challenging for global supply chains); supply chain complexity (globalization, number of actors involved, size of companies)

**FC 7**
- External drivers (supply chain): trust of data provided by other supply chain actors at the international level; the main competitors are evaluating the blockchain (great debate in the fashion sector)
- External drivers (customers): map the finished product lots that are shipped around the world
- Barriers: low level of digitalization in the supply chain
- Enablers: proper supply chain traceability system already in place
- Contingent variables: supply chain complexity (globalization, number of actors involved, size of companies)

**Acknowledgements** The authors thank AiIG – Associazione Italiana di Ingegneria Gestionale for supporting the project for young researchers (BANDO “Misure di sostegno ai soci giovani AiIG”).

**Funding** Open access funding provided by Università degli Studi di Padova within the CRUI-CARE Agreement.

#### Declarations
**Conflicts of interest** The authors have no competing interests to declare that are relevant to the content of this article.

**Open Access** This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
"
| 20,845
|
{
"disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.1007/s12063-022-00262-y?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1007/s12063-022-00262-y, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "CCBY",
"status": "HYBRID",
"url": "https://link.springer.com/content/pdf/10.1007/s12063-022-00262-y.pdf"
}
| 2,022
|
[] | true
| 2022-03-16T00:00:00
|
[
{
"paperId": "2ce40b867bbfb35c74ce8115b94f227e9380a3c3",
"title": "Blockchain-Enabled Supply Chain Traceability in the Textile and Apparel Supply Chain: A Case Study of the Fiber Producer, Lenzing"
},
{
"paperId": "5ba6c3c09f04ac20974bb6cd9d26f3439f9dd266",
"title": "Data-driven secure, resilient and sustainable supply chains: gaps, opportunities, and a new generalised data sharing and data monetisation framework"
},
{
"paperId": "282fd3fcbcd0dd8301634f2b69a8fec1073037f0",
"title": "Exploring the application of blockchain to humanitarian supply chains: insights from Humanitarian Supply Blockchain pilot project"
},
{
"paperId": "0ad7d18b38eb7767e3ddfea43bf484188f81c64d",
"title": "Managing disruptions and risks amidst COVID-19 outbreaks: role of blockchain technology in developing resilient food supply chains"
},
{
"paperId": "b486ca0b1a044e1e0156b189b9010be4e82404ce",
"title": "The Mitigating Role of Blockchain-Enabled Supply Chains During the COVID-19 Pandemic"
},
{
"paperId": "55c7dbb01822216f9f6fc50611c665421519c8e2",
"title": "Blockchain-enabled pharmaceutical cold chain: Applications, key challenges, and future trends"
},
{
"paperId": "70b86f91b9751b4cb15d287848a30630d2133044",
"title": "Blockchain adoption in the fashion sustainable supply chain: Pragmatically addressing barriers"
},
{
"paperId": "3143998dd92cd6b5cb284f0481e781344baa2934",
"title": "Application of blockchain technology for sustainability development in agricultural supply chain: justification framework"
},
{
"paperId": "979a7afae01dd40825f0c7aecaf6ec2391d22cad",
"title": "Blockchain technology and the sustainable supply chain: Theoretically exploring adoption barriers"
},
{
"paperId": "92b575440db1d61436ee12190dfa0224c95f0f54",
"title": "Blockchain-enabled circular supply chain management: A system architecture for fast fashion"
},
{
"paperId": "903b6bd4b77553db5791a8317d747f4320a7c803",
"title": "On the Benefits and Challenges of Blockchains for Managing Food Supply Chains."
},
{
"paperId": "0c7805b10427c8414ead8f9dd19f14d85d46f76a",
"title": "Internet of things (IoT) based coordination system in Agri-food supply chain: development of an efficient framework using DEMATEL-ISM"
},
{
"paperId": "52347acb175398498248c2057c770684522effed",
"title": "Traceability of Ready-to-Wear Clothing through Blockchain Technology"
},
{
"paperId": "f8cc0ad95bd16743dd995ae00ce9598e6607aa04",
"title": "Blockchain for industrial transformations: A forward-looking approach with multi-stakeholder engagement for policy advice"
},
{
"paperId": "7ba9837997164387e4638f9781f5ef8fc048f312",
"title": "Operations/supply chain management in a new world context"
},
{
"paperId": "705521d77992cb6dc2cda0127bfea83a15dcf882",
"title": "Blockchain: case studies in food supply chain visibility"
},
{
"paperId": "49b1c373cd0600da8e9fd9b5bf0190d46bdd6d6c",
"title": "Supply chain financing using blockchain: impacts on supply chains selling fashionable products"
},
{
"paperId": "8e795081f23eb9cf05f8f6ec986091af039cbca4",
"title": "Blockchain technology for enhancing swift-trust, collaboration and resilience within a humanitarian supply chain setting"
},
{
"paperId": "29caff504c1a6de0af1ad6150ee8c0b33639c06e",
"title": "Blockchain technology and startup financing: A transaction cost economics perspective"
},
{
"paperId": "17fadbe23c535ba8259964f6d585e9fdd3eb837b",
"title": "Blockchain for Supply Chain Traceability: Business Requirements and Critical Success Factors"
},
{
"paperId": "150adea5a18de46f887f982606ac915708fd3281",
"title": "Data quality challenges for sustainable fashion supply chain operations in emerging markets: Roles of blockchain, government sponsors and environment taxes"
},
{
"paperId": "775c13c3045634c76065d13ccc3e65dfff03bc5a",
"title": "Exploring blockchain implementation in the supply chain"
},
{
"paperId": "390ce9c2306efd6e513ee8136617e8320e7e7869",
"title": "The impact of RFID, IIoT, and Blockchain technologies on supply chain transparency"
},
{
"paperId": "9d3f64861b03860d76ce12f00b4cd6df988bd249",
"title": "Blockchain-technology-supported platforms for diamond authentication and certification in luxury supply chains"
},
{
"paperId": "c75c0ffd5e3fa3b755ae8dd02ab46c405fcd0e09",
"title": "Supply chain re-engineering using blockchain technology: A case of smart contract based tracking process"
},
{
"paperId": "664ad6a548821db18dd0efeb6bb6c5decda00e49",
"title": "Blockchain adoption challenges in supply chain: An empirical investigation of the main drivers in India and the USA"
},
{
"paperId": "491d381f959c8274035d15b1291a72211b501462",
"title": "How blockchain technologies impact your business model"
},
{
"paperId": "a27da4a7af6ded1cbddab51dd0a7ce90adc1d80e",
"title": "Making sense of blockchain technology: How will it transform supply chains?"
},
{
"paperId": "fa8423b90e88579aea8a201f3d5145d3592dd926",
"title": "Promoting traceability for food supply chain with certification"
},
{
"paperId": "07127b8a978007b548782d4eb294273d96a5a601",
"title": "Blockchain-Based Medical Records Secure Storage and Medical Service Framework"
},
{
"paperId": "59759783851240a0ddf189852a001de5f6d2f3a2",
"title": "Designing a roadmap towards a sustainable supply chain: A focus on the fashion industry"
},
{
"paperId": "3f871adcd5c500cb8f6ca2a2dffa93ae95a01369",
"title": "The Impact of the Blockchain on the Supply Chain: A Theory-Based Research Framework and a Call for Action"
},
{
"paperId": "bf5a463675b1da9fb1be4660c0ba7bbfab2246f6",
"title": "Digital Supply Chain: Literature review and a proposed framework for future research"
},
{
"paperId": "e489fb6d194f0f3e5c1cbbebde898a178b38e454",
"title": "Blockchain Enhanced Emission Trading Framework in Fashion Apparel Manufacturing Industry"
},
{
"paperId": "677d276996ba7a84b9078e9c413cbb1d8820a15e",
"title": "1 Blockchain's roles in meeting key supply chain management objectives"
},
{
"paperId": "70d66233ba53dc4bc810970b172fb6deb4b080dc",
"title": "Blockchain and Value Systems in the Sharing Economy: The Illustrative Case of Backfeed"
},
{
"paperId": "60226df737e939d406c1fee019ef180e7617fdb0",
"title": "Contract law 2.0: ‘Smart’ contracts as the beginning of the end of classic contract law"
},
{
"paperId": "f473891a959736e8cb9304c0394c1030c0b65455",
"title": "Decentralized Applications: Harnessing Bitcoin's Blockchain Technology"
},
{
"paperId": "cfaa46dca4685dd81de96c01a3cca76de54f8cb1",
"title": "A contingency approach for SC strategy in the Italian luxury industry: do consolidated models fit?"
},
{
"paperId": "61856151bca6a40759a5cb2bfe07d8340b0f7e3e",
"title": "Report"
},
{
"paperId": "012a915ea3cdf280e4210e9e92dbac3c80d68506",
"title": "Case research in operations management"
},
{
"paperId": "55059760bd85716c29edbf4dd7cfd5747fa074ca",
"title": "Content Analysis of Secondary Data: A Study of Courage in Managerial Decision Making"
},
{
"paperId": "096dfe7d44cbc9c05c2e4943600a54ac142658ee",
"title": "Blockchain-based framework for supply chain traceability: A case example of textile and clothing industry"
},
{
"paperId": "815cb5526044da78c4c0186f228c042f876eb511",
"title": "Blockchain-Based Secured Traceability System for Textile and Clothing Supply Chain"
},
{
"paperId": null,
"title": "How blockchain will change organizations"
},
{
"paperId": null,
"title": "Global supply chains are about to get better, thanks to blockchain"
},
{
"paperId": "657d550e52f3aef3c5d3ac3dad3506c6befbd1df",
"title": "Case Study Research Design And Methods"
},
{
"paperId": "288d10eac0a63dc0d812fb7b7666116a667cf116",
"title": "Complexity leadership theory: An interactive perspective on leading in complex adaptive systems"
},
{
"paperId": "4e9ec92a90c5d571d2f1d496f8df01f0a8f38596",
"title": "Bitcoin: A Peer-to-Peer Electronic Cash System"
},
{
"paperId": null,
"title": "the blockchain to… 1489"
},
{
"paperId": null,
"title": "remains neutral with regard jurisdictional claims in"
},
{
"paperId": null,
"title": "Drivers, barriers and supply chain variables influencing the adoption of"
}
] | 20,845
|
en
|
[
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Engineering",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/0064e6d447ef17824656c108545bea4fd4e5881a
|
[] | 0.852464
|
Sigmoid Activation Implementation for Neural Networks Hardware Accelerators Based on Reconfigurable Computing Environments for Low-Power Intelligent Systems
|
0064e6d447ef17824656c108545bea4fd4e5881a
|
Applied Sciences
|
[
{
"authorId": "72924426",
"name": "V. Shatravin"
},
{
"authorId": "71269649",
"name": "D. Shashev"
},
{
"authorId": "66815344",
"name": "Stanislav Shidlovskiy"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"Appl Sci"
],
"alternate_urls": [
"http://www.mathem.pub.ro/apps/",
"https://www.mdpi.com/journal/applsci",
"http://nbn-resolving.de/urn/resolver.pl?urn=urn:nbn:ch:bel-217814"
],
"id": "136edf8d-0f88-4c2c-830f-461c6a9b842e",
"issn": "2076-3417",
"name": "Applied Sciences",
"type": "journal",
"url": "http://www.e-helvetica.nb.admin.ch/directAccess?callnumber=bel-217814"
}
|
The remarkable results of applying machine learning algorithms to complex tasks are well known. They open wide opportunities in natural language processing, image recognition, and predictive analysis. However, their use in low-power intelligent systems is restricted because of high computational complexity and memory requirements. This group includes a wide variety of devices, from smartphones and Internet of Things (IoT) smart sensors to unmanned aerial vehicles (UAVs), self-driving cars, and nodes of Edge Computing systems. All of these devices have severe limitations to their weight and power consumption. To apply neural networks in these systems efficiently, specialized hardware accelerators are used. However, hardware implementation of some neural network operations is a challenging task. Sigmoid activation is popular in the classification problem and is a notable example of such a complex operation because it uses division and exponentiation. The paper proposes efficient implementations of this activation for dynamically reconfigurable accelerators. Reconfigurable computing environments (RCE) allow achieving reconfigurability of accelerators. The paper shows the advantages of applying such accelerators in low-power systems, proposes the centralized and distributed hardware implementations of the sigmoid, presents comparisons with the results of other studies, and describes application of the proposed approaches to other activation functions. Timing simulations of the developed Verilog modules show low delay (14–18.5 ns) with acceptable accuracy (average absolute error is 0.004).
|
# applied sciences
_Article_
## Sigmoid Activation Implementation for Neural Networks Hardware Accelerators Based on Reconfigurable Computing Environments for Low-Power Intelligent Systems
**Vladislav Shatravin *, Dmitriy Shashev and Stanislav Shidlovskiy**
Faculty of Innovative Technologies, Tomsk State University, 634050 Tomsk, Russia; [email protected] (D.S.);
[email protected] (S.S.)
*** Correspondence: [email protected]**
**Abstract: The remarkable results of applying machine learning algorithms to complex tasks are**
well known. They open wide opportunities in natural language processing, image recognition, and
predictive analysis. However, their use in low-power intelligent systems is restricted because of high
computational complexity and memory requirements. This group includes a wide variety of devices,
from smartphones and Internet of Things (IoT) smart sensors to unmanned aerial vehicles (UAVs),
self-driving cars, and nodes of Edge Computing systems. All of these devices have severe limitations
to their weight and power consumption. To apply neural networks in these systems efficiently, specialized hardware accelerators are used. However, hardware implementation of some neural network
operations is a challenging task. Sigmoid activation is popular in the classification problem and is a
notable example of such a complex operation because it uses division and exponentiation. The paper
proposes efficient implementations of this activation for dynamically reconfigurable accelerators.
Reconfigurable computing environments (RCE) allow achieving reconfigurability of accelerators.
The paper shows the advantages of applying such accelerators in low-power systems, proposes the
centralized and distributed hardware implementations of the sigmoid, presents comparisons with
the results of other studies, and describes application of the proposed approaches to other activation
functions. Timing simulations of the developed Verilog modules show low delay (14–18.5 ns) with
acceptable accuracy (average absolute error is 4 × 10⁻³).
**Keywords: deep neural networks; hardware accelerators; low-power systems; homogeneous structures;**
reconfigurable environments; parallel processing
**1. Introduction**
At present, artificial neural networks (NN) are actively used in various intelligent
systems for tasks that cannot be effectively solved by any classical approach: natural language processing, image recognition, complex classification, predictive analysis, and many
others. This is possible because of the ability of NNs to extract domain-specific information from a large set of input data, which can later be used to process new inputs. The main disadvantage of NNs is their high computational complexity. A complex task requires a
large NN model with a huge number of parameters, which means that many operations
must be performed to calculate the result. The problem is especially acute for deep convolutional neural networks (CNN) because their models can include hundreds of billions of
parameters [1,2].
In cloud and desktop systems, the problem of computational complexity can be
partially solved by scaling. However, low-power and autonomous devices impose strict
requirements on their weight and battery life [3]. Some examples of such systems include
unmanned aerial vehicles (UAVs), self-driving cars, satellites, smart Internet of Things (IoT)
sensors, Edge Computing nodes, mobile robots, smartphones, gadgets, and many others.
Such devices require specialized hardware accelerators and fine-tuned algorithms to use
machine learning algorithms effectively.
There are many papers about various optimizations of hardware and machine learning
algorithms for low-power applications. Hardware optimizations mainly mean replacing
powerful but energy-intensive graphics processing units (GPU), which are popular in the
cloud and desktop systems, with more efficient devices, such as field-programmable gate
array (FPGA) and application-specific integrated circuits (ASIC) [4–11]. At the same time,
to use hardware resources efficiently, applied algorithms are adapted to the features of the
chosen platform. In addition, the quality of NN models is often downgraded to an acceptable degree to reduce computational complexity. For example, the authors of [12,13] propose reducing the bit-lengths of numbers in a model to 8 and 4 bits, respectively, and [14,15] present binary NNs. In [16], the complexity of a NN is reduced by sparsification, i.e., the
elimination of some weights to reduce the number of parameters.
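As a concrete illustration of the first of these techniques, the following minimal sketch (our own illustration, not the specific method of [12] or [13]) shows symmetric post-training quantization of weights to 8 bits:

```python
# A sketch (illustrative assumption, not the cited papers' exact method) of
# reducing weight bit-length: real-valued weights are mapped to 8-bit
# integers with a single per-tensor scale factor.
import numpy as np

def quantize_int8(w):
    scale = np.max(np.abs(w)) / 127.0                  # symmetric scale
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.randn(4, 4).astype(np.float32)
q, s = quantize_int8(w)
print("max quantization error:", np.max(np.abs(w - dequantize(q, s))))
```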
Such comprehensive solutions show good results in the implementation of concrete
NN architectures. However, in practice, complex intelligent systems can face various tasks
that require different NN architectures. Sometimes, it is impossible to predict which architectures will be required in the future. The problem is especially acute if the autonomous
system is remote and difficult to access. The simplest solution is to implement many
hardware subsystems for the most probable architectures, although this approach affects
weight, power consumption, reliability, and complexity of the device.
Another way to solve the problem is to use dynamically reconfigurable hardware
accelerators that can change the current model by an external signal at runtime. Application
of such accelerators offers great opportunities for performance, energy efficiency, and
reliability. Therefore, numerous studies have been conducted in this area. We explore
dynamically reconfigurable accelerators based on the concept of reconfigurable computing
environments (RCEs).
One of the significant parts in developing RCE-based hardware accelerators is the
implementation of neuron activation functions. There are many different activations now,
and one of the most popular among them is the sigmoid activation (logistic function), which
is widely used in an output layer of NNs for classification tasks. However, the original
form of the activation is difficult to compute in hardware, so simplified implementations
are usually used. This paper proposes two implementations of sigmoid activation for
dynamically reconfigurable hardware accelerators.
**2. Artificial Neural Networks**
An artificial neural network is a mathematical model of a computing system, inspired
by the structure and basic principles of biological neural networks. By analogy with natural
NNs, artificial networks consist of many simple nodes (neurons) and connections between
them (synapses). In artificial NNs, a neuron is a simple computing unit, which sums
weighted inputs, adds its own bias value, applies an activation function to the sum, and
sends the result to neurons of the next layer (Figure 1). Neurons are distributed among
several layers. In the classical dense architecture, inputs of each neuron are connected to
the output of each neuron of a previous layer. Therefore, each NN has one input layer,
one output layer, and one or more hidden layers. If the network has many hidden layers,
this network is called “deep” (DNN). This architecture is called a feedforward neural
network (Figure 2) [17].
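The computation performed by a single neuron and by a dense layer can be summarized in a few lines; the following NumPy sketch (our illustration, not the paper's hardware model) mirrors Figures 1 and 2:

```python
# Minimal software model of the artificial neuron of Figure 1 and of one
# dense layer of the feedforward network of Figure 2 (illustrative only).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def neuron(x, w, b):
    # weighted sum of the inputs, plus the neuron's own bias, then activation
    return sigmoid(np.dot(w, x) + b)

def dense_layer(x, W, b):
    # each row of W holds the input weights of one neuron of the layer
    return sigmoid(W @ x + b)

x = np.array([0.5, -1.0, 2.0])        # a neuron with three inputs
w = np.array([0.2, 0.4, -0.1])
print(neuron(x, w, b=0.3))            # ~0.45
```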
All values of synapses (weights) and neurons (biases) are parameters of the NN
model. To solve a specific task by NN, it is necessary to determine all parameters of the
model. This process of determining the parameters is called “learning”. There are many
learning techniques. The classical supervised learning process requires an input dataset with known expected results (a labeled dataset). During training, the parameters are corrected to
reduce the difference between the actual and expected results at the network output for
the training samples. Using a large and representative training set, a sufficiently complex
model and a sufficient number of iterations allows obtaining a model with high accuracy
on new samples.
**Figure 1. Artificial neuron with three inputs.**
**Figure 2. Feedforward neural network with two hidden layers.**
An activation function of a neuron is a non-linear function that transforms a weighted
sum of the neuron’s inputs to some distribution. In practice, many different activations
are used, and they depend on the task and the chosen architecture [18–21]. In classification
tasks, rectified linear units (ReLU) in hidden layers and a sigmoid activation in the output
layer are very popular. This yields a result in the form of the probability that the
input object belongs to a particular class. Because the original sigmoid activation uses
division and exponentiation, it is often replaced with simplified analogs. In this paper, the
natural sigmoid is replaced by its piecewise linear approximation.
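One widely used piecewise linear scheme is PLAN, whose segment slopes are negative powers of two, so that every multiplication reduces to a bit shift in hardware. The sketch below shows PLAN purely as an illustration of the technique; it is not necessarily the segmentation used in this paper's own implementation:

```python
# The PLAN piecewise linear approximation of the sigmoid (shown here only as
# an example of the technique; the paper's own segmentation may differ).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_plan(z):
    z = np.asarray(z, dtype=float)
    a = np.abs(z)
    y = np.where(a >= 5.0, 1.0,                       # saturation
        np.where(a >= 2.375, 0.03125 * a + 0.84375,   # slope 2^-5
        np.where(a >= 1.0,   0.125   * a + 0.625,     # slope 2^-3
                             0.25    * a + 0.5)))     # slope 2^-2
    return np.where(z >= 0, y, 1.0 - y)               # symmetry about 0.5

z = np.linspace(-8, 8, 1601)
print("max |error|:", np.max(np.abs(sigmoid(z) - sigmoid_plan(z))))  # ~0.019
```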
**3. Neural Networks Hardware Accelerators**
Applying neural network models in real systems leads to the need for specialized computing devices. The huge number of simple operations inherent to NN
models is a challenge, even for a state-of-the-art CPU. At the same time, the dataflow in NN
allows high parallelization, so computing systems that support simultaneous execution of
large amounts of calculations are worth using.
A simple and powerful solution to the problem is the use of GPUs [4,5]. Due to their
multicore architecture, GPUs can significantly reduce time costs of training and using
DNNs in many applications. In addition, the simplicity of the development on the GPU
makes it easy to implement and maintain solutions. Today, GPU-based accelerators are
widely used in desktop, server, and cloud systems, as well as in some large mobile systems,
such as self-driving cars. The main disadvantage of these accelerators is their high power
consumption, which limits their use in many autonomous and mobile systems.
Further research to improve the characteristics of accelerators has led to the development of highly specialized devices based on FPGA and ASIC. Due to their flexible
architecture, low-level implementation, and fine-tuning optimization, these solutions outperform GPU-based counterparts while consuming less power [7–11]. The disadvantages
of such solutions are the complexity and cost of developing, embedding, and maintaining.
In addition, the accelerators often support only a certain model or architecture of NN,
which limits their reuse for other tasks.
In recent years, hybrid computing systems have become popular. These systems include a CPU and an auxiliary neuroprocessor simultaneously. The CPU performs common
tasks and delegates machine learning tasks to the neuroprocessor [22]. Because of the
division of responsibilities, hybrid systems effectively solve a wide class of applied tasks.
However, neuroprocessors are designed to fit prevalent needs, so the problem of the narrow focus of accelerators has not yet been completely eliminated.
One possible solution to this problem is applying dynamically reconfigurable hardware accelerators.
**4. Dynamically Reconfigurable Hardware Accelerators**
A key feature of dynamically reconfigurable accelerators is their ability to change
the current NN model at runtime. A single computing system is able to implement
different models and even architectures of NN at different points in time. This offers wide
opportunities for target systems:
- Support for different architectures of NN without hardware changes;
- Usage of different NN models for different operating modes. For example, the device
can use an energy saving model with low accuracy in standby mode and switch to the
main model with high accuracy and power consumption in a key working mode;
- Remote modification or complete replacement of NN models of the accelerator, which
can be very useful in cases where there is not enough information to set models
in advance;
- Ability to recover through reconfiguration and redistribution of computations over the intact area of the accelerator.
There are many different approaches to designing reconfigurable accelerators. The
paper [23] proposes a hierarchical multi-grained architecture based on bisection neural
networks with several levels of configuration. In [24], reconfigurability is achieved by using a number of floating neurons that can move between layers of a NN to create the
required model. The accelerators presented in [25–27] do not use reconfigurability in its
classical meaning, but modularity and homogeneous structures with an excessive number
of elements make it possible to solve a wide range of applied tasks.
We propose the development of reconfigurable hardware accelerators based on the
concept of reconfigurable computing environments.
**5. Reconfigurable Computing Environments**
A reconfigurable computing environment (RCE) is a mathematical model of a wide
range of computing systems based on the idea of a well-organized complex behavior of
many similar small processing elements (PE) [28,29]. In some references, RCE is called a
homogeneous structure [29]. The PEs are arranged in a regular grid and are connected to
neighboring PEs. Each PE can be individually configured by an external signal or internal
rule to perform some operation from a predefined set. The collaborative operation of many independent processors allows implementing complex parallel algorithms. Thus, the computational capabilities of an RCE are limited only by its size and operation set.
In general, an RCE can have an arbitrary number of dimensions (from one to three),
and its PEs can be of any shape. In this paper, we discuss a two-dimensional RCE with
square PEs. Therefore, each non-boundary element is connected with four neighbors
(Figure 3).
**Figure 3. Reconfigurable computing environment.**
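A toy software model helps to fix these definitions; the sketch below (our own illustrative assumption, not the authors' implementation) represents a two-dimensional RCE as a grid of identical PEs, each individually configurable to one operation from a predefined set:

```python
# Toy model of a two-dimensional RCE: a regular grid of identical PEs, each
# configured by an external signal to one operation from a predefined set.
# Illustrative only; operation names and signatures are our assumptions.
from dataclasses import dataclass

OPS = {
    "transfer": lambda n, w: n,              # pass the north input through
    "sum":      lambda n, w: n + w,          # combine two neighbour inputs
    "relu":     lambda n, w: max(n, 0),
}

@dataclass
class PE:
    op: str = "transfer"
    def fire(self, n=0, w=0):                # inputs from north/west neighbours
        return OPS[self.op](n, w)

class RCE:
    def __init__(self, rows, cols):
        self.grid = [[PE() for _ in range(cols)] for _ in range(rows)]
    def configure(self, r, c, op):           # the external configuring signal
        self.grid[r][c].op = op

env = RCE(4, 4)
env.configure(1, 2, "relu")
print(env.grid[1][2].fire(n=-3))             # -> 0
```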
**6. Hardware Accelerators on Homogeneous Structures**
We are not the first to apply homogeneous structures in hardware accelerators. Much research in this area is based on the use of systolic arrays [25–27]. The systolic
array, or systolic processor, is a subclass of homogeneous computing structures, for which
additional restrictions are set [30]:
- All PEs perform the same operation;
- After completion of the operation (usually on each clock cycle), the PE transmits the
result to one or more neighboring PEs in accordance with the specified direction of
signal propagation;
- The signal passes through the entire structure in one direction.
Systolic processors are known for their simplicity and high performance. The homogeneity and modularity of their structure facilitate scaling and manufacturing. However, there
are some disadvantages:
- Narrow focus on specific tasks, since all PEs perform the same simple operation;
- Low fault tolerance due to a large number of processors and interconnections, as well
as a fixed direction of signal propagation;
- Some algorithms are difficult to adapt to the features of the processing flow in systolic arrays.
A classic task for systolic arrays is matrix multiplication (Figure 4) [31]. Since most
of the computations in NNs are matrix multiplications, modern hardware accelerators
efficiently use computing units based on systolic arrays. All other calculations, such as
activation or subsampling, are performed by separate specialized units.
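The scheme of Figure 4 can be simulated cycle by cycle; the sketch below (an illustration of the general idea, not of any particular accelerator) performs output-stationary matrix multiplication on an n × n array, with skewed operands entering from the west and north edges:

```python
# Cycle-level simulation of matrix multiplication on an n x n systolic array:
# rows of A flow east, columns of B flow south, each skewed by one cycle per
# index, and every PE accumulates the product of the two values crossing it.
import numpy as np

def systolic_matmul(A, B):
    n = A.shape[0]
    acc  = np.zeros((n, n))                 # one accumulator per PE
    a_in = np.zeros((n, n))                 # value held by each PE this cycle
    b_in = np.zeros((n, n))
    for t in range(3 * n - 2):              # enough cycles to drain the array
        a_in = np.roll(a_in, 1, axis=1)     # data moves one PE east
        b_in = np.roll(b_in, 1, axis=0)     # data moves one PE south
        for i in range(n):                  # skewed injection at the edges
            k = t - i
            a_in[i, 0] = A[i, k] if 0 <= k < n else 0.0
            b_in[0, i] = B[k, i] if 0 <= k < n else 0.0
        acc += a_in * b_in                  # one MAC per PE per cycle
    return acc

A = np.arange(9.0).reshape(3, 3)
B = np.ones((3, 3))
assert np.allclose(systolic_matmul(A, B), A @ B)
```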
**Figure 4. Matrix multiplication in the systolic array [31]. 1999 Academic Press, with permission**
from Elsevier.
One of the popular solutions for machine learning tasks is the tensor processing unit
(TPU), based on the systolic array (Figure 5) [27]. This is a hardware accelerator on ASIC
developed by Google. The TPU systolic array has a size of 256 × 256 PEs. It performs matrix multiplication and shows very good results (Figure 6). The accumulation of sums,
subsampling, and activation are performed by dedicated blocks outside the array.
**Figure 5. Google Tensor processing unit architecture.**
**Figure 6. Systolic array in the Google TPU.**
Another example of the efficiency of homogeneous structures in neural networks
hardware accelerators is presented in [25,26]. The proposed Eyeriss accelerator uses a
homogeneous computing environment consisting of 12 × 14 relatively large PEs (Figure 7).
Each PE receives one row of input data and a vector of weights and performs convolution
over several clock cycles using a sliding window. Accordingly, the accelerator’s dataflow is
called “row-stationary”.
**Figure 7. Eyeriss accelerator architecture.**
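The per-PE work in such a row-stationary scheme amounts to a one-dimensional convolution computed over several cycles; a minimal sketch (purely illustrative, as Eyeriss itself is far more elaborate) follows:

```python
# One PE of a row-stationary scheme: it holds one row of inputs and one
# filter row and slides the filter across the row, one window per cycle.
def pe_row_convolution(input_row, filter_row):
    r = len(filter_row)
    out = []
    for start in range(len(input_row) - r + 1):
        acc = 0
        for k in range(r):                  # multiply-accumulate per tap
            acc += input_row[start + k] * filter_row[k]
        out.append(acc)
    return out

print(pe_row_convolution([1, 2, 3, 4, 5], [1, 0, -1]))   # [-2, -2, -2]
```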
One of the important tasks in the development of accelerators is to reduce the data exchange with other subsystems of the device (memory, additional computing units, etc.) due
to the high impact on performance and power consumption. This can be achieved through
data reuse. There are four types of dataflows with data reuse: weight-stationary, input-stationary, output-stationary, and row-stationary [32]. In a weight-stationary dataflow, each
PE stores the weight values in its internal memory and applies them to each input vector.
Google TPU uses this dataflow [27]. The PE of an input-stationary dataflow stores the input vector and receives different weight vectors from the outside. Accelerators with an output-stationary dataflow accumulate partial sums in the PE, while the inputs and weights move
around the environment. The above-described Eyeriss accelerator uses a row-stationary
dataflow since each PE stores one row of input data and one vector of weights to perform
multicycle convolution [25].
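The difference between these schedules can be made concrete with loop nests. The following minimal Python sketch is our own illustration, not code from the cited accelerators: in the weight-stationary version each conceptual PE pins one weight and streams inputs past it, while in the output-stationary version each PE pins one partial sum while the operands move.

```python
def weight_stationary(inputs, weights):
    """Each 'PE' holds one weight; every input vector is streamed past it."""
    return [sum(w * xi for w, xi in zip(weights, x)) for x in inputs]

def output_stationary(inputs, weights):
    """Each 'PE' holds one partial sum; the operands move, the sum stays put."""
    acc = [0.0] * len(inputs)          # one stationary accumulator per output
    for k, w in enumerate(weights):    # weights travel through the array
        for i, x in enumerate(inputs):
            acc[i] += w * x[k]         # partial sums never leave their PE
    return acc

xs = [[1.0, 2.0], [3.0, 4.0]]
ws = [0.5, -1.0]
assert weight_stationary(xs, ws) == output_stationary(xs, ws)
```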
The accelerator proposed in this paper uses a hybrid dataflow. It operates in a weight-stationary dataflow when the input data are too large to be handled at once; otherwise, it does not reuse data at all. The refusal to reuse data is compensated by an inherent ability of our architecture to avoid any exchange of intermediate results with other subsystems by performing complete processing within the computing structure. This eliminates the time cost of accessing memory and auxiliary blocks. However, temporary weight reuse is supported as part of pipeline processing.
In contrast to most counterparts [23–26], the proposed accelerator architecture is
based on the principle of atomic implementation of processing elements. This means that
each PE has a simple structure and performs very simple operations. This decision makes it
possible to achieve high flexibility of the computing environment and allows fine-tuning of
each parameter. By analyzing classical feedforward networks, a set of PE operations was
selected: “signal source”, “signal transfer”, “MAC”, “ReLU”, and “sigmoid” [33]. With these
operations, a neuron can be implemented as a chain of PEs (Figure 8) [34]. The length of
the chain determines the number of inputs, so a neuron of any configuration can be built.
The PE of the proposed model operates with 16-bit fixed-point numbers. Table 1 presents
the comparison of the proposed model and the mentioned counterparts. The decision to
implement activation functions in RCE instead of using a dedicated unit is based on the
following considerations:
- RCE is more flexible. This allows new activations to be introduced, or existing ones modified, after the system is deployed;
- RCE is more reliable. A dedicated unit can become a point of failure, while the RCE can be reconfigured to partially restore functionality;
- Data exchange between the RCE and a dedicated unit may take longer. It is more efficient to keep all intermediate results inside the RCE.
**Figure 8. Neuron on the proposed RCE architecture. Reprinted from [34], with permission 2021 IEEE.**
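The chain-of-PEs construction in Figure 8 can be modeled behaviorally as follows. The sketch uses a subset of the operation names listed above; the function signatures and wiring are simplifications of ours, not the RCE configuration format.

```python
def pe(op, acc, x, w=0.0):
    """One atomic PE; `acc` arrives from the previous PE in the chain."""
    if op == "source":
        return x                    # inject an input value into the chain
    if op == "transfer":
        return acc                  # pass the signal through unchanged
    if op == "mac":
        return acc + w * x          # multiply-accumulate
    if op == "relu":
        return max(0.0, acc)
    raise ValueError(op)

def neuron(xs, ws, bias, activation="relu"):
    """Neuron as a PE chain: one MAC per input, then an activation PE.

    The chain length equals the number of inputs, so a neuron of any
    fan-in can be built simply by extending the chain.
    """
    acc = bias
    for x, w in zip(xs, ws):
        acc = pe("mac", acc, x, w)
    return pe(activation, acc, 0.0)

# 0.2*1 - 0.4*2 + 0.1*3 + 0.05 = -0.25, then ReLU clips it to 0.0
print(neuron([1.0, 2.0, 3.0], [0.2, -0.4, 0.1], bias=0.05))
```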
**Table 1. Comparison of accelerators based on homogeneous structures.**

| Parameter | TPU Systolic Array | Eyeriss | Proposed Model |
|---|---|---|---|
| Processing element (PE) | 8-bit MAC | 16-bit MAC with memory block and partial sums accumulator | 16-bit unit, supporting 7 simple operations |
| PE memory | None | 448KB SRAM + 72KB Registry | 21 bits |
| PE size | Very small | Relatively large | Average |
| Number of PEs | A lot of (256 × 256 = 65,536) | A few (12 × 14 = 168) | A lot of (depends on the task) |
| Reconfigurable PE | No | No | Yes |
| Role of the homogeneous structure | Matrix multiplication | Matrix multiplication | Complete processing |
| Dataflow | Weight-stationary | Row-stationary | Hybrid (weight-stationary or no reuse) |
| Intermediate results storage | Buffer | Buffer | PEs (inside the environment) |
| Post-processing units (activation, subsampling) | Dedicated blocks | Dedicated blocks | PEs (inside the environment) |
To implement deep networks and perform pipelining, the proposed model supports
multi-cycle processing using internal rotation of the signal [35]. The RCE is divided into
several segments; each segment is configured to implement one layer of the required NN
model. When an input signal arrives at a segment, the segment calculates the output of that
layer and passes it on to the next segment. After the signal leaves a segment, the segment
is reconfigured to the layer following the layer implemented in the previous segment. This
means that, in the case of four segments, the second segment at the i-th step will implement
the (4 × (i − 1) + 2)-th layer of the neural network. The signal therefore keeps spinning inside
the structure until the final result is obtained (Figure 9).
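The segment-to-layer schedule can be written down directly. A small sketch, using the 4-segment indexing from the example above (the function name is ours):

```python
def layer_for_segment(segment, step, n_segments=4):
    """Layer implemented by `segment` (1-based) at rotation `step` (1-based).

    With four segments, segment 2 at step i implements layer 4*(i-1) + 2,
    matching the example in the text.
    """
    return n_segments * (step - 1) + segment

# The signal spins through segments 1..4, which are then reconfigured:
schedule = [layer_for_segment(s, i) for i in (1, 2, 3) for s in (1, 2, 3, 4)]
print(schedule)  # [1, 2, ..., 12]: a 12-layer network on 4 physical segments
```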
The proposed architecture allows a DNN of arbitrary depth to be implemented. The only
limitation is the size of the configuration memory. Parallel flow processing of inputs
in many PEs significantly reduces computation time. In addition, pipelining can be used
to further improve performance. A structure with n segments can process
n − 1 signals simultaneously, while the n-th segment is reconfigured by a sliding
window. The pipelining solves three problems:
- Improves resource utilization;
- Increases performance by processing multiple signals simultaneously and eliminating
reconfiguration time cost;
- Introduces temporary reuse of weights (short-term stationary dataflow).
**Figure 9. Multi-cycle processing of NN in the segmented RCE.**
To implement a multi-cycle architecture, a “1-cycle latch” operation must be added to
the operations set. This is necessary to store intermediate results between segments. A 90°
signal rotation operation is included in the MAC.
The key disadvantage of the proposed segmented architecture is the limit it places on
the maximum supported layer size. After segmentation, only part of the RCE can
be used to implement a layer. This is a significant drawback for the first hidden layers
of CNNs, which consist of a large number of neurons. To handle this case, an integral
operating mode can be used. In this mode, the entire RCE implements only one layer of
a NN and the signal is transmitted in one direction. The disadvantages of this mode are the
lack of pipelining and the need for an external control unit to route the input signal and store
intermediate results (Figure 10).
**Figure 10. RCE in integral mode.**
Timing simulations of the proposed models showed acceptable results [34,35]. However, one of the operations, the sigmoid activation σ(x) = 1/(1 + e^(−x)), is computationally difficult in its natural form (Figure 11), since it requires exponentiation and division. As a result, the processing elements and the entire environment become large, complex, and slow. The piecewise linear approximation proposed in [36] partially solves the problem, but the operation remains complicated compared to the others. This paper presents centralized and distributed modifications of the sigmoid activation for implementation in the described RCE architecture.
**Figure 11. Sigmoid activation.**
**7. Implementations of Sigmoid Activation**
_7.1. Centralized Implementation_
In the centralized implementation of the sigmoid activation, all calculations are performed in one PE, which allows it to be minimized and optimized efficiently. These optimizations are possible because of the features of the fixed-point number format and the applied approximation [36]. The piecewise linear approximation is described as:
_f (x) = ax + b_ (1)
where a and b are the constants related to the subrange in which x is located.
One of the most difficult parts of the piecewise linear approximation is finding the
range in which the input value lies. The approximation nodes are the integers ranging from −5
to 5 (Figure 12). The ∆ symbol denotes the smallest step for this number format; it
is equal to 1/256 for numbers with 8 fractional bits. Therefore, the PE must include 11
comparisons. At the same time, the fixed-point format puts the integer part of the number
in the high order bits. Our models use 16-bit fixed-point numbers, in which the 8 high order bits
contain the integer part.
**Figure 12. Approximation nodes. Dotted boxes indicate the bits used in Equations (2) (blue box),**
(3) (green box) and (4) (orange box).
The analysis of these values shows that the four high order bits are the same for all
positive and all negative values (within the approximation range):

_is_neg = x15 & x14 & x13 & x12_ (2)

_is_pos = ¬x15 & ¬x14 & ¬x13 & ¬x12_ (3)

where xn is the n-th bit of x and ¬ denotes bit inversion.

The next four bits help to determine the specific range. For example, to check whether the
x value is in the range from −5 (inclusive) to −4 (exclusive), the following formula can
be used:

_is_neg5_to_neg4 = is_neg & x11 & ¬x10 & x9 & x8_ (4)
For all values above 5, the following function is used:

_is_pos5_to_inf = ¬x15 & (¬is_pos | x11 | (x10 & (x9 | x8)))_ (5)
As a result, the approximation coefficients can be found as:

_a = (is_neg5_to_neg4 & a1) | (is_neg4_to_neg3 & a2) | · · · | (is_pos4_to_pos5 & a10) | (is_pos5_to_inf & 0)_ (6)

_b = (is_neg5_to_neg4 & b1) | (is_neg4_to_neg3 & b2) | · · · | (is_pos4_to_pos5 & b10) | (is_pos5_to_inf & 1)_ (7)
Thus, 11 comparisons of 16-bit numbers can be replaced by 11 comparisons of 5-bit
numbers, which significantly simplifies this implementation.
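The range-detection logic can be checked against a software model of the 16-bit fixed-point format. The sketch below follows Equations (2) and (4) as written above; the helper names are ours.

```python
def to_fxp(x):
    """Encode x as a 16-bit two's-complement number with 8 fractional bits."""
    return int(round(x * 256)) & 0xFFFF

def bit(v, n):
    return (v >> n) & 1

def is_neg(v):
    """Equation (2): the four high order bits are all 1 for negative values."""
    return bit(v, 15) & bit(v, 14) & bit(v, 13) & bit(v, 12)

def is_neg5_to_neg4(v):
    """Equation (4): integer part 11111011, i.e., x in [-5, -4)."""
    return is_neg(v) & bit(v, 11) & (1 - bit(v, 10)) & bit(v, 9) & bit(v, 8)

for x in (-5.0, -4.5, -4.0, -1.0, 0.5):
    print(x, bool(is_neg5_to_neg4(to_fxp(x))))  # True only for -5.0 and -4.5
```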
The centralized implementation has two major disadvantages. Firstly, a PE with the
proposed implementation of the sigmoid occupies a larger area on a chip. Secondly, the focus
on the sigmoid function means this implementation cannot be reused for other activations.
_7.2. Distributed Implementation_
The distributed implementation is based on the ability to approximate each subrange
of the sigmoid function independently. In other words, each subrange can be computed in
parallel using a separate chain of PEs.
The complete distributed implementation is presented in Figure 13. The following
color differentiation of operations is used: green—“signal source”, red—“minimum”,
yellow—“MAC” (the input value comes from the bottom, the accumulated value from the left, and
the weight is stored in the internal memory of the PE), blue—“gate”, purple—“union”.
Dashed lines indicate signals passing through the PEs without changes. Thus, the distributed implementation requires three new operations: “minimum”, “gate”, and
“union”. However, these operations are basic transformations that can be used effectively
in different computations, so it is appropriate to include them in the final operations set.
The “gate” operation controls the propagation of the signal. It compares a value from
the “key” input with the expected value stored in internal memory. If both values match,
the PE passes the value from the main input to the main output. Otherwise, the main
output value is zero. In the presented figure, the “key” input is on the bottom, the main
input is on the left, and the main output is on the right. The key value keeps moving
forward regardless of the comparison result. Since all gates expect different keys, at most one gate will have a non-zero output value.
As mentioned earlier, due to the integer nodes of the approximation, only the 8 high order bits need to be compared. However, “gate” is a general-purpose operation that compares the entire key. To resolve this mismatch, key shifting logic is introduced. It sets the 8 low order bits (the fractional part) of the key value to “1” to generalize all possible keys. The expected gate values are shifted in the same way.
The “union” operation applies a bitwise OR to both inputs. It is used to merge the
results of all gates and to generalize values of the key.
The “minimum” operation calculates the smaller of two numbers. It helps to exclude key values greater than 5, because, according to the approximation used, all input values equal to or greater than 5 result in an output value of 1, so no key greater than 5 needs to be processed. Input values below −5 are ignored in the implementation, since the output of the approximation in this range is zero.
Thus, the distributed implementation of sigmoid activation requires 48 processing
elements.
**Figure 13. Distributed implementation of sigmoid activation.**
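The cooperation of “gate”, “union”, and the key-shifting logic can be modeled in a few lines. This is a functional sketch of the selection mechanism in Figure 13, not a cycle-accurate model, and the branch table is abbreviated.

```python
def gate(key, expected, value):
    """Pass `value` only when the (shifted) key matches; otherwise output zero."""
    return value if key == expected else 0

def union(a, b):
    """Bitwise OR; merges gate outputs, of which at most one is non-zero."""
    return a | b

def shift_key(v):
    """Key shifting: force the 8 fractional bits to 1 so that only the
    integer part takes part in the comparison."""
    return v | 0x00FF

def select_branch(key, branches):
    """The union over all gated branches reduces to the single matching branch."""
    out = 0
    for expected, value in branches:
        out = union(out, gate(shift_key(key), shift_key(expected), value))
    return out

branches = [(0xFB00, 11), (0xFC00, 22)]  # (subrange key, branch result), abbreviated
print(select_branch(0xFB42, branches))   # 11: the key falls into the first subrange
```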
**8. Experimental Results**
Two simulation models were developed to compare both presented implementations
of sigmoid activation. The models are Verilog HDL modules designed in the Quartus Prime
software. The modules support the entire operations set of the PE. Two key parameters of
the modules were measured: the size (the number of required logic elements (LE) of the
FPGA) and the maximum processing delay.
To evaluate these parameters, the developed modules were synthesized in the Quartus
Prime (version 20.1.0, build 711 SJ Edition) for the Cyclone V (5CGFXC9E7F35C8) FPGA
device. Usage of DSP blocks was disabled in the settings. Unwanted deletion of submodules
during the optimization phase was prevented by the “synthesis keep” directive.
To measure processing delays, the Timing Analyzer tool was used. The Timing
Analyzer is part of the Quartus software. All simulations were carried out with the
predefined mode of the analyzer “Fast 1000mV 0C”. During these simulations, the largest
signal delay between the input and output of the module was measured. Both modules
were pre-configured; thus, the configuration delays are eliminated from the results.
Due to bidirectional connections between PEs, the Timing Analyzer gets stuck in
combinational loops. To avoid this problem, we removed unnecessary interconnections
from the distributed sigmoid module for the duration of simulations.
In addition to the mentioned parameters, we measured the absolute error of the
sigmoid implementation. The measurements were performed in a dedicated PC application,
because the error value is not related to the hardware implementation and depends only on the
number format and the approximation algorithm. The application selects a random value
from the approximation range (−5, 5), rounds it to the nearest 16-bit fixed-point number,
and evaluates the difference between the approximation at the rounded point and the general
sigmoid at the original (before rounding) point. The algorithm is repeated one million
times to obtain reliable statistics.
The results of all experiments are presented in Table 2.
**Table 2. Experimental results.**

| Implementation | Total Size, LE | PE Size, LE | Max Delay, ns | Average Absolute Error | Max Absolute Error |
|---|---|---|---|---|---|
| Centralized | 312 | 312 | 14 | 4 × 10⁻³ | 1 × 10⁻² |
| Distributed | 13,175 | 296 | 18.5 | 4 × 10⁻³ | 1 × 10⁻² |
Note that the centralized implementation is 24.4% faster than the distributed one. In fact, however, the difference is only 4.5 ns, which is not important for most real systems: the sigmoid is usually used in the output layer of a NN, and its contribution to the overall processing time is relatively small. The second advantage of the centralized implementation is its convenient configuration. This approach requires only 1 PE to be configured to perform the entire approximation algorithm, in contrast to the distributed approach, where 48 PEs must be configured. Another advantage is the bit-length of the configuration signal: the more operations the PE supports, the more bits are required to encode them all. The distributed implementation introduces three new operations, while the centralized one uses only one. However, the operations used in the distributed implementation can be efficiently reused to compute other functions (not only the sigmoid), in contrast to the centralized implementation.
The key advantage of the distributed implementation is the chip area occupied by each PE. With this approach, each PE occupies 5.1% less area, so many more PEs can be placed in an RCE of the same size. This is a significant improvement, since a typical RCE can contain thousands of PEs. The second benefit is the simplicity of the PE, which leads to improved reliability. In addition, the parallel processing inherent to the distributed realization is more consistent with the key principles of reconfigurable computing environment design.
The results of a comparison with alternative research are presented in Table 3. Data
about the counterparts are taken from [37]. The presented implementations of sigmoid activation
have an acceptable average error and very high performance. However, the distributed
implementation requires more logic elements, since the calculations are distributed among
many processing elements, each of which supports a complete set of operations. In addition,
our models use combinational logic, which provides high performance at the expense of a
larger area on a chip.
**Table 3. Comparison of the developed models and alternative research.**

| Implementation | Emax | Eavg | Input Format | Output Format | LUT | DSP | Delay, ns |
|---|---|---|---|---|---|---|---|
| Hajduk (McLaurin) | 1.192 × 10⁻⁷ | 1.453 × 10⁻⁸ | 32b FP | 32b FP | 1916 | 4 | 940.8 |
| Hajduk (Pade) | 1.192 × 10⁻⁷ | 3.268 × 10⁻⁹ | 32b FP | 32b FP | 2624 | 8 | 494.4 |
| Zaki et al. | 1 × 10⁻² | N/A | 32b FP | 32b FP | 363 | 2 | N/A |
| Tiwari et al. | 4.77 × 10⁻⁵ | N/A | 32b FXP | 32b FXP | 1388 | 22 | 1590–2130 |
| Tsmots et al. | 1.85 × 10⁻² | 5.87 × 10⁻³ | 16b FXP | 16b FXP | N/A | N/A | N/A |
| Wei et al. | 1.25 × 10⁻² | 4.2 × 10⁻³ | 16b FXP | 12b FXP | 140 | 0 | 9.856 |
| PLAN | 1.89 × 10⁻² | 5.87 × 10⁻³ | 16b FXP | 16b FXP | 235 | 0 | 30 |
| NRA | 5.72 × 10⁻⁴ | 8.6 × 10⁻⁵ | 16b FXP | 16b FXP | 351 | 6 | 85 |
| Centralized | 1 × 10⁻² | 4 × 10⁻³ | 16b FXP | 16b FXP | 312 | 0 | 14 |
| Distributed | 1 × 10⁻² | 4 × 10⁻³ | 16b FXP | 16b FXP | 13,175 | 0 | 18.5 |

N/A—not assessed, FP—floating-point number, FXP—fixed-point number.
**9. Application for Other Activation Functions**
The sigmoid is one of the most popular and well-known activations, but many others
are used in practice. The simplest of them (ReLU, LeakyReLU, and PReLU) can
be implemented in the proposed RCE using the “maximum” and “MAC” operations and do not
require a complex design. Swish activation [18] is based on the sigmoid and requires minor
modifications to the proposed design. P-Swish activation [19] is a combination of Swish
and ReLU functions, so it can be implemented with “minimum” and “gate” operations.
As mentioned above, the distributed implementation of the sigmoid activation can
be effectively reused to approximate other functions. Thus, the proposed
RCE is able to support a wide variety of activations. An approximation of the exponential
function makes it possible to implement the ELU [20] and Softmax [21] activations. Piecewise linear approximations of the Tanh and Softplus activations can be introduced according
to the given design. To improve the accuracy of these approximations, intervals of small or
variable length can be used. To support variable-length intervals, the key-shift operation
must be applied several times with different shift values.
**10. Conclusions**
Modern intelligent systems increasingly need to run computationally complex machine learning algorithms. However, low-power systems have severe
restrictions on their weight and power consumption. To address this issue, dynamically
reconfigurable hardware accelerators based on reconfigurable computing environments
can be used. The efficiency of such accelerators, however, depends on the implementation of the
set of PE operations. The sigmoid function is one of the most popular activations in
neural networks, but its general form is computationally complex; because of this, in
practice, it is replaced by simplified analogues.
This paper proposes two hardware implementations of the sigmoid activation on the
RCE. The centralized implementation has high performance and a simple configuration
process, but it leads to an increase in the size of each PE. The distributed implementation
has lower performance, requires more LEs, and uses only simple operations. As a result,
each PE has a smaller size.
The experimental results show high performance (the largest signal delay is 14–18.5 ns)
and acceptable accuracy (average and maximum errors of 4 × 10⁻³ and 1 × 10⁻²,
respectively) of the proposed sigmoid activation implementations compared to the existing alternatives.
**Author Contributions: Conceptualization, V.S. and S.S.; data curation, D.S.; formal analysis, S.S.;**
funding acquisition, D.S.; investigation, V.S.; methodology, D.S.; project administration, S.S.; resources,
S.S.; software, V.S.; supervision, D.S.; validation, D.S.; visualization, V.S.; writing—original draft,
V.S.; writing—review and editing, S.S. All authors have read and agreed to the published version of
the manuscript.
**Funding: The research was supported by the Russian Science Foundation, grant No. 21-71-00012,**
[https://rscf.ru/project/21-71-00012/ (accessed on 18 May 2022).](https://rscf.ru/project/21-71-00012/)
**Informed Consent Statement: Not applicable.**
**Data Availability Statement: Not applicable.**
**Conflicts of Interest: The authors declare no conflict of interest.**
**References**
1. Chen, J.; Li, J.; Majumder, R. Make Every Feature Binary: A 135B Parameter Sparse Neural Network for Massively Improved
[Search Relevance. Available online: https://www.microsoft.com/en-us/research/blog/make-every-feature-binary-a-135b-](https://www.microsoft.com/en-us/research/blog/make-every-feature-binary-a-135b-parameter-sparse-neural-network-for-massively-improved-search-relevance/)
[parameter-sparse-neural-network-for-massively-improved-search-relevance/ (accessed on 20 March 2022).](https://www.microsoft.com/en-us/research/blog/make-every-feature-binary-a-135b-parameter-sparse-neural-network-for-massively-improved-search-relevance/)
2. Brown, T.; Mann, B.; Ryder, N.; Subbiah, M.; Kaplan, J.D.; Dhariwal, P.; Amodei, D. Language Models are Few-Shot Learners.
_arXiv 2020, arXiv:2005.14165v4._
3. Carrio, A.; Sampedro, C.; Rodriguez-Ramos, A.; Campoy, P. A Review of Deep Learning Methods and Applications for Unmanned
[Aerial Vehicles. J. Sens. 2017, 2017, 3296874. [CrossRef]](http://doi.org/10.1155/2017/3296874)
4. Nabavinejad, S.M.; Reda, S.; Ebrahimi, M. Coordinated Batching and DVFS for DNN Inference on GPU Accelerators. IEEE Trans.
_[Parallel Distrib. Syst. 2022, 33, 1–12. [CrossRef]](http://dx.doi.org/10.1109/TPDS.2022.3144614)_
5. Guo, J.; Liu, W.; Wang, W.; Yao, C.; Han, J.; Li, R.; Hu, S. AccUDNN: A GPU Memory Efficient Accelerator for Training Ultra-Deep
Neural Networks. In Proceedings of the 2019 IEEE 37th International Conference on Computer Design (ICCD), Abu Dhabi,
United Arab Emirates, 17–20 November 2019; pp. 65–72.
6. Chajan, E.; Schulte-Tigges, J.; Reke, M.; Ferrein, A.; Matheis, D.; Walter, T. GPU based model-predictive path control for self-driving
vehicles. In Proceedings of the 2021 IEEE Intelligent Vehicles Symposium (IV), Nagoya, Japan, 11–15 July 2021; pp. 1243–1248.
7. Chang, K.C.; Fan, C.P. Cost-Efficient Adaboost-based Face Detection with FPGA Hardware Accelerator. In Proceedings of the
2019 IEEE International Conference on Consumer Electronics-Taiwan (ICCE-TW), Yilan, Taiwan, 20–22 May 2019; pp. 1–2.
8. Lee, J.; He, J.; Wang, K. Neural Networks and FPGA Hardware Accelerators for Millimeter-Wave Radio-over-Fiber Systems. In
Proceedings of the 2020 22nd International Conference on Transparent Optical Networks (ICTON), Bari, Italy, 19–23 July 2020; pp. 1–4.
9. Yu, L.; Zhang, S.; Wu, N.; Yu, C. FPGA-Based Hardware-in-the-Loop Simulation of User Selection Algorithms for Cooperative
[Transmission Technology Over LOS Channel on Geosynchronous Satellites. IEEE Access. 2022, 10, 6071–6083. [CrossRef]](http://dx.doi.org/10.1109/ACCESS.2022.3141098)
10. Kyriakos, A.; Papatheofanous, E.-A.; Bezaitis, C.; Reisis, D. Resources and Power Efficient FPGA Accelerators for Real-Time
[Image Classification. J. Imaging 2022, 8, 114. [CrossRef] [PubMed]](http://dx.doi.org/10.3390/jimaging8040114)
11. Lamoral Coines, A.; Jiménez, V.P.G. CCSDS 131.2-B-1 Transmitter Design on FPGA with Adaptive Coding and Modulation
[Schemes for Satellite Communications. Electronics 2021, 10, 2476. [CrossRef]](http://dx.doi.org/10.3390/electronics10202476)
12. Sakai, Y. Quantizaiton for Deep Neural Network Training with 8-bit Dynamic Fixed Point. In Proceedings of the 2020 7th International
Conference on Soft Computing and Machine Intelligence (ISCMI), Stockholm, Sweden, 14–15 November 2020; pp. 126–130.
13. Trusov, A.; Limonova, E.; Slugin, D.; Nikolaev, D.; Arlazarov, V.V. Fast Implementation of 4-bit Convolutional Neural Networks
for Mobile Devices. In Proceedings of the 2020 25th International Conference on Pattern Recognition (ICPR), Milan, Italy, 10–15
January 2021; pp. 9897–9903.
14. Liu, Z.; Zhang, H.; Su, Z.; Zhu, X. Adaptive Binarization Method for Binary Neural Network. In Proceedings of the 2021 40th
Chinese Control Conference (CCC), Shanghai, China, 26–28 July 2021; pp. 8123–8127.
15. Zhu, B.; Al-Ars, Z.; Hofstee, H.P. NASB: Neural Architecture Search for Binary Convolutional Neural Networks. In Proceedings
of the 2020 International Joint Conference on Neural Networks (IJCNN), Glasgow, UK, 19–24 July 2020; pp. 1–8.
16. Tang, Z.; Luo, L.; Xie, B.; Zhu, Y.; Zhao, R.; Bi, L.; Lu, C. Automatic Sparse Connectivity Learning for Neural Networks. In
Proceedings of the 2022 IEEE Transactions on Neural Networks and Learning Systems, Padua, Italy, 22 April 2022; pp. 1–15.
17. Haykin, S. Neural Network: A Comprehensive Foundation, 2nd ed.; Prentice Hall International, Inc.: Hoboken, NJ, USA, 1999; 842p.
18. Ramachandran, P.; Zoph, B.; Le, Q.V. Searching for Activation Functions. In Proceedings of the ICLR 2018 Conference, Vancouver,
BC, Canada, 30 April–3 May 2018; pp. 1–13.
19. Mercioni, M. A.; Holban, S. P-Swish: Activation Function with Learnable Parameters Based on Swish Activation Function in
Deep Learning. In Proceedings of the 2020 International Symposium on Electronics and Telecommunications (ISETC), Timisoara,
[Romania, 5–6 November 2020; pp. 1–4. [CrossRef]](http://dx.doi.org/10.1109/ISETC50328.2020.9301059)
20. Devi, T.; Deepa, N. A novel intervention method for aspect-based emotion Using Exponential Linear Unit (ELU) activation
function in a Deep Neural Network. In Proceedings of the 2021 5th International Conference on Intelligent Computing and
[Control Systems (ICICCS), Madurai, India, 6–8 May 2021; pp. 1671–1675. [CrossRef]](http://dx.doi.org/10.1109/ICICCS51141.2021.9432223)
21. Hu, R.; Tian, B.; Yin, S.; Wei, S. Efficient Hardware Architecture of Softmax Layer in Deep Neural Network. In Proceedings of the
2018 IEEE 23rd International Conference on Digital Signal Processing (DSP), Shanghai, China, 19–21 November 2018; pp. 1–5.
[[CrossRef]](http://dx.doi.org/10.1109/ICDSP.2018.8631588)
22. Lee, K.J.; Lee, J.; Choi, S.; Yoo, H.-J. The Development of Silicon for AI: Different Design Approaches. IEEE Trans. Circuits Syst.
**[2020, 67, 4719–4732. [CrossRef]](http://dx.doi.org/10.1109/TCSI.2020.2996625)**
23. Kan, Y.; Wu, M.; Zhang, R.; Nakashima, Y. A multi-grained reconfigurable accelerator for approximate computing. In Proceedings
of the IEEE Computer Society Annual Symposium on VLSI (ISVLSI), Limassol, Cyprus, 6–8 July 2020; pp. 90–95.
24. Khalil, K.; Eldash, O.; Dey, B.; Kumar, A.; Bayoumi, M. A Novel Reconfigurable Hardware Architecture of Neural Network. In
Proceedings of the IEEE 62nd International Midwest Symposium on Circuits and Systems (MWSCAS), Dallas, TX, USA, 4–7
August 2019; pp. 618–621.
25. Chen, Y.; Krishna, T.; Emer, J.S.; Sze, V. Eyeriss: An Energy-Efficient Reconfigurable Accelerator for Deep Convolutional Neural
[Networks. IEEE J.-Solid-State Circuits 2017, 52, 127–138. [CrossRef]](http://dx.doi.org/10.1109/JSSC.2016.2616357)
26. Chen, Y.H.; Yang, T.J.; Emer, J.; Sze, V. Eyeriss v2: A flexible accelerator for emerging deep neural networks on mobile devices.
_[IEEE J. Emerg. Sel. Top. Circuits Syst. (Jetcas) 2019, 9, 292–308. [CrossRef]](http://dx.doi.org/10.1109/JETCAS.2019.2910232)_
27. Jouppi, N.P.; Young, C.; Patil, N.; Patterson, D.; Agrawal, G.; Bajwa, R.; Yoon, D.H. In-Datacenter Performance Analysis of a
Tensor Processing Unit. In Proceedings of the 44th Annual International Symposium on Computer Architecture (ISCA ’17),
Toronto, ON, Canada, 24–28 June 2017; pp. 1–12.
28. Bondarchuk, A.S.; Shashev, D.V.; Shidlovskiy, S.V. Design of a Model of a Reconfigurable Computing Environment for Determining
[Image Gradient Characteristics. Optoelectron. Instrum. Data Process. 2021, 57, 132–140. [CrossRef]](http://dx.doi.org/10.3103/S8756699021020047)
29. Evreinov, E.V. Homogeneous Computing Systems, Structures and Environments; Radio and Communication: Moscow, Russia,
1981; 208p.
30. Kung, S.Y. VLSI Array Processors; Prentice Hall Information and System Sciences Series; Englewood Cliffs: Bergen, NJ, USA,
1988; 600p.
31. Wanhammar, L. DSP Integrated Circuits; Academic Press Series in Engineering: Cambridge, MA, USA, 1999; 561p.
32. Ghimire, D.; Kil, D.; Kim, S.-H. A Survey on Efficient Convolutional Neural Networks and Hardware Acceleration. Electronics
**[2022, 11, 945. [CrossRef]](http://dx.doi.org/10.3390/electronics11060945)**
33. Shatravin, V.; Shashev, D.V. Designing high performance, power-efficient, reconfigurable compute structures for specialized
[applications. J. Phys. Conf. Ser. 2020, 1611, 1–6. [CrossRef]](http://dx.doi.org/10.1088/1742-6596/1611/1/012071)
34. Shatravin, V.; Shashev, D.V.; Shidlovskiy S.V. Applying the Reconfigurable Computing Environment Concept to the Deep
Neural Network Accelerators Development. In Proceedings of the International Conference on Information Technology (ICIT),
Guangzhou, China, 15–17 January 2021; Volume 1611, pp. 842–845.
35. Shatravin, V.; Shashev, D.V.; Shidlovskiy S.V. Developing of models of dynamically reconfigurable neural network accelerators
based on homogeneous computing environments. In Proceedings of the XXIV International Scientific Conference Distributed
Computer and Communication Networks: Control, Computation, Communications (DCCN), Moscow, Russia, 26–30 September
2021; pp. 102–107.
36. Faiedh, H.; Gafsi, Z.; Besbes, K. Digital Hardware Implementation of Sigmoid Function and its Derivative for Artificial Neural Networks. In Proceedings of the 13th International Conference on Microelectronics, Rabat, Morocco, 29–31 October 2001;
pp. 189–192.
37. Pan, Z.; Gu, Z.; Jiang, X.; Zhu, G.; Ma, D. A Modular Approximation Methodology for Efficient Fixed-Point Hardware Implemen[tation of the Sigmoid Function. IEEE Trans. Ind. Electron. 2022, 69, 10694–10703. [CrossRef]](http://dx.doi.org/10.1109/TIE.2022.3146573)
# sensors
_Article_
## MDED-Framework: A Distributed Microservice Deep-Learning Framework for Object Detection in Edge Computing
**Jihyun Seo *** **, Sumin Jang, Jaegeun Cha, Hyunhwa Choi** **, Daewon Kim and Sunwook Kim**
Artificial Intelligence Research Laboratory, ETRI, Daejeon 34129, Republic of Korea; [email protected] (S.J.);
[email protected] (J.C.); [email protected] (H.C.); [email protected] (D.K.); [email protected] (S.K.)
*** Correspondence: [email protected]**
**Abstract: The demand for deep learning frameworks capable of running in edge computing environ-**
ments is rapidly increasing due to the exponential growth of data volume and the need for real-time
processing. However, edge computing environments often have limited resources, necessitating the
distribution of deep learning models. Distributing deep learning models can be challenging as it
requires specifying the resource type for each process and ensuring that the models are lightweight
without performance degradation. To address this issue, we propose the Microservice Deep-learning
Edge Detection (MDED) framework, designed for easy deployment and distributed processing in
edge computing environments. The MDED framework leverages Docker-based containers and
Kubernetes orchestration to obtain a pedestrian-detection deep learning model with a speed of up
to 19 FPS, satisfying the semi-real-time condition. The framework employs an ensemble of high-level feature-specific networks (HFN) and low-level feature-specific networks (LFN) trained on the MOT17Det dataset, achieving an accuracy improvement of up to 0.18 AP50 on MOT20Det data.
**Keywords: multi-object detection; edge computing; deep learning; distributed system; software**
framework
**1. Introduction**
Multiple object detection is a computer vision field that involves analyzing images
and videos to extract information about object classes and their locations. It has been
extensively studied in various domains, including autonomous driving [1,2], anomaly
detection [3,4], surveillance [5,6], aerial imagery [7,8], and smart farming [9,10]. By utilizing
artificial intelligence algorithms, research in this field aims to address challenging
detection problems. However, with the rapid increase in the amount of real-time video data
acquired from sensors and IoT devices, there is a growing need for distributed computing
to process those data effectively. Consequently, there is an increasing demand for the
development of deep learning models optimized for distributed computing environments
while maintaining detection accuracy.
Distributed processing techniques via cloud computing have been used to address
high computational demands and resource constraints in deep learning models. However,
cloud computing suffers from limited bandwidth for data transfer and significant latency
when transferring large amounts of video data [11,12]. To address these issues, edge
computing is emerging as a solution, where deep learning servers are placed closer to
the physical locations where video data is generated, allowing data to be processed and
analyzed at the edge [13,14]. Edge computing is a horizontal architecture that distributes
computing resources on edge servers closer to users and edge devices.
However, there are several issues that need to be addressed in order to proceed with
the distributed processing of deep learning in an edge computing environment that solves
the shortcomings of cloud computing. First, we need a framework that can automatically
configure and maintain the environment of various devices. If a new device added to the
cluster has enough GPU and memory, it can deploy a large deep learning model and make
inferences. However, for a smartphone or micro device, deploying a large deep learning
model may cause inferences to experience delays or failures. In traditional frameworks,
cluster managers monitor this and manually deploy models accordingly. However, as the
number of devices connected to the cluster increases, it becomes very difficult to manage
the deployment of models manually. Therefore, if automated microservice deployment
is possible in an edge computing cluster environment, it would make life easier for service managers. Additionally, microservices need to be able to scale in and out as user
requests increase or decrease in volume. Traditionally, administrators manually deploy
and release microservices to devices using spare resources, but this is highly inefficient and
users may experience service inconvenience due to the difficulty of instant microservice
processing. Automated microservice deployment can monitor the resources of edge devices
and dynamically scale them in and out, providing convenience to service users.
Second, there is a need for a framework that is optimized for distributed edge environments and that improves the accuracy of deep learning models. The aim of deep learning models is to achieve high performance, which typically requires the use of large models. However, such models may not be suitable for low-resource edge environments. To overcome this challenge, lightweight models can be used instead. Yet, training such models using traditional methods [15] may lead to overfitting, long training times, and even decreased performance. Therefore, a distributed computing environment, with multiple edge devices connected, can be utilized to achieve good results even from lightweight deep learning models with low individual performance.
In this paper, to address the above problems, we propose a microservice deep-learning
edge detection framework (MDED-Framework) that applies an architecture suitable for
distributed processing and can improve performance by parallelizing existing learned
object detection models.
The contributions of the proposed framework are as follows:
_•_ First, it supports efficient multi-video stream processing by analyzing the resources in an edge cluster environment. It supports flexible scale-in and scale-out by periodically detecting resource changes in the cluster. It also minimizes delay through efficient distribution of tasks.
_•_ Second, we constructed high-level feature network (HFN) and low-level feature network (LFN) networks that lighten the Scaled YOLOv4 [16] model. The model lightweighting, based on the training features of the deep learning model, provides improved detection accuracy even on more complex data than the trained set.
_•_ Third, we implemented the deep learning model ensemble in a distributed environment to maximize the benefits of distributed processing. In addition to improving processing speed, object detection can continue even when some services fail.
_•_ Furthermore, we provide a web-based video analysis service to give users an easy and convenient object detection service. Through the Rest API, users can easily access functions such as selecting the video resolution, which is one of the factors affecting detection accuracy, uploading videos for detection, and checking deep learning results.
The remainder of the paper is organized as follows. Section 2 explains the framework
and deep learning models associated with this study. Section 3 offers an explicit explanation
of the proposed framework and its methodology. Section 4 illustrates the results of the
experiment. Finally, Section 5 concludes the paper and identifies avenues for future research.
**2. Background**
The first part of this section describes deep learning frameworks that operate in edge
computing environments. The second part describes the types and functions of neck
structures used to enhance features in CNN-based vision deep learning, and the last part
describes deep learning models that consider resolution parameters to enhance accuracy.
_2.1. Deep Learning Framework on Edge Computing_
The utilization of deep learning has changed as its structure has transformed from
cloud computing to edge computing. As deep-learning-based object detection is linked to
various advances in the field, edge device-based platforms have emerged.
Sassu A. et al. [17] proposed a deep-learning-based edge framework that can analyze multiple streams in real time. Docker-based services are structured to be processed independently, and two example applications are shown. While [17] focuses on improving the performance of CPUs and GPUs, the end goal of deep learning applications is to improve the accuracy of the model. In this paper, we present a model that is efficient in a distributed environment and also performs well on data not used for training, focusing on both processing speed and model accuracy. Kul S. et al. [18] proposed a new means of
tracking specific vehicles on a video stream collected from surveillance cameras. Data on
microservices between networks are sent and each service extracts the vehicle type, color,
and speed, and combines these features. Apache Kafka [19] was used to introduce a system
that can offer feedback on real-time queries. Houmani Z. et al. [20] proposed a microservice resource-management scheduling method for deep learning applications targeting the edge cloud. The study proposes a deep learning workflow architecture that divides cloud resources into three categories (non-intensive, low-intensive, and high-intensive) based on CPU, memory, storage, or bandwidth requirements, and uses distributed pipelining. The study reported processing speeds improved by 54.4% for the edge cloud compared to cloud-based scenarios running at 25 frames per second. In this paper, we present a resolution selector that allocates edge cloud resources according to the resolution of the image, to
consider the detection accuracy of the deep learning model and how to efficiently distribute
processing without compromising the resolution, which is one of the important factors of
the deep learning model. Li J. et al. [21] proposed a hierarchical architecture that improves
deep learning performance for vision-related processes by using multitasking training and
balancing the workload. The study improved mAP for pedestrian detection and person reidentification tasks. It also introduced a simulation of indoor fall detection. Xu Z. et al. [22]
proposed a real-time object detection framework using the cloud-edge-based FL-YOLO
network. The FL-YOLO network adds a depth-wise separable convolution down-sampling
inverted residual block to Tiny-YOLOv3 and consists of a framework that can train and
validate coal mines; it uses reduced model parameters and computations. Chen C. et al. [23]
introduced a new architecture that processes re-identification problems that occur when
personal information issues arise as data is sent via the cloud. The architecture is processed
on the AIoT EC gateway. The study designed an AIoT EC gateway that satisfies the relevant
resource requirements using a microservice structure with improved process speeds and
latency as the number of services increases.
_2.2. CNN-Based Vision Deep Learning Neck Structure_
Traditional image processing techniques sort the features of an image into two main
categories: high-level features and low-level features. High-level features, also known as
global features, refer to the overall information of an image (texture, color, etc.). These
features are usually found in layers close to the input image in the structure of the deep
learning model. Low-level features, also known as local features, are localized information
in the image (edges, corners, etc.). Unlike high-level features, low-level features reside in
the layers of the deep learning model structure that are furthest from the input image.
According to [24], humans typically recognize objects through high-level features in
an image, while deep learning models detect objects through low-level features acquired
through a series of operations. This means that deep learning models cannot achieve high
detection accuracy by using only high-level features. However, using a large number of
low-level features to increase accuracy leads to another problem: overfitting. This means
that only the low-level features extracted from the data used for training have high accuracy.
In deep learning architecture, the neck structure was created to solve these problems by
fusing high-level features with low-level features to improve accuracy.
The neck structure is located between the backbone structure, which extracts features
from the detection network, and the head structure, which determines the existence of
an object through regression. The neck structure is divided into two types depending on
whether or not a pyramid structure is used to fuse low-level and high-level features. A
pyramid structure refers to a structure that is computed by fusing feature maps of different
sizes obtained by passing through convolution layers. Some of the most well-known neck
structures that create pyramid structures are the Feature Pyramid Network (FPN) [25],
PAN [26], NAS-FPN [27], BiFPN [28], etc. FPN utilizes an upsampling process where the
feature map obtained from the backbone in the order of high-level features to low-level
features is recomputed in the order of low-level features to high-level features. This process
allows the deep learning model to perform well by referring to a wider range of features.
PAN is a structure that adds one more bottom-up path to the FPN structure, enriching the
low-level features that can directly affect the accuracy of deep learning. NAS-FPN utilizes
the NAS structure to build an efficient pyramid structure on a dataset. Bi-FPN performs
bottom-up and top-down fusion through lateral connections.
On the other hand, there are also neck structures that fuse high-level features with lowlevel features without using a pyramid structure. Structures such as SPP [29], ASPP [30],
and SAM [31] utilize specific operations on the feature map to obtain features of different
sizes. SPP can obtain global features by applying max-pooling to feature maps of various
sizes; its biggest advantage is that it does not require the input image size to be fixed or
transformed for deep learning structures, because it operates directly on the feature map.
Unlike SPP, which is a type of pooling, ASPP is a type of convolution and is characterized
by expanding the area of the convolutional operation, which usually exists in a 3 × 3 size, so
that a wider range can be considered. In this paper, the authors apply the scaling factor rate
so that various receptive fields can be viewed. In addition, an expanded output feature map
can be generated without increasing the amount of computation, and detailed information
about the object can be acquired efficiently. SAM is a kind of optimizer; when calculating
the loss, it optimizes the loss value while converging to the flat minima to maximize the
generalization performance.
The traditional neck structure fuses low-level and high-level features in one network
to improve accuracy in an end-to-end manner. This method can produce a high-accuracy
model in a single-server environment, but the size of the model makes it unsuitable for
environments with many variables, such as cloud edge environments. Therefore, in this
paper, we split the neck structure into the HFN, a network specializing in high-level
features, and the LFN, a network specializing in low-level features, and use ensemble
techniques to compensate for the reduction in accuracy.
_2.3. Image Resolution-Related Deep Learning Models_
There are variations of deep learning models that utilize the neck structure described
above, whereas some networks consider the resolution of the input image. The EfficientDet [28] network is a model constructed using the EfficientNet [32] model as a backbone.
EfficientNet explains that there are three existing ways to increase the accuracy of the
model: increasing the depth of the model, increasing the number of filters in the model,
and increasing the resolution of the input image. The authors aimed to achieve better
performance by systematically analyzing the above three variables, and designed a more
efficient model using the compound method. In the EfficientDet network, they proposed a
BiFPN structure, which is a variation of the existing FPN structure, and constructed feature
maps with different scales through a bidirectional network, which is a resolution-dependent
structure, to learn richer features.
The PP-YOLO [33] network does not present a new detection method, but it combines
several techniques that can improve the accuracy of the YOLOv3 [34] network, resulting in
lower latency and higher accuracy than existing models. PP-YOLO achieved acceptable
performance for images with resolution sizes of 320, 416, 512, and 608, and utilizes a similar
structure to the existing FPN.
Scaled YOLOv4 [16] is a model scaling network based on the existing YOLOv4 network.
Based on YOLOv4-CSP, the authors developed the Scaled-YOLOv4-large and Scaled-YOLOv4-tiny models, with a modified structure, by combining the three parameters
(depth, width, and resolution) proposed by EfficientNet. These models are characterized
by better control of computational costs and memory bandwidth than existing models.
Depending on the resolution, the scaled YOLOv4 model showed improved performance
compared with the EfficientDet model, as well as fast inference speed. In this paper,
we modified the network based on the Scaled YOLOv4 model with good performance
to deploy the model smoothly in the edge computing environment, and constructed a
framework that can target and process images with resolutions of 640, 896, and 1280.
**3. MDED Framework**
_3.1. System Architecture_
As shown in Figure 1, the MDED framework consists of microservices that perform
video object detection, a MongoDB service, and persistent volumes. To make the framework
suitable for distributed processing, a cluster consists of a set of multiple individual nodes.
The nodes’ environments vary widely, and microservices are automatically deployed that
are appropriate for each node’s resources. Microservices are built on top of Docker [35]
containers and provide services in the form of Kubernetes [36] pods, the smallest unit
that can be deployed on a single node. Microservices are organized into four types: front
microservices, preprocessing microservices, inferencing microservices, and postprocessing microservices. The front microservice monitors the resources (CPU, GPU) inside the
cluster and coordinates the deployment of other microservices. The front microservice
also provides users with a web-based video object detection API. The preprocessing microservice splits the user-input video file into frames and performs preprocessing tasks
on CPU resources to transform the video to the user-specified resolution. The inferencing
microservice distributes video frames to the high-level feature-specific network (HFN) and
low-level feature-specific network (LFN) to obtain detection results from each network.
GPU resources are prioritized, and if they are insufficient, object detection is performed
using CPU resources. The postprocessing microservice ensembles the results obtained
through distributed processing in the inferencing microservice. It calibrates the results to
achieve better detection accuracy than can be achieved with a single deep learning model,
and performs visualization tasks such as displaying the detection results on the screen.
Each microservice is described in detail in later sections: Sections 3.1.1–3.1.3.
The microservices operate independently and share the metadata generated by each
microservice through MongoDB, which also prevents more than one microservice from
accessing the same data at the same time. Each microservice references input data (video)
and output data (class, bounding box) via NFS, which is a shared volume. NFS is created
through the binding of a PV to a PVC, and as user needs change, a new PVC can be created
to provide flexible connectivity to other PVs with the appropriate storage capacity.
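As a minimal sketch of the exclusive-access pattern described above, a microservice could claim a work item atomically through MongoDB as follows. The collection, field, and service names are illustrative; the framework's actual schema is not shown in this paper.

```python
from pymongo import MongoClient, ReturnDocument

# Cluster-internal MongoDB service name (assumed for illustration)
client = MongoClient("mongodb://mongodb-service:27017")
tasks = client.mded.tasks

def claim_task(worker_id):
    """Atomically move one queued task to 'processing' so that no two
    microservices ever operate on the same item at the same time."""
    return tasks.find_one_and_update(
        {"state": "enqueued"},
        {"$set": {"state": "processing", "worker": worker_id}},
        return_document=ReturnDocument.AFTER,
    )
```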
3.1.1. Front Microservice
The front microservice is responsible for communicating directly with users and
showing work progress and results. In addition, it can monitor the resources and workload
of edge nodes connected to the cluster to create an environment that handles data flexibly.
The front microservice periodically updates the information about the overall resource and
usable resources of the edge node, and automatically determines whether to generate new
microservices as it receives new input data. Moreover, the flexible distributed processing
environment can be constructed by monitoring the processing workload and adjusting the
scalability of microservices.
-----
_Sensors 2023Sensors, 232022, 4712, 22, x FOR PEER REVIEW_ 6 of 17 6 of 16
**Figure 1. Illustration of the overall configuration of the MDED framework.**
**Figure 1. Illustration of the overall configuration of the MDED framework.**
The front microservice performs the following functions:
- Rest API server: receives video data and user metadata that are used to build inference pipelines. The inference results stored in NFS are displayed on the HTTP API to provide user-friendly services. The Rest API server is implemented through the micro-framework Flask [37] (a minimal endpoint sketch follows this list).
- Microservices resource monitor (MRM): monitors the available resources and current workload on the edge nodes. The obtained information is passed to the Microservices scale controller to configure the optimal microservices operating environment based on the resource state and to configure an efficient distributed processing environment.
- Microservices scale controller (MSC): the results of the MRM are used to adjust the number of microservices that distribute processing jobs. If the workload is increasing, the MSC uses information obtained through the MRM to determine whether the number of microservices can be increased. As the workload decreases, the resource release process begins to gradually reduce idle microservices. Algorithm 1 introduces the resource allocation/release algorithm for the MSC.
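The sketch below shows the kind of Flask upload endpoint the Rest API server could expose; the route, the MongoDB connection string, the `jobs` collection, and the NFS mount path are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of a front-microservice upload endpoint: the video is written
# to the shared NFS volume and a job record is stored in MongoDB.
from flask import Flask, request, jsonify
from pymongo import MongoClient
import os, uuid

app = Flask(__name__)
jobs = MongoClient("mongodb://mongo:27017")["mded"]["jobs"]  # hypothetical URI
NFS_ROOT = "/mnt/nfs/videos"                                 # shared PV mount

@app.route("/videos", methods=["POST"])
def upload_video():
    video = request.files["video"]
    job_id = str(uuid.uuid4())
    path = os.path.join(NFS_ROOT, f"{job_id}.mp4")
    video.save(path)                      # write to the shared NFS volume
    jobs.insert_one({"_id": job_id, "path": path, "state": "enqueued"})
    return jsonify({"job_id": job_id}), 202

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```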
3.1.2. Preprocessing Microservice
Figure 2 illustrates the processing flow of the MDED framework, which consists of several microservices designed to perform specific tasks. The framework includes a preprocessing layer, an inference layer, and a postprocessing layer, each implemented as a separate microservice. The first microservice, the preprocessing microservice, handles the video data input from the user via the front microservice on the web. It obtains video frame
information from the processed video data. The preprocessing microservice splits the video
data source file into images of 30 frames per second. In the process, the image is processed
according to whether the user wants to use a specific resolution of the image for detection,
or wants to use only a part of the image for detection. The preprocessing microservice
obtains information about the processed images and classifies them according to resolution
for efficient resource utilization. Resolution is one of the variables listed in [28] that can
improve accuracy, and the resolution selector is responsible for matching images with
the best resource environment to run without degrading their resolution. The resolution
selector runs in two categories: low resolution (640p, 896p) and high resolution (1280p)
to help prioritize the distribution of low-resolution video to less resourceful nodes and
high-resolution video to more resourceful nodes.
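A minimal sketch of the resolution selector's bucketing step is given below, assuming frames are matched to the nearest supported network input size (640/896/1280); the exact matching rule is our assumption based on the text.

```python
# Illustrative resolution selector: bins a frame into the paper's
# low-resolution (640p/896p) or high-resolution (1280p) category so the
# scheduler can route it to a node with matching resources.
from PIL import Image

LOW_RES = {640, 896}
HIGH_RES = {1280}

def select_resolution(frame_path: str) -> str:
    width, _ = Image.open(frame_path).size
    # Map the frame to the nearest supported network input size (assumption).
    target = min(LOW_RES | HIGH_RES, key=lambda r: abs(r - width))
    return "high" if target in HIGH_RES else "low"
```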
**Algorithm 1. Resource allocation/release algorithm.**
**Input: Microservice monitoring information (the number of tasks in each state)**
**Output: newly created processing pods or returned resources**
Def MicroservicesScaleController():
    While True:
        Sleep(5)
        process_lists = MicroserviceMonitoring()
        preprocess_ratio = number of enqueued tasks / number of preprocess pods
        inference_ratio = number of preprocess_complete tasks / number of inference pods
        postprocess_ratio = number of inference_complete tasks / number of postprocess pods
        // CPU loop: applied to the preprocess and postprocess deployments
        If preprocess_ratio or postprocess_ratio > threshold:      // scale out
            If there are sufficient CPU resources to add a new pod:
                number of replicas += 1
        Elif preprocess_ratio or postprocess_ratio < threshold:    // scale down
            number of replicas −= 1
        // GPU loop: applied to the inference deployment
        resolution_lists = MicroserviceMonitoring()
        gpu_count = cuda.device_count()
        If inference_ratio > threshold:                            // scale out
            If gpu_count > number of inference pods:
                number of GPU inference replicas += 2
        Elif inference_ratio < threshold:                          // scale down
            number of GPU inference replicas −= 2
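Algorithm 1 is pseudocode; the sketch below shows how its scale-out/scale-down step could map onto real Kubernetes API calls using the official Python client. The deployment name, namespace, threshold, and step size are assumptions for illustration.

```python
# Hedged translation of Algorithm 1's scaling step into Kubernetes API calls.
from kubernetes import client, config

config.load_incluster_config()
apps = client.AppsV1Api()

def rescale(deployment: str, namespace: str, ratio: float,
            threshold: float = 1.0, step: int = 1) -> None:
    scale = apps.read_namespaced_deployment_scale(deployment, namespace)
    replicas = scale.spec.replicas or 1
    if ratio > threshold:                      # backlog growing: scale out
        replicas += step
    elif ratio < threshold and replicas > 1:   # idle pods: scale down
        replicas -= step
    apps.patch_namespaced_deployment_scale(
        deployment, namespace, {"spec": {"replicas": replicas}}
    )
```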
3.1.3. Inferencing Microservice
The inferencing microservice supports multi-object inference using deep learning models. The inferencing microservice consists of two networks, a high-level feature-specific
network (HFN) and a low-level feature-specific network (LFN), which modify the Scaled
YOLOv4 [16] network according to the features in the image. Additionally, since the Scaled
YOLOv4 network on which it is based considers the resolution of the input image, we modified the system to fit the Scaled YOLOv4-csp, Scaled YOLOv4-p5, and Scaled YOLOv4-p6
networks. Figure 3 shows the structure of the LFN and HFN according to csp, p5, and p6.
HFN and LFN are networks with improved performance in edge environments for pedestrian objects [38]. Scaled YOLOv4 and other high-performance networks attempt to improve accuracy by using a neck structure. However, the number of parameters and the computational demand increase when low-level and high-level features are fused multiple times. This causes the model to grow steadily in size, making it difficult to deploy deep learning models in edge environments where resources may be insufficient. It is also difficult to scale such a model out and down for flexible distributed processing. Therefore, we modified the Scaled YOLOv4 model to be suitable for use in distributed processing environments.
HFN and LFN are networks that specialize in high-level features and low-level features,
which in Scaled YOLOv4 serve as inputs to the top-down pathway and bottom-up pathway
performed by the PANet [26]. In the case of an HFN, the network is trained by acquiring
and optimizing the features acquired around the input of the backbone. However, high-level features cannot be expected to be highly accurate for training deep learning models,
so we applied FPN to further fuse them with low-level features. The LFN is a network
that strengthens the last layer of the backbone network, which is the part where low-level
features mostly gather. We added an SPP [29] layer after convolution to strengthen the
global feature information of the low-level features, which also serves to prevent overfitting.
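For concreteness, a PyTorch sketch of an SPP block of the kind added after the backbone convolution is shown below; the pooling kernel sizes (5, 9, 13) follow common YOLO practice and are our assumption, since the paper does not list them.

```python
# SPP sketch: parallel max-pools at several scales are concatenated with the
# input map and fused by a 1x1 convolution to strengthen global features.
import torch
import torch.nn as nn

class SPP(nn.Module):
    def __init__(self, channels: int, kernels=(5, 9, 13)):
        super().__init__()
        self.pools = nn.ModuleList(
            nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2)
            for k in kernels
        )
        # 1x1 conv fuses the original map with the pooled maps.
        self.fuse = nn.Conv2d(channels * (len(kernels) + 1), channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = [x] + [pool(x) for pool in self.pools]
        return self.fuse(torch.cat(feats, dim=1))
```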
As shown in Figure 2, the high-level feature-specific network and the low-level feature-specific network fall under the same process, referred to as the inference layer, and are
distributed to different pods. The inputs are image frames whose resolutions are classified
by a resolution selector, and the inference microservices prioritize images with resolutions
appropriate to the environment in which they are performing. HFN and LFN detect objects
in parallel and store the results in shared storage.
3.1.4. Postprocessing Microservice
The postprocessing microservice is responsible for assembling the results obtained from the inferencing microservices and utilizing CPU resources to extract useful information. Additionally, if the user wishes to view the results, the postprocessing microservice offers the ability to display the bounding boxes directly on the image. The final detection results represent the objects obtained, assembled or encoded into a video.
**Figure 2. Illustration of the overall flow of the proposed method.**
As shown in Algorithm 2, the bounding box gathering algorithm is used to obtain the final meaningful bounding boxes from the HFN and LFN bounding boxes. This algorithm calculates the intersection over union (IoU) of the bounding boxes R_High from the HFN and the bounding boxes R_Low from the LFN. If the IoU ratio is close to 1, the boxes are likely to represent the same object. The formula for calculating the IoU is presented below.

Intersection over Union (IoU) = Intersection / (Area_A + Area_B − Intersection)    (1)
**Figure 3. LFN and HFN of csp, p5, and p6 architecture.**
**Algorithm 2. Bounding box gathering algorithm.**
**Input: HFN detection results R_High = { bh1, bh2, ..., bhn }; LFN detection results R_Low = { bl1, bl2, ..., bln }**
**Output: Ensembled detection results R_Ensembled = { be1, be2, ..., ben }**
R_Ensembled ← []
For bh in R_High:
    R_Ensembled ← bh
For bl in R_Low:
    Area_h = (Bottom_y(bh) − Top_y(bh)) × (Bottom_x(bh) − Top_x(bh))
    Area_l = (Bottom_y(bl) − Top_y(bl)) × (Bottom_x(bl) − Top_x(bl))
    TopInter_x = max(Top_x(bh), Top_x(bl))
    TopInter_y = max(Top_y(bh), Top_y(bl))
    BottomInter_x = min(Bottom_x(bh), Bottom_x(bl))
    BottomInter_y = min(Bottom_y(bh), Bottom_y(bl))
    Area_inter = max(0, BottomInter_x − TopInter_x + 1) × max(0, BottomInter_y − TopInter_y + 1)
    Area_union = Area_h + Area_l − Area_inter
    IoU = Area_inter / Area_union
    If IoU > threshold and IoU ≤ 1:
        bl is an already detected pedestrian; discard it.
    If bl is a newly detected pedestrian:
        R_Ensembled ← bl
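Since the pseudocode above is schematic, a runnable Python interpretation of the gathering step is given below: every HFN box is kept, and an LFN box is added only when it does not overlap any HFN box above the IoU threshold. The (x1, y1, x2, y2) box format and the default threshold of 0.5 are our assumptions.

```python
# Runnable interpretation of Algorithm 2 (our sketch, not the authors' code).
def iou(a, b):
    # Boxes are (x1, y1, x2, y2) with x1 < x2 and y1 < y2.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1 + 1) * max(0, iy2 - iy1 + 1)
    area_a = (a[2] - a[0] + 1) * (a[3] - a[1] + 1)
    area_b = (b[2] - b[0] + 1) * (b[3] - b[1] + 1)
    return inter / float(area_a + area_b - inter)

def gather_boxes(r_high, r_low, threshold=0.5):
    ensembled = list(r_high)              # every HFN detection is kept
    for bl in r_low:
        # Add the LFN box only if it is a newly detected pedestrian.
        if all(iou(bh, bl) <= threshold for bh in r_high):
            ensembled.append(bl)
    return ensembled
```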
**4. Results**
This section describes the dataset and accuracy metrics used to measure the accuracy
of the inference microservice. It provides details of the experiments, and reports the results
of the distributed processing time measurements.
_4.1. Datasets and Details_
The datasets used to measure the accuracy of pedestrian object detection in this
experiment were MOT17Det [39] and MOT20Det [40]. Various datasets contain pedestrian
objects, such as KITTI [41] and CrowdHuman [42]. However, the MOTDet dataset was
the only dataset with prior research on the relationship between datasets (MOT17Det and
MOT20Det), so we used it as the training and test data to measure the general accuracy
of the network. The MOT17Det validation and MOT20Det validation sets available on
the MOTChallenge website were not used, due to authentication issues. Figure 4 shows
examples of images from various datasets that are commonly used in pedestrian detection.
Dataset 4-(b) is known to cover more complex and diverse situations than dataset 4-(c), and
was used as test data in this experiment to assess the general performance improvement
of deep learning. Table 1 shows the specific information of dataset 4-(c) (MOT17Det train) used for training and dataset 4-(b) (MOT20Det train) used for testing.
**Figure 4. A dataset commonly used for pedestrian detection.**

**Table 1. Specific information about the dataset used in the experiment.**

Training sequences:

| Name | FPS | Resolution | Length | Number of Pedestrians | Camera | Condition |
|---|---|---|---|---|---|---|
| MOT17-13-SDP | 25 | 1920 × 1080 | 750 (00:30) | 11,642 | Moving | Day/outdoor |
| MOT17-11-SDP | 30 | 1920 × 1080 | 900 (00:30) | 9436 | Moving | Indoor |
| MOT17-10-SDP | 30 | 1920 × 1080 | 654 (00:22) | 12,839 | Moving | Night/outdoor |
| MOT17-09-SDP | 30 | 1920 × 1080 | 525 (00:18) | 5325 | Static | Day/outdoor |
| MOT17-05-SDP | 14 | 640 × 480 | 837 (01:00) | 6917 | Moving | Day/outdoor |
| MOT17-04-SDP | 30 | 1920 × 1080 | 1050 (00:35) | 47,557 | Static | Night/outdoor |
| MOT17-02-SDP | 30 | 1920 × 1080 | 600 (00:20) | 18,581 | Static | Day/outdoor |
| Total | | | 5316 (03:35) | 112,297 | | |

Testing sequences:

| Name | FPS | Resolution | Length | Number of Pedestrians | Camera | Condition |
|---|---|---|---|---|---|---|
| MOT20-01 | 25 | 1920 × 1080 | 429 (00:17) | 19,870 | Static | Indoor |
| MOT20-02 | 25 | 1920 × 1080 | 2782 (01:51) | 154,742 | Static | Indoor |
| MOT20-03 | 25 | 1173 × 880 | 2405 (01:36) | 313,658 | Static | Night/outdoor |
| MOT20-05 | 25 | 1654 × 1080 | 3315 (02:13) | 646,344 | Static | Night/outdoor |
| Total | | | 8931 (05:57) | 1,134,641 | | |

For the HFN and LFN, we extracted only pedestrian data from the MOT17Det training data and used these for training, keeping the ratio of training and validation datasets at 7:3. To measure the accuracy after applying the ensemble technique, the test dataset was the MOT20Det train dataset. Only the bounding boxes corresponding to pedestrians were extracted and used as ground truth. The images in the training and test datasets were changed to the resolutions supported by the underlying network, Scaled YOLOv4: 640 (csp), 896 (p5), and 1280 (p6).

We trained the high-level feature-specific network and the low-level feature-specific network for 200 epochs with a batch size of 2 and a learning rate of 0.001. Each model was implemented using PyTorch and trained on an NVIDIA RTX 3090.
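As a hedged sketch of the pedestrian-only ground-truth extraction and the 7:3 split described above: MOT gt.txt rows have the form (frame, id, left, top, width, height, conf, class, visibility), and class 1 denotes a pedestrian in the MOT17/MOT20 annotations; the helper names are ours.

```python
# Extract pedestrian-only boxes from a MOT gt.txt file and split 7:3.
import csv, random

def load_pedestrian_boxes(gt_path: str):
    boxes = []
    with open(gt_path) as f:
        for row in csv.reader(f):
            frame, _, left, top, w, h, conf, cls = (float(v) for v in row[:8])
            if int(cls) == 1 and conf > 0:   # keep active pedestrian boxes only
                boxes.append((int(frame), left, top, left + w, top + h))
    return boxes

def split_train_val(items, ratio=0.7, seed=0):
    random.Random(seed).shuffle(items)
    cut = int(len(items) * ratio)
    return items[:cut], items[cut:]
```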
_4.2. Experimental Results_
Our experiments focused on the execution time and accuracy of the proposed framework. For the preprocessing microservice and postprocessing microservice, we found that processing did not take more than 30 ms per image, which satisfies the real-time requirement. In terms of execution time, we focused on the inference time of the deep learning model, because the overall latency depends most heavily on GPU-based model inference.
In addition, since the transfer speed of files and images may vary depending on the configuration of the microservice environment, we excluded the time consumed by file transfer when measuring the results.
Table 2 shows the results for the number of parameters used by the deep learning
models in the proposed framework, the number of layers pruned, and the processing speed.
Since HFN and LFN are processed in parallel, we adopted the value of the lower FPS of
the two networks. The results show that the inference network in the proposed framework
can process, on average, up to two frames per second faster. We were also able to remove two to three million parameters from each model.
**Table 2. Parameters and processing speeds of the deep learning models used in the experiment.**

| Model | Resolution | Params (B) | Layers | FPS |
|---|---|---|---|---|
| Scaled YOLOv4 (csp) | 640 | 5.25 | 235 | 17 |
| **HFN-csp (ours)** | 640 | **3.69** | **193** | **19** |
| **LFN-csp (ours)** | 640 | **3.13** | **191** | **19** |
| Scaled YOLOv4 (p5) | 896 | 7.03 | 331 | 15 |
| **HFN-p5 (ours)** | 896 | **5.13** | **281** | **16** |
| **LFN-p5 (ours)** | 896 | **5.66** | **309** | **16** |
| Scaled YOLOv4 (p6) | 1280 | 12.7 | 417 | 14 |
| **HFN-p6 (ours)** | 1280 | **9.5** | **367** | 14 |
| **LFN-p6 (ours)** | 1280 | **10.9** | **384** | 14 |
At p6, the FPS of our networks was the same as that of the original Scaled YOLOv4 model,
but there was a significant difference in accuracy. We used average precision (AP) as a
metric to measure the accuracy of object detection, and precision and recall metrics to
check how well the model learned. In the field of object detection, precision and recall are
calculated through the IoU value of similarity between the ground truth bounding box and
the predicted bounding box. The precision and recall metrics are shown below. The area
under the precision–recall curve, measured by sampling it at fixed intervals, is
called AP, and is used to represent the accuracy of a typical object detection model.
Precision = True Positive / (True Positive + False Positive)    (2)

Recall = True Positive / (True Positive + False Negative)    (3)
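For reference, AP as the area under the precision–recall curve can be computed with the common all-point interpolation; this generic sketch is our illustration, not the authors' evaluation code.

```python
# Generic AP computation: pad the PR curve, enforce a monotonically
# decreasing precision envelope, and integrate precision over recall.
import numpy as np

def average_precision(recalls: np.ndarray, precisions: np.ndarray) -> float:
    r = np.concatenate(([0.0], recalls, [1.0]))
    p = np.concatenate(([0.0], precisions, [0.0]))
    p = np.maximum.accumulate(p[::-1])[::-1]   # precision envelope
    # Sum precision * delta-recall wherever recall increases.
    idx = np.where(r[1:] != r[:-1])[0]
    return float(np.sum((r[idx + 1] - r[idx]) * p[idx + 1]))
```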
Table 3 shows the precision, recall, AP, and F1-score values of the conventional Scaled
YOLOv4 model and the proposed ensemble model as a function of the resolution of the
MOT20Det data. In the case of the ensembled csp model, the difference in accuracy from
the conventional model was not significant, but it showed an improvement in terms of FPS.
The general accuracy of the ensemble model was strengthened as the resolution increased,
and it had stronger detection performance for unfamiliar datasets despite having the same
FPS. As the resolution increased, the precision and recall ratios were also close to 1, meaning
that the training performance of the model was excellent.
**Table 3. Precision, recall, AP, and F1-score results for the MOT20Det training data.**

| Model | Metric | MOT20-01 | MOT20-02 | MOT20-03 | MOT20-05 |
|---|---|---|---|---|---|
| Scaled YOLOv4 (csp) | Precision | 0.63 | 0.63 | 0.55 | 0.29 |
| | Recall | **0.47** | **0.32** | **0.24** | **0.07** |
| | AP50 | **0.53** | 0.37 | 0.25 | 0.05 |
| | AP | **0.21** | 0.15 | 0.08 | 0.02 |
| | F1-score | **0.50** | 0.37 | **0.27** | **0.06** |
| **Ensembled (MDED-csp, ours)** | Precision | **0.85** | **0.86** | **0.78** | **0.52** |
| | Recall | 0.36 | 0.32 | 0.20 | 0.04 |
| | AP50 | 0.43 | **0.39** | 0.25 | 0.05 |
| | AP | 0.20 | **0.18** | 0.08 | 0.02 |
| | F1-score | 0.44 | **0.40** | 0.23 | 0.04 |
| Scaled YOLOv4 (p5) | Precision | 0.87 | 0.88 | **0.87** | **0.79** |
| | Recall | 0.40 | 0.31 | 0.27 | 0.07 |
| | AP50 | 0.49 | 0.38 | 0.35 | 0.11 |
| | AP | 0.23 | 0.19 | 0.13 | 0.04 |
| | F1-score | 0.49 | 0.40 | 0.35 | 0.08 |
| **Ensembled (MDED-p5, ours)** | Precision | **0.89** | **0.90** | 0.85 | 0.74 |
| | Recall | **0.46** | **0.36** | **0.30** | **0.14** |
| | AP50 | **0.54** | **0.44** | **0.37** | **0.20** |
| | AP | **0.28** | **0.22** | **0.14** | **0.07** |
| | F1-score | **0.54** | **0.45** | **0.36** | **0.13** |
| Scaled YOLOv4 (p6) | Precision | **0.82** | **0.82** | **0.75** | **0.60** |
| | Recall | 0.34 | 0.28 | 0.29 | 0.15 |
| | AP50 | 0.45 | 0.36 | 0.34 | 0.18 |
| | AP | 0.20 | 0.16 | 0.12 | 0.06 |
| | F1-score | 0.41 | 0.36 | 0.36 | 0.16 |
| **Ensembled (MDED-p6, ours)** | Precision | 0.75 | 0.75 | 0.72 | 0.53 |
| | Recall | **0.71** | **0.68** | **0.56** | **0.58** |
| | AP50 | **0.76** | **0.74** | **0.60** | **0.48** |
| | AP | **0.37** | **0.35** | **0.19** | **0.16** |
| | F1-score | **0.70** | **0.67** | **0.56** | **0.50** |
-----
_Sensors 2023, 23, 4712_ 13 of 16
Figure 5 shows the precision–recall curve of the MDED Framework deep learning
model proposed in this paper and the comparison Scaled YOLOv4 model. Both precision
and recall are evaluation indicators for which the closer to 1, the better the performance
of the model, but the two indicators have an inverse relationship. Therefore, the more
the graph is skewed to the upper right, the better the performance of the model can be
evaluated. In addition, AP (average precision), which means the area under the precision–
recall curve, is a common performance evaluation indicator for object detection. In Figure 5,
we only show the AP for the P6 (1280) model, which had a high percentage of performance
improvement. From Figure 5, we can see that overall, the MDED model is skewed to the
upper right. This provides visual confirmation that the models generally perform well,
even for data taken in different environments.
**Figure 5. Precision–recall curves for the proposed model (MDED) and the Scaled YOLOv4 model.**
Figure 6 shows the detection results of the Scaled YOLOv4 model and the proposed framework on the MOT20Det dataset. The detection performance is better than that of the traditional model, despite the differences in indoor and outdoor settings, background contrast, and the density of pedestrian objects from the MOT17Det data used for training.
**Figure 6. Comparison of MOT20Det detection results of the Scaled YOLOv4 model and the proposed model according to resolution.**
**5. Conclusions and Future Works**
The field of multi-object detection using deep learning models is still an active research area, and attempts to improve models operating in lightweight edge computing environments are ongoing. In this paper, we propose a pedestrian detection framework optimized for distributed processing in an edge computing environment that can show improved performance with images other than the dataset it was trained on. The framework consists of Docker-based containers and independent pipelines, called preprocessing microservices, inference microservices, and postprocessing microservices, that are orchestrated through Kubernetes. This makes it easy to maintain the inference environment even as the edge computing environment changes; it also enables flexible scaling out and scaling down according to the quantity of resources available. By providing a web-based service that is familiar to users, we have created an environment where users can easily upload the videos they want to analyze and check the results.
Compared with the existing deep learning model (Scaled YOLOv4), the deep learning model improved by the proposed framework showed good performance in terms of accuracy and execution time. For an image with a resolution of 640, the performance was 2 FPS faster than the existing model; meanwhile, for an image with a resolution of 1280, the accuracy was up to 0.18 AP higher than the existing model. This shows that the proposed method can be used to obtain improved detection results in quasi-real time, even for unfamiliar data that the model has not been trained on.
As part of our future research, we plan to assess the general performance of our model by utilizing the MOT17Det test and MOT20Det test datasets, which we were unable to use in this study due to authentication issues. This will allow us to compare our model's accuracy with that of other models. Moreover, we intend to extend our microservice architecture to cover the entire training process, beyond the scope of the current paper, which only covers the inference process. Specifically, we will incorporate a parameter server to enable deep learning model training in cloud-edge environments. Additionally, we will investigate and develop a framework to address the challenges of federated learning.
**Author Contributions:** S.J. proposed the frameworks, devised the program, and conducted the experiments; J.S. helped with programming and figures; J.C., H.C. and D.K. contributed writing; S.K. contributed to funding acquisition and revised the manuscript. All authors have read and agreed to the published version of the manuscript.
**Funding:** This work was supported by an Institute of Information and Communications Technology Planning and Evaluation (IITP) grant funded by the Korean government (MSIT) (No. 2020-0-00116, Development of Core Technology for Ultra Low Latency Intelligent Cloud Edge SW Platform to Guarantee Service Response of less than 10 ms).
**Institutional Review Board Statement: Not applicable.**
**Informed Consent Statement: Not applicable.**
**Data Availability Statement: The MOT17Det and MOT20Det datasets were utilized, which are**
[available on the MOTChallenge site (https://motchallenge.net, accessed on 10 May 2023).](https://motchallenge.net)
**Conflicts of Interest: The authors declare no conflict of interest.**
**References**
1. Nguyen, A.; Do, T.; Tran, M.; Nguyen, B.; Duong, C.; Phan, T.; Tran, Q. Deep federated learning for autonomous driving. In
Proceedings of the 2022 IEEE Intelligent Vehicles Symposium (IV), Aachen, Germany, 4–9 June 2022; pp. 1824–1830.
2. Liu, L.; Zhao, M.; Yu, M.; Jan, M.; Lan, D.; Taherkordi, A. Mobility-aware multi-hop task offloading for autonomous driving in
[vehicular edge computing and networks. IEEE Trans. Intell. Transp. Syst. 2022, 24, 2169–2182. [CrossRef]](https://doi.org/10.1109/TITS.2022.3142566)
3. Ullah, W.; Hussain, T.; Khan, Z.A.; Haroon, U.; Baik, S. Intelligent dual stream CNN and echo state network for anomaly detection.
_[Knowl. Based Syst. 2022, 253, 109456. [CrossRef]](https://doi.org/10.1016/j.knosys.2022.109456)_
4. Tsai, C.; Wu, T.; Lai, S. Multi-scale patch-based representation learning for image anomaly detection and segmentation. In
Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 3–8 January 2022;
pp. 3992–4000.
5. Patrikar, D.; Parate, M. Anomaly detection using edge computing in video surveillance system. Int. J. Multimed. Inf. Retr. 2022, 11,
[85–110. [CrossRef] [PubMed]](https://doi.org/10.1007/s13735-022-00227-8)
6. Specker, A.; Moritz, L.; Cormier, M.; Beyerer, J. Fast and lightweight online person search for large-scale surveillance systems. In
Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 3–8 January 2022;
pp. 570–580.
7. Gupta, H.; Verma, O. Monitoring and surveillance of urban road traffic using low altitude drone images: A deep learning
[approach. Multimed. Tools Appl. 2022, 81, 19683–19703. [CrossRef]](https://doi.org/10.1007/s11042-021-11146-x)
8. Ajakwe, S.; Ihekoronye, V.; Kim, D.; Lee, J. DRONET: Multi-Tasking Framework for Real-Time Industrial Facility Aerial
[Surveillance and Safety. Drones 2022, 6, 46. [CrossRef]](https://doi.org/10.3390/drones6020046)
9. Cruz, M.; Mafra, S.; Teixeira, E.; Figueiredo, F. Smart Strawberry Farming Using Edge Computing and IoT. Sensors 2022, 22, 5866.
[[CrossRef]](https://doi.org/10.3390/s22155866)
10. Song, S.; Liu, T.; Wang, H.; Hasi, B.; Yuan, C.; Gao, F.; Shi, H. Using pruning-based YOLOv3 deep learning algorithm for accurate
[detection of sheep face. Animals 2022, 12, 1465. [CrossRef]](https://doi.org/10.3390/ani12111465)
11. Tzenetopoulos, A.; Masouros, D.; Koliogeorgi, K.; Xydis, S.; Soudris, D.; Chazapis, A.; Acquaviva, J. EVOLVE: Towards converging
big-data, high-performance and cloud-computing worlds. In Proceedings of the 2022 Design, Automation & Test in Europe
Conference & Exhibition, Antwerp, Belgium, 14–23 March 2022; pp. 975–980.
12. Niu, C.; Wang, L. Big data-driven scheduling optimization algorithm for Cyber–Physical Systems based on a cloud platform.
_[Comput. Commun. 2022, 181, 173–181. [CrossRef]](https://doi.org/10.1016/j.comcom.2021.10.020)_
13. Wan, S.; Ding, S.; Chen, C. Edge computing enabled video segmentation for real-time traffic monitoring in internet of vehicles.
_[Pattern Recognit. 2022, 121, 108146. [CrossRef]](https://doi.org/10.1016/j.patcog.2021.108146)_
14. Zhou, S.; Wei, C.; Song, C.; Pan, X.; Chang, W.; Yang, L. Short-term traffic flow prediction of the smart city using 5G internet of
[vehicles based on edge computing. IEEE Trans. Intell. Transp. Syst. 2022, 24, 2229–2238. [CrossRef]](https://doi.org/10.1109/TITS.2022.3147845)
15. Deng, J.; Dong, W.; Socher, R.; Li, L.; Li, K.; Fei-Fei, L. Imagenet: A large-scale hierarchical image database. In Proceedings of the
2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 248–255.
16. Wang, C.; Bochkovskiy, A.; Liao, H. Scaled-yolov4: Scaling cross stage partial network. In Proceedings of the IEEE/CVF
Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 13029–13038.
17. Sassu, A.; Saenz-Cogollo, J.; Agelli, M. Deep-Framework: A Distributed, Scalable, and Edge-Oriented Framework for Real-Time
[Analysis of Video Streams. Sensors 2021, 21, 4045. [CrossRef]](https://doi.org/10.3390/s21124045)
18. Kul, S.; Tashiev, I.; ¸Senta¸s, A.; Sayar, A. Event-based microservices with Apache Kafka streams: A real-time vehicle detection
[system based on type, color, and speed attributes. IEEE Access 2021, 9, 83137–83148. [CrossRef]](https://doi.org/10.1109/ACCESS.2021.3085736)
19. [Apache Kafka. Available online: https://kafka.apache.org/ (accessed on 6 June 2019).](https://kafka.apache.org/)
20. Houmani, Z.; Balouek-Thomert, D.; Caron, E.; Parashar, M. Enabling microservices management for Deep Learning applications
across the Edge-Cloud Continuum. In Proceedings of the 2021 IEEE 33rd International Symposium on Computer Architecture
and High Performance Computing (SBAC-PAD), Belo Horizonte, Brazil, 26–29 October 2021; pp. 137–146.
21. Li, J.; Zheng, Z.; Li, Y.; Ma, R.; Xia, S. Multitask deep learning for edge intelligence video surveillance system. In Proceedings
of the 2020 IEEE 18th International Conference on Industrial Informatics (INDIN), Warwick, UK, 20–23 July 2020; Volume 1,
pp. 579–584.
22. Xu, Z.; Li, J.; Zhang, M. A surveillance video real-time analysis system based on edge-cloud and fl-yolo cooperation in coal mine.
_[IEEE Access 2021, 9, 68482–68497. [CrossRef]](https://doi.org/10.1109/ACCESS.2021.3077499)_
23. Chen, C.; Liu, C. Person re-identification microservice over artificial intelligence internet of things edge computing gateway.
_[Electronics 2021, 10, 2264. [CrossRef]](https://doi.org/10.3390/electronics10182264)_
24. Wang, H.; Wu, X.; Huang, Z.; Xing, E. High-frequency component helps explain the generalization of convolutional neural
networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25
June 2021; pp. 8684–8694.
25. Lin, T.; Dollár, P.; Girshick, R.; He, K.; Hariharan, B.; Belongie, S. Feature pyramid networks for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 2117–2125.
26. Liu, S.; Qi, L.; Qin, H.; Shi, J.; Jia, J. Path aggregation network for instance segmentation. In Proceedings of the IEEE Conference
on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 8759–8768.
27. Ghiasi, G.; Lin, T.; Le, Q. NAS-FPN: Learning scalable feature pyramid architecture for object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 16–17 June 2019; pp. 7036–7045.
28. Tan, M.; Pang, R.; Le, Q. Efficientdet: Scalable and efficient object detection. In Proceedings of the IEEE/CVF Conference on
Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 10781–10790.
29. He, K.; Zhang, X.; Ren, S.; Sun, J. Spatial pyramid pooling in deep convolutional networks for visual recognition. IEEE Trans.
_[Pattern Anal. Mach. Intell. 2015, 37, 1904–1916. [CrossRef]](https://doi.org/10.1109/TPAMI.2015.2389824)_
30. Chen, L.; Papandreou, G.; Kokkinos, I.; Murphy, K.; Yuille, A. Deeplab: Semantic image segmentation with deep convolutional
[nets, atrous convolution, and fully connected crfs. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 40, 834–848. [CrossRef]](https://doi.org/10.1109/TPAMI.2017.2699184)
31. Woo, S.; Park, J.; Lee, J.; Kweon, S. Cbam: Convolutional block attention module. In Proceedings of the European Conference on
Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 3–19.
32. Tan, M.; Le, Q. Efficientnet: Rethinking model scaling for convolutional neural networks. In Proceedings of the International
conference on machine learning, Long Beach, CA, USA, 10–15 June 2019; pp. 6105–6114.
33. Long, X.; Deng, K.; Wang, G.; Zhang, Y.; Dang, Q.; Gao, Y.; Wen, S. PP-YOLO: An effective and efficient implementation of object
detector. arXiv 2020, arXiv:2007.12099.
34. Redmon, J.; Farhadi, A. Yolov3: An incremental improvement. arXiv 2018, arXiv:1804.02767.
35. Merkel, D. Docker: Lightweight linux containers for consistent development and deployment. Linux J. 2014, 239, 2.
36. [Google Container Engine. Available online: http://Kubernetes.io/ (accessed on 23 February 2023).](http://Kubernetes.io/)
37. Grinberg, M. Flask Web Development: Developing Web Applications with Python; O’Reilly Media, Inc.: Sebastopol, CA, USA, 2018.
38. Seo, J.; Kim, S. Robust pedestrian detection with high-level and low-level specialised network ensemble techniques. In Proceedings
of the Image Processing and Image Understanding, Jeju, Republic of Korea, 8–10 February 2023.
39. Milan, A.; Leal-Taixé, L.; Reid, I.; Roth, S.; Schindler, K. MOT16: A benchmark for multi-object tracking. _arXiv 2016,_
arXiv:1603.00831.
40. Dendorfer, P.; Rezatofighi, H.; Milan, A.; Shi, J.; Cremers, D.; Reid, I.; Roth, S.; Schindler, K.; Leal-Taixé, L. Mot20: A benchmark
for multi object tracking in crowded scenes. arXiv 2020, arXiv:2003.09003.
41. [Geiger, A.; Lenz, P.; Stiller, C.; Urtasun, R. The KITTI Vision Benchmark Suite. Available online: http://www.cvlibs.net/datasets/](http://www.cvlibs.net/datasets/kitti)
[kitti (accessed on 2 May 2015).](http://www.cvlibs.net/datasets/kitti)
42. Shao, S.; Zhao, Z.; Li, B.; Xiao, T.; Yu, G.; Zhang, X.; Sun, J. Crowdhuman: A benchmark for detecting human in a crowd. arXiv
**2018, arXiv:1805.00123.**
**Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual**
author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to
people or property resulting from any ideas, methods, instructions or products referred to in the content.
# Vici syndrome: a review

### Susan Byrne[1], Carlo Dionisi-Vici[2], Luke Smith[3], Mathias Gautel[3] and Heinz Jungbluth[1,3,4*]

**Abstract:** Vici syndrome [OMIM242840] is a severe, recessively inherited congenital disorder characterized by the principal features of callosal agenesis, cataracts, oculocutaneous hypopigmentation, cardiomyopathy, and a combined immunodeficiency. Profound developmental delay, progressive failure to thrive and acquired microcephaly are almost universal, suggesting an evolving (neuro)degenerative component. In most patients there is additional variable multisystem involvement that may affect virtually any organ system, including lungs, thyroid, liver and kidneys. A skeletal myopathy is consistently associated, and characterized by marked fibre type disproportion, increase in internal nuclei, numerous vacuoles, abnormal mitochondria and glycogen storage. Life expectancy is markedly reduced. Vici syndrome is due to recessive mutations in EPG5 on chromosome 18q12.3, encoding ectopic P granules protein 5 (EPG5), a key autophagy regulator in higher organisms. Autophagy is a fundamental cellular degradative pathway conserved throughout evolution with important roles in the removal of defective proteins and organelles, defence against infections and adaptation to changing metabolic demands. Almost 40 EPG5 mutations have been identified to date, most of them truncating and private to individual families. The differential diagnosis of Vici syndrome includes a number of syndromes with overlapping clinical features, neurological and metabolic disorders with shared CNS abnormalities (in particular callosal agenesis), and primary neuromuscular disorders with a similar muscle biopsy appearance. Vici syndrome is also the most typical example of a novel group of inherited neurometabolic conditions, congenital disorders of autophagy. Management is currently largely supportive and symptomatic but better understanding of the underlying autophagy defect will hopefully inform the development of targeted therapies in future.
Disease name
Vici syndrome; Dionisi-Vici-Sabetta-Gambarara syndrome; Immunodeficiency with cleft lip/palate, cataract,
hypopigmentation and absent corpus callosum.
Definition
Vici syndrome [OMIM242840, ORPHA1493] is a severe
congenital multisystem disorder characterized by the principal features of agenesis of the corpus callosum, cataracts,
oculocutaneous hypopigmentation, cardiomyopathy, a
combined immunodeficiency and additional, more variable multisystem involvement. The condition is due to recessive mutations in the EPG5 gene on chromosome 18q.
[* Correspondence: [email protected]](mailto:[email protected])
1Department of Paediatric Neurology, Neuromuscular Service, Evelina’s
Children Hospital, Guy’s & St. Thomas’ Hospital NHS Foundation Trust,
London, UK
3Randall Division of Cell and Molecular Biophysics, Muscle Signalling Section,
King’s College, London, UK
Full list of author information is available at the end of the article
Epidemiology
The incidence of Vici syndrome is unknown. Since the
original description of the disorder by Dionisi-Vici and
colleagues in 1988 [1], an exponentially increasing number
of patients has been reported, with around 50 genetically
confirmed cases published to date [1–14]. Vici syndrome
is likely to be rare but probably underdiagnosed.
Clinical description
Vici syndrome is one of the most extensive inherited
human multisystem disorders reported to date, presenting invariably in the first months of life. Apart
from the 5 principal diagnostic findings (callosal agenesis, cataracts, cardiomyopathy, hypopigmentation and combined immunodeficiency), a wide range of variably
present additional features has been reported, suggesting that virtually any organ system can be involved [4].
Three additional findings (profound developmental
delay, acquired microcephaly and marked failure to
thrive) have recently emerged that, although nonspecific, are as consistently associated as the 5 main
diagnostic features and highly supportive of the diagnosis [14]. The common occurrence of structural congenital abnormalities and acquired organ dysfunction (for
example, congenital cardiac defects and cardiomyopathy
later in life) is not infrequently observed in individual
patients. Typical findings in Vici syndrome are outlined in
detail below and summarized in Table 1. The characteristic features of Vici syndrome are illustrated in Fig. 1.
CNS
Development in Vici syndrome is profoundly delayed: affected children may acquire a social smile, some degree of head control, and the ability to roll over; however, there have been no reports of children sitting independently or acquiring speech. Where rolling has been attained, this skill may subsequently be lost. Almost two thirds of patients have seizures that are often difficult to control. Although head circumference is usually normal at birth, rapidly progressive microcephaly evolving within the first year of life suggests a neurodegenerative component superimposed on the principal neurodevelopmental defect.
In addition to agenesis of the corpus callosum, one of the five principal diagnostic features of Vici syndrome, other consistent radiological abnormalities include pontine hypoplasia, reduced opercularisation of the Sylvian fissures, delayed myelination and a general reduction in white matter bulk [14]. Cortical malformations and cerebellar abnormalities have been observed but are much less common. In a few patients, distinct circumscribed signal abnormalities (decrease in T2 with or without associated increase in T1 signal) have been noted within the thalami, similar to what has been described in patients with lysosomal storage disorders [15], also corresponding to some clinical overlap with these conditions.

Table 1 Clinical features of Vici syndrome

| | Feature | Frequency |
|---|---|---|
| Principal diagnostic features | *Absent corpus callosum* | ++++ |
| | Profound developmental delay | ++++ |
| | Failure to thrive | ++++ |
| | *Hypopigmentation* | ++++ |
| | *Immune problems* | ++++ |
| | Progressive microcephaly | +++ |
| | *Cardiomyopathy* | +++ |
| | *Cataracts* | +++ |
| Other features | Presentation in neonatal period | +++ |
| | Myopathy | +++ |
| | Seizures | ++ |
| | Absent reflexes (probable neuropathy) | ++ |
| | Thymic aplasia | + |
| | Sensorineural deafness | + |
| | Optic atrophy | + |
| | Renal tubular acidosis | + |
| | Cleft lip/palate | + |
| | Coarse facial features | + |
| | Hepatomegaly | + |

The 5 features initially considered to be diagnostic are indicated in italics. ++++ = present in almost all children, +++ = present in most children, ++ = present in more than half of children, + = present in some children
Muscle
An associated skeletal muscle myopathy, already suggested by the presence of often profound hypotonia
and variable hyperCKaemia in early case reports, was
documented in detail by McClelland and colleagues in
2010 [7] and subsequently confirmed in other reports
[2, 12]. Clinically, individuals with Vici syndrome are
often profoundly hypotonic and weak, probably reflecting
the progressive nature of the myopathy
and/or ongoing neurodegeneration. Histopathologically,
the myopathy associated with Vici syndrome is characterized by marked variability in fibre size, increase in internal
and centralized nuclei, type 1 fibre hypotrophy with normally sized type 2 fibres (occasionally fulfilling the criteria
for fibre type disproportion), increased glycogen storage
and variable vacuoles on light microscopy [2, 7, 14]. Additional changes on electron microscopy may include
abnormalities of mitochondrial structure and arrangement
[4, 14] and, less frequently, sarcomeric disorganization.
On the histopathological level there is considerable overlap with the congenital myopathies, in particular Congenital Fibre Type Disproportion (CFTD) and Centronuclear
Myopathy (CNM), primary vacuolar myopathies, glycogen
storage disorders and mitochondrial myopathies.
Nerves
Peripheral nerve involvement with almost complete absence of myelinated axons has been reported in only one
case to date [14]; however, an associated neuropathy
may have been overlooked in other patients because of
the overwhelming nature of other multisystem features.
The majority of children have absent deep tendon reflexes, but these may be brisk in around a third.
Skin
Marked oculocutaneous hypopigmentation [16] is one of
the cardinal features of Vici syndrome and has been
noted in almost all cases reported to date. Affected individuals are, however, not typically complete albinos and
hypopigmentation is always relative to the familial and
ethnic background (Fig. 1). Children with Vici syndrome
have generally pale skin with light (often very blonde in
those of Caucasian origin) hair, rather than discrete
hypopigmented patches. An intermittent, extensive
maculopapular rash almost resembling Stevens-Johnson
syndrome has been reported in few children [14].
Eyes
Bilateral cataracts are one of the “classical” diagnostic
features of Vici syndrome, however, in a recent series of
50 patients these were only documented in three-quarters of affected individuals [14], probably reflecting
evolution over time. Ocular features of Vici syndrome
have been reviewed in detail by Filloux and colleagues
[16] and include optic nerve hypoplasia, visual impairment, nystagmus and fundus hypopigmentation. Although individuals with Vici syndrome are usually only
relatively hypopigmented, ocular features, in particular
evidence of optic pathways misrouting on visually
evoked potential (VEP) testing, and of a poorly defined
and lesser depressed fovea on optical coherence tomography, are similar to those in individuals with typical albinism [16].
Hearing
Sensorineural hearing loss was recognized in an isolated
case in 2010 [7] and has been subsequently reported in
other cases [6, 10] of Vici syndrome with or without
confirmed EPG5 mutations. Sensorineural hearing loss
is a feature that may be easily overlooked in Vici syndrome due to profound developmental delay and overwhelming multisystem involvement, and should be
actively investigated for.
Heart
Cardiac involvement is present in around 90 % of patients with Vici syndrome and in around 80 % of cases a
cardiomyopathy, one of the 5 main diagnostic features,
has been documented. Minor congenital heart defects
comprising persistent foramen ovale and atrial septal defects have been reported in around 10 % of patients. The
associated cardiomyopathy usually develops early in life,
although onset much later in childhood has been observed. Intermittent deterioration of cardiac function
during intercurrent illness has also been noted (Patient
12.1 in [4]). Both hypertrophic and dilated forms of cardiomyopathy have been reported, always with left
ventricular emphasis and occasionally evolving from one form into the other over time in the same patient. In two unrelated
patients where post mortem examination was performed
[8, 11], changes in the heart also showed left ventricular
emphasis, with variable degrees of interstitial fibrosis
and cardiomyocytes containing vacuoles and membranebound cytoplasmic inclusions, possibly glycogen. In
keeping with the underlying autophagy defect, cardiomyocytes showed increased staining for autophagy
markers LC3 and p62 on immunohistochemistry [8].
Immune system
A combined immunodeficiency is one of the diagnostic
hallmarks of Vici syndrome but is highly variable, mainly
depending on age and ranging from near normal to severely compromised immunity (for review, [6]). The associated immune defect manifests as recurrent, commonly
respiratory, infections from early in life, also including
mucocutaneous candidiasis, sepsis and, less frequently,
urinary tract infections, gastroenteritis, bacterial conjunctivitis, and perineal abscesses. Due to the severely reduced
life expectancy, immune function has been assessed formally only in a few patients [6]. Abnormal findings reported to date include lymphopenia with variable T cell
subset defects, neutropenia, leucopenia, hypogammaglobulinaemia, lack of response to recall antigens and a defect
of memory B cells with lack of specific antibody response
to certain immunizations such as those with tetanus and
pneumococcal vaccine. Overall, these findings suggest
prominent impairment of the humoral immune response
with a milder defect of the T cell compartment, although
further prospective studies will be required to delineate
the immunological phenotype further. Immunological features of Vici syndrome, recommended immunological investigations and potential treatment approaches have been
outlined in detail by Finocchi and colleagues [6].
Thymus
Complete thymus aplasia or hypoplasia has been reported in around one fifth of patients [4, 14]. T-cell dysfunction is part of the combined immunodeficiency
observed in Vici syndrome, although usually less prominent than B-cell dysfunction [6].
Lungs
Pulmonary hypoplasia has been reported in one patient
with Vici syndrome [2]. Pulmonary involvement is common
throughout life, due to recurrent respiratory infections secondary to the associated combined immunodeficiency.
Thyroid
Thyroid agenesis and thyroid dysfunction have both been
reported in rare patients with Vici syndrome [4, 14].
Liver
Hepatomegaly with or without associated liver dysfunction has been reported in around 10 % of patients with
Vici syndrome [4, 14] and is probably a reflection of increased glycogen storage, also reported on post mortem
in a few cases.
Kidneys
Renal involvement comprising hydronephrosis, renal
dysfunction and/or signs of renal tubular acidosis with
associated electrolyte imbalances, in particular marked
hypokalaemia, has been reported in around 15 % of
cases [4, 9, 14].
Blood
Some patients with Vici syndrome have been noted to
develop profound anaemia [4, 14]; it is currently uncertain if this is a secondary feature (for example related to
recurrent severe infections) or, alternatively, reflects additional primary involvement of red cell lines.
Other features
Mildly dysmorphic, coarse facial features with full lips
and macroglossia resembling those seen in (lysosomal)
storage disorders have been noted in some patients with
Vici syndrome [4, 14] (Fig. 1). Cleft lip and palate were a
feature in Dionisi-Vici’s original siblings [1] but have
subsequently been seen in only a few families. Other
minor dysmorphic features, such as 2nd and 3rd toe syndactyly, were a feature in two reported families [4, 14]. A
long philtrum has been described in one family [17].
Marked failure to thrive evolving over time has been recently recognized as an almost universal feature [14].
One recent case report also suggests severe sleep abnormalities that may have to be considered in Vici syndrome [18].
Aetiology
Vici syndrome is due to recessive mutations in EPG5 on
chromosome 18q12.3, organized in 44 exons and encoding ectopic P granules protein 5 (EPG5), a protein of
2579 amino acids. EPG5 (originally known as KIAA1632)
was initially identified amongst a group of genes found
to be mutated in breast cancer tissue [19] before its implication in Vici syndrome in 2013 [4].
To date, around 40 EPG5 mutations have been identified in families with Vici syndrome, distributed throughout the entire EPG5 coding sequence without clear
genotype-phenotype correlations [13, 14]. Most EPG5
mutations associated with Vici syndrome are truncating
with only a few missense mutations on record. The large
majority of EPG5 mutations are private to individual
families, with only three recurrent mutations identified to
date: p.Met2242CysfsX5 in an Italian and a Maltese
family; p.Arg417X, identified in the homozygous state in
a patient from the Middle East and in the heterozygous
state in a Caucasian child from the United States; and
p.Gln336Arg, identified in the homozygous (n = 3) and in
the heterozygous (n = 1) state in four unrelated patients
with definite or possible Ashkenazi ancestry [14]. Failure
to identify any EPG5 mutation (or identification of only one of the two expected alleles) in a small number of cases with highly
suggestive diagnostic features indicates the possibility of
large copy number variations not detectable on Sanger sequencing, or an altogether different genetic background.
The EPG5 protein has a key role as a regulator of autophagy in multicellular organisms, initially characterized in C. elegans [20] and subsequently confirmed in
EPG5-mutated humans with Vici syndrome [4]. Autophagy is a fundamental cellular degradative pathway conserved throughout evolution with important roles in the
removal of defective proteins and organelles, defence
against infections and adaptation to changing metabolic
demands (for review, [21–23]). The autophagy pathway
involves several tightly regulated steps, evolving from the
initial formation of isolation membranes (or phagophores)
to autophagosomes, whose fusion with lysosomes results
in the final structures of degradation, autolysosomes
(Fig. 2). The ultimate aim of the autophagy pathway is the
effective delivery of an intracellular structure targeted for
removal to the lysosome, and its eventual intralysosomal
degradation. Studies in EPG5-mutated fibroblasts from
humans with Vici syndrome suggest that EPG5 deficiency
results in failure of autophagosome-lysosome fusion [4]
and, ultimately, impaired cargo delivery to the lysosome.
It is currently uncertain if impaired autophagy is the only
consequence of EPG5 deficiency, or only the most important expression of a more generalized vesicular trafficking
defect in Vici syndrome. Moreover, it remains unresolved
if all manifestations of EPG5 deficiency are a direct consequence of the primary autophagy defect, or of the secondary effects of defective autophagy such as reduced
mitochondrial quality control and/or accumulation of defective proteins.
Autophagy is physiologically enhanced in neurons and
muscle, probably explaining the prominent CNS and
neuromuscular involvement in patients with Vici syndrome and other conditions with primary autophagy defects. The phenotype of epg5−/− knockout mice recapitulates
the autophagy defect and the skeletal muscle myopathy
seen in humans with Vici syndrome [24] and in addition
exhibits clinical and pathological neurodegenerative
features, in particular progressive motor deficit, muscle
atrophy and damage to cortical layer 5 and spinal motor
neurones, resembling human amyotrophic lateral sclerosis (ALS). A recently generated conditional Drosophila
knockout also shows a marked autophagy defect and evidence of progressive neurodegeneration in retinal photoneurons [14]. Taken together, these findings indicate Vici
syndrome as a paradigm of a disorder linking neurodevelopment and neurodegeneration in the same pathway.
Following the genetic resolution of Vici syndrome in
2013, a number of disorders associated with defects in
primary autophagy regulators have now been identified,
for example, Static Encephalopathy in childhood with
NeuroDegeneration in Adulthood (SENDA) due to X-linked mutations in WDR45 and early-onset
syndromic ataxia due to recessive mutations in SNX14,
suggesting congenital disorders of autophagy as a novel
group of neurometabolic disorders with recognizable
features, mechanistically linked in the same pathway
(reviewed in [25]).
Apart from the heart, the role of normally functioning
autophagy in other organ systems involved in Vici syndrome has been much less explored but poses interesting questions for future research, regarding the normal
biology of organ development but also organ-specific
disease.
Diagnosis
The diagnosis of Vici syndrome is based on the presence
of suggestive clinical features and confirmation of
recessive EPG5 mutations on diagnostic genetic testing.
Based on binary logistic regression analysis, the presence
of the eight key features as outlined above (absent corpus callosum, cataracts, hypopigmentation, cardiomyopathy, immune dysfunction, profound developmental
delay, progressive microcephaly, failure to thrive) has a
specificity of 97 %, and a sensitivity of 89 % for a positive
EPG5 genetic test [14]. EPG5 testing is now offered as a
diagnostic service [4]. Although the vast majority of
EPG5 mutations are unequivocally pathogenic, EPG5
variants of uncertain significance may occasionally require
functional autophagy studies in fibroblast cultures that
are currently only available on a research basis. In
addition, the introduction of complementary diagnostic genetic strategies (including high-resolution CGH arrays,
targeted MLPA testing, and RNA studies) to investigate the
possibility of copy number variations within the large
EPG5 gene is indicated in patients with suggestive
diagnostic features where only one or no clearly pathogenic EPG5 variants have been identified on Sanger
sequencing.
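For clarity, the two screening statistics quoted above can be restated with their standard definitions (TP, FN, TN and FP denote the true/false positives/negatives of the eight-feature criterion with respect to a positive EPG5 genetic test):

$$
\text{sensitivity} = \frac{TP}{TP + FN} = 0.89, \qquad \text{specificity} = \frac{TN}{TN + FP} = 0.97
$$

In other words, 89 % of individuals with confirmed EPG5 mutations present with all eight key features, while 97 % of individuals without EPG5 mutations do not show the full combination.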
Other useful diagnostic investigations to document the
extent of multisystem involvement (summarized in
Table 2) include an MRI of the brain (in particular to
document the callosal agenesis, one of the key diagnostic
features), EEG, ophthalmology assessment including slit
lamp examination and VEPs, chest x-ray, cardiac assessment including cardiac ultrasound, an abdominal ultrasound to document the extent of organ involvement,
laboratory investigations assessing immune, thyroid, liver
and renal function (see also paragraph on management).
A muscle biopsy is not strictly needed to establish the
diagnosis; however, in cases where a biopsy was performed
before Vici syndrome was suspected, the combination of consistent histopathological features outlined
above may be supportive of EPG5 involvement.
Differential diagnosis
Although in the presence of all principal features the
clinical diagnosis of Vici syndrome should be straightforward and prompt EPG5 testing, it is important to bear
in mind that some of these features (in particular cataracts, cardiomyopathy and immunodeficiency) may only
evolve over time and are not necessarily present from
birth. The differential diagnosis of Vici syndrome includes a number of syndromes with overlapping clinical
features, neurological and metabolic disorders with similar CNS abnormalities (in particular callosal agenesis)
and primary neuromuscular disorders with a similar
muscle biopsy appearance.
Amongst the syndromic conditions that may mimic
Vici syndrome (Table 3), Marinesco-Sjoegren syndrome
(MSS) and related disorders share cataracts and a skeletal muscle myopathy with or without sensorineural
deafness; however, failure to thrive and acquired microcephaly are uncommon and the degree of developmental delay is also usually less severe [26].
Hypopigmentation and immune defects are the typical
Table 2 Recommended investigations for the diagnosis and surveillance of patients with Vici syndrome

| Investigation | Presentation/Diagnosis [expected key findings] | Surveillance |
|---|---|---|
| EPG5 testing | Baseline investigation [homozygous/compound heterozygous mutation] | Not required |
| MRI brain | Baseline investigation [congenital absence of corpus callosum, along with other described features][a] | Not routinely required |
| Ophthalmology assessment | Baseline investigation [cataracts, ocular albinism][b] | Required surveillance for cataracts |
| Cardiac ultrasound | Baseline investigation [structural defects and/or cardiomyopathy][a] | Required surveillance for progressive cardiomyopathy |
| Chest x-ray | Baseline investigation [thymus aplasia/hypoplasia] | If clinically indicated |
| Immune function tests | Baseline investigation[c] | Required surveillance for progressive immunodeficiency |
| Renal function tests | Baseline investigation | If clinically indicated |
| Thyroid function tests | Baseline investigation | If clinically indicated |
| Liver function tests | Baseline investigation | If clinically indicated |
| Amino acids assessment | Baseline investigation | If clinically indicated |
| Feeding study | Often clinically indicated [most children require percutaneous feeding] | If clinically indicated |
| EEG | If clinically indicated | If clinically indicated |
| Sleep study | If clinically indicated | If clinically indicated |
| Muscle biopsy | No longer indicated if genetic diagnosis has been established[a] | No longer indicated if genetic diagnosis has been established |

For more detail of recommended investigations and/or expected findings see [a] = [14], [b] = [16], [c] = [6]
Table 3 Syndromes showing phenotypical overlap with Vici syndrome (selection)

| Condition | Gene | CNS | Cataract | Cardiomyopathy | Myopathy | Neuropathy | Immunodeficiency | Hypopigmentation |
|---|---|---|---|---|---|---|---|---|
| Vici syndrome | EPG5 | + | + | + | + | + | + | + |
| MSS | SIL1 | + | + | − | + | +[a] | − | − |
| CCFDN | CTDP1 | + | + | − | + | + | − | − |
| Nathalie syndrome | ? | + | + | + | + | − | − | − |
| Griscelli syndrome 1 | MYO5A | + | − | − | ? | − | − | + |
| Griscelli syndrome 2 | RAB27A | + | − | − | ? | − | + | + |
| Griscelli syndrome 3 | MLPH | − | − | − | ? | − | − | + |
| Elejalde syndrome | RAB27A | + | − | − | ? | − | − | + |
| CHS | LYST | + | − | − | + | (+) | + | + |
| HPS 2 | AP3B1 | + | − | − | ? | + | − | + |
| Cohen syndrome | VPS13B | + | − | (+) | − | − | + | − |
| Danon disease | LAMP2 | + | − | + | + | + | − | − |
| MEDNIK | AP1S1 | + | (+) | − | − | + | − | − |
| CEDNIK | SNAP29 | + | + | − | − | + | − | − |

MSS, Marinesco-Sjoegren syndrome; CCFDN, congenital cataracts, facial dysmorphism and neuropathy syndrome; CHS, Chediak-Higashi syndrome; HPS2, Hermansky-Pudlak syndrome type 2; MEDNIK, mental retardation, enteropathy, deafness, peripheral neuropathy, ichthyosis and keratoderma syndrome. + = feature present; − = feature absent; ? = not specifically investigated; (+) = feature controversial or not sufficiently documented; [a] = neuronopathy
features of Chédiak-Higashi syndrome (CHS) and related primary immunodeficiency syndromes. Amongst
the latter group, Griscelli syndrome (GS) most closely
resembles Vici syndrome and is further subdivided into
three clinically and genetically distinct groups (for review,
[27]). Of these, only GS type 2, due to recessive mutations in RAB27A, features prominent immunological
involvement and hemophagocytic lymphohistiocytosis
(HLH), whereas GS type 1 (due to recessive MYO5A mutations),
the allelic Elejalde syndrome (ES) and GS type 3 (due to recessive MLPH mutations) feature pigmentary abnormalities, with or without primary neurological features,
but not typically immunodeficiency. Interestingly, at least in a subset of patients, MSS, CHS, GS and
ES are also neurodevelopmental disorders that, in common
with Vici syndrome, may develop clinical features of early-onset neurodegeneration [28–32].
On the neuroradiological level, the differential diagnosis of callosal agenesis is wide and in relation to Vici
syndrome has been summarized by McClelland et al. [7].
Thalamic changes in some patients with Vici syndrome
may resemble those seen in patients with primary (lysosomal) storage disorders [15], a group of conditions also
featuring some clinical overlap.
On the histopathological level, muscle biopsy findings in
Vici syndrome may mimic a number of primary neuromuscular disorders, in particular vacuolar myopathies [33]
and the centronuclear myopathies [34], conditions that,
interestingly, have been linked with primary and secondary defects of the autophagy pathway [35]. The defects
implicated in Danon disease [36] and X-linked myopathy
with excessive autophagy (XMEA) [37], in particular impaired autolysosomal fusion and defective intralysosomal
digestion, concern the same part of the autophagy pathway also affected in Vici syndrome. Considering common
features of increased glycogen storage and abnormal mitochondria, Vici syndrome (or indeed other disorders with
primary autophagy defects) also ought to be considered in
patients with a suspected but genetically unresolved glycogen storage or mitochondrial disorder.
Management
There is currently no cure for Vici syndrome and management is essentially supportive, aimed at alleviating
the effects of extensive multisystem involvement.
As some of the associated features may only evolve over
time, in addition to their usefulness at the point of diagnosis, investigations that ought to be repeated at an interval
include EEG, ophthalmology assessment including slit
lamp examination, CXR, cardiac assessment including cardiac ultrasound, and laboratory investigations assessing
immune, thyroid, liver and renal function (see also paragraph on diagnosis). Investigations recommended in patients with suspected or established Vici syndrome are
summarized in Table 2.
Management of the associated immunodeficiency poses
a particular challenge and may require regular intravenous immunoglobulin infusions and antimicrobial
prophylaxis. It is also important to bear in mind that patients with Vici syndrome may fail to respond to certain
immunizations such as those with tetanus or pneumococcal vaccines. A detailed overview of recommended immunological investigations and possible management
approaches is provided by Finocchi et al. [6].
More than half of patients with Vici syndrome have
seizures that ought to be managed with appropriate anticonvulsant therapy. Considering the profound autophagy
abnormalities observed in patients with Vici syndrome,
responses to anticonvulsants (or, indeed, other drugs)
with potentially autophagy-modulating properties such
as carbamazepine should perhaps be monitored closely
following initiation of treatment.
If cataracts are present, surgical removal may improve
the visual outcome, but the indication for cataract surgery
will have to be decided on an individual basis, based on
overall severity and expected prognosis.
If a cardiomyopathy is identified on regular cardiac assessments, this may benefit from proactive medical management; a deterioration of cardiac function during
intercurrent illness has to be expected. Both central and
obstructive apnoea may require polysomnographic monitoring, and non-invasive ventilatory support as indicated.
Hypothyroidism may require thyroid hormone replacement. Renal dysfunction and electrolyte imbalances, in
particular profound hypokalaemia, will have to be anticipated and managed actively. Profound anaemia may require blood transfusion in some patients.
Counselling
Vici syndrome is inherited in an autosomal-recessive
fashion. Genetic counselling should be offered to all
families in whom a diagnosis of Vici syndrome has been
established. Mutational analysis of the EPG5 gene is now
available on a diagnostic basis [4], and EPG5 testing, the
gold standard of antenatal diagnosis, can be offered to
families where causative EPG5 mutations have been
identified. It is important to bear in mind that foetal
ultrasound applied for the detection of callosal agenesis
may yield false positive and false negative results; therefore, when genetic testing is not readily available, foetal
MRI ought to be the preferred form of imaging.
Prognosis
Vici syndrome is a relentlessly progressive condition and
survival beyond the first decade has not been reported.
A large series recently demonstrated that death occurred
at a median age of 42 months (range 1 to 102 months).
Patients with homozygous mutations died sooner than
patients with compound heterozygous mutations (median age nine
months compared with 48 months) [14]. The degree of
cardiac involvement and/or the extent of the associated
immunodeficiency are the most important prognostic
indicators.
Unresolved questions
Vici syndrome is the most extensive human multisystem
disorder attributed to a primary autophagy defect reported
to date. Although rare, the condition illustrates the impact
of defective autophagy not only on neurodevelopment and
neurodegeneration but also on a wide range of other
organ systems where the role of normally functioning
autophagy is currently only partially understood or not
even considered yet. There are a number of unresolved
questions of direct relevance to families affected by Vici
syndrome but also for the wider field of autophagy
research:
It is currently uncertain if Vici syndrome is genetically
homogeneous, with the failure to identify two allelic mutations in some patients due to EPG5 copy number variations not detectable on Sanger sequencing, or if there is
genuine genetic heterogeneity with novel genetic backgrounds yet to be discovered in individuals with suggestive
features but no EPG5 mutations identified. Little is known
about the physiological cellular interactions of the EPG5
protein, and it remains unclear if impaired autophagy is
the only consequence of EPG5 deficiency, or just the
most dramatic expression of a more generalized vesicular trafficking defect in patients with Vici syndrome.
The autophagy pathway is amenable to pharmacological
manipulation, and delineating the precise defect in Vici
syndrome will be important for the development of rational therapies in future. The marked phenotypical overlap between Vici and clinically related syndromes such as
MSS or CHS is currently unexplained but suggests potential interaction of the defective proteins in related cellular
pathways, resulting in similar phenotypes.
Identification of new genotypes, further characterization
of the precise biological role of EPG5 and the relation between Vici and similar syndromes will further elucidate
the role of defective autophagy in inherited multisystem
disorders, and hopefully result in the development of effective therapies for Vici syndrome and related conditions
in future.
Consent
Written informed consent was obtained from the patient(s)
for publication of this manuscript and accompanying
images. A copy of the written consent is available for
review by the Editor-in-Chief of this journal.
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
SB drafted and edited the manuscript. CDV edited the manuscript. LS edited
the manuscript and prepared Fig. 2. MG edited the manuscript. HJ conceived
of the review, and drafted and edited the manuscript. All authors read and
approved the final manuscript.
Acknowledgements
LS was supported by a King’s Bioscience Institute PhD Fellowship. MG holds
the BHF Chair of Molecular Cardiology; LS and MG are supported by the
Leducq Foundation. HJ acknowledges grant support from the Myotubular
Trust, Great Britain (Grant reference number 12KCL01).
Author details
1Department of Paediatric Neurology, Neuromuscular Service, Evelina’s
Children Hospital, Guy’s & St. Thomas’ Hospital NHS Foundation Trust,
London, UK. [2]Division of Metabolism and Laboratory of Molecular Medicine,
Bambino Gesu Children’s Hospital IRCCS, Rome, Italy. [3]Randall Division of Cell
and Molecular Biophysics, Muscle Signalling Section, King’s College, London,
UK. [4]Department of Clinical and Basic Neuroscience, IoPPN, King’s College,
London, UK.
Received: 20 August 2015 Accepted: 8 February 2016
References
1. Vici CD, Sabetta G, Gambarara M, Vigevano F, Bertini E, Boldrini R, Parisi SG,
Quinti I, Aiuti F, Fiorilli M. Agenesis of the corpus callosum, combined
immunodeficiency, bilateral cataract, and hypopigmentation in two
brothers. Am J Med Genet. 1988;29(1):1–8.
2. Al-Owain M, Al-Hashem A, Al-Muhaizea M, Humaidan H, Al-Hindi H,
Al-Homoud I, Al-Mogarri I. Vici syndrome associated with unilateral lung
hypoplasia and myopathy. Am J Med Genet A. 2010;152A(7):1849–53.
3. Chiyonobu T, Yoshihara T, Fukushima Y, Yamamoto Y, Tsunamoto K,
Nishimura Y, Ishida H, Toda T, Kasubuchi Y. Sister and brother with Vici
syndrome: agenesis of the corpus callosum, albinism, and recurrent
infections. Am J Med Genet. 2002;109(1):61–6.
4. Cullup T, Kho AL, Dionisi-Vici C, Brandmeier B, Smith F, Urry Z, Simpson MA,
Yau S, Bertini E, McClelland V et al. Recessive mutations in EPG5 cause Vici
syndrome, a multisystem disorder with defective autophagy. Nat Genet.
2013;45(1):83–7.
5. del Campo M, Hall BD, Aeby A, Nassogne MC, Verloes A, Roche C, Gonzalez
C, Sanchez H, Garcia-Alix A, Cabanas F et al. Albinism and agenesis of the
corpus callosum with profound developmental delay: Vici syndrome,
evidence for autosomal recessive inheritance. Am J Med Genet. 1999;85(5):
479–85.
6. Finocchi A, Angelino G, Cantarutti N, Corbari M, Bevivino E, Cascioli S,
Randisi F, Bertini E, Dionisi-Vici C. Immunodeficiency in Vici syndrome: a
heterogeneous phenotype. Am J Med Genet A. 2012;158A(2):434–9.
7. McClelland V, Cullup T, Bodi I, Ruddy D, Buj-Bello A, Biancalana V, Boehm J,
Bitoun M, Miller O, Jan W et al. Vici syndrome associated with sensorineural
hearing loss and evidence of neuromuscular involvement on muscle
biopsy. Am J Med Genet A. 2010;152A(3):741–7.
8. Miyata R, Hayashi M, Itoh E. Pathological changes in cardiac muscle and
cerebellar cortex in Vici syndrome. Am J Med Genet A. 2014;164A(12):3203–5.
9. Miyata R, Hayashi M, Sato H, Sugawara Y, Yui T, Araki S, Hasegawa T, Doi S,
Kohyama J. Sibling cases of Vici syndrome: sleep abnormalities and
complications of renal tubular acidosis. Am J Med Genet A. 2007;143(2):
189–94.
10. Ozkale M, Erol I, Gumus A, Ozkale Y, Alehan F. Vici syndrome associated
with sensorineural hearing loss and laryngomalacia. Pediatr Neurol. 2012;
47(5):375–8.
11. Rogers CR, Aufmuth B, Monesson S. Vici syndrome: a rare autosomal
recessive syndrome with brain anomalies, cardiomyopathy, and severe
intellectual disability. Case Rep Genet. 2011;2011.
12. Said E, Soler D, Sewry C. Vici syndrome–a rapidly progressive
neurodegenerative disorder with hypopigmentation, immunodeficiency
and myopathic changes on muscle biopsy. Am J Med Genet A. 2012;
158A(2):440–4.
13. Ehmke N, Parvaneh N, Krawitz P, Ashrafi MR, Karimi P, Mehdizadeh M,
Kruger U, Hecht J, Mundlos S, Robinson PN. First description of a patient
with Vici syndrome due to a mutation affecting the penultimate exon of
EPG5 and review of the literature. Am J Med Genet A. 2014;164A(12):
3170–5.
14. Byrne S. EPG5-related Vici syndrome: a paradigm of neurodevelopmental
disorders with defective autophagy. Brain, in press.
15. Autti T, Joensuu R, Aberg L. Decreased T2 signal in the thalami may be a
sign of lysosomal storage disease. Neuroradiology. 2007;49(7):571–8.
16. Filloux FM, Hoffman RO, Viskochil DH, Jungbluth H, Creel DJ.
Ophthalmologic features of Vici syndrome. J Pediatr Ophthalmol Strabismus.
2014;51(4):214–20.
17. Tasdemir S, Sahin I, Cayir A, Yuce I, Ceylaner S, Tatar A. Vici syndrome in siblings
born to consanguineous parents. Am J Med Genet A. 2016;170(1):220–5.
18. El-Kersh K, Jungbluth H, Gringras P, Senthilvel E. Severe Central Sleep Apnea
in Vici Syndrome. Pediatrics. 2015;136(5):e1390–4.
19. Halama N, Grauling-Halama SA, Beder A, Jager D. Comparative integromics
on the breast cancer-associated gene KIAA1632: clues to a cancer antigen
domain. Int J Oncol. 2007;31(1):205–10.
20. Tian Y, Li Z, Hu W, Ren H, Tian E, Zhao Y, Lu Q, Huang X, Yang P, Li X et al. C.
elegans screen identifies autophagy genes specific to multicellular organisms.
Cell. 2010;141(6):1042–55.
21. Jiang P, Mizushima N. Autophagy and human diseases. Cell Res. 2014;24(1):69–79.
22. Klionsky DJ, Abdalla FC, Abeliovich H, Abraham RT, Acevedo-Arozena A,
Adeli K, et al. Guidelines for the use and interpretation of assays for
monitoring autophagy. Autophagy. 2012;8(4):445–544.
23. Mizushima N, Komatsu M. Autophagy: renovation of cells and tissues. Cell.
2011;147(4):728–41.
24. Zhao H, Zhao YG, Wang X, Xu L, Miao L, Feng D, Chen Q, Kovacs AL, Fan D,
Zhang H. Mice deficient in Epg5 exhibit selective neuronal vulnerability to
degeneration. J Cell Biol. 2013;200(6):731–41.
25. Ebrahimi-Fakhari D, Saffari A, Wahlster L, Lu J, Byrne S, Hoffmann GF,
Jungbluth H, Sahin M. Congenital disorders of autophagy: an emerging
novel class of inborn errors of neuro-metabolism.
26. Krieger M, Roos A, Stendel C, Claeys KG, Sonmez FM, Baudis M, Bauer P,
Bornemann A, de Goede C, Dufke A et al. SIL1 mutations and clinical
spectrum in patients with Marinesco-Sjogren syndrome. Brain.
2013;136(Pt 12):3634–44.
27. Dotta L, Parolini S, Prandini A, Tabellini G, Antolini M, Kingsmore SF,
Badolato R. Clinical, laboratory and molecular signs of immunodeficiency in
patients with partial oculo-cutaneous albinism. Orphanet J Rare Dis.
2013;8:168.
28. Silveira-Moriyama L, Moriyama TS, Gabbi TV, Ranvaud R, Barbosa ER.
Chediak-Higashi syndrome with parkinsonism. Mov Disord.
2004;19(4):472–5.
29. Byrne S, Dlamini N, Lumsden D, Pitt M, Zaharieva I, Muntoni F, King A,
Robert L, Jungbluth H. SIL1-related Marinesco-Sjoegren syndrome (MSS)
with associated motor neuronopathy and bradykinetic movement disorder.
Neuromuscul Disord. 2015;25(7):585–8.
30. Duran-McKinster C, Rodriguez-Jurado R, Ridaura C, de la Luz O-CM, Tamayo
L, Ruiz-Maldonando R. Elejalde syndrome–a melanolysosomal
neurocutaneous syndrome: clinical and morphological findings in 7
patients. Arch Dermatol. 1999;135(2):182–6.
31. Pastural E, Barrat FJ, Dufourcq-Lagelouse R, Certain S, Sanal O, Jabado N,
Seger R, Griscelli C, Fischer A, de Saint Basile G. Griscelli disease maps to
chromosome 15q21 and is associated with mutations in the myosin-Va
gene. Nat Genet. 1997;16(3):289–92.
32. Pastural E, Ersoy F, Yalman N, Wulffraat N, Grillo E, Ozkinay F, Tezcan I,
Gedikoglu G, Philippe N, Fischer A et al. Two genes are responsible for
Griscelli syndrome at the same 15q21 locus. Genomics. 2000;63(3):299–306.
33. Malicdan MC, Nishino I. Autophagy in lysosomal myopathies. Brain Pathol.
2012;22(1):82–8.
34. Jungbluth H, Wallgren-Pettersson C, Laporte J. Centronuclear (myotubular)
myopathy. Orphanet J Rare Dis. 2008;3:26.
35. Jungbluth H, Gautel M. Pathogenic mechanisms in centronuclear
myopathies. Front Aging Neurosci. 2014;6:339.
36. Nishino I, Fu J, Tanji K, Yamada T, Shimojo S, Koori T, Mora M, Riggs JE, Oh
SJ, Koga Y et al. Primary LAMP-2 deficiency causes X-linked vacuolar
cardiomyopathy and myopathy (Danon disease). Nature. 2000;406(6798):
906–10.
37. Ramachandran N, Munteanu I, Wang P, Ruggieri A, Rilstone JJ, Israelian N,
Naranian T, Paroutis P, Guo R, Ren ZP et al. VMA21 deficiency prevents
vacuolar ATPase assembly and causes autophagic vacuolar myopathy.
Acta Neuropathol. 2013.
# informatics
_Article_
## Improvement in the Efficiency of a Distributed Multi-Label Text Classification Algorithm Using Infrastructure and Task-Related Data
**Martin Sarnovsky *** **and Marek Olejnik**
Department of Cybernetics and Artificial Intelligence, Technical University Košice, Letná 9/A,
040 01 Košice, Slovakia; [email protected]
*** Correspondence: [email protected]**
Received: 2 January 2019; Accepted: 9 March 2019; Published: 18 March 2019
**Abstract: Distributed computing technologies allow a wide variety of tasks that use large amounts of**
data to be solved. Various paradigms and technologies are already widely used, but many of them are
lacking when it comes to the optimization of resource usage. The aim of this paper is to present the
optimization methods used to increase the efficiency of distributed implementations of a text-mining
model utilizing information about the text-mining task extracted from the data and information
about the current state of the distributed environment obtained from a computational node, and to
improve the distribution of the task on the distributed infrastructure. Two optimization solutions
are developed and implemented, both based on the prediction of the expected task duration on the
existing infrastructure. The solutions are experimentally evaluated in a scenario where a distributed
tree-based multi-label classifier is built based on two standard text data collections.
**Keywords: text classification; multi-label classification; distributed text-mining; task assignment;**
resource optimization; grid computing
**1. Introduction**
Knowledge discovery in texts (KDT), often referred to as text-mining, is a special kind of
knowledge discovery in databases (KDD) process. It is usually complex because of the processing
of unstructured data and covers the topic of natural language processing [1]. The overall structure
of the process follows typical KDD patterns, and the main difference is in the transformation of the
unstructured text data into a structured format ready to be used by data mining algorithms.
A KDT process usually consists of several phases that can be mapped to standards for KDD,
such as CRISP-DM (CRoss-Industry Standard Process for Data Mining) [2,3]. The data and domain
understanding phase identifies the most important concepts related to the application domain. The main
objective of this phase is also to specify the overall goal of the process. The data gathering phase
is aimed at the selection of textual documents for the solution of a specified task. It is important to
select the documents that cover the application domain or may be related to the task solution. Various
techniques can be applied, including manual or automatic selection of the documents or leveraging of
the existing corpora. The preprocessing phase involves the application of the operations that transform
the textual data into the structured format. Usually, the dataset is converted into one of the commonly
used representations (e.g., vector space model). During this phase, text preprocessing techniques
are also applied. These techniques are often language-dependent and include stop word removal,
stemming, lemmatization, and indexation. The text-mining phase represents the application of the
model-building algorithm. Depending on the task, classification, clustering, information retrieval,
or information extraction models are created. The models are evaluated in the next step, and then the
results can be visualized and interpreted, or models can be applied to the real environment.
The classification of text documents is one of the specific text-mining tasks, and its main goal is
to assign the document to one of the pre-defined classes (categories). Usually, the classifier is built
on a set of labeled data (train data). A trained model is evaluated on a test dataset and then can
be deployed to be used on live, unlabeled data. There are various classification methods available,
including tree-based classifiers, neural networks, support vector machines, k-nearest neighbors, etc.
In text classification tasks, we often deal with a multi-label classification problem (i.e.,
a classification task where a document may be assigned to more than one category) [4]. Certain types
of classifiers can handle multi-label data and can be directly used to solve such classification
problems (e.g., probabilistic classifiers). In order to use other types of classifiers, the training algorithms
have to be adapted. One of the commonly used ways of adapting an algorithm to handle multi-label
data is to build a set of binary classifiers of a given model, each dedicated to a particular category.
The final model then consists of this set of binary classifiers.
When training models on real text collections, we often face the problem of processing large
volumes of data, which requires the adoption of advanced technologies for distributed computing.
To solve such computationally intensive tasks, training algorithms are modified to leverage the
advantages of distributed computing architectures and implemented using different programming
paradigms and underlying technologies.
However, the distributed implementation itself cannot guarantee the effective usage of the
available computing resources, especially in environments with limited infrastructures. Therefore,
various techniques for the improvement of resource allocation and the optimization of infrastructure
usage have been developed. In some tasks, however, resource allocation and task assignment can be
heavily influenced by the task itself and its parameters. In this paper, we propose a method that can
optimize the effectiveness of resource usage in the distributed environment as well as a task assignment
process specifically for the tasks associated with building distributed text classification models.
The paper is organized as follows: First, we give an overview of the distributed text classification
methods and tools and introduce the algorithms used in this paper. The next section describes the
pre-existing optimization techniques used in similar tasks. The section that follows presents the
description of the developed and implemented methods. Finally, the results of the experimental
evaluation are discussed in Section 6 and summarized in Section 7.
**2. Distributed Classification**
In general, there are two major approaches to the distribution of KDD and KDT model building
algorithms:
- _Data-driven decomposition_—in this case, we assume that the decomposition of the dataset is
sufficient. The input dataset D is divided into n mutually disjoint subsets D_n. Each of the subsets
then serves as a training set for the training of partial models. When using this approach, a
merging mechanism aggregating all of the partial models has to be developed (a minimal sketch
of this scheme is given after this list). There are several models suitable for this type of
decomposition, such as k-nearest neighbors (k-NN), the support vector machine classifier,
or instance-based methods in general.
- _Model-driven decomposition_—in this case, the input dataset remains complete and the algorithm is
modified to run in a parallel or distributed way. In general, it is the decomposition of the model
building itself. The nature of the decomposition is model-specific, and we can generally state
that the partial subprocesses of partial model building have to be independent of each other.
Various models can be decomposed in this way, such as tree-based classifiers, compound methods
(boosting, bagging), or clustering algorithms based on self-organizing maps.
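As an illustration of the data-driven scheme, the following minimal Java sketch splits a dataset into n mutually disjoint subsets, trains one partial model per subset, and merges the partial models by majority voting at prediction time. The `Instance` and `Model` types and the round-robin split are illustrative assumptions, not the Jbowl API:

```java
import java.util.*;
import java.util.function.Supplier;

// Illustrative placeholder types -- these are assumptions, not the Jbowl API.
interface Instance { /* the feature vector of one document */ }
interface Model {
    void train(List<Instance> data);
    String predict(Instance x);
}

public class DataDrivenEnsemble {

    // Split the input dataset D into n mutually disjoint subsets D_1..D_n (round-robin).
    static List<List<Instance>> split(List<Instance> dataset, int n) {
        List<List<Instance>> subsets = new ArrayList<>();
        for (int i = 0; i < n; i++) subsets.add(new ArrayList<>());
        for (int i = 0; i < dataset.size(); i++) subsets.get(i % n).add(dataset.get(i));
        return subsets;
    }

    // Train one partial model per subset; each call could run on a separate node.
    static List<Model> trainPartialModels(List<List<Instance>> subsets, Supplier<Model> factory) {
        List<Model> partials = new ArrayList<>();
        for (List<Instance> subset : subsets) {
            Model m = factory.get();
            m.train(subset);
            partials.add(m);
        }
        return partials;
    }

    // The merging mechanism: aggregate the partial models by majority voting.
    static String predict(List<Model> partials, Instance x) {
        Map<String, Integer> votes = new HashMap<>();
        for (Model m : partials) votes.merge(m.predict(x), 1, Integer::sum);
        return Collections.max(votes.entrySet(), Map.Entry.comparingByValue()).getKey();
    }
}
```

Majority voting is only one possible merging mechanism; instance-based models such as k-NN would instead merge the neighbor lists returned by the partial models.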
According to [5], it is more suitable to use a more complex algorithm on a single node applied to
a subset of the dataset. However, the dataset splitting and distribution of the data subsets to the nodes
can be more time-consuming. This approach is also not suitable for data mining tasks from raw data,
where their integration and preprocessing are needed, as it requires more transfers of the data between
the nodes. The second approach is more suitable when using large unstructured datasets (e.g., text
data), but it is more complex to design the distributed algorithm itself. Often, the communication cost
for constructing a model is rather high.
If the dataset is represented as a set of n-tuples, where each tuple represents particular attribute
values, there are two approaches to data fragmentation [6] (both are illustrated in the sketch after
this list):
- _Horizontal fragmentation_—data are distributed in such a way that each node receives a part of
the dataset; in the case that the dataset comprises n-tuples, each node receives a subset of the n-tuples.
- _Vertical fragmentation_—in this case, partial tuples of the complete dataset are assigned to the nodes.
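The difference between the two fragmentation schemes can be stated compactly in Java. Here a dataset is a list of n-tuples (rows of attribute values); the method names and the round-robin row assignment are illustrative assumptions, not tied to any particular framework:

```java
import java.util.ArrayList;
import java.util.List;

public class Fragmentation {

    // Horizontal fragmentation: node i receives every nodeCount-th tuple (a subset of rows).
    static List<double[]> horizontalFragment(List<double[]> tuples, int nodeIndex, int nodeCount) {
        List<double[]> fragment = new ArrayList<>();
        for (int i = nodeIndex; i < tuples.size(); i += nodeCount) fragment.add(tuples.get(i));
        return fragment;
    }

    // Vertical fragmentation: every tuple is kept, but projected onto the attributes assigned to the node.
    static List<double[]> verticalFragment(List<double[]> tuples, int[] attributeIndices) {
        List<double[]> fragment = new ArrayList<>();
        for (double[] tuple : tuples) {
            double[] projected = new double[attributeIndices.length];
            for (int j = 0; j < attributeIndices.length; j++) projected[j] = tuple[attributeIndices[j]];
            fragment.add(projected);
        }
        return fragment;
    }
}
```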
There are various existing implementations of distributed and parallel data and text
mining models using various underlying technologies and tools. Several machine learning libraries
are available offering algorithm implementations in MapReduce (e.g., Mahout) [7] or on top of
hybrid processing platforms such as Spark (MLlib, ML Pipelines) [8], as well as a number of specific
algorithm implementations using grid computing or MPI (message passing interface). In [9], the
authors describe PLANET, a distributed, scalable framework for building classification trees on large
datasets. The tree-building algorithm is implemented using the MapReduce paradigm and aims to
maximize the number of nodes that can be expanded in parallel while considering memory limitations;
it also aims to keep all the assigned training data partitions in memory on the particular nodes.
Another implementation based on Apache Spark uses techniques similar to the Hadoop MapReduce
implementations [10], while several other works leverage the computational power of GPUs (Graphics
Processing Units) to improve the performance of MapReduce implementations. Caragea et al. [11]
describe a multi-agent approach to building tree-based classifiers. Their approach is aimed at building
models on distributed datasets and minimizing communication between the nodes in distributed
environments. Numerous approaches have already been published in the realm of big data. In the
area of multi-label distributed classification algorithms, studies have presented classifiers able to
handle data with hundreds of thousands of labels [12], and more recent work can be found in
References [13–16]. However, those approaches focus mostly on handling extremely large sets of
target labels.
In our work, we used our own implementations of classification and clustering algorithms in the
Java Bag of Words library (Jbowl) [17]. Jbowl provides an API (Application Programming Interface)
for building text mining applications in Java and contains various tools for text preprocessing, text
classification, clustering, and model evaluation techniques. We designed and implemented distributed
versions of classification and clustering models from the Jbowl library. The GridGain platform [18]
was used as a distributed computing framework. Table 1 summarizes the currently implemented
sequential and distributed versions of the algorithms.
**Table 1.** Overview of currently implemented supervised and unsupervised models in the Java bag of
words library (Jbowl), in sequential and distributed versions: decision tree classifier, k-nearest neighbor
classifier, rule-based classifier, support vector machine classifier, boosting compound classifier,
k-means clustering, and GHSOM clustering.
The implementation of the distributed tree-based classifier addresses the multi-label classification
problem (often present in text classification tasks). Each class is considered as a separate binary
classification problem, and the final classifier consists of a set of binary classifiers for each
particular class/category. In our implementation, binary classifiers were built in a distributed way.
The distributed k-NN implementation was based on the approach described in [19]: a data-driven
distribution method is used, where the data are split into subsets and local k-NN models are computed
on these partitions. The distributed GHSOM (Growing Hierarchical Self-Organizing Maps) algorithm [20]
is based on the parallel calculation of hierarchically ordered maps of growing SOMs. The distributed
k-means algorithm is inspired by References [21,22]; in this case, the building of the k clusters was split
among the available computing resources, and the clusters were created on the assigned data splits.
_Distributed Multi-Label Classification_

The traditional approach to single-label classification is based on the training of a model where
each training document is assigned to one of the classes. In the case of multi-label classification,
documents are assigned to more classes at the same time (e.g., a news document may belong to both
“domestic news” and “politics”). The authors in [4] describe how traditional methods can be applied
to solve the multi-label problem:

- _Problem transforming methods_—methods that adapt the problem itself, for example, transforming
the multi-label classification problem into a set of binary classification problems (a minimal sketch
of this transformation is given after this list).
- _Algorithm adaptation_—approaches that adapt the model itself to be able to handle multi-label data.
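A minimal sketch of the problem-transforming approach (often called binary relevance): for each label, a binary training set is derived in which documents carrying that label are positive examples and all others are negative, and one binary classifier per label makes up the final multi-label model. The `Document` and `BinaryClassifier` types are illustrative placeholders, not the Jbowl API:

```java
import java.util.*;
import java.util.function.Supplier;

// Illustrative placeholder types -- these are assumptions, not the Jbowl API.
class Document {
    double[] features;
    Set<String> labels; // a document may carry several labels at the same time
    Document(double[] features, Set<String> labels) { this.features = features; this.labels = labels; }
}
interface BinaryClassifier {
    void train(List<double[]> examples, List<Boolean> targets);
    boolean predict(double[] features);
}

public class BinaryRelevance {
    final Map<String, BinaryClassifier> perLabel = new HashMap<>();

    // Problem transformation: derive one binary training set (and classifier) per label.
    void train(List<Document> corpus, Set<String> labelSet, Supplier<BinaryClassifier> factory) {
        for (String label : labelSet) {
            List<double[]> x = new ArrayList<>();
            List<Boolean> y = new ArrayList<>();
            for (Document d : corpus) {
                x.add(d.features);
                y.add(d.labels.contains(label)); // positive iff the document carries this label
            }
            BinaryClassifier c = factory.get();
            c.train(x, y);
            perLabel.put(label, c);
        }
    }

    // The final multi-label model assigns every label whose binary classifier fires.
    Set<String> predict(double[] features) {
        Set<String> predicted = new HashSet<>();
        perLabel.forEach((label, c) -> { if (c.predict(features)) predicted.add(label); });
        return predicted;
    }
}
```

Because the per-label binary classifiers are independent of each other, this is exactly the structure that lends itself to distributed building, as described below.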
The distributed induction of decision trees is one of the possible ways to reduce the time required
to build a classifier on large data collections. From the perspective of the model- or data-driven
paradigm, parallelism can be achieved by the parallel expansion of decision tree nodes (model-driven
parallelization) or by the distribution of the dataset and the building of partial models on these partitions
(data-driven parallelization). According to [23], building distributed tree models is a complex task.
One of the reasons for this is that the structure of the final tree is often irregular, which places different
requirements on the computational capabilities of the nodes responsible for the expansion of particular
tree nodes. This can lead to an increase in the total time taken to build a model. A static scheme for task
allocation can prove to be unsuitable when applied to unbalanced data. Another reason is that even if
the nodes can be expanded in parallel, all training set data shared by the tree nodes at the same level
are still required for model building. There are several strategies for implementing distributed tree
model building; several other options also exist in the case of multi-label tree models. The process of
building a classifier corresponds to particular CRISP-DM phases and can be summarized as follows
(see Figure 1):
- _Data preparation_—a selection of textual documents, and selection of a training and testing set.
- _Data preprocessing_—at the beginning of the process, preprocessing of the complete dataset is performed. This includes text tokenization, lowercase transformation, and stop word removal. Then, a vector representation of the textual documents is computed using tf-idf (term frequency-inverse document frequency) weighting (a sketch follows this list).
- _Model building_—in this step, a tree-based classifier is trained on the training set. The result is a classification model ready to be evaluated and deployed.
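A self-contained sketch of the preprocessing step follows; the stop-word list and the exact tf-idf variant (length-normalized term frequency with a natural-log idf) are assumptions made for illustration, since the paper does not pin them down.

```java
import java.util.*;

// Sketch of the preprocessing pipeline: tokenization, lowercasing,
// stop-word removal, then tf-idf weighting of the surviving terms.
public class TfIdf {
    static final Set<String> STOP_WORDS = Set.of("the", "a", "of", "and", "to");

    static List<String> preprocess(String doc) {
        List<String> tokens = new ArrayList<>();
        for (String t : doc.toLowerCase().split("\\W+")) {
            if (!t.isEmpty() && !STOP_WORDS.contains(t)) tokens.add(t);
        }
        return tokens;
    }

    // tf-idf(t, d) = (count of t in d / |d|) * log(N / df(t))
    static List<Map<String, Double>> vectorize(List<String> corpus) {
        List<List<String>> docs = corpus.stream().map(TfIdf::preprocess).toList();
        Map<String, Integer> df = new HashMap<>();
        for (List<String> d : docs)
            for (String t : new HashSet<>(d)) df.merge(t, 1, Integer::sum);
        int n = docs.size();
        List<Map<String, Double>> vectors = new ArrayList<>();
        for (List<String> d : docs) {
            Map<String, Double> v = new HashMap<>();
            for (String t : d) v.merge(t, 1.0 / d.size(), Double::sum);      // term frequency
            v.replaceAll((t, tf) -> tf * Math.log((double) n / df.get(t)));  // multiply by idf
            vectors.add(v);
        }
        return vectors;
    }

    public static void main(String[] args) {
        vectorize(List.of("the price of oil", "oil exports and grain exports"))
            .forEach(System.out::println);
    }
}
```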
**Figure 1. Sequential model building.**

A distributed tree-based classifier is trained in a similar way. The biggest difference is in the
processing of the vector space text model and in the model building. In this case, the text model
building is divided into sub-tasks guided by a master node in the distributed infrastructure (see
Figure 2). The master node assigns the sub-tasks to the worker nodes (with assigned data). When the
assignment is complete, partial models are created on the available computing nodes. When all
assigned sub-models are created, the computing node sends partial models to the master node. When
all computational nodes deliver partial models to the master node, they are collected and merged into
the final classification model.
**Figure 2. Distributed model building.**
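A rough sketch of this master/worker flow is given below, with a plain Java thread pool standing in for the grid nodes; this is not the GridGain API used by the authors, and the "partial model" is reduced to a placeholder string.

```java
import java.util.*;
import java.util.concurrent.*;

// Sketch of the master/worker scheme from Figure 2, with a thread pool
// standing in for grid nodes (the paper uses GridGain; this is not its API).
public class DistributedBuild {

    // A partial model here is just a label name; in the real system it is
    // one binary classifier trained for that label.
    static String trainPartialModel(String label) {
        return "model(" + label + ")";
    }

    public static void main(String[] args) throws Exception {
        List<String> subTasks = List.of("earn", "acq", "crude", "grain");
        ExecutorService workers = Executors.newFixedThreadPool(4); // the "nodes"

        // Master assigns one sub-task per worker and keeps the futures.
        List<Future<String>> partials = new ArrayList<>();
        for (String label : subTasks)
            partials.add(workers.submit(() -> trainPartialModel(label)));

        // Merge step: the master gathers all partial models into one list,
        // which plays the role of the final multi-label classifier.
        List<String> finalModel = new ArrayList<>();
        for (Future<String> f : partials) finalModel.add(f.get());
        workers.shutdown();
        System.out.println("merged model: " + finalModel);
    }
}
```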
In the task assignment step, optimization methods can be applied. This paper presents two
solutions to the sub-task allocation to available computational grid nodes, and both methods are
based on the estimation of the expected time required to build partial models on the node. Several
parameters must be taken into consideration when estimating the build time, including sub-task
complexity, overall task parameters (in text classification these are the number of terms, or categories),
or the computational power of the available nodes. The expected time of the sub-task building is
influenced by two task parameters: document frequency (i.e., the number of documents in a particular
category) and the number of terms in documents from that category. The functional dependency
between the task time and those parameters was estimated using a model built on the data from
previous experiments in the grid environment. The following sections give a description of the
presented methods.
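Before turning to the two methods, the estimation step can be made concrete. The sketch below assumes a simple linear cost model over the two task parameters; the coefficient values are invented placeholders, as the model fitted on the authors' earlier grid measurements is not published.

```java
// Sketch of the build-time estimate used by both allocation methods.
// The coefficients stand in for values fitted by regression over
// (document frequency, term count, observed time) triples from past runs.
public class BuildTimeEstimate {
    // Assumed per-document and per-term costs (milliseconds), plus a fixed
    // overhead per sub-task; all three numbers are illustrative only.
    static final double MS_PER_DOC = 2.5, MS_PER_TERM = 0.04, MS_OVERHEAD = 150;

    static double estimateMillis(int docFrequency, long termsInCategory) {
        return MS_OVERHEAD + MS_PER_DOC * docFrequency + MS_PER_TERM * termsInCategory;
    }

    public static void main(String[] args) {
        // A frequent category versus a rare one: the estimates differ by
        // orders of magnitude, which is what makes naive allocation uneven.
        System.out.printf("frequent: %.0f ms%n", estimateMillis(3900, 250_000));
        System.out.printf("rare:     %.0f ms%n", estimateMillis(12, 900));
    }
}
```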
**3. Optimization of the Classifier Building Using Dataset- and Infrastructure-Related Data**
_Optimization of Task Assignment in Distributed Environments_

There are various studies presenting a wide range of methods used to solve the task assignment
problem in distributed environments. In [24], the authors describe the scheduling of tasks in the grid
environment using ant colony optimization, [25] presents the task assignment problem solved by
means of a bee colony algorithm, and the authors in [26] present the same problem solved by using
directed graphs. Dynamic resource allocation aspects are addressed in [27]. The MapReduce framework
solves the unbalanced sub-task distribution by running local aggregations (local reducers) on the
mappers that can prepare the mapper intermediate results for the reduce step. However, the mapping
phase has to be finished completely in order to start the reducer phase, so an unbalanced assignment
can lead to uneven node utilization and prolong the overall processing time.

In some specific tasks, the sub-task complexity is related to factors other than the data size. On the
other hand, performance parameters and utilization of the available nodes can affect the overall task
processing. In [28], some of the issues related to MapReduce performance in heterogeneous clusters
are addressed, with the authors focusing on the unreasonable allocation of tasks to nodes with different
computational capabilities to prove that such optimization brings significant benefits and greatly
improves the efficiency of MapReduce-based algorithms. The authors in [29] solve the problem of task
assignment and resource allocation in distributed systems using a genetic algorithm. Optimal
assignment minimizes the costs of task processing with respect to the specified constraints (e.g.,
available computing nodes, etc.). The costs of sub-task processing on a particular node are represented
by the cost matrix and, in a similar fashion, the communications
matrix stores the costs of communication between the nodes during the creation of sub-tasks. A genetic
algorithm is then used to minimize the cost of the allocation function.
The main objective of the work presented in this paper is to develop and evaluate a method for
improving task assignment for a particular set of text mining tasks—the building of multi-label
classification models in the grid environment. We decided to combine several aspects in order to
leverage both the performance-related information obtained from the distributed infrastructure
and the task-related data extracted from the dataset, and to improve the task assignment using these
data by solving the assignment problem. Especially when building classification models on large text
corpora, these methods can bring significant benefits in terms of resource utilization.
**4. Design and Implementation of Optimization Mechanisms**
In some cases, text mining processes are rather complex and often resource-intensive. Therefore,
solving a text mining task (as an iterative and interactive process) can consume substantial time
and computational resources. Our main aim was to extend the existing techniques for text mining
model building with optimization methods in order to improve resource utilization of the distributed
platform. In general, our focus was on gathering all the relevant data from the platform, as well as data
related to the task itself, and leveraging that information to improve the resource effectiveness of the
implemented distributed algorithms.
We used data extracted from the dataset, such as the size of the processed data, the structure of
the data (e.g., for classification tasks, we used the number of classes in the training set, distribution of
documents among the classes, etc.). We also used data gathered from the distributed infrastructure.
These were used to describe the actual state of the platform, the actual state of the particular computing
nodes, their performance, and available capacity. We identified the most relevant data, which we used
in both solutions described in the following sections:

- Task-related data
  - Dataset characteristics
    - Number of documents in a dataset;
    - Number of terms in particular documents;
  - The frequency of category occurrence (in classification tasks)—one of the most important criteria influencing the complexity of partial models (the most frequent categories result in the most complex partial models).
- Infrastructure-related data
  - Number of available computing nodes;
  - Node parameters
    - Number of CPU cores;
    - Available CPU;
    - Available RAM;
    - Heap space.
The following sections present the designed and implemented optimization mechanisms based
on the above-mentioned data.
_4.1. Task Assignment with No Optimization_

The first solution is based on the assignment of sub-tasks to grid computational nodes according to
the task- and infrastructure-related data; no combinatorial optimization method is used in this case.
- The first step is the initialization of the node and task parameters.
- The variable describing the overall node performance is also initialized.
- The algorithm checks the available grid nodes, and the values of the actual node parameters of all grid nodes are set.
- When the algorithm finishes checking all the available nodes, it finds the maximum of the obtained parameter values among the grid nodes (for each parameter). Then, all node parameter values are normalized (to the <0,1> interval).
- In the next step, an actual node performance parameter is computed as the sum of all parameter values. It is possible to set the weights of each parameter when a certain resource is more significant (e.g., the task is more memory- or CPU-intensive); in our case, we used equal weight values. Nodes are then ordered by the assigned performance parameters and an average node (with average performance parameters) is found (see the sketch after this list).
- The next step computes the maximum number of sub-tasks assigned to a given node. A map is created storing statistics describing the sub-tasks' complexity, extracted from the task-related data. Sub-tasks (binary classifiers) are ordered according to the frequency parameter.
- Then, the average document frequency of a binary classifier is computed. This represents the maximum number of sub-tasks that can be assigned to computational nodes with an average overall performance. For the more powerful nodes, the limit is increased, and it is decreased for the less powerful ones. The increase/decrease is computed in the same ratio as the performance parameter ratio between a particular node and the average node.
- Each available node is then equipped with a specific number of assigned sub-tasks in the same way as in the non-optimized distributed model. The number of assigned tasks can be exceeded in the final assignment in some cases (e.g., in a situation where all sub-tasks could not fit into the computed assignments).
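A condensed sketch of this heuristic follows, under the assumption of four node parameters with equal weights; the parameter names and numbers are illustrative, the per-node quota is simplified to an even split scaled by the performance score (the paper derives it from average document frequency), and real values would come from the platform's monitoring rather than being hard-coded.

```java
import java.util.*;

// Sketch of the non-optimized allocation heuristic: normalize each node
// parameter to <0,1>, sum them (equal weights) into a performance score,
// then scale the average per-node sub-task quota by score/averageScore.
public class HeuristicAllocation {
    record Node(String name, double cpuCores, double freeCpu, double freeRam, double heap) {}

    static Map<String, Integer> quotas(List<Node> nodes, int subTaskCount) {
        // Column-wise maxima used for normalization.
        double maxCores = nodes.stream().mapToDouble(Node::cpuCores).max().orElse(1);
        double maxCpu = nodes.stream().mapToDouble(Node::freeCpu).max().orElse(1);
        double maxRam = nodes.stream().mapToDouble(Node::freeRam).max().orElse(1);
        double maxHeap = nodes.stream().mapToDouble(Node::heap).max().orElse(1);

        Map<String, Double> score = new LinkedHashMap<>();
        for (Node n : nodes)
            score.put(n.name(), n.cpuCores() / maxCores + n.freeCpu() / maxCpu
                               + n.freeRam() / maxRam + n.heap() / maxHeap);
        double avgScore = score.values().stream().mapToDouble(Double::doubleValue)
                               .average().orElse(1);
        int avgQuota = subTaskCount / nodes.size();

        // More powerful nodes get proportionally more sub-tasks.
        Map<String, Integer> quota = new LinkedHashMap<>();
        score.forEach((name, s) -> quota.put(name,
                (int) Math.round(avgQuota * s / avgScore)));
        return quota;
    }

    public static void main(String[] args) {
        List<Node> nodes = List.of(
            new Node("node1", 8, 0.9, 16, 4), new Node("node2", 4, 0.5, 8, 2),
            new Node("node3", 4, 0.2, 8, 2), new Node("node4", 4, 0.6, 8, 2));
        System.out.println(quotas(nodes, 90));
    }
}
```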
This method is rather simple and serves as the basis for the method using the assignment problem
optimization, described next.
_4.2. Task Assignment Using Assignment Problem_
The initial phase of the second proposed solution is the same as in the first approach.
The difference is that the particular sub-task assignment is solved as a combinatorial optimization
problem (assignment problem). As a special type of transportation problem, the assignment problem
is specified by a number of agents and a number of tasks [30]. Agents can be assigned to perform
a specific task, and this activity is represented by a cost. This cost can vary depending on the task
assignment. The goal is to perform all tasks by assigning exactly one agent to each task in such a way
that the total cost of the assignment is minimized [31]. In this particular case, we solved the generalized
assignment problem, where m tasks had to be assigned to n available computational nodes. The goal
was to perform the assignment to minimize the optimization function, which in this case represented
the computational time of the task. In distributed model building, the overall task completion time is
heavily influenced by the completion time of the longest sub-task. Therefore, we decided to specify
the constraints to ensure the even distribution of sub-tasks among the available nodes.
In our approach, we compute a matrix $M$, where $m_{i,j}$ (with $i = 1, \ldots, m$ and $j = 1, \ldots, n$)
represents the predicted time (obtained in the same way as in the first presented solution) of task $i$ on
computational node $j$; this matrix serves as input data for the assignment task. Each node is also
graded based on its computational power, in the same way as in the first solution.
The assignment task is solved under two sets of constraints. The first constraint specifies that
each task can be assigned to one particular available node:
$$\sum_{j=1}^{n} x_{i,j} = 1, \qquad \forall i = 1, \ldots, m.$$
The second constraint ensures that the task distribution to the nodes is homogeneous; that is,
each node is assigned a number of sub-tasks which should (when taking into consideration the
computational power of the nodes) take the minimum overall computation time. This is specified by
the criterion function:
$$\sum_{i=1}^{m} \sum_{j=1}^{n} m_{i,j}\, c_j\, x_{i,j} = \min,$$
where $m_{i,j}$ is the estimated time of task $i$ on node $j$, $c_j$ represents the coefficient of computational
power of node $j$, and $x_{i,j} \in \{0, 1\}$, with $x_{i,j} = 1$ when task $i$ is assigned to node $j$ and
$x_{i,j} = 0$ otherwise. A set of constraints specifies the homogeneous distribution of tasks among the nodes:
$$\sum_{i=1}^{m} m_{i,j}\, c_j\, x_{i,j} = \frac{\sum_{i=1}^{m} m_{i,\mathrm{avg}}}{n} \times k, \qquad \forall j = 1, \ldots, n,$$
where $m_{i,\mathrm{avg}}$ is the completion time of task $i$ on an average node (computed from all nodes in
the grid), so the right-hand side spreads the total expected completion time evenly over the $n$ nodes,
and $k = 1$ is a tuning parameter. When the algorithm is not able to find a solution, the parameter $k$ is
increased by 0.1 until a solution is found. Once the task assignment is completed, the algorithm
continues similarly to the previous solution: it distributes the sub-tasks to the assigned nodes, builds
the sub-models, and merges them into the final model. We used the IPOPT (Interior Point OPTimizer)
[32] solver implemented in the JOM (Java Optimization Modeler—a Java-based open-source library for
solving optimization problems) library to solve the assignment problem. IPOPT is designed to cover a
rather broad range of optimization problems. The results sometimes depend on the chosen starting
point, which can leave the solver stuck in a local optimum. To mitigate this, it is possible to restart the
optimizer from a perturbed version of the found solution and solve again.
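The paper solves the exact program above with IPOPT through JOM. Purely as an illustration of the objective $\sum_{i,j} m_{i,j} c_j x_{i,j}$ and of keeping the weighted per-node loads even, the following sketch uses a simple greedy rule (longest task first, assigned to the node with the smallest accumulated weighted time); it approximates, rather than solves, the constrained program.

```java
import java.util.*;

// Greedy stand-in for the assignment program above: sort sub-tasks by
// estimated time (descending) and give each to the node whose accumulated
// weighted time m[i][j]*c[j] is currently smallest.
public class GreedyAssignment {

    // times[i][j]: estimated time of task i on node j; c[j]: power coefficient.
    static int[] assign(double[][] times, double[] c) {
        int m = times.length, n = c.length;
        double[] load = new double[n];   // accumulated weighted time per node
        int[] nodeOf = new int[m];

        // Task indices sorted by total estimated time, longest first.
        Integer[] order = new Integer[m];
        for (int i = 0; i < m; i++) order[i] = i;
        Arrays.sort(order, (a, b) -> Double.compare(
                Arrays.stream(times[b]).sum(), Arrays.stream(times[a]).sum()));

        for (int i : order) {
            int best = 0;
            for (int j = 1; j < n; j++)
                if (load[j] + times[i][j] * c[j] < load[best] + times[i][best] * c[best])
                    best = j;
            load[best] += times[i][best] * c[best];
            nodeOf[i] = best;
        }
        return nodeOf;
    }

    public static void main(String[] args) {
        double[][] times = {{9, 12}, {3, 4}, {8, 10}, {2, 3}, {7, 9}};
        double[] c = {1.0, 0.8}; // the second node is more powerful
        System.out.println(Arrays.toString(assign(times, c)));
    }
}
```

On the small example in `main`, the two accumulated weighted loads end up nearly equal, which is the qualitative behavior the balance constraint of the exact program enforces.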
**5. Experiments**
The experiments were performed in a testing environment comprising 10 workstations
connected via a local 1 Gbit intranet. Each node was equipped with an Intel Xeon W3550
processor clocked at 3.07 GHz, 8 GB of RAM, and 450 GB of available storage. We used the
multi-label tree-based classifier implementation described in [33]. The algorithm was implemented in
Java using the Jbowl library for specific text-processing methods and using GridGain as the platform
for distributed computing.
The main purpose of the conducted experiments was to compare both of the designed task
distribution algorithms in the task of building classification models. Our main goal was to compare the
sub-task distribution in both proposed solutions and to compare them with distributed model building
without optimization. We focused on the load of particular nodes and the distribution of particular
sub-tasks to the computing nodes, and measured the time needed to complete the sub-tasks as well as
the whole job. A specific set of experiments was conducted to examine how the task balancing
algorithms dealt with a heterogeneous environment. The experiments were performed using two
standard datasets in the text classification area: the Reuters-21578 dataset (ModApte split) and a subset
of the MEDLINE corpus. Both datasets represent traditional standard data collections frequently used
for benchmarking text classification algorithms, and both can be used to demonstrate the proof of
concept of the presented approach. Figures 3 and 4 show the structure of the datasets in terms of
category frequency distribution, giving an overview of the distribution of sub-task complexity.
**Figure 3. Reuters dataset: category frequency distribution.**

**Figure 4. MEDLINE dataset: category frequency distribution.**
The first round of experiments was aimed at comparing different approaches to task distribution
when building a multi-label tree-based classifier in a homogeneous distributed environment.
We compared the proposed solutions with the distributed multi-label algorithm with no optimization
and with a very simple document frequency-based criterion. We also compared the completion time
of the model building. The experiments were conducted on a grid consisting of 2, 4, 6, 8, and 10
computational nodes, each of about the same configuration. Figures 5 and 6 give the experimental
results on both datasets and show that the proposed optimization methods may reduce the overall
model construction time, even in a balanced infrastructure with a homogeneous environment and
equally powerful computational nodes. In both experiments, the addition of more than 10 nodes did
not bring any benefit to the task completion. This was mainly caused by the structure of the datasets,
as the minimum task completion time was represented by the completion time of the most complex
sub-task. Each of the implemented optimization methods came close to that limitation.
**Figure 5. Reuters dataset, homogeneous environment.**
The most significant improvements were noticed mostly in environments with fewer
computational nodes. The performance of particular computational nodes was also evaluated.
**Figure 6. MEDLINE dataset, homogeneous environment.**
Our main intention was to investigate how the distribution was performed and how the sub-tasks
were assigned to the computational nodes. Figures 7 and 8 give the performance of the nodes on both
datasets in the homogeneous environment, summarize the completion times of the sub-tasks, and
show how the nodes were utilized during the overall process of model building.
**Figure 7. Reuters data, four nodes, homogeneous environment.**
**Figure 8. MEDLINE data, four nodes, homogeneous environment.**
The second round of experiments was conducted in a heterogeneous environment.
The computational power of the nodes was different, as we altered two nodes in a four-node
experiment. The configuration of Node 1 was significantly improved (more CPUs and RAM). We also
simulated Node 3 loaded with other applications or processes, so the available CPU and RAM
parameters were significantly lower. We conducted a set of experiments on the configuration of four
nodes on both datasets, and similarly to the previous experiment, we focused on the performance
of the nodes and the completion time of the assigned sub-tasks. The results of both solutions on the
Reuters dataset are given in Figure 9. Figure 10 shows the same experimental results obtained on the
MEDLINE data.
**Figure 9. Reuters data, four nodes, heterogeneous environment.**
**Figure 10. MEDLINE data, four nodes, heterogeneous environment.**
**6. Discussion**
Experiments performed on the selected datasets proved the usability of the designed solutions for
the optimization of sub-task distribution based on the actual state of the used computing infrastructure
and task-related data. Various approaches for distributed machine learning algorithms are available
(see Section 2), but most of them are based on the data-based distribution paradigm. Most
large-scale distributed models (e.g., those based on the MapReduce paradigm) divide the training data
into sub-sets to train the partial models in a distributed fashion. In some cases, this decomposition can
lead to an unbalanced distribution of workload. This can be specific to the text processing domain,
as textual documents usually differ in size and contain a variable number of lexical units.
Another factor could also be the fact that most text classification tasks are multi-label problems. This
can also be a factor when decomposing the model building just by splitting the data. From this
perspective, multi-label text classification serves as a good example to evaluate optimization methods
based on task and environment data.

Task-related data used for optimization are strictly tied to the solved problem and the underlying
processed data. In our case, we selected factors that greatly influence the computing resource
requirements in the model building phase. Dataset characteristics can be obtained directly from the
data, and these factors could also be considered in different text-processing problems (e.g., clustering).
Similar factors could be identified for standard structured data-mining problems (instead of the
number of terms, the number of attributes could be used, as well as their respective statistics). In our
approach we used the category occurrence frequency, which is specific to multi-label classification
problems. To apply the optimization methods in different tasks, those must be replaced by other
particular task-specific factors. On the other hand, the infrastructure-related data used to optimize
the distribution process are not specific for the solved problem. Those are rather dependent on the
underlying technology and the infrastructure used for the model building. In our case, we used the
GridGain framework deployed on standard laboratory machines, which enabled us to directly obtain
the needed data. Many other distributed computing frameworks have similar capabilities, and most of
the data could be obtained directly from the OS deployed on the machines.
In order to apply the optimization approach to a wider range of text-processing tasks and models,
semantic technologies could be leveraged. For this purpose, a semantic model could be developed
that addresses the necessary concepts related to task assignment: infrastructure description,
data description, and model description. The semantic model could then be used to choose the right
task-assignment strategy for a particular distributed model according to the specific conditions of
the underlying distributed architecture and processed data.
The experiments were performed on selected standard datasets that are frequently used in the
text classification domain. Our main objective was not to focus on the performance of the particular
classifiers themselves, but rather to compare how the presented optimization methods could enhance
the existing distributed classifiers with no task or environment optimization implemented. From
this perspective, the conducted experiments could serve as a proof of concept that the application of
optimization solutions to other distributed classification model implementations could bring similar
benefits to their performance.
**7. Conclusions**
In this paper, we presented a comparative study of optimization methods used for task allocation
and the improvement of distribution applied in the domain of multi-label text classification. Our main
objective was to prove that the integration of the data characterizing the task complexity and
computational resources can enhance the effectiveness of building distributed models and can optimize
resource utilization. The developed and implemented methods were experimentally evaluated on
standard text corpora, and their effectiveness was proved, especially when deployed on small-sized
distributed infrastructures. The overall task completion time was significantly lower when compared
with sequential solutions. It also performed well when compared with distributed model building
with no optimization. The proposed approach is suitable for multi-label classifiers, as the task-related
data are specific for that type of problem. To use the proposed methods in a wider range of text-mining
tasks, a more general method of task-related data description has to be utilized. One of the possible
approaches is to use semantic technologies, which could enable the construction of more generalized
models applicable to tasks other than classification.
**Author Contributions: Algorithm design, M.S. and M.O.; implementation, M.O.; experiments and validation,**
M.S. and M.O.; writing—original draft preparation, M.S.
**Funding: This work was supported by Slovak Research and Development Agency under the contract No.**
APVV-16-0213 and by the VEGA project under grant No. 1/0493/16.
**Conflicts of Interest: The authors declare no conflict of interest.**
**References**
1. Feldman, R.; Dagan, I. Knowledge Discovery in Textual Databases (KDT). In Proceedings of the
First International Conference on Knowledge Discovery and Data Mining, Montreal, QC, Canada, 20–21
August 1995; pp. 112–117.
2. Shearer, C.; Watson, H.J.; Grecich, D.G.; Moss, L.; Adelman, S.; Hammer, K.; Herdlein, S. The CRISP-DM
model: The New Blueprint for Data Mining. J. Data Wareh. 2000, 5, 13–22.
3. Shafique, U.; Qaiser, H. A Comparative Study of Data Mining Process Models (KDD, CRISP-DM and
SEMMA). Innov. Space Sci. Res. 2014, 12, 217–222.
4. Tsoumakas, G.; Katakis, I. Multi-Label Classification: An Overview. Int. J. Data Wareh. Min. 2007, 3,
[1–13. [CrossRef]](http://dx.doi.org/10.4018/jdwm.2007070101)
5. Weinman, J.J.; Lidaka, A.; Aggarwal, S. Large-scale machine learning. In GPU Computing Gems Emerald
_Edition; Elsevier: Amsterdam, The Netherlands, 2011; pp. 277–291. ISBN 9780123849885._
6. Caragea, D.; Silvescu, A.; Honavar, V. A Framework for Learning from Distributed Data Using Sufficient
[Statistics and its Application to Learning Decision Trees. Int. J. Hybrid Intell. Syst. 2004, 1, 80–89. [CrossRef]](http://dx.doi.org/10.3233/HIS-2004-11-210)
[[PubMed]](http://www.ncbi.nlm.nih.gov/pubmed/20351798)
7. Haldankar, A.; Bhowmick, K. A MapReduce based approach for classification. In Proceedings of the 2016
Online International Conference on Green Engineering and Technologies (IC-GET), Coimbatore, India,
19 November 2016; pp. 1–5.
8. Shanahan, J.; Dai, L. Large Scale Distributed Data Science from scratch using Apache Spark 2.0.
In Proceedings of the 26th International Conference on World Wide Web Companion—WWW ’17 Companion,
Perth, Australia, 3–7 April 2017.
9. Panda, B.; Herbach, J.S.; Basu, S.; Bayardo, R.J. PLANET: Massively Parallel Learning of Tree Ensembles
[with MapReduce. Learning 2009, 2, 1426–1437. [CrossRef]](http://dx.doi.org/10.14778/1687553.1687569)
10. Semberecki, P.; Maciejewski, H. Distributed Classification of Text Documents on Apache Spark Platform.
In Artificial Intelligence and Soft Computing; Rutkowski, L., Korytkowski, M., Scherer, R., Tadeusiewicz, R.,
Zadeh, L.A., Zurada, J.M., Eds.; Springer International Publishing: Cham, Switzerland, 2016; Volume 9692,
pp. 621–630. ISBN 978-3-319-39377-3.
11. Caragea, D.; Silvescu, A.; Honavar, V. Decision Tree Induction from Distributed Heterogeneous Autonomous
Data Sources. In Intelligent Systems Design and Applications; Abraham, A., Franke, K., Köppen, M., Eds.;
Springer: Berlin/Heidelberg, Germany, 2003; pp. 341–350. ISBN 978-3-540-40426-2.
12. Babbar, R.; Shoelkopf, B. DiSMEC—Distributed Sparse Machines for Extreme Multi-label Classification.
In Proceedings of the Tenth ACM International Conference on Web Search and Data Mining-WSDM ’17,
Cambridge, UK, 6–10 February 2017; pp. 721–729, ISBN 978-1-4503-4675-7.
13. Babbar, R.; Schölkopf, B. Adversarial Extreme Multi-label Classification. arXiv 2018, arXiv:1803.01570.
14. Zhang, W.; Yan, J.; Wang, X.; Zha, H. Deep Extreme Multi-label Learning. In Proceedings of the 2018 ACM on
International Conference on Multimedia Retrieval-ICMR ‘18, Yokohama, Japan, 11–14 June 2018; pp. 100–107,
ISBN 978-1-4503-5046-4.
15. Belyy, A.; Sholokhov, A. MEMOIR: Multi-class Extreme Classification with Inexact Margin. arXiv 2018,
arXiv:1811.09863.
16. Sun, X.; Xu, J.; Jiang, C.; Feng, J.; Chen, S.-S.; He, F. Extreme Learning Machine for Multi-Label Classification.
_[Entropy 2016, 18, 225. [CrossRef]](http://dx.doi.org/10.3390/e18060225)_
17. Sarnovský, M.; Butka, P.; Bednár, P.; Babič, F.; Paralič, J. Analytical platform based on Jbowl library
providing text-mining services in distributed environment. In Information and Communication
Technology—EurAsia Conference; Lecture Notes in Computer Science (including subseries Lecture Notes
in Artificial Intelligence and Lecture Notes in Bioinformatics); Springer: Cham, Switzerland, 2015;
pp. 310–319.
18. Gualtieri, M. The Forrester Wave™: In-Memory Data Grids, Q3 2015. Available online: [https://www.](https://www.forrester.com/report/The+Forrester+Wave+InMemory+Data+Grids+Q3+2015/-/E-RES120420)
[forrester.com/report/The+Forrester+Wave+InMemory+Data+Grids+Q3+2015/-/E-RES120420 (accessed](https://www.forrester.com/report/The+Forrester+Wave+InMemory+Data+Grids+Q3+2015/-/E-RES120420)
on 2 January 2019).
19. Zhang, C.; Li, F.; Jestes, J. Efficient parallel kNN joins for large data in MapReduce. In Proceedings of the
15th International Conference on Extending Database Technology—EDBT '12, Berlin, Germany,
26–30 March 2012; p. 38.
20. Sarnovsky, M.; Ulbrik, Z. Cloud-based clustering of text documents using the GHSOM algorithm on
the GridGain platform. In Proceedings of the SACI 2013-8th IEEE International Symposium on Applied
Computational Intelligence and Informatics, Timisoara, Romania, 23–25 May 2013.
21. Anchalia, P.P.; Koundinya, A.K.; Srinath, N.K. MapReduce Design of K-Means Clustering Algorithm.
In Proceedings of the 2013 International Conference on Information Science and Applications (ICISA),
Pattaya, Thailand, 24–26 June 2013; pp. 1–5.
22. Zhao, W.; Ma, H.; He, Q. Parallel K-means clustering based on MapReduce. In Lecture Notes in Computer
Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics);
Springer: Berlin/Heidelberg, Germany, 2009.
23. Amado, N.; Silva, O. Exploiting Parallelism in Decision Tree Induction. In Proceedings of the Workshop on
Parallel and Distributed Computing for Machine Learning, held in conjunction with the 14th European
Conference on Machine Learning (ECML'03) and the 7th European Conference on Principles and Practice
of Knowledge Discovery in Databases (PKDD'03), Dublin, Ireland, September 2003.
24. Kianpisheh, S.; Charkari, N.M.; Kargahi, M. Reliability-driven scheduling of time/cost-constrained grid
[workflows. Futur. Gener. Comput. Syst. 2016, 55, 1–16. [CrossRef]](http://dx.doi.org/10.1016/j.future.2015.07.014)
25. Liu, H.; Zhang, P.; Hu, B.; Moore, P. A novel approach to task assignment in a cooperative multi-agent design
[system. Appl. Intell. 2015, 43, 162–175. [CrossRef]](http://dx.doi.org/10.1007/s10489-014-0640-z)
26. Gruzlikov, A.M.; Kolesov, N.V.; Skorodumov, Y.M.; Tolmacheva, M.V. Graph approach to job assignment in
[distributed real-time systems. J. Comput. Syst. Sci. Int. 2014, 53, 702–712. [CrossRef]](http://dx.doi.org/10.1134/S106423071404008X)
27. Ramírez-Velarde, R.; Tchernykh, A.; Barba-Jimenez, C.; Hirales-Carbajal, A.; Nolazco-Flores, J. Adaptive
[Resource Allocation with Job Runtime Uncertainty. J. Grid Comput. 2017, 15, 415–434. [CrossRef]](http://dx.doi.org/10.1007/s10723-017-9410-6)
28. Zhang, X.; Wu, Y.; Zhao, C. MrHeter: Improving MapReduce performance in heterogeneous environments.
_[Clust. Comput. 2016, 19, 1691–1701. [CrossRef]](http://dx.doi.org/10.1007/s10586-016-0625-2)_
29. Younes Hamed, A. Task Allocation for Minimizing Cost of Distributed Computing Systems Using Genetic
[Algorithms. Available online: https://www.semanticscholar.org/paper/Task-Allocation-for-Minimizing-](https://www.semanticscholar.org/paper/Task-Allocation-for-Minimizing-Cost-of-Distributed-Hamed/1dc02df36cbd55539369def9d2eed47a90c346c4)
[Cost-of-Distributed-Hamed/1dc02df36cbd55539369def9d2eed47a90c346c4 (accessed on 2 January 2019).](https://www.semanticscholar.org/paper/Task-Allocation-for-Minimizing-Cost-of-Distributed-Hamed/1dc02df36cbd55539369def9d2eed47a90c346c4)
30. Çela, E. Assignment Problems. Handb. Appl. Optim. Part II Appl. 2002, 6, 667–678.
31. Winston, W.L. Transportation, Assignment, and Transshipment Problems. Oper. Res. Appl. Algorithm. 2003,
_41, 1–82._
32. Kawajir, Y.; Wächter, A. Introduction to IPOPT: A Tutorial for Downloading, Installing, and Using IPOPT.
[Available online: https://www.coin-or.org/Ipopt/documentation/ (accessed on 2 January 2019).](https://www.coin-or.org/Ipopt/documentation/)
33. Sarnovsky, M.; Kacur, T. Cloud-based classification of text documents using the Gridgain platform.
In Proceedings of the SACI 2012-7th IEEE International Symposium on Applied Computational Intelligence
and Informatics, Timisoara, Romania, 24–26 May 2012.
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access
article distributed under the terms and conditions of the Creative Commons Attribution
[(CC BY) license (http://creativecommons.org/licenses/by/4.0/).](http://creativecommons.org/licenses/by/4.0/.)
Explosion in Virtual Assets (Cryptocurrencies)
Revista Mexicana de Economía y Finanzas
Abstract: The objective of this research is to analyze the presence of financial bubbles, or explosive behavior, in four cryptocurrencies: Ethereum, Ripple, Bitcoin Cash, and EOS. The selection of the assets was based on market capitalization. The implemented methodology was a simple and a generalized test (SADF and GSADF), a variation of the augmented Dickey-Fuller test proposed by Phillips et al. (2011, 2015). We found ten, seven, six, and seven episodes of exuberant behavior in the aforementioned assets, respectively. This methodology has remained largely unexplored and could be employed as a standard in the financial sector for any other asset. This is the first research to detect this type of behavior for a group of cryptocurrencies at daily frequency. With the present work and the article by Li et al. (2018), 68.47% of the market has been analyzed under this methodology. Consequently, this behavior could be dispersed throughout the whole sector.
Revista Mexicana de Economía y Finanzas Nueva Época
Volume 14, Number 4, October–December 2019, pp. 715–727
DOI: https://doi.org/10.21919/remef.v14i4.374
# **Explosion in Virtual Assets (Cryptocurrencies)**
**Daniel Cerecedo Hernández** [1]
Tecnológico de Monterrey, México
**Carlos Armando Franco-Ruiz**
Tecnológico de Monterrey, México
**Mario Iván Contreras-Valdez**
Tecnológico de Monterrey, México
**Jovan Axel Franco-Ruiz**
Tecnológico de Monterrey, México
*(Received: 18 December 2019; accepted: 2 April 2019)*
# **Explosión en Activos Virtuales (Criptomonedas)**
1 EGADE Business School, Tecnológico de Monterrey. E-mail: [email protected]. Address: Calle del
Puente 222, Tlalpan, Ejidos de Huipulco, 14380 Ciudad de México, CDMX. Phone: 01 55 5483 2020
# **1. Introduction**
The advent of the so-called technology era has brought diverse developments in an enormous
range of areas. One of those is the emergence of a new digital object that does not entirely fit the
conventional definitions. This entity has been studied under many considerations, leading to the
critical question: what are virtual assets? As in all areas of knowledge, one cannot comprehend a
subject without first understanding the central concepts that contextualize the main theme. For this
reason, definitions of cryptocurrencies, the blockchain, and digital currency are essential.
Therefore, digital currency is understood as a means of payment that is only available
in a digital manner; however, it has the classic fundamental characteristics of fiat money
that bases its value on the trust of an entity and has no endorsement of any physical
good. Yao (2018) mentioned that “by the nature”, digital currency “is still central bank’s
liability against the public with its value supported by sovereign credit, which gives it
two unconquerable advantages over private digital currencies.” Primary, he stated that
it could perform successfully all the fundamentals of money and second, it allows the
creation of credit and plays a big role in the impact of the economy.
On the other hand, cryptocurrencies are assets or means of payment created through "an electronic payment system based on cryptographic proof instead of trust, allowing any two willing parties to transact directly with each other without the need for a trusted third party" (Nakamoto, 2009). While, by definition, cryptocurrencies are not backed by a central bank or other authority, Zimmer (2017) argues that their development resulted from the merger of two elements within a globalized economy: the computational unit and money itself, where the element that gives value to this technology is scarcity. We are thus witnessing technological changes that create economic competition, decentralize markets, and give power to individuals. In the words of Mikolajewicz-Woźniak & Scheibe (2015), drawing on Kotler, Kartajaya & Setiawa (2010), the "forthcoming new era is called the cooperation era - where people not only receive the message but also co-create it."
Likewise, blockchain is defined as an open, distributed ledger technology that can record transactions between two agents efficiently and in a verifiable and permanent way. Despite the expectations of growth for this type of technology, we believe it should be taken as an opportunity to lay new foundations for the social and economic system, and not merely perceived as a disruptive technology that completely changes the world. In other words, the potential of the blockchain is evident in any field; nevertheless, gradual adoption will be needed, as with any other technological change. McPhee & Ljutic (2017) state that "blockchain adds a totally new dimension: the exchange of value between potential strangers in the absence of trusted relationships. Replacing the dependency on trust with cryptography means that most verification, identification, authentication, and similar forms of assurance, accreditation, certification, and legalization of identity, origin, competence, or authority of persons or assets can now be guaranteed by mathematics. And once trust is replaced by reliable cryptography, there can be disintermediation of all the layers of middlemen."
Having considered the ambiguous and abstract definitions of these terms, it is necessary to state the problems of categorizing them under common asset classifications. Bitcoin is properly defined as a cryptocurrency; nevertheless, this concept may encompass a variety of characteristics, some shared with currencies, commodities, speculative assets, trade mechanisms, etc. In particular, this paper examines the presence of financial bubbles in cryptocurrencies to date [2], given their high volatility and speculative behavior.
2 This study was updated in April 2018.
The lack of clear and absolute criteria for categorizing this digital object may itself be considered a reason to theorize about the existence of bubbles. Angel & McCabe (2015) raise the possibility that cryptocurrencies may be used as a substitute for credit and debit payment systems; however, a payment system relies on trust in an institution to cover the debt, and Bitcoin is not backed by anything. In this sense, even when it can be used as a transaction facilitator, the barter problem may arise if the counterpart does not recognize it as valuable. Along these lines, Fry & Cheah (2016) point out that the condition of cryptocurrencies depends on the realization of self-fulfilling expectations.
In the legal respect, Bitcoin and all virtual money are considered commodities by the Commodity Futures Trading Commission (CFTC) (Kawa, 2015). In this sense, the so-called mining of cryptocurrencies is seen as a production cost, equivalent to obtaining precious metals or extracting crude oil. However, the Cornell Law School, under the U.S. Code, General Provisions, Chapter 1, § 1a – Definitions (9), states that commodities are material goods as well as services, rights, and interests. Under this definition, cryptocurrencies may be considered a right; nevertheless, as stated earlier, there is no institution backing or regulating payments made by this means.
From the economic point of view, cryptocurrencies share some qualities with currencies. As exposed by Frisby (2014), Bitcoin presents relatively low transaction costs, as well as convertibility to diverse currencies all around the world. In line with the immateriality that characterizes virtual money, fiduciary money does not depend on commodities to determine its value; instead, it relies on consumers' trust to use it as an exchange mechanism. This property is accompanied by the use of the laws of supply and demand to explain movements in price relative to other assets. Although Bitcoin may cover these points, it lacks the control and regulation of a central bank or any other financial institution. The problem also extends to transaction efficiency, as the prices of goods and services in the real economy are not measured in any cryptocurrency; so, in the last instance, it may be considered a mere asset convertible to currencies.
Considering this, Yermack (2015) studies the behavior of Bitcoin with respect to the U.S. dollar, concluding that cryptocurrencies lack the store of value required to qualify as a currency. Because of its high volatility, he describes this virtual object as a speculative asset. Taking this into consideration, Cheah & Fry (2015) develop the hypothesis of possible bubbles in Bitcoin markets, as the price is linked with sentiment as well as with news-related price peaks. Thus, a quantitative and empirical analysis of the possible existence of financial bubbles is applied here, using the generalized sup augmented Dickey-Fuller test, to four cryptocurrencies selected by their market capitalization. We exclude Bitcoin from the analysis because Li et al. (2018) have already done this work. In this way, we select for our analysis another 30.59% of the market share of these assets. The structure of this work is as follows. Section 2 presents a brief theoretical framework. Section 3 contains the methodology and data description. Section 4 displays the results, and Section 5 presents the conclusions.
# **2. Theoretical framework**
The theory of bubbles has been studied extensively since the 2008 economic and financial crisis. Properly defined as a deviation of the price from its fundamental value (Campbell, Lo & MacKinlay, 1997), bubbles have the potential to extend to different markets and even affect economic activity. Because of this, detection of these disturbances becomes crucial for regulatory authorities as well as investors. The problem is stated by Greenspan (2002), who mentions that bubbles are only detected once they have collapsed, since there is no way to determine whether a rise in price is due to a fundamental reason or mere speculation. The reasons behind bubbles are many and may appear in different forms; in particular, Brunnermeier & Oehmke (2012) mention that a technological change in the form of an innovation can lead to the creation of imbalances, ultimately creating the conditions for bubbles. On the other hand, Caginalp, Porter & Smith (2001) state that access to information, data analysis, and media have done nothing to prevent them from happening.
Relative to Bitcoin, it was during the 2008 financial crisis that Nakamoto proposed this virtual currency as an alternative to conventional ones. According to Bouri, Gupta, Tiwari & Roubaud (2017), it was because of the loss of trust in financial institutions that cryptocurrencies were sought as an alternative to conventional assets, a perspective that persisted in the following years. Although the price of Bitcoin has been increasing since then, high volatility has been a main characteristic of the asset. Kubát (2015) compares its deviation with that of different financial assets, including currencies, indexes, and commodities; his results provide evidence of the turbulent behavior of the virtual currency. To study this phenomenon more deeply, Bouoiyour & Selmi (2015) propose a GARCH analysis of the price of Bitcoin relative to the U.S. dollar; their results confirm its excessive volatility, as well as a larger impact of bad news in comparison to positive shocks. In this case, it is possible to identify some of the properties of bubbles in cryptocurrencies.
Harvey et al. (2016) pointed out that the methodology implemented by Phillips et al. (2011) may produce spurious findings of explosive behavior when there are permanent changes in the volatility of the innovation process of the right-tailed recursive Dickey-Fuller-type unit root test. Given this circumstance, they propose incorporating a bootstrap test when non-stationary volatility is present. In their study, they use the Nasdaq stock price index during the 1990s, and their results identify explosive behavior in 1995, a characteristic of financial bubbles.
In subsequent work, Phillips et al. (2015) extend the augmented Dickey-Fuller methodology in order to identify multiple bubbles. For this purpose, they use a long sample of the S&P 500, from 1871 to 2010, in which the historically recognized bubble episodes are properly detected.
The literature on virtual assets has grown due to their applications as a means of payment and as an investment asset. Thum (2018) points out that the unusual growth and immediate drop in the price of Bitcoin generate great uncertainty and dispute over whether this behavior could be due to speculative bubbles in cryptocurrencies.
Grinberg (2011) draws a parallel between trust and its relationship with irrational bubbles in cryptocurrencies. He examines how unexpected changes, such as an outright government prohibition, increased competition from alternative currencies, a deflationary spiral, privacy problems, and loss or theft of money, could affect this relationship and therefore become determining factors for cryptocurrency demand.
At present, the pursuit of cryptocurrency holdings encompasses the search for expected future profits. But, to some extent, the existence of speculation is not exclusive to cryptocurrencies; it also occurs in the foreign exchange market, where it is not necessarily related to an expectation of gain. On this point, Godsiff (2015) compares the volatility of the Bitcoin price with the speculative euphoria of the tulip crisis, in which the futures market drove a rapid increase in prices followed by an immediate collapse. He also notes that there is evidence linking the volatility of the price of this cryptocurrency with Google searches, and points out that the bubbles in Bitcoin have been socially created and that the levels of activity in this economic phenomenon can develop markets and even increase public awareness.
On the other hand, Cheah & Fry (2015) reveal the empirical existence of a financial bubble in Bitcoin through a method originating in physics, and determine that this cryptocurrency contains a speculative element and that, in addition, its fundamental value is zero.
Li et al. (2018) applied the generalized sup augmented Dickey-Fuller test set forth by Phillips et al. (2015) to Bitcoin prices with respect to the USD and the renminbi (RMB). They mention that prices in China and the United States differ, and that it is therefore important to take this discrepancy into account. These authors find six bubbles for Bitcoin/RMB and only five for Bitcoin/USD. Additionally, they point out that Bitcoin is susceptible to exogenous shocks: this cryptocurrency is affected to a greater extent by international economic events, which cause long-term bubbles, and by local economic decisions, which cause short-term bubbles.
# **3. Methodology and data description**
This methodology is based on the work of Phillips et al. (2011, 2015), and we incorporate the observations made on these articles by Harvey et al. (2016). Phillips and Yu (2011) propose the supremum of recursively determined ADF t-statistics with the aim of improving standard unit root tests. The sup ADF test (SADF) therefore uses a forward-expanding sample sequence based on repeated ADF estimation, and the test statistic is the sup value of the resulting ADF sequence. This model is suited to detecting a single bubble in the period analyzed. For the case of two or more bubbles, the GSADF test is recommended; it has better accuracy in this scenario because it considers a greater number of subsamples and allows greater flexibility in the windows used to define the samples of the model. The program executed for this investigation is an EViews add-in called Rtadf (right-tail augmented Dickey-Fuller) developed by Itamar Caspi (2017).
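For readers without EViews, the recursive structure of the two statistics can be sketched in a few lines of Python. This is an illustrative sketch only, not the Rtadf implementation: the array `y`, the minimal window fraction `r0`, and the fixed lag are our assumptions, and critical values would still have to be obtained by Monte Carlo simulation.

```python
# Minimal sketch of the SADF and GSADF statistics of Phillips et al.
# (2011, 2015). `y` is a numpy array of (log) prices; `r0` is the minimal
# window fraction. O(n^2) ADF calls, so only suitable for short samples.
import numpy as np
from statsmodels.tsa.stattools import adfuller

def adf_stat(window):
    # Right-tailed ADF t-statistic with a constant; fixed lag of 1 (assumption).
    return adfuller(window, maxlag=1, regression="c", autolag=None)[0]

def sadf(y, r0=0.1):
    n = len(y)
    w0 = int(np.floor(r0 * n))
    # Forward-expanding windows anchored at the first observation.
    return max(adf_stat(y[:end]) for end in range(w0, n + 1))

def gsadf(y, r0=0.1):
    n = len(y)
    w0 = int(np.floor(r0 * n))
    # Additionally vary the starting point of each window.
    return max(adf_stat(y[start:end])
               for end in range(w0, n + 1)
               for start in range(0, end - w0 + 1))
```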
The cryptocurrency information was obtained from coinmarketcap [3] on April 22nd, 2018, with a total market capitalization of $400,337,634,585 [4] USD. With respect to this total, 30.59% of the market share will be taken into consideration, which involves the second, third, fourth, and fifth cryptocurrencies [5]. As mentioned before, Bitcoin is excluded from this analysis; but if we consider the study of Li et al. (2018) together with our analysis of the remaining assets among the five largest, the methodology covers 68.47% of the market capitalization of all cryptocurrencies as of the date of the study. The price used is the daily close of the samples described below [6]: Ethereum from 07/08/2015 to 22/04/2018; Ripple from 04/08/2013 to 22/04/2018; Bitcoin Cash from 23/07/2017 to 22/04/2018; and EOS from 01/07/2017 to 22/04/2018 [7].
# **4. Results**
We applied the SADF and GSADF methodologies with 10,000 and 2,000 replications, respectively. The results obtained are shown in Tables 2, 3, 4 and 5. With this information, we can establish the presence of explosive behavior in these four cryptocurrencies. In all cases, we applied the SADF methodology (*Figures 1, 3, 5 and 7*) in order to obtain evidence of at least one exuberant episode in these financial series, and the null hypothesis was rejected. Therefore, we have evidence of at least one explosive episode in Ethereum at the 1% significance level; in Ripple at the 1% level; in Bitcoin Cash at the 10% level; and in EOS at the 5% level.
3 `https://coinmarketcap.com`
4 Last updated: April 23rd, 2018 3:54 PM UTC
5 From a total of 1,583 - Last updated: April 23, 2018, 4:50 PM UTC
6 Crypto-currencies are in order of Market Capitalization
7 Dates format (dd/mm/yyyy)
Given these results, and in order to identify the periods of multiple bubbles in which prices began explosive growth and the assets came to be used beyond a specific guild of technology and finance experts, we selected a subsample (for Ethereum and Ripple) and the original sample (for Bitcoin Cash and EOS) of the series starting in 2017 and implemented the GSADF multiple-bubble methodology (*Figures 2, 4, 6 and 8*). For the four assets studied, the null hypothesis was rejected at the 1% significance level. Consequently, we can perceive the presence of multiple bubbles, that is, explosive behavior in prices, in this subperiod (2017-2018).
Graphically, these results can be examined in Figures 1 through 8, where the presence of a bubble is identified when the forward ADF sequence (blue line) is above the critical value sequence (red line) at the 95% confidence level. The completion or collapse of a bubble occurs when the forward ADF sequence (blue line) falls back below the critical value sequence (red line).
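This graphical identification rule is mechanical enough to sketch in code. Assuming two aligned pandas Series, `badf` for the recursive ADF statistic sequence and `cv95` for the 95% critical value sequence (both names are ours), bubble episodes are the maximal runs where the former exceeds the latter:

```python
# Date-stamping sketch: an episode starts when the statistic crosses above
# the critical value sequence and ends when it falls back below it.
def bubble_episodes(badf, cv95):
    episodes, start = [], None
    for date, above in (badf > cv95).items():
        if above and start is None:
            start = date                      # bubble origination
        elif not above and start is not None:
            episodes.append((start, date))    # bubble collapse
            start = None
    if start is not None:                     # still explosive at sample end
        episodes.append((start, badf.index[-1]))
    return episodes
```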
Thereupon, as the GSADF (*Figures 2, 4, 6 and 8*) outperforms the SADF (*Figures 1, 3, 5 and 7*), Table 1 reports the bubbles resulting from the former test. For Ethereum we found ten exuberant episodes, of which bubbles 3 and 6 lasted more than two months. For Ripple we located seven bubbles, of which the third and seventh lasted more than one month. For Bitcoin Cash we found six bubbles, of which the third and sixth are the longest. Finally, for EOS we found seven bubbles, of which the first and third lasted more than one month. These longest periods are shaded in Table 1.
As mentioned above, the purpose of analyzing the period 2017-2018 was to observe a more recent period in which cryptocurrencies began to boom in terms of public awareness. The presence of bubbles in the four cryptocurrencies analyzed (Ethereum, Ripple, Bitcoin Cash and EOS) coincides with the results reported by Li et al. (2018) for the quarters of 2017 studied for Bitcoin. Hence, we can infer that the existence of bubbles is not confined to a single asset like Bitcoin, but could rather be present in the entire cryptocurrency sector.
Table 1. Number of bubbles in cryptocurrencies implementing the GSADF test

|Number of Bubbles|Ethereum|Ripple|Bitcoin Cash|EOS|
|---|---|---|---|---|
|1|14/02/17 - 08/03/17|17/02/17 - 04/03/17|23/09/17 - 27/09/17|28/08/17 - 29/10/17|
|2|10/03/17 - 18/03/17|18/03/17 - 14/04/17|29/09/17 - 10/10/17|31/10/17 - 04/11/17|
|3|19/03/17 - 14/07/17|28/04/17 - 06/07/17|29/10/17 - 13/11/17|08/11/17 - 10/12/17|
|4|17/07/17 - 30/07/17|10/07/17 - 02/08/17|17/11/17 - 03/12/17|13/12/17 - 22/12/17|
|5|08/08/17 - 12/09/17|09/08/17 - 21/08/17|18/12/17 - 22/12/17|25/12/17 - 28/12/17|
|6|17/09/17 - 28/11/17|06/10/17 - 16/10/17|23/12/17 - 21/01/18|03/01/18 - 08/01/18|
|7|05/01/18 - 11/01/18|27/11/17 - 08/01/18| |11/01/18 - 15/01/18|
|8|12/01/18 - 04/02/18| | | |
|9|08/02/18 - 08/03/18| | | |
|10|18/03/18 - 05/04/18| | | |
**Figure 1.** SADF test of the price of Ethereum
**Figure 2.** GSADF test of the price of Ethereum
**Figure 3.** SADF test of the price of Ripple
**Figure 4.** GSADF test of the price of Ripple
**Figure 5.** SADF test of the price of Bitcoin Cash
**Figure 6.** GSADF test of the price of Bitcoin Cash
**Figure 7.** SADF test of the price of EOS
**Figure 8.** GSADF test of the price of EOS
**Table 2.** The SADF and GSADF tests result in Ethereum
|Ethereum Price|SADF|GSADF|
|---|---|---|
|Test statistic|14.49784***|8.163710***|
|Critical values|||
|99 % level|13.13328|5.571988|
|95 % level|10.47793|5.571988|
|90 % level|9.170676|5.571988|
*** Significance at the 1 % level.
** Significance at the 5 % level.
* Significance at the 10 % level.
**Table 3.** The SADF and GSADF tests result in Ripple
|Ripple Price|SADF|GSADF|
|---|---|---|
|Test statistic|24.62992***|9.459127***|
|Critical values|||
|99 % level|21.68289|3.504366|
|95 % level|17.35930|3.504366|
|90 % level|15.07182|3.504366|
*** Significance at the 1 % level.
** Significance at the 5 % level.
* Significance at the 10 % level.
**Table 4.** The SADF and GSADF tests result in Bitcoin Cash
|Bitcoin Cash Price|SADF|GSADF|
|---|---|---|
|Test statistic|4.379902*|5.345659***|
|Critical values|||
|99 % level|7.104894|3.025993|
|95 % level|4.886610|3.025993|
|90 % level|3.835460|3.025993|
*** Significance at the 1 % level.
** Significance at the 5 % level.
* Significance at the 10 % level.
**Table 5.** The SADF and GSADF tests result in EOS
|EOS Price|SADF|GSADF|
|---|---|---|
|Test statistic|5.490921**|7.239904***|
|Critical values|||
|99 % level|7.147984|4.993705|
|95 % level|5.336393|4.993705|
|90 % level|4.418388|4.993705|
*** Significance at the 1 % level.
** Significance at the 5 % level.
* Significance at the 10 % level.
# **5. Conclusion**
In conclusion, the presence of multiple bubbles was examined in the four cryptocurrencies, after Bitcoin, with the largest market capitalization. The GSADF test results show that Ethereum presents ten bubbles from January 1st, 2017 to April 22nd, 2018; Ripple, seven bubbles from January 1st, 2017 to April 22nd, 2018; Bitcoin Cash, six bubbles from July 23rd, 2017 to April 22nd, 2018; and EOS, seven bubbles from July 1st, 2017 to April 22nd, 2018. This could mean that a technological change in the financial markets, together with their sudden visibility to the general public, is driving speculative purchases and sales of these virtual assets rather than trading based on their fundamental value. Thus, exuberant behavior could be present across the entire cryptocurrency sector and not exclusively in one asset. It is worth mentioning that the market capitalization of these assets is still too small to represent a systemic financial risk; however, regulatory authorities should remain alert to these explosions and collapses as investments in these virtual assets increase.
# **References**
Angel, James J. & McCabe, Douglas (2015). The Ethics of Payments: Paper, Plastic, or Bitcoin? *Journal of Business Ethics*, 132, 603–611. DOI: 10.1007/s10551-014-2354-x
Bouoiyour, Jamal & Selmi, Refk (2015). Bitcoin Price: Is it Really that New Round of Volatility can be on way? CATT, University of Pau, France; ESC Tunis Business School, Tunisia. Paper No. 65680.
Bouri Elie, Gupta Rangan, Tiwari Aviral Kumar & Roubaud David (2017) Does Bitcoin Hedge Global
Uncertainty? Evidence from Wavelet-based Quantile-in-Quantile Regressions, *Finance Research*
*Letters*, Vol. 23 pp. 87-95
Brunnermeier Markus K., Oehmke Martin (2012) Bubbles, Financial Crises, and Systemic Risk, Working
Paper 18398, National Bureau Of Economic Research 1050 Massachusetts Avenue Cambridge, MA
02138
Caginalp Gunduz, Porter David, & Smith Vernon (2001) Financial Bubbles: Excess Cash, Momentum,
and Incomplete Information, *The Journal of Psychology and Financial Markets* Vol. 2, No. 2, 80–99
Campbell J.Y., Lo A.W., MacKinlay A.C. (1997) The Econometrics of Financial Markets, Princeton
University Press, Princeton, NJ.
Cheah, E.-T., & Fry, J. M. (2015). Speculative bubbles in Bitcoin markets? An empirical investigation into the fundamental value of Bitcoin. *Economics Letters*, 130, 32–36.
Cornell Law School, U.S. Code, General Provisions, Chapter 1, § 1a – Definitions (9). https://www.law.cornell.edu/uscode/text/7/1a
Frisby, D. (2014). Bitcoin: the future of money? London: Unbound.
Fry, John & Cheah, Eng-Tuck (2016). Negative bubbles and shocks in crypto-currency markets. *International Review of Financial Analysis*, 47, 343–352.
Godsiff, P. (2015) Bitcoin: Bubble or Blockchain, in Agent and Multi-Agent Systems: Technologies and
Applications, Springer, pp. 191-203.
Greenspan A. (2002) Economic volatility, At a symposium sponsored by the Federal Reserve Bank of
Kansas City, Jackson Hole, Wyoming, August 30, 2002.
Grinberg, R. (2011). Bitcoin: an innovative alternative digital currency. Hastings Sci. Technol. Law J.
4(1), 160–206.
Harvey, D. I., Leybourne, S. J., Sollis, R., & Taylor, A. M. R. (2016). Tests for explosive financial bubbles
in the presence of non-stationary volatility. *Journal of Empirical Finance*, 38, 548-574.
Caspi, Itamar (2017). Rtadf: Testing for Bubbles with EViews. *Journal of Statistical Software*, 81(1), 1–16. https://doi.org/10.18637/jss.v081.c01
Kawa Luke (2015). Bitcoin Is Officially a Commodity, According to U.S. Regulator. Bloomberg News, 17
September 2015
Kotler, P., Kartajaya, H., & Setiawa, I. (2010) Marketing 3.0, MT Business, Warszawa, pp. 20, 27 - 33.
Kubát Max (2015) Virtual currency Bitcoin in the scope of money definition and store of value, *Procedia*
*Economics and Finance* 30 (2015) 409 – 416.
Li, Z., Tao, R., Su, C., & Lobonţ, O. (2018). Does Bitcoin bubble burst? *Quality & Quantity*, 1–15.
McPhee, C., & Ljutic, A. (2017). Editorial: Blockchain. Technology Innovation Management Review,
7(10), 3-5.
Mikolajewicz-Woźniak, A., & Scheibe, A. (2015). Virtual currency schemes – the future of financial
services. Foresight, 17(4), 365-377.
Nakamoto, S. (2009). Bitcoin: A Peer-to-Peer Electronic Cash System. (White paper).
Phillips, P. C. B., & Yu, J. (2011). Dating the timeline of financial bubbles during the subprime crisis.
*Quantitative Economics*, 2(3), 455-491.
Phillips, P. C. B., Shi, S., & Yu, J. (2015). Testing for multiple bubbles: Historical episodes of exuberance and collapse in the S&P 500. *International Economic Review*, 56(4), 1043–1078.
Phillips, P. C. B., Wu, Y., & Yu, J. (2011). Explosive behavior in the 1990s NASDAQ: When did exuberance escalate asset values? *International Economic Review*, 52(1), 201–226.
Thum, M. (2018). The economic cost of bitcoin mining. CESifo Forum, 19(1), 43-45.
Yao, Q. (2018). A systematic framework to understand central bank digital currency. *Science China*
*Information Sciences*, 61(3), 1-8
Yermack, D. (2015). Chapter 2: Is Bitcoin a Real Currency? An Economic Appraisal. *Handbook of Digital Currency*, 31–43. https://doi.org/10.1016/B978-0-12-802117-0.00002-3
Zimmer, Z. (2017). Bitcoin and potosí silver: Historical perspectives on crypto-currency. Technology and
Culture, 58(2), 307-334.
-----
| 8,357
|
{
"disclaimer": "Notice: The following paper fields have been elided by the publisher: {'references'}. Paper or abstract available at https://api.unpaywall.org/v2/10.21919/remef.v14i4.374?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.21919/remef.v14i4.374, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "other-oa",
"status": "GREEN",
"url": "https://www.redalyc.org/journal/4237/423765104006/423765104006.pdf"
}
| 2,019
|
[] | true
| 2019-09-26T00:00:00
|
[] | 8,357
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Economics",
"source": "external"
},
{
"category": "Economics",
"source": "s2-fos-model"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Business",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/007358e80d276c35515bc6a696fc39f0aaedb59a
|
[
"Computer Science",
"Economics"
] | 0.89283
|
Decentralized Finance, Centralized Ownership? An Iterative Mapping Process to Measure Protocol Token Distribution
|
007358e80d276c35515bc6a696fc39f0aaedb59a
|
Journal of Blockchain Research
|
[
{
"authorId": "2038268968",
"name": "Matthias Nadler"
},
{
"authorId": "2083839088",
"name": "Fabian Schär"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"J Blockchain Res"
],
"alternate_urls": null,
"id": "9707a5b5-0a39-4a1a-aedb-7a178151b1a6",
"issn": "2643-5748",
"name": "Journal of Blockchain Research",
"type": "journal",
"url": null
}
|
In this paper, we analyze various Decentralized Finance (DeFi) protocols in terms of their token distributions. We propose an iterative mapping process that allows us to split aggregate token holdings from custodial and escrow contracts and assign them to their economic beneficiaries. This method accounts for liquidity-, lending-, and staking-pools, as well as token wrappers, and can be used to break down token holdings, even for high nesting levels. We compute individual address balances for several snapshots and analyze intertemporal distribution changes. In addition, we study reallocation and protocol usage data, and propose a proxy for measuring token dependencies and ecosystem integration. The paper offers new insights on DeFi interoperability as well as token ownership distribution and may serve as a foundation for further research.
|
# Decentralized finance, centralized ownership? An iterative mapping process to measure protocol token distribution
## Matthias Nadler and Fabian Schär[∗]
In this paper, we analyze various Decentralized Finance
(DeFi) protocols in terms of their token distributions. We
propose an iterative mapping process that allows us to split
aggregate token holdings from custodial and escrow contracts and assign them to their economic beneficiaries. This
method accounts for liquidity-, lending-, and staking-pools,
as well as token wrappers, and can be used to break down
token holdings, even for high nesting levels. We compute individual address balances for several snapshots and analyze
intertemporal distribution changes. In addition, we study
reallocation and protocol usage data, and propose _wrapping complexity_ as a proxy for measuring token dependencies
and ecosystem integration. The paper offers new insights
on DeFi interoperability as well as token ownership distribution and may serve as a foundation for further research.
AMS 2000 subject classifications: Primary 91-08,
91B84; secondary 91-11, 91G45.
Keywords and phrases: Blockchain governance, Decentralized finance, DeFi, Wrapping complexity, Ethereum, Token economy.
## 1. INTRODUCTION
Decentralized Finance (DeFi) refers to a composable
and trust-minimized protocol stack that is built on public
Blockchain networks and uses smart contracts to create a
large variety of publicly accessible and interoperable financial services. In contrast to traditional financial infrastructure, these services are mostly non-custodial and can mitigate counterparty risk without the need for a centralized
third party. Funds are locked in smart contracts and handled in accordance with predefined rules, as specified by the
contract code. Some examples of DeFi protocols include constant function market makers, lending-platforms, prediction
markets, on-chain investment funds, and synthetic assets,
[11].
Most of these protocols issue corresponding tokens that
represent some form of partial protocol ownership. Although
the exact implementations, the feature sets, and the token
[arXiv: 2012.09306](http://arxiv.org/abs/2012.09306)
_∗Corresponding author._
holder rights vary greatly among these tokens, the reason for
their existence can usually be traced back to two motives:
_Protocol Governance and Protocol Economics._
**Governance:** Tokens may entitle the holder to vote on contract upgrades or parameter changes. A token-based governance system allows for the implementation of new features. Moreover, the protocol can react to exogenous developments, upcoming interface changes, and potential bugs.
**Economics:** Most tokens have some form of implicit or explicit value-capture that allows the token holder to participate economically in the growth of the protocol. Value is usually distributed through a utility and burn mechanism (deflationary pressure) or some form of dividend-like payments. In many cases, initial token sales are used to fund protocol development and continuous release schedules to incentivize protocol usage.
Considering the two main reasons for the existence of
these tokens, it becomes apparent that token distribution
is a critical factor in the protocols’ decentralization efforts.
Heavily centralized token allocations may result in situations where a small set of super-users can unilaterally change
the protocol – potentially at the expense of everyone else.
Moreover, a heavily concentrated distribution may create an
ecosystem where much of the value is captured by a small
number of actors.
The authors are unaware of previous academic research
on this subject. In August 2020, an analysis was circulated
on social media, [3]. Simone Conti analyzed token contracts
for their top holders and used this data to compute ownership concentration measures. However, the study was based
on questionable assumptions and fails to account for the
large variety of contract accounts. In particular, liquidity-,
lending- and staking-pools, as well as token wrappers, had
been counted as individual entities. As these contract accounts are mere custodians and usually hold significant token amounts on behalf of a large set of economic agents, this
approach clearly leads to spurious results.
There are previous studies that tackle similar research
questions in the context of the Bitcoin network, [6], [1], [7].
However, due to Bitcoin's relatively static nature and the separation of token ownership and protocol voting rights, these results are not directly comparable. Moreover, the fact that Bitcoin's standard client discourages address reuse makes these
analyses much harder to perform. In a similar vein, a recent
working paper conducted an analysis for the evolution of
shares in proof-of-stake based cryptocurrencies, [10].
The remainder of this paper is structured as follows: In
Section 2, we describe how the token and snapshot samples
have been selected. Sections 3 and 4 explore the data preparation and analysis respectively. In Section 5, we discuss the
results, limitations and further research. In Section 6, we
briefly summarize our findings and the contribution of this
paper.
## 2. SAMPLE SELECTION
In this section, we describe the scope of our analysis. In
particular, we discuss how tokens and snapshots have been
selected. The token selection determines which assets we
observe. The snapshot selection determines at which point
in time the blockchain state is observed.
## 2.1 Token selection
To qualify for selection, tokens had to fulfill the following
criteria:
1. The token must be a protocol token. It must incorporate some form of governance and/or utility mechanism.
Pure stablecoins, token wrappers, or token baskets have
not been considered.[1]
2. The token must be ERC-20 compliant[2] and contribute
towards decentralized financial infrastructure.
3. As of September 15th, 2020, the token must fulfill at
least one of the following three conditions:
a) Relevant supply with market cap ≥ 200 mm (MC).
b) Total value locked in the protocol’s contracts
(vesting not included) ≥ 300 mm (VL).
c) Inclusion in Simone Conti’s table (SC).
Market cap and value locked serve as objective and quantitative inclusion criteria. Tokens from Simone Conti’s table
have mainly been included to allow for comparisons.
Applying these criteria, we get a sample of 18 DeFi tokens. The tokens and the reason for their selection are summarized in Table 1. Please note that we have decided to
exclude SNX since some of its features are not in line with
standard conventions and make it particularly difficult to
analyze.
1Although wrappers and baskets will be considered for fund reallocation, as described in Section 3.
2ERC-20 refers to the widely adopted token standard described in the
Ethereum improvement proposal 20 [12].
_Table 1. Token Selection_
Token MC VL SC Deployment
BAL ✗ ✓ ✓ 2020-06-20
BNT ✗ ✗ ✓ 2017-06-10
COMP ✓ ✓ ✓ 2020-03-04
CREAM ✗ ✓ ✗ 2020-08-04
CRV ✗ ✓ ✗ 2020-08-13
KNC ✓ NA ✓ 2017-09-12
LEND ✓ ✓ ✓ 2017-09-17
LINK ✓ NA ✗ 2017-09-16
LRC ✓ ✗ ✗ 2019-04-11
MKR ✓ ✓ ✓ 2017-11-25
MTA ✗ ✗ ✓ 2020-07-13
NXM ✓ ✗ ✗ 2019-05-23
REN ✓ ✗ ✓ 2017-12-31
SUSHI ✓ ✓ ✗ 2020-08-26
UMA ✓ ✗ ✗ 2020-01-09
YFI ✓ ✓ ✓ 2020-07-17
YFII ✓ ✗ ✗ 2020-07-26
ZRX ✓ NA ✗ 2017-08-11
## 2.2 Snapshot selection
To analyze how the allocation metrics change over time,
we decided to conduct the analysis for various snapshots.
The first snapshot is from June 15th, 2019. We then took monthly snapshots. The snapshots' block heights and
timestamps are listed in Table 2.
_Table 2. Snapshot Selection_
Nr. Block Height Date
1 7962629 2019-06-15
2 8155117 2019-07-15
3 8354625 2019-08-15
4 8553607 2019-09-15
5 8745378 2019-10-15
6 8938208 2019-11-15
7 9110216 2019-12-15
8 9285458 2020-01-15
9 9487426 2020-02-15
10 9676110 2020-03-15
11 9877036 2020-04-15
12 10070789 2020-05-15
13 10270349 2020-06-15
14 10467362 2020-07-15
15 10664157 2020-08-15
16 10866666 2020-09-15
## 3. DATA PREPARATION
We use our token and snapshot selection from Section 2 to analyze the allocation characteristics and observe how they change over time. All the necessary transaction and event data was directly extracted from a Go-Ethereum node using Ethereum-ETL, [8]. To construct accurate snapshots of token ownership, funds held by custodial and escrow contracts must be mapped to the address that actually owns and may ultimately claim the funds.
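As a rough illustration of the bookkeeping behind such a snapshot, the following sketch rebuilds an ERC-20 holder table by replaying Transfer events up to the snapshot block. It assumes web3.py v6 and a local archive node at a placeholder endpoint; the paper itself worked from Ethereum-ETL exports, so this is only an analogy.

```python
# Hedged sketch: initial ERC-20 holder table at a snapshot block.
from collections import defaultdict
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("http://localhost:8545"))  # placeholder endpoint
TRANSFER_TOPIC = w3.keccak(text="Transfer(address,address,uint256)").hex()

def initial_holder_table(token_address, snapshot_block):
    balances = defaultdict(int)
    # A real run would paginate the block range; one call suffices here.
    logs = w3.eth.get_logs({
        "address": token_address,
        "topics": [TRANSFER_TOPIC],
        "fromBlock": 0,
        "toBlock": snapshot_block,
    })
    for log in logs:
        sender = "0x" + log["topics"][1].hex()[-40:]
        receiver = "0x" + log["topics"][2].hex()[-40:]
        value = int.from_bytes(bytes(log["data"]), byteorder="big")
        balances[sender] -= value    # mints leave the zero address negative
        balances[receiver] += value
    return {addr: bal for addr, bal in balances.items() if bal > 0}
```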
A simple example is the YFI/wETH Uniswap V2 liquidity pool: A naïve analysis would lead to the conclusion
that the tokens are owned by the Uniswap exchange contract. However, this contract is just a liquidity pool with
very limited control over the tokens it holds. Full control,
and thus ownership of the tokens, remains with the liquidity providers. To account for this and to correctly reflect
the state of token ownership, the tokens must be mapped
proportionally from the exchange contract to the liquidity
providers.
A more complex example illustrates the need for an iterative mapping process: YFI is deposited into a Cream lending pool, minting crYFI for the owner. This crYFI together
with crCREAM is then deposited in a crYFI/crCREAM
Balancer-like liquidity pool, minting CRPT (Cream pool tokens) for the depositor. Finally, these CRPT are staked in a
Cream staking pool, which periodically rewards the staker
with CREAM tokens but does not mint any ownership tokens. The actual YFI tokens, in this case, are held by the
Cream lending pool. Trying to map them to their owners
via the lending pool tokens (crYFI) will lead us to the liquidity pool and finally to the staking pool, where we can
map the YFI to the accounts that staked the CRPT tokens.
Each of these steps needs to be approached differently, as the
underlying contracts have distinct forms of tracking token
ownership. And further, these steps must also be performed
in the correct order.
## 3.1 Identifying and categorizing addresses
Addresses that do not have bytecode deployed on them
– also called externally owned accounts or EOAs – cannot be analyzed further with on-chain data. To determine
whether to include or exclude an EOA from our analysis, we
use a combination of tags from etherscan.io, nansen.ai, and
coingecko.com, [4], [9], [2]. An EOA qualifies for exclusion if it is a known burner address, is owned by a centralized, off-chain exchange (CEX), or if the tokens on the account are disclosed by the developer team as FTIA (foundation, team, investor, and advisor) vesting. Every other EOA is assumed to be a single actor and is included in the analysis.
Addresses with deployed bytecode are smart contracts or
contract accounts. These contracts are analyzed and categorized based on their ABI[3], bytecode, return values, and
manual code review. Most implementations of multisig wallets are detected and treated equivalent to EOAs. Mappable
smart contracts are described by the following categories:
**Liquidity Pools:** Decentralized exchanges, converters, token baskets, or similar contracts that implement one or more liquidity pools. Tokens held by such contracts are mapped proportionally to the holders of the relevant liquidity pool tokens.
3 ABI stands for application binary interface. Each smart contract has an ABI that describes all the possible ways to interact with the smart contract. It is not stored on-chain and can be fetched from a repository like etherscan.io [4].
**Lending Pools:** Aave, Compound, and Cream offer lending and borrowing of tokens. Both the debts and deposits are mapped to their owners using protocol-specific events and archival calls to the contracts.
**Staking Contracts:** Staking contracts differ from liquidity pools in the sense that they usually do not implement an ERC-20 token to track the stakes of the owners. We further differentiate whether the token in question is used as a reward, as a stake, or both. Future staking rewards are excluded as they cannot be reliably mapped to future owners. The remaining tokens are mapped using contract-specific events for depositing and withdrawing stakes and rewards. For Sushi-like staking pools, we also account for a possible migration of staked liquidity pool tokens.
**Unique Contracts:** These contracts do not fit any of the above categories, but the tokens can still be mapped to their owners. Each contract is treated individually, using contract-specific events and archival calls where needed. A few examples include MKR governance voting, REN darknode staking, or LRC long-term holdings.
Smart contracts which hold funds that are not owned by
individual actors or where no on-chain mapping exists are
excluded from the analysis. Most commonly, this applies to
contracts that hold and manage funds directly owned by a
protocol with no obvious distribution mechanism.
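The proportional split described for liquidity pools reduces to a one-liner. The sketch below uses hypothetical names; a real implementation must first exclude LP tokens held by the pool itself (see the self-mapping discussion in Section 3.2).

```python
# Pro-rata remapping of a pool's protocol-token balance to LP-token holders.
def map_liquidity_pool(pool_balance, lp_holders, lp_total_supply):
    # pool_balance: protocol tokens held by the pool contract
    # lp_holders: {address: LP-token balance}; lp_total_supply: sum of stakes
    return {owner: pool_balance * stake / lp_total_supply
            for owner, stake in lp_holders.items()}
```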
## 3.2 Iterative mapping process for tokens
For each token and snapshot, we construct a token holder
table listing the initial token endowments per address. We
then proceed with an iterative mapping process as follows:
**Algorithm 1 Iterative Mapping Process**
1: H ← initial token holder table
2: repeat
3: sort H by token value, descending
4: **for all h ∈** top 1,000 rows of H do
5: identify and categorize h
6: apply exclusion logic to h
7: **if h is mappable then**
8: map h according to its category
9: **end if**
10: **end for**
11: until no mappable rows found in last iteration
12: assert every row with more than 0.1% of the total relevant
supply is properly identified and categorized
The exclusion logic will skip and permanently ignore any
holder h that qualifies for exclusion according to the criteria
defined in 3.1. This is done with a combination of automated
detection and a manually maintained include- and exclude-list. Every address h is either unambiguously categorized or placed on this list.
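As a control-flow illustration of Algorithm 1, the following sketch assumes hypothetical helpers `is_excluded`, `categorize`, and `beneficiaries` that encapsulate the per-category logic from Section 3.1; it is not the authors' implementation.

```python
# Repeat the top-1,000 scan until a full pass maps nothing new.
def iterative_remap(holders, top_n=1000):
    excluded = set()
    while True:
        mapped_any = False
        ranked = sorted(holders, key=holders.get, reverse=True)[:top_n]
        for addr in ranked:
            if addr in excluded:
                continue
            if is_excluded(addr):            # burner, CEX, FTIA vesting, ...
                excluded.add(addr)
                continue
            category = categorize(addr)      # pool / lending / staking / unique
            if category is not None:         # mappable custodial contract
                balance = holders.pop(addr)
                # Split the custodial balance among its economic beneficiaries.
                for owner, share in beneficiaries(addr, category):
                    holders[owner] = holders.get(owner, 0) + balance * share
                mapped_any = True
        if not mapped_any:
            return {a: b for a, b in holders.items() if a not in excluded}
```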
It is possible that tokens must be mapped from an address
onto themselves. For most mappable contracts, these tokens
are permanently lost[4] and are thus treated as burned and are
excluded from the analysis. For contracts where the tokens
are not lost in this way, we implemented contract-specific
solutions to avoid potential infinite recursion.
Every instance of a remapping from one address to another, called an adjustment, is tracked and assigned to one of
five adjustment categories. There is no distinction between
situations where the protocol token or a wrapped version
thereof is remapped. The five adjustment categories are:
**Internal Staking:** Depositing the token into a contract that is part of the same protocol. This includes liquidity provision incentives, protocol stability staking, and some forms of governance voting.
**External Staking:** Depositing the token into a contract that is not part of the same protocol. This is most prominent for Sushi-like liquidity pool token staking with the intention of migrating the liquidity pool tokens, but it also includes a variety of other, external incentive programs.
**AMM Liquidity:** Depositing the token into a liquidity pool run by a decentralized exchange with some form of an automated market maker.
**Lending / Borrowing:** Depositing the token into a liquidity pool run by a decentralized lending platform or borrowing tokens from such a pool.
**Other:** Derivatives, 1:1 token wrappers with no added functionality, token migrations, and investment fund-like token baskets.
## 4. DATA ANALYSIS
In this section, we will use our data set to analyze two
questions: First, we study the token ownership concentration and use our remapping approach to compute more
accurate ownership tables and introduce new allocation
metrics. These metrics are of particular interest, as highly
concentrated token allocations could potentially undermine
any decentralization efforts. Second, we use our remapping
and protocol usage data to introduce wrapping complexity,
shortage ratio, and token interaction measures. These measures essentially serve as a proxy and indicate the degree
of integration into the DeFi ecosystem. Moreover, they may
serve as an important measure for potential dependencies
and the general stability of the system.
## 4.1 Concentration of token ownership
Table 3 shows key metrics to illustrate the concentration of adjusted token ownership for the most recent snapshot. Note that relevant supply refers to the sum of all adjusted and included token holdings, taking into account outstanding debts. Excluded token holdings are described in detail in Section 3.1.
4 For example, if Uniswap liquidity pool tokens are directly sent to their liquidity pool address, they can never be retrieved.
**Owner #:** Total number of addresses owning a positive amount or fraction of the token.
**Top n:** Percentage of the relevant supply held by the top n addresses.
**Top n%:** Minimum number of addresses owning a combined n% of the relevant supply.
**Gini 500:** The Gini coefficient, [5], is used to show the wealth distribution inequality among the top 500 holders of each token. It can be formalized as (1).

(1) $G_{500} = \dfrac{\sum_{i=1}^{500} \sum_{j=1}^{500} |x_i - x_j|}{2 \cdot 500^2 \, \bar{x}}$
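Equation (1) translates directly into code. A sketch, assuming `x` holds the balances of the top 500 holders:

```python
# O(n^2) pairwise Gini coefficient, per equation (1); fine for n = 500.
import numpy as np

def gini_500(x):
    x = np.asarray(x, dtype=float)[:500]
    n = len(x)
    pairwise = np.abs(x[:, None] - x[None, :]).sum()
    return pairwise / (2 * n**2 * x.mean())
```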
For tokens with historical data of at least 12 months, we
include the trend and standard deviation over this period.
The trend represents the monthly change in percent according to an OLS regression line; the standard deviation shows
the volatility of the trend.
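The trend metric can be reproduced, under assumptions, by fitting an OLS line to the 12 monthly observations. The normalization of the slope into a monthly percentage is our assumption; the text only specifies an OLS regression line.

```python
# Hypothetical reconstruction of the "Trend" row: OLS slope over the
# monthly observations, expressed relative to the series mean (assumption).
import numpy as np

def monthly_trend_pct(values):
    y = np.asarray(values, dtype=float)
    slope, _ = np.polyfit(np.arange(len(y)), y, 1)  # units of y per month
    return 100.0 * slope / y.mean()
```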
## 4.2 Ecosystem integration
Table 4 presents key metrics of the tokens’ integration
into the DeFi ecosystem. The table is described below.
**Inclusion %:** Relevant token supply divided by total token supply, excluding burned tokens.
**Wrapping Complexity:** Relevant adjustments divided by relevant supply. Relevant adjustments are adjustments that are mapped to non-excluded addresses. Some of the excluded addresses still deposit their tokens in mappable contracts; e.g., a centralized exchange that deposits their users' tokens in a staking pool. To prevent distortion, we exclude these mappings from both the relevant supply and the relevant adjustments.
The wrapping complexity is formalized in (2), where $N$ is the total number of relevant adjustments for a given token, $\omega := (\omega_1, \ldots, \omega_N)$ represents the vector of all relevant adjustments for this token, and $\bar{S}$ represents the relevant supply of this token.

(2) $\dfrac{\sum_{i=1}^{N} |\omega_i|}{\bar{S}}$
**Multi-Token Holdings:** Number of addresses with a minimum allocation of 0.1% of this token and 0.1% of at least $n \in (1, 2, 3, 4)$ other tokens from our sample.
**Shorted:** Negative token balances in relation to relevant supply; i.e., value on addresses that used lending markets to borrow and resell the token to obtain a short exposure, divided by $\bar{S}$.
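Both (2) and the shorted ratio are simple aggregates over the remapping output. A sketch, assuming `adjustments` is the list of relevant adjustment amounts and `balances` the adjusted holder table (negative entries are short positions):

```python
# Direct transcriptions of the wrapping complexity (2) and the shorted ratio.
def wrapping_complexity(adjustments, relevant_supply):
    return sum(abs(a) for a in adjustments) / relevant_supply

def shorted_ratio(balances, relevant_supply):
    return -sum(b for b in balances.values() if b < 0) / relevant_supply
```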
_Table 3. Token Ownership Structure_

|Token|Metric|Owner #|Top 5|Top 10|Top 50|Top 100|Top 500|Top 50%|Top 99%|Gini 500|
|---|---|---|---|---|---|---|---|---|---|---|
|BAL†|Sep 20|16,661|27.6%|36.71%|77.3%|85.01%|94.86%|18|2,157|83.77%|
|BNT|Sep 20|49,294|15.69%|24.71%|49.5%|61.77%|80.95%|52|10,010|69.82%|
|BNT|Trend|+1.64%|−5.43%|−4.43%|−2.94%|−2.14%|−1.06%|+49.45%|+7.52%|−1.5%|
|BNT|σ 12m|2,882.0|0.0712|0.0764|0.0827|0.0669|0.0378|15.7|1,481.9|0.0487|
|COMP†|Sep 20|36,033|31.23%|43.79%|86.75%|96.15%|98.91%|14|564|90.36%|
|CREAM†|Sep 20|4,426|48.44%|57.11%|74.32%|81.77%|94.16%|6|1,549|83.04%|
|CRV†|Sep 20|11,076|56.92%|61.09%|73.23%|79.07%|90.27%|2|3,549|84.64%|
|KNC|Sep 20|92,780|24.93%|35.63%|57.73%|64.62%|77.99%|26|19,922|77.6%|
|KNC|Trend|+6.51%|+3.36%|+5.01%|+2.14%|+0.98%|+0.04%|−5.39%|+15.74%|+1.21%|
|KNC|σ 12m|12,589.4|0.0302|0.0594|0.0489|0.0336|0.0171|13.9|3,971.3|0.0374|
|LEND|Sep 20|174,861|36.67%|43.64%|61.44%|67.42%|80.05%|16|57,534|79.97%|
|LEND|Trend|+0.23%|+33.26%|+22.23%|+11.35%|+8.26%|+3.74%|−9.77%|−4.7%|+3.98%|
|LEND|σ 12m|3,066.9|0.1294|0.1389|0.1358|0.1258|0.0878|82.2|21,962.9|0.0933|
|LINK|Sep 20|233,128|7.18%|13.46%|37.0%|44.99%|61.23%|166|61,910|65.27%|
|LINK|Trend|+31.34%|−0.5%|−0.62%|+1.72%|+1.24%|+0.08%|−2.73%|+16.99%|+1.24%|
|LINK|σ 12m|52,004.9|0.0029|0.004|0.0221|0.0204|0.0067|25.0|12,158.7|0.0279|
|LRC|Sep 20|66,382|13.75%|20.06%|43.44%|62.11%|87.9%|66|5,251|66.36%|
|LRC|Trend|+1.49%|−2.3%|−1.68%|−1.26%|−1.14%|−0.41%|+3.23%|+7.95%|−0.74%|
|LRC|σ 12m|3,392.5|0.0236|0.0232|0.0261|0.0313|0.0163|6.1|811.7|0.0205|
|MKR|Sep 20|29,765|24.43%|36.49%|67.71%|79.49%|93.72%|20|3,918|79.26%|
|MKR|Trend|+8.31%|−3.45%|−2.12%|−0.45%|−0.19%|−0.12%|+4.5%|+7.17%|−0.22%|
|MKR|σ 12m|4,511.7|0.0503|0.0405|0.0175|0.0107|0.0057|3.0|587.0|0.01|
|MTA†|Sep 20|5,595|13.81%|22.97%|51.18%|63.51%|88.27%|47|2,090|65.93%|
|NXM|Sep 20|7,355|32.17%|44.3%|70.42%|78.51%|91.29%|14|2,817|81.14%|
|NXM|Trend|−36.69%|−2.87%|−2.71%|−1.65%|−1.12%|−0.37%|+18.09%|−33.11%|−0.24%|
|NXM|σ 12m|1,918.2|0.0704|0.0992|0.0869|0.0619|0.0238|2.7|747.1|0.0434|
|REN|Sep 20|22,770|10.45%|15.29%|32.81%|41.79%|67.85%|166|8,500|55.31%|
|REN|Trend|+26.0%|−3.12%|−2.97%|−2.98%|−2.64%|−1.5%|+42.78%|+25.39%|−1.56%|
|REN|σ 12m|4,673.4|0.0232|0.0313|0.0671|0.072|0.0579|38.4|1,718.0|0.0437|
|SUSHI†|Sep 20|22,740|25.64%|35.26%|58.31%|66.28%|83.78%|28|7,300|74.11%|
|UMA†|Sep 20|5,634|56.21%|75.64%|96.87%|98.21%|99.43%|5|240|95.61%|
|YFI†|Sep 20|14,296|11.52%|16.98%|37.32%|48.1%|73.75%|114|5,145|57.6%|
|YFII†|Sep 20|8,513|20.8%|27.78%|53.93%|66.23%|85.15%|40|3,278|72.18%|
|ZRX|Sep 20|161,285|23.71%|38.4%|59.39%|63.87%|72.91%|21|38,404|82.63%|
|ZRX|Trend|+4.05%|−1.15%|−0.02%|+0.76%|+0.64%|+0.22%|−2.96%|+6.28%|+0.43%|
|ZRX|σ 12m|16,372.0|0.0133|0.0056|0.0158|0.0147|0.0082|3.6|5,233.6|0.0132|

† Insufficient historical data.
It is important to note that the inclusion ratio is predominantly dictated by the tokens’ emission schemes. In some
cases, the total supply is created with the ERC-20 token deployment but held in escrow and only released over the following years. Consequently, we excluded this non-circulating
supply.
Figure 1 shows the development of the tokens’ wrapping
complexities by adjustment category in a stacked time series. Note that the limits of the y-axis for the CREAM graph
are adjusted to accommodate for the higher total wrapping
complexity. We have not included a graph for the SUSHI token, as there is only one snapshot available since its launch[5].
5On September 15th, 2020, the 109.9% wrapping complexity of SUSHI
is composed of 28.2% internal staking, 49.3% external staking, 30.1%
AMM liquidity, and 2.2% lending/borrowing.
A wrapping complexity > 1 means that the same tokens are wrapped several times. If, for example, a token is
added to a lending pool, borrowed by another person, subsequently added to an AMM liquidity pool, and the resulting
LP tokens staked in a staking pool, the wrapping complexity
would amount to 4. Similarly, a single token could be used
multiple times in a lending pool and thereby significantly
increase the wrapping complexity.
Note that most tokens have experienced a sharp increase
in wrapping complexity in mid-2020. The extent to which
each category is used depends on the characteristics of each
token; internal staking, in particular, can take very different
forms.
The “other” category is mainly driven by token migrations, where new tokens are held in redemption contracts,
and 1:1 token wrappers.
_Table 4. Token Wrapping Complexity_

Columns Jun-19 through Sep-20 report the wrapping complexity per snapshot; columns 1+ through 4+ report multi-token holdings.

|Token|Inclusion %|Jun-19|Sep-19|Dec-19|Mar-20|Jun-20|Sep-20|1+|2+|3+|4+|Shorted|
|---|---|---|---|---|---|---|---|---|---|---|---|---|
|BAL|19.6%|-|-|-|-|-|51.7%|17.6%|5.5%|1.1%|-|0.026%|
|BNT|56.8%|11.9%|11.9%|10.3%|20.8%|9.6%|10.2%|8.7%|1.4%|0.7%|0.7%|-|
|COMP|36.0%|-|-|-|0.0%|0.0%|7.5%|8.4%|3.6%|2.4%|-|0.004%|
|CREAM|3.6%|-|-|-|-|-|455.0%|30.1%|11.8%|5.4%|-|11.971%|
|CRV|2.2%|-|-|-|-|-|43.1%|20.9%|9.9%|4.4%|2.2%|0.761%|
|KNC|70.7%|0.2%|0.2%|0.4%|2.9%|1.8%|48.4%|17.7%|9.4%|4.2%|2.1%|0.123%|
|LEND|69.3%|0.0%|0.0%|0.1%|28.9%|50.7%|63.1%|38.6%|19.3%|6.8%|2.3%|0.039%|
|LINK|31.3%|0.0%|0.0%|0.0%|1.8%|2.2%|13.6%|12.9%|5.9%|4.0%|2.0%|0.383%|
|LRC|58.8%|5.3%|4.7%|7.4%|19.0%|21.4%|23.1%|1.8%|0.6%|-|-|-|
|MKR|81.5%|33.6%|23.2%|31.5%|28.6%|37.3%|41.5%|7.2%|2.4%|0.8%|-|0.036%|
|MTA|3.1%|-|-|-|-|-|73.8%|15.1%|4.8%|1.8%|-|2.631%|
|NXM|95.1%|0.0%|0.0%|0.0%|0.0%|0.0%|66.7%|17.0%|8.0%|2.0%|-|-|
|REN|61.3%|0.0%|0.0%|0.0%|0.2%|12.1%|59.9%|11.4%|4.4%|3.2%|1.3%|0.035%|
|SUSHI|48.2%|-|-|-|-|-|109.9%|28.9%|9.9%|1.7%|-|0.844%|
|UMA|53.8%|-|-|-|0.0%|0.4%|3.0%|4.3%|-|-|-|-|
|YFI|94.8%|-|-|-|-|-|70.5%|41.0%|14.1%|2.6%|-|0.307%|
|YFII|40.1%|-|-|-|-|-|54.2%|8.6%|4.3%|1.4%|-|-|
|ZRX|57.9%|0.7%|1.9%|1.7%|4.5%|6.8%|32.8%|19.0%|6.3%|4.8%|3.2%|0.052%|
## 5. DISCUSSION
In this section, we discuss the results from our data analysis. We revisit Table 3 and 4 as well as Figure 1 and discuss
some interesting findings.
What seems to be true across the board is that DeFi
tokens have a somewhat concentrated ownership structure.
This is certainly an issue that merits monitoring, as it may
potentially undermine many of the advantages this new financial infrastructure may provide.
For protocols with token-based governance models, the
lower bound number of addresses needed to reach a majority, i.e., >50%, may be of special interest. A relatively low
threshold can indicate a higher likelihood of collusion and
centralized decision making. In extreme cases, a few individuals could jointly enact protocol changes. However, since
governance rules, the implementations of voting schemes,
and security modules (e.g., timelocks) vary greatly between
protocols, direct comparisons should only be made with
great care.
In addition to the decentralization and governance concerns, the study also shows DeFi’s limitations with regard
to transparency. While it is true that the DeFi space is extremely transparent in the sense that almost all data is available on-chain, it is very cumbersome to collect the data and
prepare it in a digestible form. High nesting levels with multiple protocols and token wrappers involved will overwhelm
most users and analysts and create the need for sophisticated analysis tools. The computation of accurate token
ownership statistics and reliable dependency statistics is extremely challenging.
The problem becomes apparent when we compare our results to the results of Simone Conti's analysis [3]. Recall that Conti's analysis did not control for any account-specific properties. Our analysis shows that for most tokens, the token holdings of the top 5 addresses have thereby been overestimated by approximately 100%, and in some extreme cases by up to 700%. The main source of these errors is the inclusion of token holdings from custodial and escrow contracts, such as liquidity, lending, and staking pools, as well as token wrappers, vesting contracts, migrations, burner addresses, and decentralized exchange addresses. We control for these accounts and split their holdings to the actual beneficiary addresses where possible, and exclude them where not possible. A closer comparison of the two tables reveals that the differences remain high for lower holder thresholds (i.e., top 10, top 50, and top 100). At the top 500 threshold, the differences are still significant, although to a much lesser degree.
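To make the kind of adjustment concrete, the sketch below performs one remapping pass: the balance of a labeled custodial contract is split across its beneficiaries pro rata, and non-attributable addresses are dropped. This is a simplified, hypothetical rendering of the process described above, not the authors' implementation; the dictionaries are assumed data structures.

```python
def remap_contract_holdings(balances, pool_shares, excluded):
    """One remapping pass over token balances.

    balances:    {address: token_balance}
    pool_shares: {contract: {beneficiary: share}}, shares summing to 1
    excluded:    addresses (burners, migrations, ...) to drop entirely
    """
    adjusted = dict(balances)
    for contract, shares in pool_shares.items():
        held = adjusted.pop(contract, 0)
        for beneficiary, share in shares.items():
            adjusted[beneficiary] = adjusted.get(beneficiary, 0) + held * share
    for address in excluded:
        adjusted.pop(address, None)
    return adjusted
```

In the iterative setting, such a pass would be repeated until no labeled contract remains among the holders, thereby propagating balances through multiple wrapping levels.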
In addition to the computation of more accurate holder tables, transparency is a precondition for the analysis of protocol interconnections and dependencies. For this purpose, we introduce the wrapping complexity and multi-token holding metrics. Wrapping complexity essentially shows how the token is used in the ecosystem. On the one hand, a high wrapping complexity can be interpreted as an indicator of a token that is deeply integrated into the DeFi ecosystem. On the other hand, a high wrapping complexity may also be an indicator of convoluted and unnecessarily complex wrapping schemes that may introduce additional risks.
A potential indicator of how the market feels about this complexity is the short ratio, i.e., the value of all decentralized short positions in relation to the relative supply. Interestingly, there is a high positive correlation between the two measures, which may at first glance suggest that wrapping complexity is interpreted as a negative signal. However, this would be a problematic interpretation, since wrapping complexity is, in fact, at least partially driven by the shorting activity. Once we exclude the lending and borrowing positions, the correlation becomes much less pronounced.

_Figure 1. Adjustment Graphs._
The DeFi space is developing very rapidly and constantly increases in complexity. Many new and exciting protocols have emerged in 2020. Novel concepts such as complex staking schemes have started to play a role in most protocols. We see staking, or more specifically staking rewards, as a catalyst for the immense growth in the DeFi space. However, it is somewhat questionable whether this growth will be sustainable. Treasury pools will eventually run out of tokens, and uncontrolled token growth leads to an increase in the relevant token supply, which may create inflationary pressure.
While we are confident that our study provides interesting contributions, with new metrics and processes to compute token ownership tables with unprecedented accuracy, we would still like to mention some of the limitations of our study and point out room for further extensions.

First, we perform no network analysis to potentially link multiple addresses of the same actor. This approach has likely led to an overestimation of decentralization. In a further research project, one could combine our data set and remapping method with address clustering.
Second, while the automated process may remap tokens for all contract accounts, our manual analysis was limited to contract accounts with significant holdings. We decided to set the threshold value at 0.1% of the relevant supply.
Third, we used various data sources to verify the labeling of addresses. In some unclear cases, we approached the
teams directly for more information. However, this information cannot be verified on-chain. Consequently, this is the
only part of the study for which we had to rely on information provided by third parties.
Further research may adopt the methods of this paper to
analyze token characteristics in the context of governance
models. The data could be used as a parameter for more
realistic simulations and game-theoretical governance models. Novel metrics, such as the wrapping complexity, may be
useful for studies concerned with the interdependencies and
risk assessment of the DeFi landscape. Finally, the proposed
readjustment categories may provide a good base for further
research on how DeFi tokens are being used and the reasons
for their spectacular growth.
## 6. CONCLUSION
In this paper, we analyze the holder distribution and
ecosystem integration for the most popular DeFi tokens. The
paper introduces a novel method that allows us to split and
iteratively reallocate contract account holdings over multiple wrapping levels.
Our data indicate that previous analyses severely overestimated ownership concentration. However, in most cases,
the majority of the tokens are still held by a handful of
individuals. This finding may raise important questions regarding protocol decentralization and build a foundation for
DeFi governance research.
We further investigated dependencies and ecosystem integration. Our analysis suggests that the complexity of the
ecosystem has drastically increased. This increase seems to
be consistent among most tokens. However, the main drivers
vary significantly, depending on the nature of the token.
To conclude, DeFi is an exciting and rapidly growing new financial infrastructure. However, there is a particular risk that high ownership concentration and complex wrapping structures may introduce governance risks, undermine transparency, and create extreme interdependencies that affect protocol robustness.
## ACKNOWLEDGEMENTS
The authors would like to thank Mitchell Goldberg,
John Orthwein and Victoria J. Block for proof-reading the
manuscript.
_Received 1 November 2021_
## REFERENCES
[1] Chohan, U. W. (2019). Cryptocurrencies and Inequality. Notes on the 21st Century (CBRI).
[2] CoinGecko (2020). Coingecko.com. https://coingecko.com.
[3] Conti, S. (2020). DeFi Token Holder Analysis - 6th Aug 2020. https://twitter.com/simoneconti_/status/1291396627165569026/photo/1.
[4] Etherscan (2019). Etherscan.io. https://etherscan.io.
[5] Gini, C. (1912). Variabilità e mutabilità.
[6] Gupta, M. and Gupta, P. (2017). Gini Coefficient Based Wealth Distribution in the Bitcoin Network: A Case Study. In International Conference on Computing, Analytics and Networks 192–202. Springer.
[7] Kondor, D., Pósfai, M., Csabai, I. and Vattay, G. (2014). Do the rich get richer? An empirical analysis of the Bitcoin transaction network. PloS one 9 e86197.
[8] Medvedev, E. and the D5 team (2018). Ethereum ETL. https://github.com/blockchain-etl/ethereum-etl.
[9] Medvedev, E. and the D5 team (2020). Nansen.ai. https://nansen.ai.
[10] Rosu, I. and Saleh, F. (2020). Evolution of shares in a proof-of-stake cryptocurrency. HEC Paris Research Paper No. FIN-2019-1339.
[11] Schär, F. (2020). Decentralized Finance: On Blockchain- and Smart Contract-based Financial Markets. Available at SSRN 3571335.
[12] Vogelsteller, F. and Buterin, V. (2015). EIP-20: Token Standard. https://eips.ethereum.org/EIPS/eip-20.
Matthias Nadler
Center for Innovative Finance
Faculty of Business and Economics
University of Basel
Basel, Switzerland
[E-mail address: [email protected]](mailto:[email protected])
Fabian Schär
Center for Innovative Finance
Faculty of Business and Economics
University of Basel
Basel, Switzerland
[E-mail address: [email protected]](mailto:[email protected])
# A Systems Theoretic Approach to the Design of Scalable Cryptographic Hash Functions
Josef Scharinger
Johannes Kepler University, Institute of Computational Perception,
4040 Linz, Austria
[email protected]
**Abstract. Cryptographic hash functions are security primitives that** compute check sums of messages in a strong manner and this way are of fundamental importance for ensuring integrity and authenticity in secure communications. However, recent developments in cryptanalysis indicate that conventional approaches to the design of cryptographic hash functions may have some shortcomings.
Therefore it is the intention of this contribution to propose a novel way to design cryptographic hash functions. Our approach is based on the idea that the hash value of a message is computed as a message-dependent permutation generated by very special chaotic permutation systems, so-called Kolmogorov systems. Following this systems theoretic approach, we obtain arguably strong hash functions with the additional useful property of excellent scalability.
## 1 Introduction and Motivation
Cryptographic hash functions for producing checksums of messages are a core
primitive in secure communication. They are used to ensure communication
integrity and are also essential to signature schemes because in practice one does
not sign an entire message, but the cryptographic checksum of the message.
All the cryptographic hash functions in practical use today (SHA-1, SHA-224, SHA-256, SHA-384 and SHA-512) are specified in the Secure Hash Standard (SHS, see [8]) and are based on ideas developed by R. Rivest for his MD5 message digest algorithm [9]. Unfortunately, recent attacks [14] on SHA-1 show that this design approach may have some shortcomings. This is the reason why the intention of this contribution is to deliver a radically different, systems theory based approach to the design of scalable cryptographic hash functions.
The remainder of this contribution is organized as follows. In section 2 we explain the notion of a cryptographic hash function. Section 3 introduces the well-known class of continuous chaotic Kolmogorov systems, presents a discrete version of Kolmogorov systems, and analyzes cryptographically relevant properties of these discrete Kolmogorov systems. Next, section 4 describes our novel approach to the design of cryptographic hash functions, which is essentially based on the idea of computing a message check sum as a message-dependent permutation generated by iterated applications of the discrete Kolmogorov systems
described in section 3. Finally, section 5 intends to justify the claim that our
design of cryptographic hash functions based on systems theory constitutes a
highly scalable approach to the development of cryptographic hash functions.
## 2 Cryptographic Hash Functions
**2.1** **The Concept of a Cryptographic Hash Function**
Following [11], cryptographic hash functions come under many different names:
one-way hash function, message digest function, cryptographic checksum function, message authentication code, and quite some more. Essentially a cryptographic hash function takes an input string and converts it to a fixed-size (usually
smaller) output string.
In a more formal way, a cryptographic hash function H(M) operates on an arbitrary-length plaintext (message) M and returns a fixed-length hash value h = H(M), where h is of length N. While one can think of many functions that take an arbitrary-length input and return an output of fixed length, a cryptographic hash function has to have additional characteristics:
**– one-way property: given M, it is easy to compute h, but given h, it is hard to compute M**
**– collision resistance: given M, it is hard to find another message M′ such that H(M) = H(M′); even more, it should be hard to find two arbitrary messages M1 and M2 such that H(M1) = H(M2)**
It is perfectly obvious that any cryptographic hash function producing length-N hash values can offer only O(2^N) security with respect to the one-way property. Even more, taking into consideration the so-called birthday attack [11], it follows that any cryptographic hash function can offer only O(2^{N/2}) security with respect to collision resistance. It is therefore essential to note that N defines an upper limit on the security achievable by any length-N cryptographic hash function. Accordingly, it would be desirable to have scalable hash functions where increasing N is as simple as possible, a point we pay special attention to with the approach presented in this paper.
## 3 Chaotic Kolmogorov Systems
Among the most remarkable results of recent systems theory are novel findings on chaotic systems. There has been good progress in systems science concerning the analysis of complex dynamical systems, and concepts like fractal dimension or strange attractors are now well understood. However, it is worth noting that the overwhelming majority of existing systems is by definition of continuous type, so system states lie in some subset of R.

A fundamental property of chaotic systems is the fact that small deviations in inputs can completely alter the system's behavior. This immediately leads to the problem that any approximations, as inherently involved in any digitization, may change the system's behavior completely. Therefore, for practical digital applications of interesting chaotic systems, it is essential to bridge the gap from continuous-type systems to discrete versions that still preserve the essential properties present in the continuous case.
In our contribution we focus on the class of chaotic Kolmogorov systems [3, 6,
13]. This class has been of great interest to systems scientists for a long time due
to some unique properties amongst which the outstanding degree of instability
is particularly remarkable. It has been proven [2] that continuous Kolmogorov
systems Tπ guarantee ergodicity, exponential divergence and perfect mixing of
the underlying state space for almost all valid choices of parameter π. Note that
these properties perfectly match the properties of confusion and diffusion (as
first defined by C. Shannon in [12]) that are so fundamental in cryptography.
**3.1** **Continuous Kolmogorov Systems**
Continuous chaotic Kolmogorov systems act as permutation operators upon the unit square E. Figure 1 is intended to give a notion of the dynamics associated with a specific Kolmogorov system parameterized by the partition π = (1/3, 1/2, 1/6). As can be seen, the unit square is first partitioned into three vertical strips according to 1/3, 1/2, 1/6. These strips are then stretched to full width in the horizontal and squeezed by the same factor in the vertical direction, and finally these transformed strips are stacked atop each other. After just a few applications (see Fig. 1 from top left to bottom right, depicting the initial and the transformed state space after 1, 2, 3, 6 and 9 applications of Tπ), this iterated stretching, squeezing and folding achieves excellent mixing of the elements within the state space.

**Fig. 1. Illustrating the chaotic and mixing dynamics associated with iterating a Kolmogorov system.**
Formally, this process of stretching, squeezing and folding is specified as follows. Given a partition π = (p1, p2, . . ., pk), 0 < pi < 1 and Σ_{i=1}^{k} pi = 1, of the unit interval U, the stretching and squeezing factors are defined by qi = 1/pi. Furthermore, let Fi, defined by F1 = 0 and Fi = Fi−1 + pi−1, denote the left border of the vertical strip containing the point (x, y) ∈ E to transform. Then the continuous Kolmogorov system Tπ will move (x, y) ∈ [Fi, Fi + pi) × [0, 1) to the position

Tπ(x, y) = (qi(x − Fi), y/qi + Fi).  (1)
It is well known and proven [2] that for almost all valid choices of parameter π the corresponding continuous Kolmogorov system Tπ fulfills the following appealing properties:

**– ergodicity: guarantees that almost any initial point approaches any point in state space arbitrarily close as the system evolves in time. Speaking in terms of cryptography, this property can be considered as equivalent to confusion, since initial (input) positions do not give any information on final (output) positions.**
**– exponential divergence: neighboring points diverge quickly at exponential rate in the horizontal direction. Speaking in terms of cryptography, this property can be considered as equivalent to diffusion, since similar initial (input) positions rapidly lead to highly different final (output) positions.**
**– mixing: guarantees that all subspaces of the state space dissipate uniformly over the entire state space. Speaking in terms of cryptography, this property can be considered as a perfect equivalent to confusion and diffusion.**
Deducing from this analysis, it can be concluded that continuous Kolmogorov systems offer all the properties desired for a perfect permutation operator in the continuous domain. Our task now is to develop a discrete version of Kolmogorov systems that preserves these outstanding properties. That is precisely what will be done in the next subsection.
**3.2** **Discrete Kolmogorov Systems**
In our notation, a specific discrete Kolmogorov system for permuting a data block of dimensions n × n shall be defined by a list δ = (n1, n2, . . ., nk), 0 < ni < n and Σ_{i=1}^{k} ni = n, of positive integers that adhere to the restriction that all ni ∈ δ must divide the side length n.

Furthermore, let the quantities qi be defined by qi = n/ni, and let Ni, specified by N1 = 0 and Ni = Ni−1 + ni−1, denote the left border of the vertical strip that contains the point (x, y) to transform. Then the discrete Kolmogorov system Tn,δ will move the point (x, y) ∈ [Ni, Ni + ni) × [0, n) to the position

Tn,δ(x, y) = (qi(x − Ni) + (y mod qi), (y div qi) + Ni).  (2)
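As a concrete illustration, equation (2) can be implemented directly; the sketch below is our own (variable names and the array convention are our choices, not the author's code). `kolmogorov_permute` maps a single point, and `permute_array` applies the map to every cell of an n × n block; iterating at least 4 log2 n rounds with fresh random partitions yields the mixing asserted by Theorem 1 in the next subsection.

```python
import numpy as np

def kolmogorov_permute(x, y, n, delta):
    """Apply the discrete Kolmogorov system T_{n,delta} of Eq. (2)
    to the point (x, y), 0 <= x, y < n."""
    N_i = 0
    for n_i in delta:                      # locate the vertical strip of x
        if N_i <= x < N_i + n_i:
            q_i = n // n_i                 # stretching/squeezing factor
            return q_i * (x - N_i) + y % q_i, y // q_i + N_i
        N_i += n_i
    raise ValueError("x outside [0, n)")

def permute_array(block, delta):
    """Permute a whole n x n array by T_{n,delta}."""
    n = block.shape[0]
    out = np.empty_like(block)
    for x in range(n):
        for y in range(n):
            xp, yp = kolmogorov_permute(x, y, n, delta)
            out[xp, yp] = block[x, y]
    return out
```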
As detailed in the preceding subsection, continuous Kolmogorov systems Tπ are perfect (ergodic and mixing) permutation operators in the continuous domain. Provided that our definition of discrete Kolmogorov systems Tn,δ has the same desirable properties in the discrete domain, this would deliver a strong permutation operator inherently possessing the properties of confusion, diffusion and perfect statistics, in the sense that the permutations produced are statistically indistinguishable from truly random permutations. The analysis in the next subsection proves that this is indeed true.
**3.3** **Analysis of Discrete Kolmogorov Systems**
As detailed in [10], the following theorem can be proven for discrete Kolmogorov systems Tn,δr:

**Theorem 1.** _Let the side length n = p^m be an integral power of a prime p. Then the application of discrete Kolmogorov systems Tn,δr leads to ergodicity, exponential divergence and mixing, provided that at least 4m iterations are performed and the lists δr used in every round r are chosen independently and at random. As an immediate consequence, this is definitely the case if at least 4 log2 n rounds are iterated._
For any cryptographic system it is always essential to know how many different keys are available. In our case of discrete Kolmogorov systems Tn,δ this reduces to the question: how many different lists δ = (n1, n2, . . ., nk) of ni summing up to n exist when all ni have to divide n?

As detailed in e.g. [1], a computationally feasible answer to this question can be found by a method based on formal power series expansion, leading to a simple recursion relation. If R = {r1, r2, . . ., rm} denotes the set of admissible divisors in ascending order, then cn, the number of all lists δ constituting a valid key for Tn,δ, is given by

cn = 0, if n < r1;
cn = cn−r1 + cn−r2 + . . . + cn−rm, if (n ≥ r1) ∧ (n ∉ {r1, r2, . . ., rm});
cn = 1 + cn−r1 + cn−r2 + . . . + cn−rm, if n ∈ {r1, r2, . . ., rm}.  (3)
Some selected results are given in Table 1. To fully appreciate these impressive numbers, note that the values given express the number of permissible keys for just one round, and that the total number of particles in the universe is estimated to be in the range of about 2^265.

**Table 1. Number of permissible parameters δ for parameterizing the discrete Kolmogorov system Tn,δ for some selected values of n**

|n|c_n|n|c_n|n|c_n|
|---|---|---|---|---|---|
|4|1|8|5|16|55|
|32|5,271|64|47,350,055|128|≈2^50|
|256|≈2^103|512|≈2^209|1024|≈2^418|
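The recursion (3) is straightforward to evaluate. The following memoized sketch is our illustration; it assumes that, for n an integral power of two (n ≥ 4), the admissible divisors are exactly the proper divisors 2, 4, . . ., n/2, and it reproduces the values of Table 1 (e.g. 55 for n = 16 and 5,271 for n = 32).

```python
from functools import lru_cache

def num_keys(n):
    """Count the lists delta = (n1, ..., nk) with sum n, where every
    ni is an admissible divisor of n (recursion (3)).
    Assumes n is a power of two with n >= 4."""
    parts = tuple(d for d in range(2, n) if n % d == 0)

    @lru_cache(maxsize=None)
    def c(m):
        if m < parts[0]:
            return 0
        return (1 if m in parts else 0) + sum(c(m - r) for r in parts)

    return c(n)
```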
## 4 Hash Functions from Chaotic Kolmogorov Systems
Deducing from theorem 1, the following holds true:

**– if random parameters δr are used and at least 4 log2 n rounds are iterated, then any n × n square will be perfectly permuted by applying a sequence of transformations Tn,δr**
**– this permutation is determined by the sequence of parameters δr**
This immediately leads to the following idea of how to calculate the hash value for
a message M using discrete Kolmogorov systems Tn,δr :
**– the bits of the message M can be interpreted as a sequence of parameters δr**
**– the application of a sequence of transforms Tn,δr will result in a permutation**
hash determined by message M
According to this principle, our algorithm for the calculation of a Kolmogorov
_permutation hash of length N for a message M works as described next._
**4.1** **Initialization**
In the initialization phase, all that has to be done is to fill a square array of side length n (such that n × n = N) with, e.g., the left half N/2 zeros and the right half N/2 ones.
**4.2** **Message Schedule**
Next we partition message M into blocks Mi (e.g. of size 512 bits). This is useful because, e.g., the receiver of a message can begin calculating hash values without having to wait for receipt of the entire message; additionally, this keeps our approach in compliance with, e.g., the HMAC algorithm [7], which demands an iterated hash function in its very definition.
Then we expand block Mi to get a corresponding expanded pseudo-random
message block Wi. This can, e.g., be done using linear congruential generators
(LCGs, see [5]), linear feedback shift registers (LFSRs, see [4]) or the expansion
mechanisms used in the Secure Hash Standard (SHS, see [8]) defined by NIST.
All we demand is that this expansion has to deliver ≥ 4 log2 n Mi-dependent
pseudo-random groups gi,r of bits interpretable as parameters δi,r (see following
two subsections on interpretation and use of bit groups gi,r).
**4.3** **Mapping Bit Groups onto Partitions**
When the task is to map bit groups gi,r from Mi and Wi onto valid parameters δi,r = (n1, n2, . . ., nk) this can be accomplished in very simple ways. Just
examine the following illustration:
|Mi or Wi| | | |
|---|---|---|---|
|g_{i,1}|g_{i,2}|g_{i,3}|. . .|
|0 1 . . . 0|1 1 . . . 0|0 1 . . . 1|. . .|
|δ_{i,1}|δ_{i,2}|δ_{i,3}|. . .|
If one writes down explicitly the different partitions δi,r possible for various n (whose total number is given in Tab. 1), one immediately notices that the probability of a value ni being contained in a partition decays exponentially with the magnitude of ni. Therefore the following approach is perfectly justified. When iterating over bit groups gi,r and generating factors ni for δi,r, we interpret the smallest run of equal bits (length 1) as the smallest factor of n, namely ni = 2^1 = 2, a run of two equal bits as factor ni = 2^2 = 4, and so on.
There are two details to observe in the above outlined procedure of mapping bit groups gi,r onto valid partitions δi,r:

**– One point to observe is that the sum of the ni generated from bit groups gi,r has to equal n. Therefore one has to terminate a run as soon as an nj+1 would be generated such that Σ_{i=1}^{j} ni + nj+1 > n. Then the maximum possible nj+1 (as a power of 2) still fulfilling the constraint has to be chosen, and the run length has to be reset to one.**
**– The other point to observe is that one iteration over gi,r may yield ni summing up to less than n. In that case gi,r just has to be scanned iteratively until the generated ni indeed sum up to n.**
Observance of these two details will guarantee that valid parameters δi,r will
always be generated for bit groups gi,r.
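A minimal sketch of this run-length mapping could look as follows (our illustration; the exact scanning and capping conventions of the reference design may differ in detail):

```python
def bits_to_partition(bits, n):
    """Map a pseudo-random bit group onto a valid partition
    delta = (n1, ..., nk): a run of r equal bits becomes the factor 2**r,
    capped so that every factor divides n and the factors sum to n."""
    delta, total, i = [], 0, 0
    while total < n:
        b, run = bits[i % len(bits)], 1
        while run < len(bits) and bits[(i + run) % len(bits)] == b:
            run += 1                      # measure the current run, cyclically
        i += run
        n_i = min(2 ** run, n // 2)       # run of length r -> factor 2^r
        while total + n_i > n:            # keep the running sum <= n
            n_i //= 2
        delta.append(n_i)
        total += n_i
    return delta
```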
**4.4** **Processing a Message Block**
Processing message block Mi and the corresponding expanded message block Wi is perfectly straightforward. Just iteratively take groups of bits gi,r, first from Mi then from Wi, map them onto parameters δi,r, permute the square array according to Tn,δi,r, and finally rotate the array by gi,r mod N to avoid problems with the fixed points (0, 0) and (n − 1, n − 1). All one has to care about in this simple scheme is that groups gi,r taken from Mi must have sizes k such that 2^k is lower or equal to the number of permissible keys (see Tab. 1) for Tn,δi,r to avoid collisions, and that groups gi,r taken from Wi must have sizes k such that 2^k is greater or equal to the number of permissible keys for Tn,δi,r to ensure perfect mixing according to theorem 1.

Applying this procedure for all message blocks Mi of message M will result in excellent chaotic mixing of the square array in strong dependence on message M.
**4.5** **Reading Out the Message Digest**
Finally, reading out the state of the array reached after processing all Mi yields a strong checksum of length N = n × n for message M.
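Putting the pieces together, the overall digest computation can be sketched as below, reusing `permute_array` and `bits_to_partition` from the earlier sketches. Again, this is an illustration of the scheme's structure, not a reference implementation; the rotation step follows the description in section 4.4.

```python
import numpy as np

def kolmogorov_hash(bit_groups, n):
    """Sketch of the N = n*n bit permutation hash: start from the
    half-zeros / half-ones array, permute once per bit group, and
    rotate the flattened state to break the fixed points."""
    N = n * n
    state = (np.arange(N) >= N // 2).astype(np.uint8).reshape(n, n)
    for g in bit_groups:                   # groups from M_i, then W_i
        state = permute_array(state, bits_to_partition(g, n))
        shift = int("".join(map(str, g)), 2) % N
        state = np.roll(state.ravel(), shift).reshape(n, n)
    return state.ravel()                   # the N-bit digest
```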
## 5 Scalability
Some readers might wonder why our description of Kolmogorov permutation
hashes as specified in section 4 does not fix a specific value N for the length
of hash values produced by our approach. The reason is simple: we want our
approach to the design of cryptographic hash functions to be as generic as possible. As already indicated in the title of this contribution, we are aiming at the
development of scalable cryptographic hash functions.
To understand why this scalability is so important, recall from section 2
that an N-bit hash function can only offer security up to level O(2^{N/2}) [11]. Consequently, as computing power increases steadily, it may become desirable to increase the length of hash values produced without having
to redesign the hash function.
In our scheme, increasing the length, and thus achieving remarkable scalability, is straightforward. By just changing the size of the underlying square array from n × n to 2n × 2n, the length of hash values produced is increased by a factor of 4. Obviously, this involves minor modifications to block expansion and bit group partitioning as explained and specified in section 4, but besides these small changes, the same algorithm can be used.
## References
1. M. Aigner. Kombinatorik. Springer Verlag, 1975.
2. V.I. Arnold and A. Avez. Ergodic Problems of Classical Mechanics. W.A. Benjamin, New York, 1968.
3. S. Goldstein, B. Misra, and M. Courbage. On intrinsic randomness of dynamical systems. Journal of Statistical Physics, 25(1):111–126, 1981.
4. Solomon W. Golomb. Shift Register Sequences. Aegean Park Press, 1981.
5. Donald E. Knuth. The Art of Computer Programming. Addison-Wesley, 1998.
6. Jürgen Moser. Stable and Random Motions in Dynamical Systems. Princeton University Press, Princeton, 1973.
7. NIST. Keyed-Hash Message Authentication Code (HMAC). FIPS 198, March 2002.
8. NIST. Secure Hash Standard (SHS). FIPS 180-2, August 2002.
9. R.L. Rivest. The MD5 message digest function. RFC 1321, 1992.
10. Josef Scharinger. An excellent permutation operator for cryptographic applications. In Computer Aided Systems Theory – EUROCAST 2005, pages 317–326. Springer Lecture Notes in Computer Science, Volume 3643, 2005.
11. Bruce Schneier. Applied Cryptography. John Wiley & Sons, 1996.
12. C.E. Shannon. Communication theory of secrecy systems. Bell System Technical Journal, 28(4):656–715, 1949.
13. Paul Shields. The Theory of Bernoulli Shifts. The University of Chicago Press, Chicago, 1973.
14. Xiaoyun Wang, Yiqun Lisa Yin, and Hongbo Yu. Finding collisions in the full SHA-1. In CRYPTO, 2005.
## Prospects of federated machine learning in fluid dynamics
Omer San,[1,][ a)] Suraj Pawar,[1] and Adil Rasheed[2]
1)School of Mechanical & Aerospace Engineering, Oklahoma State University, Stillwater, OK 74078,
_USA._
2)Department of Engineering Cybernetics, Norwegian University of Science and Technology, N-7465, Trondheim,
_Norway._
(Dated: 16 August 2022)

a) Electronic mail: [email protected]
Physics-based models have been mainstream in fluid dynamics for developing predictive models. In recent
years, machine learning has offered a renaissance to the fluid community due to the rapid developments
in data science, processing units, neural network based technologies, and sensor adaptations. So far in
many applications in fluid dynamics, machine learning approaches have been mostly focused on a standard
process that requires centralizing the training data on a designated machine or in a data center. In this
letter, we present a federated machine learning approach that enables localized clients to collaboratively learn
an aggregated and shared predictive model while keeping all the training data on each edge device. We
demonstrate the feasibility and prospects of such a decentralized learning approach with an effort to forge a
deep learning surrogate model for reconstructing spatiotemporal fields. Our results indicate that federated
machine learning might be a viable tool for designing highly accurate predictive decentralized digital twins
relevant to fluid dynamics.
Keywords: Federated machine learning, decentralized digital twins, deep learning, proper orthogonal decomposition, surrogate modeling
In many complex systems involving fluid flows, computing a physics-based model might be prohibitive, especially when our simulations should be compatible with the
timescales of natural phenomena. Consequently, there
is an ever-growing interest in generating surrogate or reduced order models[1]. It has also been envisioned that a
digital twin capable of accurately representing the physical system could offer a better value proposition to specific applications and stakeholders[2]. The role of this digital twin might be to provide descriptive, diagnostic, predictive, or prescriptive guidelines for a better-informed
decision. The market pull created by digital twin-like
technologies coupled with the technology push provided
by significant advances in machine learning (ML) and
artificial intelligence (AI), advanced and cost-effective
sensor technologies, readily available computational resources, and opensource ML libraries have accelerated
ML penetration in domain sciences like never before. The
last decade has seen an exponential growth of data-driven
modeling technologies (e.g., deep neural networks) that
might be key enablers for improving the modeling accuracy of geophysical fluid systems[3]. A recent workshop
held by NASA Advanced Information Systems Technology Program and Earth Science Information Partners on
ML adoption[4] identified the following guidelines, among
many others, in this area:
- Cutting edge ML algorithms and techniques need to be
available, packaged in some way and well understood
so as to be usable.
- Computer security implementations are outdated and
uncooperative with science investigations. Research in
making computational resources secure and yet easily
usable would be valuable.
One of the fluid flow problems that ML and AI can positively impact is weather forecasting. Big data will be the
key to making the digital twins of the natural environments a reality. In addition to the data from forecasting models and dedicated weather stations, it can be expected that there will be an unprecedented penetration of
smart devices (e.g., smart weather stations, smartphones,
and smartwatches), and contributions from crowdsourcing. For example, by 2025, there will be more than 7 billion smartphones worldwide. This number is much more significant than the paltry number (over 10,000) of official meteorological stations around the world[5]. While analyzing and
utilizing data only from a few edge devices might not
yield accurate predictions, processing data from many
smart and connected devices equipped with sensors might
be a game changer in weather monitoring and prediction.
In their recent report, O’Grady et al. [6] highlighted that
the Weather Company utilizes data from over 250,000
personal weather stations. Moreover, Chapman, Bell,
and Bell [7] discussed how the crowdsourcing data-driven
modeling paradigm could take meteorological science to
a new level using smart Netatmo weather stations. As
more attention shifts to smart and connected internet-of-things devices, the security and privacy implications of such
smart weather stations have also been discussed[8]. Additionally, big data will come with its own challenges characterized by 10 Vs[9]. The 10 Vs imply large volume, velocity, variety, veracity, value, validity, variability, venue,
vocabulary, and vagueness. Volume refers to the size of
data, velocity refers to the data generation rate, variety refers to the data type, veracity refers to the data
quality and accuracy, value refers to the data usefulness,
validity refers to the data quality and governance, variability refers to the dynamic, evolving behavior in the
data source, venue refers to the heterogeneous data from
multiple sources, and vocabulary refers to the semantics
describing data structure. Finally, vagueness refers to
the confusion over the meaning of data and tools used.
In the weather forecast and many other processes, we
foresee that all these problems will have to be addressed.
To this end, in this letter, we focus on the statistical learning part and introduce a distributed training
approach to generate autoencoder models that are relevant to the nonlinear dimensionality reduction of spatiotemporally distributed data sets. We aim at exploring
the feasibility of such a decentralized learning framework
to model complex spatiotemporal systems in which local
data samples are held in edge devices. The case handled here is relatively simple but that was completely
intentional as it eases the communication and dissemination of the work to a larger audience. Specifically, we
put forth a federated ML framework considering the Kuramoto–Sivashinsky (KS) system[10,11], which is known for
its irregular or chaotic behavior.
This system has been derived to describe diffusion-induced chaotic behavior in reaction systems[12], hydrodynamic instabilities in laminar flames[13], phase dynamics of nonlinear Alfvén waves[14] as well as nonlinear saturation of fluctuation potentials in plasma
physics[15]. Due to its systematic route to chaos, the
KS system has attracted much attention recently to test
the feasibility of emerging ML approaches specifically
designed to capture complex spatiotemporal dynamics
(see, for example, González-García, Rico-Martínez, and
Kevrekidis [16], Pathak et al. [17], Vlachas et al. [18], Linot
and Graham [19], Vlachas et al. [20]). The KS equation with
_L-periodic boundary conditions can be written as_
∂u/∂t = −∂⁴u/∂x⁴ − ∂²u/∂x² − u ∂u/∂x,  (1)

on a spatial domain x ∈ [0, L], where the dynamics undergo a hierarchy of bifurcations as the spatial domain
size L is increased, building up the chaotic behavior.
Here, we perform the underlying numerical experiments
with L = 22 to generate our spatiotemporal data set.
Equation 1 is solved using the fourth-order method for
stiff partial differential equations[21] with the spatial grid
size of N = 64. The random initial condition is assigned
at time t = −250 and the solution is evolved with a time step of 2.5 × 10⁻³ up to t = 0. The trajectory of the KS system in the initial transient period is shown in Figure 1.
Using the solution at time t = 0 as the initial condition,
the KS system is evolved till t = 2500. The data is sampled at a time step of 0.25 and these 10,000 samples are
used for training and validation. For the testing purpose,
the data from t = 2500 to t = 3750 is utilized.
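For readers who wish to reproduce such a data set, a compact NumPy transcription of the fourth-order ETDRK4 scheme of Kassam and Trefethen[21] is sketched below; the contour-integral evaluation of the coefficient functions follows their well-known kursiv.m example, and the default arguments merely mirror the setup described above (they are illustrative, not prescriptive).

```python
import numpy as np

def ks_etdrk4(u0, L=22.0, dt=2.5e-3, nsteps=100000, keep_every=100):
    """Integrate the KS equation (1) with ETDRK4, sampling every
    keep_every steps (dt * keep_every = 0.25 with these defaults)."""
    N = u0.size
    v = np.fft.fft(u0)
    k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)   # wavenumbers
    Lin = k**2 - k**4                              # linear operator (Fourier space)
    E, E2 = np.exp(dt * Lin), np.exp(dt * Lin / 2)
    M = 16                                         # contour points for phi-functions
    r = np.exp(1j * np.pi * (np.arange(1, M + 1) - 0.5) / M)
    LR = dt * Lin[:, None] + r[None, :]
    Q  = dt * np.real(np.mean((np.exp(LR / 2) - 1) / LR, axis=1))
    f1 = dt * np.real(np.mean((-4 - LR + np.exp(LR) * (4 - 3 * LR + LR**2)) / LR**3, axis=1))
    f2 = dt * np.real(np.mean((2 + LR + np.exp(LR) * (-2 + LR)) / LR**3, axis=1))
    f3 = dt * np.real(np.mean((-4 - 3 * LR - LR**2 + np.exp(LR) * (4 - LR)) / LR**3, axis=1))
    g = -0.5j * k                                  # from the nonlinear term -u u_x
    snapshots = [u0]
    for step in range(1, nsteps + 1):
        Nv = g * np.fft.fft(np.real(np.fft.ifft(v))**2)
        a = E2 * v + Q * Nv
        Na = g * np.fft.fft(np.real(np.fft.ifft(a))**2)
        b = E2 * v + Q * Na
        Nb = g * np.fft.fft(np.real(np.fft.ifft(b))**2)
        c = E2 * a + Q * (2 * Nb - Nv)
        Nc = g * np.fft.fft(np.real(np.fft.ifft(c))**2)
        v = E * v + Nv * f1 + 2 * (Na + Nb) * f2 + Nc * f3
        if step % keep_every == 0:
            snapshots.append(np.real(np.fft.ifft(v)))
    return np.array(snapshots)
```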
In this work, the federated ML is demonstrated for an
autoencoder which is a powerful approach for obtaining
the latent space on a nonlinear manifold. The autoencoder is composed of the encoder and a decoder, where
the encoder maps an input to a low-dimensional latent space and the decoder performs the inverse mapping from latent space variables to the original dimension at the output.

FIG. 1. The evolution of the KS system illustrating the spatiotemporal field data at the initial transient period.

If we denote the encoder function by η and the decoder function by ξ, then we can represent the manifold learning as follows

η, ξ = arg min_{η,ξ} ∥u − (ξ ∘ η)u∥,  (2)

η : u ∈ R^N → z ∈ R^R,  (3)

ξ : z ∈ R^R → u ∈ R^N,  (4)

where z represents the low-dimensional latent space and R is the dimensionality of the latent space.
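In code, the encoder-decoder pair of equations (2)-(4) might be set up as follows. This PyTorch sketch is our own; the layer widths and activations are assumptions, not the architecture used in this letter, and only the dimensions N = 64 and R = 8 are taken from the text.

```python
import torch.nn as nn

N, R = 64, 8   # state dimension and latent dimension

encoder = nn.Sequential(nn.Linear(N, 32), nn.ReLU(), nn.Linear(32, R))   # eta
decoder = nn.Sequential(nn.Linear(R, 32), nn.ReLU(), nn.Linear(32, N))   # xi

def reconstruct(u):
    """(xi o eta)(u): compress to R latent variables, then decode."""
    return decoder(encoder(u))

loss_fn = nn.MSELoss()   # trained to minimize ||u - (xi o eta)(u)||
```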
We closely follow the seminal work in federated
learning[22], which introduces a federated averaging algorithm where clients collaboratively train a shared model.
Figure 2 contrasts the federated learning approach with
the centralized method. In the centralized method, the
local dataset is transferred from clients to a central server
and the model is trained using centrally stored data. In
case of the federated learning, the local dataset is never
transferred from clients to a server. Instead, each client
computes an update to the global model maintained by
the server based on the local dataset, and only this update to the model is communicated. The federated averaging algorithm assumes that there is a fixed set of
_K clients with a fixed local dataset and a synchronous_
update scheme is applied in rounds of communications.
At the beginning of each communication round, the central server sends the global state of the model (i.e., the
current model parameters) to each of the clients. Each
client computes the update to the global model based on
the global state and local dataset and this update is sent
to a server. The server then updates the global state of
the model based on the local updates received from all
clients, and this process continues. The objective function for a federated averaging algorithm can be written
as follows

f(w) = Σ_{k=1}^{K} (n_k / n) F_k(w),  where  F_k(w) = (1/n_k) Σ_{i∈P_k} f_i(w),  (5)

where P_k is the data on the kth client, n_k is the cardinality of P_k, and f_i(w) = l(x_i, y_i; w) is the loss of the prediction on example (x_i, y_i).

FIG. 2. Overview and schematic illustrations of the centralized and federated ML approaches.

The above aggregation protocol can be
applied to any ML algorithm. In this work, we use the autoencoder for nonlinear dimensionality reduction[23], and
the complete pseudo-code for deep learning models in a
federated setting is provided in Algorithm 1. We highlight that the approach we utilize in our study simply
weights edge devices proportionally by the data they
own. More advanced approaches can be considered to
mitigate such limitations[24–28], but that is beyond the
scope of this letter.
Following the work of Vlachas et al. [20], we first validate the centralized approach by varying R. For the federated learning, we use K = 10 clients, and each client
model is trained for E = 1 local epoch with a batch size
_B = 32._ For a fair comparison, the batch size of 320
is utilized for training the centralized autoencoder. The
validation loss for the centralized and federated autoencoder with different dimensionality of the latent space
is depicted in Figure 3 and we see that both the losses
converge to very similar values. This shows that there
is no significant loss in accuracy due to federated learning compared to centralized learning. As shown in Figure 4, the reconstruction error for both centralized and
federated autoencoders saturates around R = 8 modes.
Figure 4 also demonstrates that a linear approach based
on the proper orthogonal decomposition (POD) (see, e.g.,
Ahmed et al. [1,29], Pawar et al. [30,31], San and Iliescu [32,33])
requires significantly more modes to represent underlying
flow dynamics with the same accuracy. Our observations,
which are consistent with previous works[19,20,34,35], suggest that the latent space dynamics lies effectively on a
manifold with R = 8 dimensions. Although our analysis
includes a global POD approach for comparison purpose,
we may consider to apply a localized POD approach[36–38]
for improved modal representation. Instead of a detailed
POD analysis here, our work rather aims primarily at
demonstrating the potential of federated learning in fluid
mechanics as opposed to centralized learning.
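The linear POD baseline of Figure 4 can be reproduced in a few lines (a sketch; the snapshot layout and function name are our assumptions): the leading R right singular vectors of the mean-subtracted snapshot matrix serve as spatial modes.

```python
import numpy as np

def pod_mse(snapshots, R):
    """Mean squared reconstruction error of a rank-R POD basis.
    snapshots: array of shape (num_samples, N)."""
    mean = snapshots.mean(axis=0)
    fluct = snapshots - mean
    _, _, Vt = np.linalg.svd(fluct, full_matrices=False)
    modes = Vt[:R].T                               # leading R spatial modes
    recon = mean + (fluct @ modes) @ modes.T       # project, then lift back
    return np.mean((snapshots - recon) ** 2)
```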
**Algorithm 1 Federated averaging algorithm. B is the**
local minibatch size, E is the number of local epochs, and
_α is the learning rate._
**Server execution:**
initialize w0
**for t = 1, 2, . . . do**
**for each client k do**
w_{t+1}^{k} ← ClientUpdate(k, w_t)
**end for**
w_{t+1} ← Σ_{k=1}^{K} (n_k / n) w_{t+1}^{k}
**end for**
**ClientUpdate(k, w):**
_B ←_ (split Pk into batches of size B)
**for each local epoch i from 1 to E do**
**for batch b ∈B do**
_w ←_ _w −_ _α∇l(w; b)_
**end for**
**end for**
return w to a server
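A language-level rendering of Algorithm 1 is given below (a NumPy sketch; `grad_fn`, which returns the gradient of the loss with respect to each weight array for a minibatch, is a user-supplied placeholder, and all names are illustrative):

```python
import numpy as np

def client_update(w, data, grad_fn, lr=1e-3, epochs=1, batch_size=32):
    """ClientUpdate(k, w): local minibatch SGD on one client's data."""
    data = list(data)
    for _ in range(epochs):
        np.random.shuffle(data)
        for s in range(0, len(data), batch_size):
            grads = grad_fn(w, data[s:s + batch_size])
            w = [wi - lr * gi for wi, gi in zip(w, grads)]
    return w

def federated_round(w_global, client_datasets, grad_fn):
    """One communication round: local updates, then the n_k/n weighted average."""
    n = sum(len(d) for d in client_datasets)
    updates = [client_update([wi.copy() for wi in w_global], d, grad_fn)
               for d in client_datasets]
    return [sum((len(d) / n) * wk[layer] for d, wk in zip(client_datasets, updates))
            for layer in range(len(w_global))]
```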
The trajectory of the KS system for the testing period
is shown in Figure 5 along with the error between the
true data and reconstructed data from centralized and
federated autoencoders. The error is computed as the
absolute difference between the true and predicted state
of the KS system. Both the centralized and federated
autoencoders have a similar level of error.
_Conclusion — This letter explores the potential of fed-_
erated ML for modeling complex spatiotemporal dynamical systems. In particular, we considered the problem of
nonlinear dimensionality reduction of chaotic systems as
a demonstration case. Federated learning allows for collaborative training of a model while keeping the training
data decentralized. Our numerical experiments with the
application of autoencoder to the Kuramoto-Sivashinsky
system show that a federated model can achieve the same
level of accuracy as the model trained using the central
data collected from all clients. This work opens up the
possibility of updating a model in a centralized setting
without exposing the local data collected from different
sources.
FIG. 3. Validation loss during training. Here, the dashed line corresponds to centralized learning and the solid lines to federated learning.
FIG. 4. Reconstruction mean squared error (MSE) on the
test data.
We argue that federated learning can solve some of the
_big data challenges in complex dynamical systems pro-_
vided that the different stakeholders, clients, and vendors
use the same vocabulary as follows:
- Big volume and velocity: Since inference, analysis and modeling happen on the edge devices only, a small amount of data needs to be communicated. This decentralized process will significantly reduce the communication bandwidth and storage burden.
- Big variety, venue, value and vagueness: Currently, a
lack of trained personnel (to deal with a large variety of
data in a centralized location) hinders the adoption of
scalable digital solutions. However, the problem is automatically remedied due to domain experts’ presence
at the data generation venue to extract value, thereby
minimizing vagueness.
- Big variability, veracity and validity: The variability
in the data generation and sharing processes resulting
from rapid changes in sensor technologies and corresponding regulatory environment will not be a challenge as it will be dealt with locally with federated
learning.
- Solving data privacy and security issues: Since the
data never leaves the local servers, it will enhance security and encourage clients and vendors to collaborate.
Although in this letter we primarily focus on federated
learning in the context of spatiotemporal prediction of
such chaotic systems, our approach can be generalized to
large-scale computational settings beyond transport phenomena, for which the research outcomes might improve
broader modeling and simulation software capabilities to
design cohesive, effective, and secure predictive tools for
cross-domain simulations in the various levels of information density. In our future studies, we plan leveraging the
decentralized learning approaches in the context of precision meteorology, and develop new physics-guided federated learning approaches to forge new surrogate models compatible among heterogeneous computing environments.
FIG. 5. Reconstruction performance of the centralized and
federated learning approaches with R = 8.
This material is based upon work supported by the
U.S. Department of Energy, Office of Science, Office of
Advanced Scientific Computing Research under Award
Number DE-SC0019290. O.S. gratefully acknowledges
their Early Career Research Program support.
**Data Availability**
The data that supports the findings of this study is available within the article.
1S. E. Ahmed, S. Pawar, O. San, A. Rasheed, T. Iliescu, and
B. R. Noack, “On closures for reduced order models—a spectrum
of first-principle to machine-learned avenues,” Physics of Fluids
**33, 091301 (2021).**
2A. Rasheed, O. San, and T. Kvamsdal, “Digital twin: Values,
challenges and enablers from a modeling perspective,” IEEE Access 8, 21980–22012 (2020).
3O. San, A. Rasheed, and T. Kvamsdal, “Hybrid analysis and
modeling, eclecticism, and multifidelity computing toward digital
twin revolution,” GAMM-Mitteilungen 44, e202100007 (2021).
4“Report from the NASA Machine Learning Workshop, April 17–19, 2018, Boulder, CO,” https://esto.nasa.gov/wp-content/uploads/2020/03/2018MachineLearningWorkshop_Report.pdf.
5D. J. Mildrexler, M. Zhao, and S. W. Running, “A global comparison between station air temperatures and MODIS land surface temperatures reveals the cooling role of forests,” Journal of
Geophysical Research: Biogeosciences 116 (2011).
6M. O’Grady, D. Langton, F. Salinari, P. Daly, and G. O’Hare,
“Service design for climate-smart agriculture,” Information Processing in Agriculture 8, 328–340 (2021).
7L. Chapman, C. Bell, and S. Bell, “Can the crowdsourcing
data paradigm take atmospheric science to a new level? A case
study of the urban heat island of London quantified using Netatmo weather stations,” International Journal of Climatology
**37, 3597–3605 (2017).**
8V. Sivaraman, H. H. Gharakheili, C. Fernandes, N. Clark, and
T. Karliychuk, “Smart iot devices in the home: Security and
privacy implications,” IEEE Technology and Society Magazine
**37, 71–79 (2018).**
9F. N. Fote, S. Mahmoudi, A. Roukh, and S. A. Mahmoudi, “Big
data storage and analysis for smart farming,” in 2020 5th Inter_national Conference on Cloud Computing and Artificial Intelli-_
_gence: Technologies and Applications (CloudTech) (IEEE, 2020)_
pp. 1–8.
10D. Armbruster, J. Guckenheimer, and P. Holmes, “Kuramoto–
Sivashinsky dynamics on the center–unstable manifold,” SIAM
Journal on Applied Mathematics 49, 676–691 (1989).
11P. Holmes, J. L. Lumley, G. Berkooz, and C. W. Rowley, Tur_bulence, coherent structures, dynamical systems and symmetry_
(Cambridge University Press, Cambridge, 2012).
12Y. Kuramoto, “Diffusion-induced chaos in reaction systems,”
Progress of Theoretical Physics Supplement 64, 346–367 (1978).
13G. I. Sivashinsky, “Nonlinear analysis of hydrodynamic instability in laminar flames—I. Derivation of basic equations,” Acta
Astronautica 4, 1177–1206 (1977).
14E. Rempel, A.-L. Chian, A. Preto, and S. Stephany, “Intermittent chaos driven by nonlinear Alfvén waves,” Nonlinear Processes in Geophysics 11, 691–700 (2004).
15R. E. LaQuey, S. Mahajan, P. Rutherford, and W. Tang, “Nonlinear saturation of the trapped-ion mode,” Physical Review Letters 34, 391 (1975).
16R. González-García, R. Rico-Martínez, and I. G. Kevrekidis,
“Identification of distributed parameter systems: A neural net
based approach,” Computers & Chemical Engineering 22, S965–
S968 (1998).
17J. Pathak, B. Hunt, M. Girvan, Z. Lu, and E. Ott, “Model-free
prediction of large spatiotemporally chaotic systems from data:
A reservoir computing approach,” Physical Review Letters 120,
024102 (2018).
18P. R. Vlachas, W. Byeon, Z. Y. Wan, T. P. Sapsis, and
P. Koumoutsakos, “Data-driven forecasting of high-dimensional
chaotic systems with long short-term memory networks,” Proceedings of the Royal Society A: Mathematical, Physical and
Engineering Sciences 474, 20170844 (2018).
19A. J. Linot and M. D. Graham, “Deep learning to discover and
predict dynamics on an inertial manifold,” Physical Review E
**101, 062209 (2020).**
20P. R. Vlachas, G. Arampatzis, C. Uhler, and P. Koumoutsakos, “Multiscale simulations of complex systems by learning
their effective dynamics,” Nature Machine Intelligence 4, 359–
366 (2022).
21A.-K. Kassam and L. N. Trefethen, “Fourth-order time-stepping
for stiff PDEs,” SIAM Journal on Scientific Computing 26, 1214–
1233 (2005).
22B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. A.
y Arcas, “Communication-efficient learning of deep networks
from decentralized data,” in Artificial Intelligence and Statistics
(PMLR, 2017) pp. 1273–1282.
23S. E. Ahmed, O. San, A. Rasheed, and T. Iliescu, “Nonlinear proper orthogonal decomposition for convection-dominated
flows,” Physics of Fluids 33, 121702 (2021).
24T. Li, A. K. Sahu, M. Zaheer, M. Sanjabi, A. Talwalkar, and
V. Smith, “Federated optimization in heterogeneous networks,”
Proceedings of Machine Learning and Systems 2, 429–450 (2020).
25T. Li, M. Sanjabi, A. Beirami, and V. Smith, “Fair resource allocation in federated learning,” arXiv preprint arXiv:1905.10497
(2019).
26A. Fallah, A. Mokhtari, and A. Ozdaglar, “Personalized federated learning: A meta-learning approach,” arXiv preprint
arXiv:2002.07948 (2020).
27Y. Deng, M. M. Kamani, and M. Mahdavi, “Adaptive personalized federated learning,” arXiv preprint arXiv:2003.13461
(2020).
28A. Z. Tan, H. Yu, L. Cui, and Q. Yang, “Towards personalized
federated learning,” IEEE Transactions on Neural Networks and
Learning Systems (2022).
29S. E. Ahmed, S. M. Rahman, O. San, A. Rasheed, and I. M.
Navon, “Memory embedded non-intrusive reduced order modeling of non-ergodic flows,” Physics of Fluids 31, 126602 (2019).
30S. Pawar, O. San, A. Nair, A. Rasheed, and T. Kvamsdal,
“Model fusion with physics-guided machine learning: Projectionbased reduced-order modeling,” Physics of Fluids 33, 067123
(2021).
31S. Pawar, S. E. Ahmed, O. San, and A. Rasheed, “Data-driven
recovery of hidden physics in reduced order modeling of fluid
flows,” Physics of Fluids 32, 036602 (2020).
32O. San and T. Iliescu, “Proper orthogonal decomposition closure
models for fluid flows: Burgers equation,” International Journal
of Numerical Analysis & Modeling, Series B 5, 217–237 (2014).
33O. San and T. Iliescu, “A stabilized proper orthogonal decomposition reduced-order model for large scale quasigeostrophic ocean
circulation,” Advances in Computational Mathematics 41, 1289–
1319 (2015).
34P. Cvitanović, R. L. Davidchack, and E. Siminos, “On the state
space geometry of the Kuramoto–Sivashinsky flow in a periodic
domain,” SIAM Journal on Applied Dynamical Systems 9, 1–33
(2010).
35J. C. Robinson, “Inertial manifolds for the Kuramoto-Sivashinsky
equation,” Physics Letters A 184, 190–193 (1994).
36G. Tadmor, D. Bissex, B. Noack, M. Morzynski, T. Colonius,
and K. Taira, “Fast approximated POD for a Flat Plate Benchmark with a time varying angle of attack,” in 4th Flow Control
_Conference (2008) p. 4191._
37O. San and J. Borggaard, “Principal interval decomposition
framework for POD reduced-order modeling of convective Boussinesq flows,” International Journal for Numerical Methods in Fluids 78, 37–62 (2015).
38M. Ahmed and O. San, “Stabilized principal interval decomposition method for model reduction of nonlinear convective systems
with moving shocks,” Computational and Applied Mathematics
**37, 6870–6902 (2018).**
-----
## Optimization of Data and Energy Migrations in Mini Data Centers for Carbon-Neutral Computing
##### Marcos De Melo da Silva, Abdoulaye Gamatié, Gilles Sassatelli, Michael Poss and Michel Robert
**Abstract—Due to large-scale applications and services, cloud computing infrastructures are experiencing an ever-increasing demand**
for computing resources. At the same time, the overall power consumption of data centers has been rising beyond 1% of worldwide
electricity consumption. The usage of renewable energy in data centers contributes to decreasing their carbon footprint and overall
electricity costs. Several green-energy-aware resource allocation approaches have been studied recently. None of them takes
advantage of the joint migration of jobs and energy in green data centers to increase energy efficiency.
This paper presents an optimization approach for energy-efficient resource allocation in mini data centers. The observed momentum
around edge computing makes the design of geographically distributed mini data centers highly desirable. Our solution exploits both
virtual machines (VMs) and energy migrations between green compute nodes in mini data centers. These nodes have energy
harvesting, storage, and transport capabilities. They enable the migration of VMs and energy across different nodes. Compared to VM
allocation alone, joint-optimization of VM and energy allocation reduces utility electricity consumption by up to 22%. This reduction can
reach up to 28.5% for the same system when integrating less energy-efficient servers. The gains are demonstrated using simulation
and a Mixed Integer Linear Programming formulation for the resource allocation problem. Furthermore, we show how our solution
contributes to sustaining the energy consumption of old-generation and less efficient servers in mini data centers.
**Index Terms—Mini data center, distributed computing, carbon neutrality, renewable energy, resource allocation, optimization,**
energy-aware systems, workload allocation and scheduling
##### 1 INTRODUCTION
Cloud computing and other large-scale applications and services have caused an increase in energy needs for infrastructures such as data centers over the past decade.
According to [1], the annual energy consumption of data
centers is estimated to be 200 terawatt-hours (TWh). This
corresponds to around 1% of the worldwide electricity
consumption [2] and 0.3% of global CO2 emissions. Given
the rising energy demand in data centers, innovative technologies (e.g., hyperscale infrastructures) and renewable
energies will become crucial. Major industrial actors such
as Google, Amazon, and Facebook claim to operate carbon-neutral data centers thanks to Renewable Energy Credits [3],
which are non-physical assets linked to renewable energy
projects. Although this strategic incentive does contribute to
developing renewables, it does not imply that data centers
themselves are powered by renewables. Recently, however,
Google announced its intention to match its global data
center energy consumption to renewable energy production.
Its ultimate objective is to make its data centers operate on
decarbonized energy 24/7, 365 days a year [4]. Facebook
declared in its report on sustainability that its global operations will be 100% supported by renewable energy in a few
years [5]. Amazon has set the same goal for 2025; it plans to
achieve net-zero carbon emissions by 2040 [6].
The above trend of incorporating renewable energies
into the power supply mix of data centers will keep on
developing. It not only reduces the total power consumption, but also the carbon emissions. To successfully
achieve this goal, the design of conventional grid-connected
data centers must be revisited. The new designs should be
robust to the intermittent nature of renewable energies while
minimizing the use of utility electricity. They should also
be scalable with respect to energy harvesting and workload
execution capabilities. Finally, they should guarantee low
response times to requests from client devices and users, as
expected in the edge computing context.
**1.1** **Limitations of current approaches**
A notable part of state-of-the-art approaches [7], [8] considers data center designs consuming power from both the
utility grid and renewable sources. Each of the sources is
connected to the data center infrastructure via its centralized
entry point. Renewable energy is either directly used or
stored in large-capacity batteries for later usage [9]. The
key challenge consists in maximizing the use of renewable
energy, while minimizing that from the utility grid. It is
usually solved through various energy-aware scheduling
techniques applied to tasks, workloads or virtual machines
(VMs) [10]. In this paper, we claim that acting only upon
mapping and scheduling of software objects (tasks, workloads or VMs) has limited optimization potential in terms of
energy consumption.
-----
Indeed, data migrations required between computing
nodes are often I/O intensive as they usually involve several
operations on tasks and VMs: context saving before migration, transfer towards remote nodes, and context restoration
before resuming execution on destination nodes. Beyond
additional latencies, these operations incur a significant
energy consumption overhead of the server network [11]. As
shown in our study, combining energy migration with VM
migration between distributed servers, each equipped with local energy harvesting and storage facilities, helps lower
the required brown or non-green energy consumption.
**1.2** **Our solution : distributed mini data centers**
Mini data centers (1–10 racks within 1–25 square meters of compute space [12]) are very promising solutions to meet
the aforementioned requirements, i.e., energy-efficiency,
scalability and low latency. They can execute up to 100
VMs each thanks to their efficient servers. Application domains typically include industrial automation, environmental monitoring, oil and gas exploration, and, in general,
urban applications requiring local and real-time data processing [13].
Tens of such mini data centers can be deployed and
interconnected at the city district level to form a powerful
data center. They can operate through a dedicated electrical
grid, and promote the use of renewable energy [14]. In
the present work, we extend this concept by exploiting the
opportunity to migrate both the workload and energy across
different computing nodes.
We consider a novel design approach for green data
centers, which is composed of distributed green nodes. The
green nodes are interconnected by two kinds of links, as
shown in Figure 1: i) a data link (in blue color), typically
Gigabit Ethernet, used for task or VM migrations between
the nodes; and ii) a power link (in red color), for energy
transfer between green nodes connected by a power crossbar [15], [16]. The power crossbar is a central component
in the design that makes it possible to control the electrical
topology of the energy network via software.
Fig. 1. Mesh network of green nodes in a mini data center.
Each green node includes a compute system, an energy harvester, a local energy storage, and a custom programmable power electronics system handling the energy
transfer between the different nodes.
To demonstrate the relevance of our solution, we address
the following questions:
_•_ **Q1): given a workload profile, how to dimension the main**
_energy harvesting and storage components of our proposed_
_system design to ensure its energy neutrality over a_
_whole year? Here, by energy neutrality, we mean how_
the non-green energy used by a system to execute
a workload in scarce green energy periods can be
compensated for by the surplus of green energy
harvested in more favorable periods. This surplus
can be typically re-injected into the grid when the
energy storage is already full.
_•_ **Q2): how much non-green energy can be saved when**
_executing typical data center workloads in our proposed_
_system, i.e., supporting both data and energy migration_
_between execution nodes? The resulting non-green en-_
ergy savings have a beneficial impact on the electricity bill of data centers. The related expense can reach
several million dollars every year, representing a non
negligible percentage of the data center exploitation
costs. For this reason, electricity cost reduction for
data centers is still a major challenge [17].
_•_ **Q3): how do these non-green energy savings vary ac-**
_cording to i) different solar irradiation conditions (low_
_versus high irradiation), and ii) according to less energy-_
_efficient servers, typically relying on old-generation power_
_management technologies?_
**1.3** **Our contribution**
The contribution of the current work consists of an optimization approach to maximize the use of renewable energy
by exploiting the unprecedented energy and data migration
capability of the proposed design. We answer the aforementioned questions by characterizing the energy gains across
different design scenarios.
_•_ First, questions Q1) and Q2) are addressed by considering a battery and photovoltaic panel sizing model,
detailed in Appendix A. This model is used to solve
the energy optimization problem for four representative workload execution policies in the system: i) VM
execution without any migration, ii) VM execution
with data migration only, iii) VM execution with
energy migration only, and iv) VM execution with
both data and energy migration. These policies are
compared according to their efficiency in terms of
non-green energy usage, i.e., from the utility grid.
_•_ To answer question Q3), we evaluate the above migration policies while considering low and high solar
irradiation conditions used in the south of France.
The irradiation data are obtained from a well-known
database [18]. We also assess the same scenarios
while modeling old-generation servers dissipating
more static power due to inefficient power management mechanisms. Finally, we explore the impact of
energy harvesting and storage resource reduction on
the considered system. In particular, we reduce the solar panel and battery capacities by 25% with respect to the initial energy-neutral dimensioning.
_•_ Given the outcomes of the above evaluations, we
show that execution policies with energy migration
can reduce by more than 50% the non-green energy
usage over a year in favorable and realistic solar
-----
irradiation conditions. This tendency is preserved
with either server wear or a reduction by 25% of
energy harvesting and storage resources w.r.t. a reference energy-neutral dimensioning. These results
offer interesting insights into the trade-off between
the cost and sustainability of data centers against
their expected energy-efficiency after deployment.
**1.4** **Outline of this paper**
The remainder of this paper is organized as follows. Section
2 presents some related work on green data centers and
optimization techniques applied to the resource allocation
issues. Section 3 provides more details on the design principles of our solution. It also deals with the modeling of
the target distributed computing nodes for the optimization
problem. Section 4 describes an application of Mixed Integer Linear Programming to formulate the energy-efficient
resource allocation problem in our proposed system design.
Section 5 evaluates the optimization solution through different use cases. Finally, Section 6 gives some concluding
remarks and perspectives for future work. Appendices A and
B provide further technical details and complementary results.
##### 2 RELATED WORK
We first discuss the management of green data centers.
The covered literature focuses on maximizing the use of
renewable energy over utility electricity, which is assumed
to be partly produced by conventional fossil fuels. Then, a
special focus is put on relevant optimisation techniques for
efficient resource allocation in data centers.
**2.1** **Green data center management**
A variety of power management techniques has been investigated for reducing the energy consumption of computing
systems, from embedded multiprocessor systems to high-performance computing domains [10]. At the software level,
these techniques cover workload allocation and scheduling
policies as well as dynamic load balancing technologies.
At the hardware level, well-established techniques include
dynamic voltage and frequency scaling (DVFS), dynamic
power management, etc. In the specific case of data centers,
the reduction of the non-negligible cooling-related power
has been also addressed [19], [20], [21].
The survey in [10] provides a comprehensive presentation of so-called green-energy-aware approaches. It distinguishes workload scheduling from VM management.
The former approach focuses on job scheduling to find
favorable conditions (electricity price, carbon emission level,
renewable power availability) in a data center. Meanwhile,
the latter approach leverages a virtualized environment
through VM migration during execution. Our solution also
applies to VM migration w.r.t. renewable power availability.
It also integrates a supplementary dimension, i.e. energy
migration, which contributes to reducing the overall energy
cost.
Typical green-energy-aware approaches exploit task
scheduling, as illustrated in [22] and [23]. In [22], a multiobjective co-evolutionary algorithm is advocated during the
scheduling. It enables configuring the execution nodes with
adequate voltage/frequency levels to match the renewable
energy supply. In this way, authors try to maximize both
the renewable energy utilization and the workloads' quality
of service (QoS), i.e., higher task execution throughput and
lower energy consumption. In [23], a larger task scheduling
problem is considered for data centers. An evolutionary
multi-objective approach is applied to solve the problem
when both computing and cooling infrastructures are controlled. The solution addresses three optimization dimensions: power consumption, temperature, and QoS.
The problem of VM allocation to servers has also been
addressed by focusing on the server network activity [11].
The aim of this study is to reduce the number of active
switches in the network and balance the data traffic flows.
For this purpose, the relation between power consumption
and data routing is exploited. In [24], another approach
deals with energy-proportionality in large scale data centers
by lowering the power consumption of data center networks
while they are not being used. Different allocation policies
have been evaluated and analyzed through simulations.
An interesting insight from this study is that the size of
the networks plays a central role in achieving energy-proportionality: the larger the data center networks, the
greater the energy-proportionality.
In [8], the authors propose a methodology for operation
planning to maximize the usage of locally harvested energy
in data centers. They define a mechanism to shift the energy
demand from low renewable energy production time slots
to higher energy production ones. This reduces the power
consumption from the utility grid. The shifting mechanism
relies on energy production prediction according to the
weather variation. The authors show their approach enables
an increase in renewable energy usage by 12%. A similar
study [7] recently shows that this usage can be increased
by 10%, while utility electricity energy consumption can
be reduced by 21%. It adopts a self-adaptive approach for
resource management in cloud computing. In the above
studies, the experimental results are obtained via simulation. More generally, access to suitable tools for studying
data centers integrating renewable energy sources has been
a real challenge. The most popular solutions include the
research platforms proposed by Goiri et al. [25], [26], [27].
They mainly focus on solar energy.
While the aforementioned studies account for both grid
power supply and renewable energy sources, other studies
only consider the latter. For instance, in [28] the authors
deal with independent task scheduling in computing facilities powered only by renewable energy sources. Using a
Python-based simulation environment, they evaluate different scheduling heuristics within a predicted power envelope
to minimize the makespan in multicore systems. In [29],
a similar problem is addressed for data centers. A specific
task scheduling module is defined, which aims to maximize
QoS demands. It considers a so-called power envelope estimated from weather forecasts and states of charge of energy
storage components. An interesting insight gained from this
study is that more power does not necessarily lead to better QoS,
but knowing when the power is delivered is more relevant
for better outcomes.
The zero-carbon cloud (ZCCloud) project [30] deals with
-----
the exploitation of so-called stranded power. This
power is generated by renewable energy sources (e.g. wind,
solar) when the harvested power exceeds power demand
and cannot be stored by the grid due to limited storage
capacity. Instead of discarding this power at the source,
the project proposes an approach for using the stranded
power, hence reducing the carbon footprint of cloud computing. Examples of issues investigated in the framework
of ZCCloud are: compute load shifting to better leverage
carbon-free energy, execution of applications under real-time requirements (e.g., virtual reality, distributed video
analytics) on serverless cloud computing, and extension of
computing hardware lifetime.
A noteworthy approach is presented in the Datazero
project [9]. It aims at the zero-emission and robust management of data centers using renewable energy sources. Unlike
the majority of existing approaches, Datazero advocates
a separate optimization of design objectives: objectives of
the IT services versus electrical management. A negotiation module is defined between both to find a satisfactory
compromise with respect to their respective objectives and
constraints, e.g., high availability of IT services under the erratic behavior of renewable energy sources. By doing so, the
authors avoid a challenging global optimization problem.
**2.2** **Optimization for data center resource allocation**
An important optimization problem in cloud data centers
is the consolidation of VMs to physical servers. It consists
in placing VMs in as few servers as possible and putting
in sleep mode or shutting off idle servers. This reduces
the global power consumption without sacrificing QoS. The
objective is to enable high performance while ensuring that
_Service Level Agreement (SLA) levels are met and operational_
costs are minimized. Approaches for the VM placement
need to effectively answer the following questions: i) which
node(s) should host new VMs as well as VMs that are
being migrated? ii) when should a VM be migrated? iii)
which VMs should be migrated? The last two questions
are addressed by techniques that detect underutilized and
overloaded servers [31]. Finally, tackling the first question
requires the solution to an optimization problem involving
the allocation of limited server resources to the VMs.
The VM placement problem is usually formulated as
a multi-dimensional bin-packing problem, where servers are
modeled as bins and VMs as items. Multiple server resources, e.g., CPU, memory, disk space, network bandwidth,
are allocated to the VMs. Despite their limited practical
applicability, some authors have proposed exact approaches
based on Integer Linear Programming (ILP) to solve the
problem [32], [33]. However, given the dynamic nature
of the problem, most approaches are heuristic algorithms.
Among those, [34], [35], [36] developed algorithms based on
classical bin-packing heuristics, e.g., first-fit decreasing and
_best-fit decreasing. Meanwhile, as mentioned in the previous_
section, [22], [23] investigated evolutionary algorithms. We
refer the interested reader to recent surveys [31], [37], [38]
for a more exhaustive literature coverage.
When comparing the existing ILP-based resource allocation solutions with ours, we observe that the adopted formulations are often presented with the purpose of describing the problem. The formulations are actually not solved
in practice. Instead, the authors prefer to use heuristic
procedures. The main reason is the higher execution times
required by ILP solvers. In addition, such formulations only
tackle the problem of assigning VMs to servers and allocating the necessary resources at a given moment. They do not
take into account how the resource utilization change over
time. This last observation can be applied to the heuristic
approaches as well. In our case, we need to properly model
and simulate the whole computing infrastructure so that
we are able to capture its dynamic behaviour, i.e., how
the resource utilization and server states evolve over time,
which implies solving the problem for an extended period of
time. For that, we employ a time-indexed formulation that
not only performs the VMs placement and tracks resources,
but also models energy consumption and its flow.
**2.3** **Summary**
Table 1 summarizes some relevant features of the discussed
studies in comparison with the present work. While our
approach aims at leveraging renewable energy sources in
mini data centers, it fundamentally differs from the above
studies in its additional optimization dimension brought by
energy transfer. The ability to trigger on-demand energy
transfers between distributed nodes is an important lever
(beyond data/workload migrations) for achieving the best
possible energy-efficient trade-offs depending on the node
requirements. This enables us to propose an optimization
model capable of finding the most favorable execution
of the system. By applying suitable data and energy migrations between the nodes, we seek to minimize utility
power consumption, up to solely using renewable energy
for system execution. This matches expectations in mini
data centers [14]. In a seminal work [39], we leveraged the
energy migration principle to address the formal modeling
and analysis of a safety-critical application on a multicore
embedded system, under energy neutrality conditions. The
present work rather focuses on the optimization problem of
resource allocation for a different kind of system.
##### 3 DESIGN PRINCIPLES OF OUR PROPOSAL
Our energy-neutral system design consists of n interconnected green nodes, N = {1, . . ., n}. Each green node (see
Figure 2) includes a computing system such as a server blade,
an energy harvester consisting of photo-voltaic (PV) solar
_panels, a battery for local energy storage, and a logic board_
for managing the energy generation and storage, as well as
the transfer of energy between nodes.
A node operates primarily on the harvested energy. In
periods of low solar irradiation (e.g., night time, cloudy
and rainy days) in which the average energy demand is
higher than PV production, nodes consume the energy
accumulated in their batteries. In the event that a node has a
near-empty battery, which prevents continuing operation, it
can either transfer its workload to other green nodes or fetch
energy from remote green nodes (see Figure 3). Nodes can
therefore wire power ports together (or conversely isolate),
thereby connecting electrically remote components, e.g. a
given node’s battery with a distant node’s compute system.
This in essence means that energy can be migrated, i.e., the power can be either supplied remotely, or transferred and stored before use. This is a main difference with energy packet networks [40], which support the latter option only.

TABLE 1
Summary of related work on green data center management

| Approach | Renewable energy | Task–Job–VM scheduling | Power scheduling | Cooling power reduction | Simulation approach | Hardware prototype | ILP optim. | Heuristics / Machine learn. | Distributed energy transfer |
|---|---|---|---|---|---|---|---|---|---|
| Xu et al. (2020) [7] | + | + | | | + | + | + | | |
| Cioara et al. (2015) [8] | + | + | | + | + | | | + | |
| Pierson et al. (2019) [9] | + | + | + | | + | | + | + | |
| Kong et al. (2012) [10] | + | + | | + | + | + | + | + | |
| Wang et al. (2014) [11] | | + | | | + | | + | | |
| Abbasi et al. (2012) [19] | | + | | + | + | | | + | |
| Ganesh et al. (2013) [20] | | | + | + | + | | | | |
| Li et al. (2020) [21] | | | | + | + | | | + | |
| Lei et al. (2016) [22] | + | + | | | + | | | + | |
| Nesmachnow et al. (2015) [23] | + | + | | | + | | | + | |
| Goiri et al. (2013, 2014, 2015) [25], [26], [27] | + | + | + | | + | + | + | | |
| Kassab et al. (2017) [28] | + | + | | | + | | | + | |
| Caux et al. (2018) [29] | + | + | | | + | | | + | |
| Chien et al. (2015) [30] | + | + | | | + | + | | | |
| Ismaeel et al. (2018) [31] | | + | | | + | | + | | |
| Ruiu et al. (2017) [24] | | + | | | + | | | + | |
| Hwang et al. (2013) [32] | | + | | | + | | + | + | |
| Tseng et al. (2015) [33] | | + | | | + | | + | | |
| Beloglazov et al. (2011) [34] | | + | | | + | | | + | |
| Jangiti et al. (2020) [35] | | + | | | + | | | + | |
| Liu et al. (2020) [36] | | + | | | + | | | + | |
| This work | + | + | + | + | + | + | + | + | + |

Fig. 2. Green node: server + batteries + solar panels + control logic. (a) Conceptual view. (b) On-roof prototype view.
As a last resort, in case none of the previous actions are
possible, the nodes will be forced to purchase energy from
the utility grid to which they are connected. We assume
that the nodes are connected through Ethernet wires to a
switch (see Figure 3). This allows inter-node communication
and ensures the connectivity to the existing computing and
storage infrastructure, e.g., database servers, file servers,
cloud managing servers.
Fig. 3. Simulated infrastructure for energy-neutral distributed computing.

The above energy-neutral system operates outdoors, for instance placed on a rooftop for maximum solar irradiation.
The outdoor installation alongside the rather low compute
density (required for matching harvesting and compute
power consumption) makes for a cooling-friendly design,
in contrast to conventional data centers which require heavy
climate control equipment (HVAC). Experiments show that
even under high outside temperature a clever node thermal
design (using the enclosure as heatsink and having proper
positioning of vents) alongside few temperature-controlled
fans enables the system to maintain operation at temperatures below 70°C under heavy stress.
In addition, the advocated design inherently favors a
modular system extension. Any new green node is inserted
-----
locally, thereby reducing drastically the necessary system
wide modifications. Finally, a failure of any green node
could be easily bypassed through data or energy re-routing
within the networked system, according to its topological
connections. This naturally increases the resilience of the
whole system [16].
**3.1** **Modeling the system behavior**
Beyond the physical infrastructure and its various components, i.e., the static elements of our design, we also need to
model their behaviour and how they evolve over time, i.e.,
the dynamics of the system.
Indeed, each component mentioned above presents an
interdependent dynamic behaviour, for instance,
_•_ the amount of energy generated by the PV panels varies with the solar irradiation levels, which
depends on weather conditions, time of day, and
season;
_•_ the amount of energy in the batteries varies as it is
consumed by the computational system or is charged
by the PV panels; and finally,
_•_ the amount of energy drained by the computing
elements changes in response to the workload.
To represent this inherent dynamic behaviour of the
system, we define a planning time horizon H, which we
discretize into T time steps. Each time step corresponds to
an interval of τ seconds.
In the following sections we provide further details on
the elements that compose the proposed green computing
infrastructure, as well as introduce some of the notation
used in the remainder of the paper.
**3.2** **Modeling of solar panels and batteries**
The amount of energy generated by the solar panels is
influenced both by the weather conditions (which affect the
local solar irradiation levels) and the physical characteristics
of PV panels, e.g., power conversion efficiency, dimensions,
etc. We apply the following equation to compute the energy,
in watt-hours (Wh), produced in node i ∈ N by ρn solar panels, each one having an area of ρs m² and a conversion efficiency of ρe, during time step t ∈ H, with a solar irradiation of ι^t W/m²:

\[
G_i^t = \iota^t \times (\rho_e \times \rho_n \times \rho_s) \times (\tau / 3600) \tag{1}
\]

The energy generated by the energy harvester is used primarily to power the computational system. The production surplus is stored in the batteries, whose capacity is equal to U_i for each node i ∈ N. Furthermore, in order to avoid excessive battery wear, we ensure that batteries cannot be discharged below a safety level of L_i = 0.15 × U_i, i.e., we always keep the batteries charged to at least 15% of their maximum capacity.
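To make the harvesting model concrete, the following minimal Julia sketch evaluates Eq. (1) for one node and one time step; the parameter values are illustrative and are not taken from the paper's experimental setup.

```julia
# Harvested energy (Wh) for one node during one time step of τ seconds,
# following Eq. (1): G_i^t = ι^t × (ρe × ρn × ρs) × (τ / 3600).
function pv_energy(irradiation_wm2; ρe, ρn, ρs, τ)
    return irradiation_wm2 * (ρe * ρn * ρs) * (τ / 3600)
end

# Example (illustrative values): 4 panels of 1.6 m² at 20% conversion
# efficiency, 800 W/m² irradiation, and 15-minute (900 s) time steps.
G = pv_energy(800.0; ρe = 0.20, ρn = 4, ρs = 1.6, τ = 900.0)  # ≈ 256 Wh
```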
**3.3** **Modeling of energy migration efficiency**
The migration of energy between nodes involves a chain
of different power electronics components (e.g., several DC/DC converters and wires) with variable efficiencies (DC/DC converters) and losses (wire resistances, solid-state MOSFET switches, etc.). In our physical system prototype, all unit efficiencies have been measured so as to be able
to accurately model energy transfer losses for any arbitrary
path in the system.
Fig. 4. Energy migration paths among green nodes.
Figure 4 depicts two possible paths when transferring
energy between two nodes. In our physical system implementation, the direct connection path (c1 → _c2 →_ _c3),_
in which the energy stored in the battery of node 1 is
transferred directly to supply the computational module
of node 2, has an efficiency of 85.8%, which is rather high
in a source-to-sink configuration as typical industrial grade
AC/DC PSUs achieve similar efficiency at node-level only.
The indirect connection path (c1 → _c2 →_ _c4 →_ _c5), in which_
the energy stored in the battery of node 1 is transferred to
supply the computational module of node 3 using node 2 as
intermediary, has an efficiency of 84.3%. Furthermore, for each
additional intermediary node the efficiency drops by 1.8%.
Overall, the actual energy migration capability incurs little
additional losses compared to node-local functioning, which
itself has efficiency similar to that of conventional compute
nodes.
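As a rough illustration, the measured figures above can be folded into a simple per-path efficiency estimate; the piecewise model below is our own reading of the reported numbers, not the authors' calibration code.

```julia
# Efficiency of an energy-migration path with k intermediary nodes, using
# the figures quoted above: 85.8% for a direct path, 84.3% with one
# intermediary, and a further 1.8-point drop per additional intermediary.
function path_efficiency(k::Integer)
    k == 0 && return 0.858
    return 0.843 - 0.018 * (k - 1)
end

# Energy delivered at the sink when 100 Wh leave the source battery:
delivered_direct  = 100.0 * path_efficiency(0)  # 85.8 Wh
delivered_two_hop = 100.0 * path_efficiency(1)  # 84.3 Wh
```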
**3.4** **Modeling of computing resources and workloads**
The computing system installed in a green node is characterized by its available processing, storage, and network resources, as well as an idle power consumption and a dynamic power consumption that varies with the computational load being executed. Each node i ∈ N has R_i^M MB of RAM, R_i^D GB of disk storage, a network bandwidth of R_i^B Mb/s, and a CPU load capacity R_i^C. Please note that instead of representing the CPU resource capacity in MIPS (million instructions per second), as in [34], [36], we define a utilization ratio in the interval [0.0, 1.0], where 0.0 means that the machine is idle and 1.0 means that the CPU is 100% utilized. Furthermore, for the sake of simplicity, this utilization measure is not applied on a per-core basis, but to the whole processing unit. As a consequence, a VM can utilize 100% of all the cores available in a computing node.

With respect to the energy consumption of a node, for a given idle and full-load power profile and a time interval of τ seconds, the idle energy consumption is equal to ε_I Wh and the additional dynamic consumption at full load is equal to ε_P Wh. We note that the green nodes considered in a mini data center can be either homogeneous or heterogeneous in terms of compute resources, solar panel, and battery capacities.

The computational workload to be executed on the green nodes consists of m VMs, J = {1, . . ., m}. Each VM j ∈ J has a requirement in terms of processing capacity, as well as memory, disk, and network bandwidth resources. We denote by V_j^r, for r ∈ {M, D, B}, the memory, disk, and bandwidth resources respectively required by VM j ∈ J. As for the CPU resources, the amount of computational work carried out by each VM changes over time; hence, C_j^t is the average CPU load imposed by VM j ∈ J during time step t ∈ H.
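For readers who prefer code to notation, the per-node and per-VM data of Secs. 3.2–3.4 can be collected in two small containers; the Julia sketch below uses illustrative field names that do not come from the authors' implementation.

```julia
# Static description of a green node (Secs. 3.2-3.4 notation in comments).
struct GreenNode
    ram_mb::Float64           # R_i^M
    disk_gb::Float64          # R_i^D
    bandwidth_mbps::Float64   # R_i^B
    cpu_capacity::Float64     # R_i^C, utilization ratio in [0, 1]
    battery_max_wh::Float64   # U_i
    battery_min_wh::Float64   # L_i = 0.15 × U_i
    battery_init_wh::Float64  # I_i
end

# Per-VM requirements; the CPU load varies per time step.
struct VirtualMachine
    ram_mb::Float64           # V_j^M
    disk_gb::Float64          # V_j^D
    bandwidth_mbps::Float64   # V_j^B
    cpu_load::Vector{Float64} # C_j^t for t = 1, ..., T
end
```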
##### 4 MILP FORMULATION FOR THE ENERGY-EFFICIENT RESOURCE ALLOCATION PROBLEM
The energy-neutral resource allocation optimization problem we need to solve concerns the distributed computing infrastructure designed according to the principles described
in the previous section. It consists in allocating m VMs to
_n green-energy nodes, such that the processing, memory,_
disk, and bandwidth resource demands of each VM are met
without overloading the machines. Nodes are allowed to
share energy among themselves or buy it from the utility
in order to process their workloads. Acceptable levels of
QoS are ensured by allowing VMs to be migrated between
nodes. The objective is to minimize the amount of non-green
energy bought from the utility grid and avoid energy waste
by performing unnecessary VMs and energy migrations
between nodes.
In this section we propose a Mixed Integer Linear Programming (MILP) [41], [42] formulation for the resource
allocation problem. We first present our working assumptions. Next, we summarize the model parameters, define
the decision variables and introduce the mathematical formulation of the problem. Then, we describe the energy cost
estimation related to VM migrations used within the model.
Finally, we explain how different variants of the problem
can be obtained by incorporating or eliminating certain
families of constraints from the formulation. In addition, we
provide implementation details and solution approaches.
**4.1** **Working assumptions**
Considering the dynamic nature of the modeled system and
the need to represent how its components' states change over time, we formulate the resource allocation problem as
a time-indexed MILP. To complement the problem description, and for ease of presentation, we list below our working
assumptions on the system’s components.
1) We model the changing aspects of the system by considering the optimization problem over a planning horizon H = {1, . . ., T}.
2) The VMs exist during the whole planning horizon. Those that are not performing any work are assumed to be in an idle state in which they do not consume resources.
3) The CPU utilization C_j^t of each VM j ∈ J is known ∀t ∈ H.
4) The dynamic energy consumption of each node is based on its CPU utilization level, which is directly affected by the computational load of the VMs assigned to the node.
5) The initial energy I_i stored in the battery of each node i ∈ N is known.
6) The safe discharge limit L_i and the maximum storage capacity U_i of the battery installed in node i ∈ N are known.
7) The energy gain G_i^t of each node i ∈ N is known ∀t ∈ H.
8) The efficiency E_ik of transferring energy among nodes, ∀i, k ∈ N, is known.
9) VMs can be migrated among nodes, and the energy costs for the source and target servers are μ_s and μ_d, respectively. In other words, the costs of VM context saving and restoring during VM migrations are captured. We do not take into account the energy consumed by switches and other network equipment, as it would complicate the problem even further. ILP optimization provides high-quality results at the expense of high computation time. It is a good fit for moderate-complexity problems, which correspond to the mini data centers and workloads targeted in this paper (5 nodes and 25 VMs). Larger-scale systems are more tractable with heuristics, as found in some related works.
**4.2** **Problem formulation**
We summarize in Table 2 the parameters tied to the system’s components described in the previous sections, and
introduce the sets and new parameters that are specific to
our formulation.
TABLE 2
Sets and parameters used in the problem formulation
| Sets | Description |
|---|---|
| H | Planning horizon |
| T | Number of time steps in planning horizon H |
| N | Set of nodes |
| J | Set of VMs |
| R_i | Resource set of node i ∈ N |
| V_j | Resource requirements set of VM j ∈ J |

| Parameters | Description |
|---|---|
| C_j^t | CPU utilization of VM j ∈ J at time step t ∈ H |
| E_ik | Energy transfer efficiency between nodes i, k ∈ N |
| G_i^t | Energy generated in node i ∈ N at time step t ∈ H |
| I_i | Initial amount of energy stored in node i ∈ N |
| L_i | Safety discharge level of the battery in node i ∈ N |
| U_i | Maximum capacity of the battery in node i ∈ N |
| ε_I | Node idle energy consumption |
| ε_P | Additional energy consumption when a node is at 100% CPU load |
| λ | Energy loss for transferring 1 Wh between nodes |
| μ | Total energy cost for migrating a VM |
| μ_d | Target server energy cost for migrating a VM |
| μ_s | Source server energy cost for migrating a VM |
| ν | Penalization for server CPU overloading |
| φ | Energy loss for injecting 1 Wh into the utility grid |
**Decision variables.** We define the following decision variables:

• x_ij^t = 1 if VM j ∈ J is running on node i ∈ N during time step t ∈ H, and 0 otherwise.
• z_ikj^t = 1 if VM j ∈ J is transferred from node i ∈ N to node k ∈ N at the beginning of time step t ∈ H, and 0 otherwise.
• f_ik^t ≥ 0 indicates the amount of energy transferred from node i ∈ N to node k ∈ N during time step t ∈ H.
• L_i ≤ w_i^t ≤ U_i indicates the level of energy in the battery of node i ∈ N at the end of time step t ∈ H ∪ {0}.
• b_i^t ≥ 0 indicates the amount of energy bought by node i ∈ N during time step t ∈ H.
• q_i^t ≥ 0 indicates the amount of energy injected into the utility grid by node i ∈ N during time step t ∈ H.
• v_i^t ≥ 0 measures the amount of CPU capacity overload in node i ∈ N during time step t ∈ H.
**Objective function and constraints.** Our mathematical formulation consists of the objective function defined below:

\[
\text{Minimize} \quad \lambda \sum_{t \in H} \sum_{i,k \in N} f_{ik}^t \;+\; \mu \sum_{t \in H} \sum_{i,k \in N} \sum_{j \in J} z_{ikj}^t \;+\; \sum_{t \in H} \sum_{i \in N} b_i^t \;+\; \phi \sum_{t \in H} \sum_{i \in N} q_i^t \;+\; \nu \sum_{t \in H} \sum_{i \in N} v_i^t \tag{Obj1}
\]

which is subject to the following constraints:

\[
w_i^t = w_i^{t-1} + b_i^t + G_i^t + \sum_{k \in N} E_{ki} f_{ki}^t - \sum_{k \neq i \in N} f_{ik}^t - \mu_s \sum_{k \in N} \sum_{j \in J} z_{ikj}^t - \mu_d \sum_{k \in N} \sum_{j \in J} z_{kij}^t - \Big( \varepsilon_I + \varepsilon_P \sum_{j \in J} C_j^t x_{ij}^t \Big) - q_i^t \quad \forall t \in H,\, i \in N \tag{2}
\]

\[
w_i^0 = I_i \quad \forall i \in N \tag{3}
\]

\[
\sum_{j \in J} C_j^t x_{ij}^t \le R_i^C + v_i^t \quad \forall t \in H,\, i \in N \tag{4}
\]

\[
\sum_{i \in N} x_{ij}^t = 1 \quad \forall t \in H,\, j \in J \tag{5}
\]

\[
z_{ikj}^t \ge x_{ij}^{t-1} + x_{kj}^t - 1 \quad \forall t \ge 2,\, j \in J,\, i \neq k \in N \tag{6}
\]

\[
L_i \le w_i^t \le U_i \quad \forall t \in H,\, i \in N \tag{7}
\]

\[
x_{ij}^t,\, z_{ikj}^t \in \{0, 1\} \quad \forall t \in H,\, j \in J,\, i \neq k \in N \tag{8}
\]

\[
b_i^t,\, q_i^t,\, v_i^t,\, f_{ik}^t \ge 0 \quad \forall t \in H,\, i \neq k \in N \tag{9}
\]

The objective function (Obj1) seeks to minimize the energy losses incurred when performing energy and VM migrations among nodes, the total amount of energy bought from the utility grid, the energy losses associated with the surplus energy injected into the utility grid, and the penalties for over-utilization of processing resources, respectively.

Constraints (2)-(7) model the characteristics of the problem related to resource allocation/VM scheduling, battery charge levels, energy generation/consumption, and energy flow conservation. More specifically, constraints (3) set up the initial state of each node's battery, i.e., the battery charge levels at the beginning of the simulation period. Similarly, constraints (2) define how the charge of the batteries varies between two consecutive time steps (∀t ∈ H, t ≥ 1) by taking into account the battery state in the previous time step (t − 1) and the flow of energy from other sources during the current time step (t), i.e., the energy generated by the solar panels, the energy transferred among nodes, the energy bought from and injected into the grid, and the energy utilized by the computational workload.
CPU resource allocation is modeled by constraints (4). Note that we are not explicitly enforcing the allocation of other computational resources such as memory, disk, and network bandwidth. The reason is that we are mainly interested in correctly modeling the additional energy consumption due to increased CPU utilization. Nevertheless, these additional resources can be easily incorporated by adding the following constraints:

\[
\sum_{j \in J} V_j^r x_{ij}^t \le R_i^r \quad \forall t \in H,\, i \in N,\, r \in \{M, D, B\} \tag{10}
\]
The scheduling of the VMs to nodes is ensured by constraints (5), i.e., they ensure that during each time step t ∈ H, each VM j ∈ J is assigned to one of the computational nodes i ∈ N. The number of VM migrations among nodes is accounted for by constraints (6), which check whether a VM changes computational node between time steps t − 1 and t. The batteries' maximum capacity and discharge safety levels are enforced by constraints (7). Finally, constraints (8)-(9) define the domains of the variables.
In the energy flow conservation constraint (2), a node's computational energy consumption (ε_I + ε_P Σ_{j∈J} C_j^t x_ij^t) is described using the model proposed by [43]. This model assumes that the server power consumption and CPU utilization have a linear relationship.
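Since the authors report implementing the formulation in Julia with JuMP (see Sec. 4.4), a compact JuMP sketch of the model (Obj1), (2)-(9) may help. The instance data below are random placeholders (not the paper's instance), HiGHS stands in for the Gurobi solver used in the paper, and the migration variables z are also defined for t = 1 (the objective drives them to zero there).

```julia
using JuMP, HiGHS  # HiGHS is a freely available stand-in for Gurobi

# Illustrative dimensions and data (placeholders, not the paper's instance).
n, m, T = 3, 4, 24
N, J, H = 1:n, 1:m, 1:T
λ, μ, μs, μd, ν, ϕ = 0.05, 2.0, 1.0, 1.0, 100.0, 0.1
εI, εP = 12.5, 112.5                       # Wh per step: idle / extra at full load
E  = fill(0.85, n, n)                      # transfer efficiencies E_ki
G  = 200 .* rand(n, T)                     # harvested energy G_i^t
C  = rand(m, T)                            # CPU loads C_j^t
I0 = fill(500.0, n)                        # initial battery levels I_i
Lb, Ub = fill(150.0, n), fill(1000.0, n)   # battery bounds L_i, U_i
RC = fill(1.0, n)                          # CPU capacities R_i^C

model = Model(HiGHS.Optimizer)
@variable(model, x[N, J, H], Bin)          # VM placement x_ij^t
@variable(model, z[N, N, J, H], Bin)       # VM migration indicators z_ikj^t
@variable(model, f[N, N, H] >= 0)          # energy transfers f_ik^t
@variable(model, b[N, H] >= 0)             # energy bought b_i^t
@variable(model, q[N, H] >= 0)             # energy injected q_i^t
@variable(model, v[N, H] >= 0)             # CPU overload v_i^t
@variable(model, Lb[i] <= w[i in N, t in 0:T] <= Ub[i])  # battery levels w_i^t (7)

@constraint(model, [i in N], w[i, 0] == I0[i])                          # (3)
@constraint(model, [t in H, j in J], sum(x[i, j, t] for i in N) == 1)   # (5)
@constraint(model, [t in H, i in N],
    sum(C[j, t] * x[i, j, t] for j in J) <= RC[i] + v[i, t])            # (4)
@constraint(model, [t in 2:T, j in J, i in N, k in N; i != k],
    z[i, k, j, t] >= x[i, j, t - 1] + x[k, j, t] - 1)                   # (6)
@constraint(model, [t in H, i in N],                                    # (2)
    w[i, t] == w[i, t - 1] + b[i, t] + G[i, t] +
        sum(E[k, i] * f[k, i, t] for k in N) -
        sum(f[i, k, t] for k in N if k != i) -
        μs * sum(z[i, k, j, t] for k in N, j in J if k != i) -
        μd * sum(z[k, i, j, t] for k in N, j in J if k != i) -
        (εI + εP * sum(C[j, t] * x[i, j, t] for j in J)) - q[i, t])

@objective(model, Min,
    λ * sum(f) + μ * sum(z) + sum(b) + ϕ * sum(q) + ν * sum(v))         # (Obj1)

optimize!(model)
```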
**4.3** **Estimating the energy cost of VM migration**
Several authors have proposed models for estimating the
energy cost of migrating VMs in cloud environments [44],
[45], [46], [47]. Such models present varying levels of precision and modeling complexity, with the more descriptive
and complex ones achieving better precision at the expense
of extra parameter estimation and model tuning efforts [47].
Due to its simplicity and reasonable precision, we chose
to implement the model by [45]. As pointed out by the
authors, VM migration is an I/O intensive task and also the
most energy expensive one when transmitting and receiving
data over the network. Indeed, their model is based on
the assumption that the energy cost of performing VM
migrations can be determined by the amount of data that
is transferred during the migration process. The energy
consumption by the source and destination hosts increases
linearly with the network traffic, as described in the following equation:
\[
E_{mig} = E_{sour} + E_{dest} = (\gamma_s + \gamma_d) V_{mig} + (\kappa_s + \kappa_d) \tag{11}
\]
where Esour is the energy consumed by the source host and
_Edest is the energy (in joules) spent by the destination host_
for transferring Vmig megabytes of data. γs, γd, κs, and κd
are model parameters to be trained. Equation (11) can be
further simplified if both source and destination hosts are
homogeneous:
\[
E_{mig} = E_{sour} + E_{dest} = \gamma V_{mig} + \kappa \tag{12}
\]
Then, the parameters of the MILP formulation (Obj1), (2)-(9) related to VM migration are defined as μ = E_mig, μ_s = E_sour, and μ_d = E_dest. In addition, in our simulations we use γ = 0.512 and κ = 20.165, as estimated in [45].
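As a quick illustration of Eq. (12) with these homogeneous-host parameter values, a one-liner suffices; the 2048 MB footprint in the example is arbitrary.

```julia
# Migration energy (J) for transferring V_mig megabytes, per Eq. (12)
# with γ = 0.512 and κ = 20.165.
migration_energy(v_mig_mb) = 0.512 * v_mig_mb + 20.165

E_mig = migration_energy(2048.0)   # ≈ 1068.7 J for a 2 GB transfer
```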
-----
**4.4** **Resource problem variants and implementation**
The formulation (Obj1), (2)-(9) can be adjusted to tackle
different variants of the resource allocation problem in our
proposed distributed infrastructure. In the following, we
describe how these variants are obtained and how we solve
them.
**Resource allocation problem variants.** Adding or removing variables and constraints from the model (Obj1), (2)-(9), such that energy/data migrations are permitted or not, generates four variants of the problem:
• Energy+Data Migrations: this is the default formulation and includes all variables, the objective function (Obj1), and constraints (2)-(9).
• Data Migration Only: this variant is obtained by removing the variables f_ik^t, ∀t ∈ H, i, k ∈ N. This model is a suitable baseline for the majority of state-of-the-art solutions.
• Energy Migration Only: in this variant no data migration is allowed. It is defined by removing the variables z_ikj^t, ∀t ∈ H, i, k ∈ N, j ∈ J and constraints (6), while adding the binary variables y_ij, ∀i ∈ N, j ∈ J, and the following constraints:

\[
\sum_{i \in N} y_{ij} = 1 \quad \forall j \in J \tag{13}
\]

\[
\sum_{t \in H} x_{ij}^t = T\, y_{ij} \quad \forall i \in N,\, j \in J \tag{14}
\]

which together ensure that each VM is assigned to a node for the whole planning horizon. Each variable y_ij is equal to 1 if VM j ∈ J is assigned to node i ∈ N, and 0 otherwise.
• No Migration: this is the simplest variant. It includes the two previous modifications, i.e., removing the variables f_ik^t and z_ikj^t, ∀t ∈ H, i, k ∈ N, j ∈ J, and constraints (6), while adding the variables y_ij, ∀i ∈ N, j ∈ J and constraints (13)-(14).
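In the JuMP sketch of Sec. 4.2, the pinning constraints (13)-(14) used by the "Energy Migration Only" and "No Migration" variants could be added as follows, reusing the `model`, `x`, `N`, `J`, `H`, and `T` defined there (a sketch, not the authors' code):

```julia
# y[i, j] = 1 iff VM j is pinned to node i for the whole horizon.
@variable(model, y[N, J], Bin)
@constraint(model, [j in J], sum(y[i, j] for i in N) == 1)          # (13)
@constraint(model, [i in N, j in J],
    sum(x[i, j, t] for t in H) == T * y[i, j])                      # (14)
```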
**Rolling-Horizon heuristic. While the scenarios with no**
migrations or with energy migration only can be solved in a couple of hours when simulating an infrastructure with 5 nodes and 25 VMs over a planning horizon of one week, the solution times become prohibitively high for the two scenarios that involve data migration. The reason is the large number of variables z_ikj^t, ∀t ∈ H, i, k ∈ N, j ∈ J and constraints (6) being generated. To cope with such large models, we decided to apply a rolling-horizon heuristic approach [48], [49]. This heuristic consists of splitting the planning horizon into smaller pieces and solving them sequentially. We call each planning-horizon fragment a frame.
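A minimal sketch of this heuristic is shown below; `solve_frame` is a hypothetical helper that builds and solves the MILP restricted to one frame, and the battery charge levels are the state carried from one frame to the next.

```julia
# Rolling-horizon sketch: split the horizon into frames and solve them in order.
function rolling_horizon(T, frame_len, w_init)
    plans = Any[]
    w = w_init                              # battery charges at the frame boundary
    for t0 in 1:frame_len:T
        t1 = min(t0 + frame_len - 1, T)
        plan, w = solve_frame(t0, t1, w)    # hypothetical: solve steps t0..t1 from state w
        push!(plans, plan)
    end
    return plans                            # concatenated per-frame schedules
end
```

With the 24-step frames used below, the 2016-step weekly horizon is thus solved as 84 consecutive frames.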
**Implementation details. The formulations, heuristic and**
other algorithms were coded in Julia[1] (version 1.5.4) using
the embedded modeling language for mathematical optimization JuMP[2] [50] and executed on an Intel® Xeon®
2.2 GHz CPU, with 64.0 GB of RAM running under
GNU/Linux Ubuntu 14.04 (kernel 4.4.0). Gurobi[3] 9.0.3 was
used as the LP and MILP solver. Four computation threads
1. https://julialang.org
2. https://jump.dev/
3. https://www.gurobi.com/
were used when simulating each scenario. In our rolling
horizon heuristic, each frame has 24 time steps.
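For illustration, the solver configuration just described might look as follows in JuMP (a sketch; Gurobi.jl and a Gurobi license are assumed to be available):

```julia
using JuMP, Gurobi
model = Model(Gurobi.Optimizer)               # Gurobi 9.0.3 was used as LP/MILP solver
set_optimizer_attribute(model, "Threads", 4)  # four computation threads per scenario
```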
We present in Figure 5 the average computational times
obtained when solving the proposed formulations, either
by using the rolling-horizon heuristic or applying the MILP
solver directly, for all the case studies analysed in the next
sections. As expected, the scenarios without any migrations
are the fastest, and the optimal solutions are obtained in less
than 10 minutes. Next, using the rolling-horizon heuristic,
on average one hour is needed to compute the solutions
for the scenarios with energy and data migrations. In these
two scenarios, no significant variation can be observed for
different periods of the year. The most time-consuming scenarios are those with data migration only. On average, five
hours of computation using the rolling-horizon heuristic
are necessary for a complete solution. The scenarios with
energy migration only take on average two hours to prove
the optimality of solutions. Those with both energy and data
migrations require four hours. We note that the solutions in
the former scenarios are optimal, while those in the latter
are heuristic, i.e., approximate and not necessarily optimal.
Fig. 5. Average solution times of the four formulations considering all
study cases.
##### 5 CASE STUDY
In this section, we evaluate our resource allocation solution
through different use cases[4]. We first describe the experimental setup. Then, we evaluate the reductions in non-green energy consumption enabled by four execution policies.
**5.1** **Experimental setup**
For our case studies, we simulated a mini data center consisting of 5 nodes, whose maximum power consumption is 500 W and whose idle power consumption is either 50 W or 100 W. The nodes are connected as depicted in Figure 3. We selected 5 nodes in our experiments for simulation-complexity reasons, due to the costly constraint resolution induced by the global ILP optimization problem. The other infrastructure and formulation parameter values are described in Table 3.
Beyond the above 5 nodes, a centralized computer is used to run the ILP solver. The cost of this computer is considered constant and is ignored in the remainder of this case study.
**Irradiation and VMs CPU utilization data. Historical**
irradiation data was retrieved with the aid of the European
**Irradiation and VMs CPU utilization data. Historical**
irradiation data was retrieved with the aid of the European
4. Some complementary evaluation scenarios of the use cases are presented in Appendix B.
TABLE 3
Parameter values used in our simulations

| Model parameters | Values |
|---|---|
| n | 5 nodes |
| m | 25 VMs |
| τ | 5 minutes |
| T | 2016 time steps (7 days) |
| V_mig | 8192 MB (Equation 12) |
| λ | 1 − (Σ_{i=1}^{n} Σ_{k=1,k≠i}^{n} E_ik) / (n(n − 1)) |
| µ | E_mig (Equation 12) |
| µ_d | E_dest (Equation 12) |
| µ_s | E_sour (Equation 12) |
| ν | 1000 |
| ϕ | 0.5 |

| Nodes parameters | Values |
|---|---|
| ε_I | 4.17 Wh (50 W) |
| ε_P | 37.50 Wh (500 W) |
| I_i | 0.5 × U_i, ∀i ∈ N |

| PV panels[1] parameters | Values |
|---|---|
| ρ_s | 1.59 m² |
| ρ_e | 0.19 |

([1]) Corresponds to panels NeON 2 by LG, model LG325N1C-A5.
_Photo-voltaic Geographical Information System (PVGIS) [18]._
The hourly irradiation datasets for Montpellier, France (Lat.
43.611, Long. 3.876) for the period 2005-2016 (Database
PVGIS-SARAH, all available years) were used to compute
the energy generated by each node. We also performed
additional studies for a location in Africa: Mali (Lat. 17.159,
Long. -3.340). Please note that the irradiation data points in these datasets have a one-hour interval; to make them compatible with our 5-minute time step interval τ, we used piece-wise linear interpolation.
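A minimal sketch of this interpolation step is given below; `hourly` stands for one of the hourly PVGIS irradiation series (a hypothetical variable name).

```julia
# Refine an hourly irradiation series to 5-minute samples by linear interpolation.
function interpolate_5min(hourly::Vector{Float64})
    steps_per_hour = 12                        # 60 min / 5 min
    out = Float64[]
    for h in 1:length(hourly)-1
        for s in 0:steps_per_hour-1
            θ = s / steps_per_hour             # position between samples h and h+1
            push!(out, (1 - θ) * hourly[h] + θ * hourly[h+1])
        end
    end
    push!(out, hourly[end])                    # keep the final hourly sample
    return out
end
```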
The workloads of the VMs are simulated using real VM workload traces from the CoMon project, a monitoring infrastructure for PlanetLab [51]. We used the same traces dataset[5] as [34], [36]. This traces dataset consists of the CPU utilization of a few thousand VMs from servers located at more than 500 places around the world, recorded during March and April of 2011. Each data point in a VM trace corresponds to a 5-minute interval of utilization measurement.
From the thousands of traces available in this dataset, we selected 25 for which the associated VMs are active for at least the simulated time horizon, i.e., 7 days (2016 time steps). It is worth noting that our time step duration τ is the same as the measurement interval used when creating those traces, i.e., 5 minutes. The 25 VMs we chose for our simulation have a mean CPU utilization of 2.23, a standard deviation of 0.50 and a variance of 0.25.
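The selection step can be sketched as follows, assuming the traces have been loaded into a vector `traces` of per-VM CPU-utilization series (hypothetical names; the statistics shown refer to the authors' selection):

```julia
using Statistics
# Keep traces that cover at least the 7-day horizon (2016 five-minute samples).
eligible = [tr for tr in traces if length(tr) >= 2016]
selected = eligible[1:25]                        # the paper picks 25 such VMs
m = [mean(tr[1:2016]) for tr in selected]        # per-VM mean CPU utilization
println(mean(m), " ", std(m))                    # paper reports ≈ 2.23 and 0.50
```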
**Batteries and PV panels sizing. Using the MILP sizing**
formulation (Obj2)-(25) presented in Appendix A, combined with the CPU and irradiation databases described above, we computed the optimal battery capacities and the number of PV panels to be installed so that the proposed computational infrastructure would be neutral in terms of non-green energy.
For the system consisting of 5 nodes, each with an
idle power consumption of 50 W and maximum power
5 https://github com/beloglazov/planetlab-workload-traces
consumption of 500 W, planning horizon of one week (2016
time steps with a 5-minute granularity) and the average
irradiation for Montpellier (Lat. 43.611, Long. 3.876), using
all the 624 weeks (2005-2016) of available data, the optimal
sizing for the whole system consists of 20 PV panels and a
combined battery capacity of 25 kWh.
In what follows, we evaluate the resource allocation policies
introduced in Section 4.4. The simulated workloads are executed in a best-effort manner: the makespan of the execution
is identical for all policies, while the computational load of
the computing nodes may vary slightly depending on the
VM migrations applied by the optimizer. The energy transfer between the five nodes has an impact on the amount of
non-green energy used from the utility grid, when batteries
are empty. Thus, we compare the four resource allocation
policies based on the amount of non-green energy needed
to fulfill the makespan.
**5.2** **Use case 1: energy-neutral heterogeneous system**
We discuss the results obtained with the optimally sized heterogeneous system consisting of 5 nodes, each with an idle power consumption of 50 W and a maximum power consumption of 500 W. The PV panels and batteries are distributed as follows: 2 big nodes with 7 PV panels and a battery capacity of 8 kWh each, and 3 little nodes with 2 PV panels and a battery capacity of 3 kWh each.
Fig. 6. Use case 1: normalized results for (a) low and (b) high irradiation periods.
The results for a planning horizon of one week and
periods of low and high irradiation over a whole year are
presented in the plots depicted in Figures 6a-6b.

Fig. 7. Use case 1: scheduling of VMs to nodes with high irradiation, during two different months: (a) May, (b) November.

In both irradiation conditions, the execution policies integrating energy migration provide the best outcome in terms of non-green energy reduction, compared to the policy without any migration. In particular, the high-irradiation scenario enables up to 82.5% reduction on average over the year, thanks to the larger amount of harvested green energy. In the same scenario, the execution policy relying merely on data migration provides only up to 59.8% reduction on average over the year. This corresponds to 22% less savings than the energy-migration-based execution policy.
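These percentages can be checked directly against the "Bought - High" row of Table 4, taking the no-migration policy (323.57 kWh bought per year) as the baseline:

```julia
none, e_only, d_only = 323.57, 56.50, 130.03  # kWh bought, high irradiation (Table 4)
(none - e_only) / none                        # ≈ 0.825 → 82.5% with energy migration
(none - d_only) / none                        # ≈ 0.598 → 59.8% with data migration only
```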
We should point out that the marginal savings observed
when using only data migration, for the period of low
irradiation in the months of January and April, are due
to errors introduced when applying the rolling-horizon
heuristic approach. If solved to optimality, the model which
employs both energy and data migrations would have at
most the same cost as the model using data migration only,
as the former is more general than the latter in terms of
migration options. Table 4 shows the annual energy bought
from and injected into the utility grid by each one of the
four tested policies, considering both periods of low and
high irradiation.
TABLE 4
Annual average energy bought from and energy injected into the utility grid, for periods of low and high irradiation.

| Energy (kWh) | E.+D. | D. Only | E. Only | None |
|---|---|---|---|---|
| Bought – Low | 810.23 | 838.18 | 796.02 | 933.27 |
| Bought – High | 56.52 | 130.03 | 56.50 | 323.57 |
| Total | 866.75 | 968.22 | 852.52 | 1256.84 |
| Injected – Low | 206.90 | 269.86 | 206.08 | 359.60 |
| Injected – High | 690.50 | 830.02 | 691.09 | 1031.81 |
| Total | 897.40 | 1099.89 | 897.17 | 1391.40 |
| Energy Migrated – Low | 259.03 | – | 178.39 | – |
| Energy Migrated – High | 370.07 | – | 370.91 | – |
| Total | 629.10 | – | 549.30 | – |
| VM Migrations – Low | 2018 | 5570 | – | – |
| VM Migrations – High | 2036 | 5773 | – | – |
| Total | 4054 | 11343 | – | – |
More generally, when the amount of harvested green energy is lower, the generated scheduling solution exploits VM migration as much as possible to meet the system execution requirements. For illustration, let us consider again the high-irradiation scenario depicted in Figure 6b, where both energy and VM migrations are enabled.
Figures 7a-7b detail the scheduling of the 25 VMs on the
five nodes for the months of May and November, under
their best energy harvesting conditions. Note that May and
November are two typical months during which the solar
irradiation is respectively high and low in Montpellier. As
a result, we can observe, through the figures, the system
behavior in the presence of potential surplus and deficit of
energy production.
Here, each row describes the temporal execution of a
VM on the five nodes. For instance, in Figure 7b VM 13
is executed on Node 04 without any migration, while in
Figure 7a it is migrated three times during its execution (on
Node 04, Node 03 and Node 02).
Globally, we observe that VM migrations tend to be more frequent in the right-hand half of the execution timeline for both months. This can be explained by the fact that the overall average CPU utilization of the VMs increases by 26% (from 1.98 to 2.49) after the first 84 hours, i.e., in the second half of the simulated period. For instance, let us focus on the activity on Node 02, one of the two biggest nodes in terms of energy harvesting and storage capacity. Figures 8a and 8b show the CPU load and energy evolution profiles for this node in the scenario with energy and VM migrations, during the high-irradiation period, for the months of May and November. We observe an increase in its associated average load after the 84th hour, by 27% and 23%, respectively, in these two months. This is mainly due to the increase in the VMs' average CPU utilization, which forces frequent migrations of VMs to avoid CPU over-assignment in some nodes. In addition, to exploit Node 02's extra energy production and storage capacity, CPU-intensive VMs may be migrated to it from the other nodes with less energy storage, to successfully complete VM execution.

Fig. 8. Use case 1: Node 02 load and energy evolution profiles for (a) May and (b) November under high irradiation.
Figure 8a shows that, due to the higher and stable irradiation in May, there is no need to buy energy from the
utility grid. On the other hand, for November, as illustrated
in Figure 8b, we obtain a mixed profile in which energy is
both bought and injected back into the utility grid.
**5.3** **Use case 2: accounting for old-generation servers**
Most data center operators, such as those mentioned in the introduction section, regularly update their IT hardware, notably to benefit from the higher energy efficiency of last-generation silicon. This indeed results in higher mid-term benefits, i.e., lower power consumption for a similar amount of sold compute service. Nevertheless, keeping older hardware may not be a problem if the "free" harvested renewable energy can sustain the full utilization of these old-generation servers.
In the current use case, we therefore explore the outcomes of the previous energy-neutral system dimensioning when considering old-generation servers. The rest of the system is kept identical to use case 1 (see Section 5.2). However, we degrade the static power consumption of each server by increasing its idle power consumption to 100 W, while keeping its maximum power consumption at 500 W.
The results for a planning horizon of one week and periods of high irradiation over a whole year are presented in Figure 9. We observe that despite the degradation of the static power consumption of the servers, the overall energy efficiency of the system is almost preserved thanks to the energy-migration scheduling policies. The current design enables up to 77.5% reduction of the bought energy on average over the year compared to the no-migration policy. On the other hand, the policy based on data migration reduces this energy by only 49% on average over the year, compared to the no-migration policy.
Fig. 9. Use case 2: results for high irradiation period.
**5.4** **Use case 3: cost-effective heterogeneous system**
To devise an energy-neutral system over a whole year, we consider the same sizing of batteries and PV panels as in Section 5.1. In this new use case, we are interested in reducing the cost of the considered energy infrastructure. For this purpose, we explore an alternative system dimensioning by reducing the battery and PV panel components compared to use case 1 (see Section 5.2): the total number of PV panels and the battery capacities installed in use case 1 are now reduced by 25%.
These energy resources are now distributed in the following way: 2 big nodes with 5 and 4 PV panels, respectively, and a battery capacity of 6 kWh each, and 3 little nodes with 1 PV panel and a battery capacity of 1 kWh each.
The results for a planning horizon of one week during a period of high irradiation over a whole year are presented in Figure 10. The lighter system dimensioning considered here reduces the bought energy by 66% on average over the year, compared to the no-migration policy. This is only 17% less than the energy reduction obtained with the same policy in use case 1. On the other hand, the policy leveraging data migration only in use case 3 reduces the bought energy by 47% on average over the year, compared to the no-migration policy.
Fig. 10. Use case 3: results for high irradiation period.
##### 6 CONCLUSION AND PERSPECTIVES
In this paper, we presented an optimization approach for
the energy-efficient resource allocation of data centers integrating renewable energy. We promoted a novel distributed
system design where both data (or VMs) and energy migrations are permitted. We formulated and solved the resource
allocation problem by adopting Mixed Integer Linear Programming combined with a rolling horizon heuristic. We
validated our proposal on a representative case study, by
analyzing real VMs workload traces and accounting for old
generation and less energy-efficient servers. We showed the
relevance of our solution for reducing non-green energy
consumption and sustaining computing equipment.
In particular, compared to usual resource allocation policies relying on data migration, our solution provides up
to 22% reduction of the non-green energy consumption
thanks to its energy migration capability. When replacing
the servers of the baseline system with old-generation and
less energy-efficient servers, this reduction can reach up
to 28.5%. This favors the sustainability of the computing
equipment at a reasonable exploitation cost in data centers.
Further gains could be foreseen with system deployment in
geographical areas with higher solar irradiation conditions,
such as the Saharan zone. Appendix B reports the evaluation, in Mali (West Africa), of the same system design as in use case 1. Figure 12 shows that even under low irradiation, the
reduction of the non-green energy is notable.
Future work will focus on reducing the resolution complexity of the MILP used in our approach. In particular, we plan to extend our resource allocation framework with further heuristics. On the other hand, investigating self-adaptive management approaches such as [7], capable of leveraging energy migration and online prediction of solar irradiation, is a compelling research direction. More generally, the solution presented in the current study at the mini data center level could be extended to multiple mini data centers. Indeed, in a realistic urban setup, several such mini clusters could lie within a limited geographical area, with therefore negligible overheads when exchanging workloads. A straightforward abstraction consists in modeling each entire mini data center as a single green compute node with an aggregated energy profile. Solving could either be handled as per the MILP formulation proposed in this paper, or alternatively using heuristics in case the intended number of participating mini data centers is large.
##### REFERENCES
[1] N. Jones, “How to stop data centres from gobbling up the world’s
electricity,” Nature, vol. 561, no. 7722, pp. 163–166, 2018.
[2] E. Masanet, A. Shehabi, N. Lei, S. Smith, and J. Koomey, “Recalibrating global data center energy-use estimates,” Science, vol. 367,
no. 6481, pp. 984–986, 2020.
[3] E. Sage, “Renewable energy credits,” https://www.energysage.
com/other-clean-options/renewable-energy-credits-recs/.
accessed 23 Dec 2020, 2020.
[4] Google, “Google environmental report,” https:
//www.gstatic.com/gumdrop/sustainability/
google-2019-environmental-report.pdf. accessed 27 Feb 2021,
2019.
[5] Facebook, “Facebook sustainability,” https://sustainability.fb.
com. accessed 27 Feb 2021, 2021.
[6] Amazon, “Sustainable operations: Renewable energy,”
https://sustainability.aboutamazon.co.uk/environment/
sustainable-operations. accessed 27 Feb 2021, 2021.
[7] M. Xu, A. N. Toosi, and R. Buyya, “A self-adaptive approach
for managing applications and harnessing renewable energy for
sustainable cloud computing,” IEEE Transactions on Sustainable
_Computing, pp. 1–1, 2020._
[8] T. Cioara, I. Anghel, M. Antal, S. Crisan, and I. Salomie, “Data
center optimization methodology to maximize the usage of locally
produced renewable energy,” in 2015 Sustainable Internet and ICT
_for Sustainability (SustainIT)._ Institute of Electrical and Electronics
Engineers (IEEE), 2015, pp. 1–8.
[9] J.-M. Pierson, G. Baudic, S. Caux, B. Celik, G. D. Costa, L. Grange,
M. Haddad, J. Lecuivre, J.-M. Nicod, L. Philippe, V. Rehn-Sonigo,
R. Roche, G. Rostirolla, A. Sayah, P. Stolf, M.-T. Thi, and C. Varnier,
“DATAZERO: Datacenter with zero emission and robust management using renewable energy,” IEEE Access, vol. 7, pp. 103 209–
103 230, 2019.
[10] F. Kong and X. Liu, “A survey on green-energy-aware power
management for datacenters,” ACM Computing Surveys, vol. 47,
no. 2, pp. 1–38, 2015.
[11] L. Wang, F. Zhang, J. A. Aroca, A. V. Vasilakos, K. Zheng, C. Hou,
D. Li, and Z. Liu, “GreenDCN: A general framework for achieving
energy efficiency in data center networks,” IEEE Journal on Selected
_Areas in Communications, vol. 32, no. 1, pp. 4–15, jan 2014. [Online]._
Available: https://doi.org/10.1109/jsac.2014.140102
[12] Y. Sverdlik, “How is a Mega Data Center Different from a Massive
One?” https://www.datacenterknowledge.com/archives/2014/
10/15/how-is-a-mega-data-center-different-from-a-massive-one.
accessed 20 Sep. 2021, 2014.
[13] K. Bilal, O. Khalid, A. Erbad, and S. U. Khan, “Potentials,
trends, and prospects in edge technologies: Fog, cloudlet, mobile
edge, and micro data centers,” Computer Networks, vol. 130,
pp. 94–120, 2018. [Online]. Available: https://www.sciencedirect.
com/science/article/pii/S1389128617303778
[14] S. Bird, A. Achuthan, O. Ait Maatallah, W. Hu, K. Janoyan,
A. Kwasinski, J. Matthews, D. Mayhew, J. Owen, and P. Marzocca,
“Distributed (green) data centers: A new concept for energy,
computing, and telecommunications,” _Energy_ _for_ _Sustainable_
_Development, vol. 19, pp. 83–91, 2014. [Online]. Available: https://_
www.sciencedirect.com/science/article/pii/S0973082613001129
[15] F. Di Gregorio, G. Sassatelli, A. Gamatié, and A. Castelltort, “A
Flexible Power Crossbar-based Architecture for Software-Defined
Power Domains,” in 22nd European Conference on Power Electronics
_and Applications (EPE’20 ECCE Europe), Lyon, France, Sep. 2020._
[16] F. Di Gregorio, “Exploration of Dynamic Reconfiguration
Solutions for Improved Reliability in DC Microgrids,” Theses,
Université de Montpellier 2, Nov. 2021. [Online]. Available: https://hal.archives-ouvertes.fr/tel-03558951
[17] Y. Zhang, Y. Wang, and X. Wang, “Electricity bill capping for cloud-scale data centers that impact the power markets,” in 2012 41st International Conference on Parallel Processing, 2012, pp. 440–449.
[18] The European Commission’s science and knowledge service,
“Photovoltaic geographical information system (pvgis),” https:
//ec.europa.eu/jrc/en/pvgis. accessed 27 Feb 2021, 2021.
[19] Z. Abbasi, G. Varsamopoulos, and S. K. S. Gupta, “TACOMA:
server and workload management in internet data centers considering cooling-computing power trade-off and energy proportionality,” ACM Transactions on Architecture and Code Optimization,
vol. 9, no. 2, pp. 1–37, 2012.
[20] L. Ganesh, H. Weatherspoon, T. Marian, and K. Birman, “Integrated approach to data center power management,” IEEE Trans_actions on Computers, vol. 62, no. 6, pp. 1086–1096, 2013._
[21] Y. Li, Y. Wen, D. Tao, and K. Guan, “Transforming cooling optimization for green data center via deep reinforcement learning,”
_IEEE Transactions on Cybernetics, vol. 50, no. 5, pp. 2002–2013, 2020._
[22] H. Lei, R. Wang, T. Zhang, Y. Liu, and Y. Zha, “A multi-objective
co-evolutionary algorithm for energy-efficient scheduling on a
green data center,” Computers & Operations Research, vol. 75, pp.
103–117, 2016.
[23] S. Nesmachnow, C. Perfumo, and Í. Goiri, “Holistic multiobjective
planning of datacenters powered by renewable energy,” Cluster
_Computing, vol. 18, no. 4, pp. 1379–1397, 2015._
[24] P. Ruiu, C. Fiandrino, P. Giaccone, A. Bianco, D. Kliazovich,
and P. Bouvry, “On the energy-proportionality of data center
networks,” IEEE Transactions on Sustainable Computing, vol. 2, no. 2,
pp. 197–210, 2017.
[25] Í. Goiri, W. Katsak, K. Le, T. D. Nguyen, and R. Bianchini, “Parasol
and GreenSwitch: Managing datacenters powered by renewable
energy,” ACM SIGPLAN Notices, vol. 48, no. 4, pp. 51–64, 2013.
[26] ——, “Designing and managing data centers powered by renewable energy,” IEEE Micro, vol. 34, no. 3, pp. 8–16, 2014.
[27] Í. Goiri, M. E. Haque, K. Le, R. Beauchea, T. D. Nguyen, J. Guitart,
J. Torres, and R. Bianchini, “Matching renewable energy supply
and demand in green datacenters,” Ad Hoc Networks, vol. 25, pp.
520–534, 2015, new Research Challenges in Mobile, Opportunistic
and Delay-Tolerant Networks Energy-Aware Data Centers:
Architecture, Infrastructure, and Communication. [Online].
Available: https://www.sciencedirect.com/science/article/pii/
S1570870514002649
[28] A. Kassab, J.-M. Nicod, L. Philippe, and V. Rehn-Sonigo, “Scheduling independent tasks in parallel under power constraints,” in
_2017 46th International Conference on Parallel Processing (ICPP)._
Institute of Electrical and Electronics Engineers (IEEE), 2017, pp.
543–552.
[29] S. Caux, P. Renaud-Goud, G. Rostirolla, and P. Stolf, “IT optimization for datacenters under renewable power constraint,” in 24th
_European Conference on Parallel Processing (Euro-Par 2018). Springer_
International Publishing, 2018, pp. 339–351.
[30] A. A. Chien, R. Wolski, and F. Yang, “The zero-carbon
cloud: High-value, dispatchable demand for renewable power
generators,” The Electricity Journal, vol. 28, no. 8, pp. 110–
118, 2015. [Online]. Available: https://www.sciencedirect.com/
science/article/pii/S1040619015001931
[31] S. Ismaeel, R. Karim, and A. Miri, “Proactive dynamic virtualmachine consolidation for energy conservation in cloud data
centres,” Journal of Cloud Computing, vol. 7, no. 1, 2018.
[32] I. Hwang and M. Pedram, “Hierarchical virtual machine consolidation in a cloud computing system,” in 2013 IEEE Sixth
_International Conference on Cloud Computing._ Institute of Electrical
and Electronics Engineers (IEEE), 2013, pp. 196–203.
[33] F.-H. Tseng, C.-Y. Chen, L.-D. Chou, H.-C. Chao, and J.-W. Niu,
“Service-oriented virtual machine placement optimization for
green data center,” Mobile Networks and Applications, vol. 20, no. 5,
pp. 556–566, 2015.
[34] A. Beloglazov and R. Buyya, “Optimal online deterministic algorithms and adaptive heuristics for energy and performance
efficient dynamic consolidation of virtual machines in cloud data
centers,” Concurrency and Computation: Practice and Experience,
vol. 24, no. 13, pp. 1397–1420, 2011.
[35] S. Jangiti and S. S. VS, “EMC2: Energy-efficient and multiresource-fairness virtual machine consolidation in cloud data centres,” Sustainable Computing: Informatics and Systems, vol. 27, p.
100414, 2020.
[36] X. Liu, J. Wu, G. Sha, and S. Liu, “Virtual machine consolidation with minimization of migration thrashing for cloud data centers,” Mathematical Problems in Engineering, vol. 2020, pp. 1–13, 2020.
[37] I. Hamzaoui, B. Duthil, V. Courboulay, and H. Medromi, “A survey on the current challenges of energy-efficient cloud resources
management,” SN Computer Science, vol. 1, no. 2, 2020.
[38] R. Zolfaghari, A. Sahafi, A. M. Rahmani, and R. Rezaei, “Application of virtual machine consolidation in cloud computing
systems,” Sustainable Computing: Informatics and Systems, vol. 30,
p. 100524, 2021.
[39] A. Gamatié, G. Sassatelli, and M. Mikučionis, “Modeling and
analysis for energy-driven computing using statistical modelchecking,” in Design, Automation and Test in Europe Conference
_(DATE’21), 2021._
[40] E. Gelenbe, “Energy packet networks: Adaptive energy
management for the cloud,” in Proceedings of the 2nd International
_Workshop on Cloud Computing Platforms, ser. CloudCP ’12._ New
York, NY, USA: Association for Computing Machinery, 2012.
[Online]. Available: https://doi.org/10.1145/2168697.2168698
[41] A. Schrijver, Theory of linear and integer programming, ser. WileyInterscience series in discrete mathematics and optimization. Wiley, 1998.
[42] G. Sierksma and Y. Zwols, Linear and integer optimization : theory
_and practice, 3rd ed., ser. Advances in applied mathematics. Chap-_
man & Hall/CRC, 2015.
[43] X. Fan, W.-D. Weber, and L. A. Barroso, “Power provisioning for
a warehouse-sized computer,” ACM SIGARCH Computer Architec_ture News, vol. 35, no. 2, pp. 13–23, 2007._
[44] Q. Huang, F. Gao, R. Wang, and Z. Qi, “Power consumption of virtual machine live migration in clouds,” in 2011 Third International
_Conference on Communications and Mobile Computing._ Institute of
Electrical and Electronics Engineers (IEEE), 2011, pp. 122–125.
[45] H. Liu, H. Jin, C.-Z. Xu, and X. Liao, “Performance and energy
modeling for live migration of virtual machines,” Cluster Comput_ing, vol. 16, no. 2, pp. 249–264, 2011._
[46] A. Strunk, “A lightweight model for estimating energy cost of live
migration of virtual machines,” in 2013 IEEE Sixth International
_Conference on Cloud Computing._ Institute of Electrical and Electronics Engineers (IEEE), 2013, pp. 510–517.
[47] V. D. Maio, R. Prodan, S. Benedict, and G. Kecskemeti, “Modelling
energy consumption of network transfers and virtual machine
migration,” Future Generation Computer Systems, vol. 56, pp. 388–
406, 2016.
[48] J. G. Rakke, M. St˚alhane, C. R. Moe, M. Christiansen, H. Andersson, K. Fagerholt, and I. Norstad, “A rolling horizon heuristic
for creating a liquefied natural gas annual delivery program,”
_Transportation Research Part C: Emerging Technologies, vol. 19, no. 5,_
pp. 896–911, 2011.
[49] A. Bischi, L. Taccari, E. Martelli, E. Amaldi, G. Manzolini, P. Silva,
S. Campanari, and E. Macchi, “A rolling-horizon optimization algorithm for the long term operational scheduling of cogeneration
systems,” Energy, vol. 184, pp. 73–90, 2019.
[50] I. Dunning, J. Huchette, and M. Lubin, “Jump: A modeling language for mathematical optimization,” SIAM Review, vol. 59, no. 2,
pp. 295–320, 2017.
[51] K. Park and V. S. Pai, “CoMon: a mostly-scalable monitoring
system for PlanetLab,” ACM SIGOPS Operating Systems Review,
vol. 40, no. 1, pp. 65–74, 2006.
##### ACKNOWLEDGMENTS
This work is supported by the IWARE project, funded by
Région Occitanie, France.
**Dr. Marcos De Melo da Silva is a research**
engineer at CNRS, currently working as an operations research specialist with the Adaptive
Computing team at LIRMM laboratory, University
of Montpellier, France. He is a computer scientist
with specialization in combinatorial optimization
and operations research. His research focuses
on the design, analysis and development of efficient, exact and approximate, algorithms for
combinatorial problems in the domains of city
logistics, transportation and scheduling.
**Dr. Abdoulaye Gamatié is currently a Senior**
CNRS Researcher at LIRMM, a joint laboratory of Univ. Montpellier and CNRS, France. His research activity focuses on the design of energy-efficient multicore and multiprocessor systems for embedded and high-performance computing domains. He has coauthored more than 90
articles in peer-reviewed journals and international conferences. He has been involved in several collaborative international projects with both
academic and industrial partners. He has served
on the editorial boards of scientific journals including IEEE TCAD and
ACM TECS.
**Dr. Gilles Sassatelli is a Senior CNRS Scien-**
tist at LIRMM, a CNRS-University of Montpellier joint research unit. He conducts research
in the area of adaptive energy-efficient systems
in the adaptive computing group. He is the author of more than 200 publications in a number
of renowned international journals and international conferences. He regularly serves as Track
or Topic Chair in major conferences in the field of
embedded systems (DATE, ReCoSoC, ISVLSI
etc.). Most of his research is conducted in collaboration with international partners; over the past five years he has been
involved in several national and European research projects including
DREAMCLOUD and MONT-BLANC projects (FP7 and H2020).
**Dr. Michael Poss is a senior research fellow at**
the LIRMM laboratory, a joint unit of the University of Montpellier and the National Center for Scientific Research (CNRS). His current research focuses mainly on robust combinatorial
optimization. He has been involved in several
collaborative projects, and has served as PI for
some of them.
**Prof. Michel Robert (PhD’1987) is Professor at**
the University of Montpellier (France), where he
is teaching microelectronics in the engineering
program. His present research interests at the
Montpellier Laboratory of Informatics, Robotics,
and Micro-electronics (LIRMM) are the design and modeling of system-on-chip architectures. He
is author or co-author of more than 250 publications in the field of CMOS integrated circuits
design.
##### APPENDIX A BATTERY AND PV PANELS SIZING MODEL
In the MILP formulation (Obj1), (2)-(9) presented in Section 4.2, the number of photovoltaic panels used for computing the amount of solar energy that is injected into the nodes, and the capacity of the batteries installed in each node, are input parameters that need to be provided by the user. We now describe an MILP formulation that can be applied to compute such parameters.
The proposed sizing model can be seen as an extension of the scheduling MILP model (Obj1), (2)-(9). Therefore, in addition to the decision variables (f, q, v, x, w, and z), we also need the following variables:

• u_i ≥ 0: battery capacity to be installed in node i.
• g_i ≥ 0: number of PV panels to be installed in node i.

When sizing the batteries we ensure that they cannot be discharged below a safety level σ = 0.15, i.e., 15%.
The sizing formulation objective function is:

$$
\text{(Obj2)}\qquad \text{Minimize}\;\; \sum_{i\in N} g_i + \sum_{i\in N} u_i + \lambda \sum_{t\in H}\sum_{i,k\in N} f_{ik}^t + \mu \sum_{t\in H}\sum_{i,k\in N}\sum_{j\in J} z_{ikj}^t + \phi \sum_{t\in H}\sum_{i\in N} q_i^t + \nu \sum_{t\in H}\sum_{i\in N} v_i^t
$$

The objective function (Obj2) seeks to minimize the sum of battery capacities, the number of installed solar panels, and, similar to the formulation (Obj1), (2)-(9): (i) the energy losses incurred when performing energy or VM migrations between nodes, (ii) the energy losses associated with the surplus energy generated that is injected into the utility grid, and (iii) the penalties for over-utilization of processing resources, respectively. It is subject to the following constraints:

$$
\begin{aligned}
& w_i^t = w_i^{t-1} + G_i^t g_i + \sum_{k\in N} E_{ki} f_{ki}^t - \sum_{k\neq i\in N} f_{ik}^t - \mu_s \sum_{k\in N}\sum_{j\in J} z_{ikj}^t - \mu_d \sum_{k\in N}\sum_{j\in J} z_{kij}^t - \Big(\varepsilon_I + \varepsilon_P \sum_{j\in J} C_j^t x_{ij}^t\Big) && \forall t \in H,\ i \in N && (15)\\
& w_i^0 \ge \sigma u_i && \forall i \in N && (16)\\
& w_i^t \ge \sigma u_i && \forall t \in H,\ i \in N && (17)\\
& w_i^t \le u_i && \forall t \in H,\ i \in N && (18)\\
& w_i^T \ge w_i^0 - \phi \sum_{t\in H} q_i^t && \forall i \in N && (19)\\
& \sum_{j\in J} C_j^t x_{ij}^t \le R_i^C + v_i^t && \forall t \in H,\ i \in N && (20)\\
& \sum_{i\in N} x_{ij}^t = 1 && \forall t \in H,\ j \in J && (21)\\
& z_{ikj}^t \ge x_{ij}^{t-1} + x_{kj}^t - 1 && \forall t \ge 2,\ j \in J,\ i \ne k \in N && (22)\\
& x_{ij}^t,\ z_{ikj}^t \in \{0,1\} && \forall t \in H,\ j \in J,\ i \ne k \in N && (23)\\
& g_i,\ u_i \ge 0 && \forall i \in N && (24)\\
& b_i^t,\ q_i^t,\ v_i^t,\ w_i^t,\ f_{ik}^t \ge 0 && \forall t \in H,\ i \ne k \in N && (25)
\end{aligned}
$$
Constraint (15) defines how the state of the batteries is updated at each time step t ∈ H. The batteries' initial charge, safe discharge levels, maximum capacity and remaining charge levels are enforced by constraints (16)-(19), respectively. The constraints related to CPU resource allocation (20), and to the scheduling of the VMs to nodes, (21) and (22), are the same as in model (Obj1), (2)-(9). Finally, constraints (23)-(25) define the domains of the variables.
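As with the scheduling model, the sizing extension maps directly to JuMP. The following hypothetical sketch (continuing the scheduling sketch of Section 4.2) adds the variables u and g and the battery-safety constraints (16)-(18):

```julia
# Sketch only: sizing variables on top of the hypothetical scheduling model.
σ = 0.15                                   # battery safety discharge level
@variable(model, u[1:n] >= 0)              # battery capacity installed in node i
@variable(model, g[1:n] >= 0)              # number of PV panels installed in node i
@variable(model, w[0:T, 1:n] >= 0)         # battery charge; w[0, i] is the initial state

@constraint(model, [i = 1:n],          w[0, i] >= σ * u[i])   # (16)
@constraint(model, [t = 1:T, i = 1:n], w[t, i] >= σ * u[i])   # (17)
@constraint(model, [t = 1:T, i = 1:n], w[t, i] <= u[i])       # (18)
```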
##### APPENDIX B ALTERNATIVE USE CASE EVALUATIONS
In the sequel, we briefly illustrate two additional evaluations of our proposal under different setups: a homogeneous system resource dimensioning, and the deployment of the system in a different geographical zone.
**B.1** **Homogeneous system under energy-neutrality**
We discuss the results obtained with an optimally sized
homogeneous system consisting of 5 green nodes, each with
an idle power consumption of 50 W and maximum power
consumption of 500 W. The PV panels and batteries are
equally distributed among the 5 green nodes: 4 PV panels
and battery capacity of 5 kWh per node. The results for
a planning horizon of one week and periods of low and
high irradiation over a whole year are presented in the plots
depicted in Figures 11a-11b.
Fig. 11. Homogeneous system design under energy neutrality in the South of France: (a) low irradiation, (b) high irradiation.
Given the identical resource dimensioning across the different green nodes, the energy and computing demand is also identical over time. Therefore, neither VM nor energy migration is helpful here. As a consequence, the four execution scenarios become equivalent in terms of non-green energy reduction.
This homogeneous system design shows that both data and energy migrations are mainly relevant in situations where the resource availability evolves differently across the considered nodes over time. In such cases, VM and energy migrations can help re-balance the resource utilization.
**B.2** **Use case 1-bis: evaluation for Mali (West-Africa)**
We analyse the behaviour of the proposed system for Mali, a Saharan country in West Africa, where we expect higher levels of solar irradiation during the whole year, given that this country is closer to the equator.
We consider the same heterogeneous system and resources sizing of the use case 1 (see Section 5.2). The results
for a planning horizon of one week and periods of low and
high irradiation over a whole year for Mali are presented in
the plots depicted in Figures 12a-12b.
They show that for regions of the world with very favorable solar irradiation conditions, the overall gains in terms of non-green energy reduction are very significant over a year.
Fig. 12. Use case 1: normalized results for (a) low and (b) high irradiation periods in Mali (West Africa).
## RESEARCH
## Open Access
# Markov processes in blockchain systems
#### Quan‑Lin Li[1*†], Jing‑Yu Ma[2†], Yan‑Xia Chang[3†], Fan‑Qi Ma[2†] and Hai‑Bo Yu[1†]
*Correspondence:
[email protected]
†All authors contributed
equally to this work.
1 School of Economics
and Management, Beijing
University of Technology,
Beijing 100124, China
Full list of author information
is available at the end of the
article
**Abstract**
In this paper, we develop a more general framework of block-structured Markov processes in the queueing study of blockchain systems, which can provide analysis both for the stationary performance measures and for the sojourn time of any transaction or block. In addition, an original aim of this paper is to generalize the two-stage batch-service queueing model studied in Li et al. (Blockchain queue theory. In: International conference on computational social networks. Springer: New York; 2018. p. 25–40) both "from exponential to phase-type" service times and "from Poisson to MAP" transaction arrivals. Note that the MAP transaction arrivals and the two stages of PH service times make our blockchain queue more suitable to various practical conditions of blockchain systems with crucial factors, for example, the mining processes, the block generations, the blockchain building and so forth. For such a more general blockchain queueing model, we focus on two basic research aspects: (1) using the matrix-geometric solution, we first obtain a sufficient stable condition of the blockchain system. Then, we provide simple expressions for the average stationary number of transactions in the queueing waiting room and the average stationary number of transactions in the block. (2) However, on comparing with Li et al. (2018), analysis of the transaction–confirmation time becomes very difficult and challenging due to the complicated blockchain structure. To overcome the difficulties, we develop a computational technique of the first passage times by means of both the PH distributions of infinite sizes and the RG factorizations. Finally, we hope that the methodology and results given in this paper will open a new avenue to queueing analysis of more general blockchain systems in practice and can motivate a series of promising future research on development of blockchain technologies.

**Keywords: Blockchain, Bitcoin, Markovian arrival process (MAP), Phase type (PH) distribution, Matrix-geometric solution, Block-structured Markov process, RG factorization**
**Introduction**
**Background and motivation**
Blockchain is one of the most popular issues discussed extensively in recent years, and
it has already changed people’s lifestyle in some real areas due to its great impact on
finance, business, industry, transportation, healthcare and so forth. Since the introduction of Bitcoin by Nakamoto [1], blockchain technologies have obtained many important
advances in both basic theory and real applications up to now. Readers may refer to,
for example, excellent books by Wattenhofer [2], Prusty [3], Drescher [4], Bashir [5] and
Parker [6]; and survey papers by Zheng et al. [7], Constantinides et al. [8], Yli-Huumo
et al. [9], Plansky et al. [10], Lindman et al. [11] and Risius and Spohrer [12].
It may be necessary and useful to further remark on several important directions and key research topics as follows: (1) smart contracts by Par [13], Bartoletti and Pompianu [14],
Alharby and van Moorsel [15] and Magazzeni et al. [16]; (2) ethereum by Diedrich [17],
Dannen [18], Atzei et al. [19] and Antonopoulos and Wood [20]; (3) consensus mechanisms by Wang et al. [21], Debus [22], Pass et al. [23], Pass and Shi [24] and Cachin
and Vukolić [25]; (4) blockchain security by Karame and Androulaki [26], Lin and Liao
[27] and Joshi et al. [28]; (5) blockchain economics by Swan [29], Catalini and Gans [30],
Davidson et al. [31], Bheemaiah [32], Beck et al. [33], Biais et al. [34], Kiayias et al. [35]
and Abadi and Brunnermeier [36]. In addition, there are still some important topics
including the mining management, the double spending, PoW, PoS, PBFT, withholding
attacks, pegged sidechains and so on. Also, their investigations may be well understood
from the references listed above.
Recently, blockchain has become widely adopted in many real applications. Readers
may refer to, for example, Foroglou and Tsilidou [37], Bahga and Madisetti [38] and Xu
et al. [39]. At the same time, we also provide a detailed observation on some specific perspectives, for instance, (1) blockchain finance by Tsai et al. [40], Nguyen [41], Tapscott
and Tapscott [42], Treleaven et al. [43] and Casey et al. [44]; (2) blockchain business by
Mougayar [45], Morabito [46], Fleming [47], Beck et al. [48], Nowiński and Kozma [49]
and Mendling et al. [50]; (3) supply chains under blockchain by Hofmann et al. [51], Korpela et al. [52], Kim and Laskowski [53], Saberi et al. [54], Petersen et al. [55], Sternberg
and Baruffaldi [56] and Dujak and Sajter [57]; (4) internet of things under blockchain by
Conoscenti et al. [58], Bahga and Madisetti [59], Dorri et al. [60], Christidis and Devetsikiotis [61] and Zhang and Wen [62]; (5) sharing economy under blockchain by Huckle
et al. [63], Hawlitschek et al. [64], De Filippi [65], and Pazaitis et al. [66]; (6) healthcare
under blockchain by Mettler [67], Rabah [68], Griggs et al. [69] and Wang et al. [70]; (7)
energy under blockchain by Oh et al. [71], Aitzhan and Svetinovic [72], Noor et al. [73]
and Wu and Tran [74].
Based on the above discussion, whether for theoretical research or real applications, we always hope to know how the performance of a blockchain system can be evaluated, and whether there is still some room to further improve it. For this, the key is to solve such performance issues in the study of blockchain systems. Thus, we need to provide mathematical modeling and analysis
for blockchain performance evaluation by means of, for example, Markov processes,
Markov decision processes, queueing networks, Petri networks, game models and so on.
Unfortunately, so far only a little work has been on performance modeling of blockchain
systems. Therefore, this motivates us in this paper to develop Markov processes and
queueing models for a more general blockchain system. We hope that the methodology
and results given in this paper will open a new avenue to Markov processes of blockchain systems and can motivate a series of promising future research on development of
blockchain technologies.
**Related work**
Now, we provide several different classes of related work for Markov processes in blockchain systems, for example, queueing models, Markov processes, Markov decision processes, random walks, fluid limit and so on.
**_Queueing models_**
To use queueing theory to model a blockchain system, we need to observe some key factors, for example, transaction arrivals, block generation, blockchain building, block size, transaction fees, mining pools, mining rewards, the solving difficulty of the crypto mathematical puzzle, throughput and so forth. As shown in Fig. 1, we design a two-stage, Service-In-Random-Order and batch service queueing system by means of two stages of asynchronous processes: block generation and blockchain building. Li et al. [75] were the first to provide a detailed analysis of such a blockchain queue by means of the matrix-geometric solution. Kasahara and Kawahara [76] and Kawase and Kasahara [77] discussed the blockchain queue with general service times through a partial solution approach; a complete solution has remained an interesting open problem up to now. In addition, they also gave some useful numerical experiments for performance observation. Ricci et al. [78] proposed a framework encompassing machine learning and a queueing model, which is used to identify which transactions will be confirmed and to characterize the confirmation time of a confirmed transaction. Memon et al. [79] proposed a simulation model for blockchain systems by means of queueing theory.
[Fig. 1 (residue of the blockchain queueing-system diagram): a chain of blocks, each carrying the hash of the previous block, a timestamp, a nonce and transactions (TX 1, TX 2, ...), together with a block of positions 1, 2, ..., b−1, b.]

Bowden et al. [80] discussed the time-inhomogeneous behavior of the block arrivals in the bitcoin blockchain, because the block-generation process is influenced by several key factors such as the solving difficulty level of the crypto mathematical puzzle, transaction fees, mining rewards, and mining pools. Papadis et al. [81] applied the time-inhomogeneous block arrivals to set up some Markov processes to study the evolution and dynamics of blockchain networks, and discussed key blockchain characteristics
such as the number of miners, the hashing power (block completion rates), block dissemination delays, and block confirmation rules. Further, Jourdan et al. [82] proposed
a probabilistic model of the bitcoin blockchain by means of a transaction and block
graph and formulated some conditional dependencies induced by the bitcoin protocol
at the block level. Based on the analyses in these two papers, we believe that when the block-generation arrivals form a time-inhomogeneous Poisson process, the blockchain queue analyzed in this paper will become very difficult and challenging; thus, it will be an interesting topic in our future study.
**_Markov processes_**
To evaluate performance of a blockchain system, Markov processes are a basic mathematical tool, e.g., see Bolch et al. [83] for more details. As an early key work to apply
Markov processes to blockchain performance issues, Eyal and Sirer [84] established
a simple Markov process to analyze the vulnerability of Nakamoto protocols through
studying the block-forking behavior of blockchain. Note that some selfish miners may
get higher payoffs by violating the information propagation protocols and postponing their mined blocks, thereby exploiting the inherent block-forking phenomenon of Nakamoto protocols. Nayak et al. [85] extended the work by Eyal
and Sirer [84] through introducing a new mining strategy: stubborn mining strategy.
They used three improved Markov processes to further study the stubborn mining
strategy and two extensions: the Equal-Fork Stubborn (EFS) and the Trail Stubborn
(TS) mining strategies. Carlsten [86] used the Markov process to study the impact of
transaction fees on the selfish mining strategies in the bitcoin network. Göbel et al.
[87] further considered the mining competition between a selfish mining pool and
the honest community by means of a two-dimensional Markov process, in which they
extended the Markov model of selfish mining by considering the propagation delay
between the selfish mining pool and the honest community.
Kiffer and Rajaraman [88] provided a simple framework of Markov processes for
analyzing consistency properties of the blockchain protocols and used some numerical experiments to check the consensus bounds for network delay parameters and
adversarial computing percentages. Huang et al. [89] set up a Markov process with an
absorbing state to analyze performance measures of the Raft consensus algorithm for
a private blockchain.
**_Markov decision processes_**
Note that the selfish miner may adopt different mining policies to release some blocks
under the longest-chain rule, which is used to control the block-forking structure.
Thus, it is interesting to find an optimal mining policy in the blockchain system. To
do this, Sapirshtein et al. [90], Sompolinsky and Zohar [91] and Gervais et al. [92]
applied the Markov decision processes to find the optimal selfish-mining strategy, in
which four actions: adopt, override, match and wait, are introduced in order to control the state transitions of the Markov decision process.
**_Random walks_**
Goffard [93] proposed a random walk method to study the double-spending attack
problem in the blockchain system and focused on how to evaluate the probability of the
double-spending attack ever being successful. Jang and Lee [94] discussed profitability of
the double-spending attack in the blockchain system through using the random walk of
two independent Poisson counting processes.
**_Fluid limit_**
Frolkova and Mandjes [95] considered a bitcoin-inspired infinite-server model with a
random fluid limit. King [96] developed the fluid limit of a random graph model to discuss the shared ledger and the distributed ledger technologies in the blockchain systems.
**Contributions**
The main contributions of this paper are twofold. The first contribution is to develop a
more general framework of block-structured Markov processes in the study of blockchain systems. We design a two-stage, Service-In-Random-Order and batch service
queueing system, whose original aim is to generalize the blockchain queue studied in
Li et al. [75] both “from exponential to phase-type” service times and “from Poisson to
MAP” transaction arrivals. Note that the transaction MAP arrivals and two stages of PH
service times make our new blockchain queueing model more suitable to various practical conditions of blockchain systems. Using the matrix-geometric solution, we obtain
a sufficient stable condition of the more general blockchain system and provide simple
expressions for two key performance measures: the average stationary number of transactions in the queueing waiting room, and the average stationary number of transactions
in the block.
The second contribution of this paper is to provide an effective method for computing the average transaction–confirmation time of any transaction in a more general
blockchain system. In general, it is always very difficult and challenging to analyze the
transaction–confirmation time in the blockchain system with MAP inputs and PH service times, because the service discipline of the blockchain system is new from two key
points: (1) the “block service” is a class of batch service and (2) some transactions are
chosen into a block by means of the Service-In-Random-Order. In addition, the MAP
inputs and PH service times also make analysis of the blockchain queue more complicated. To study the transaction–confirmation time, we set up a Markov process with
an absorbing state (see Fig. 4) according to the blockchain system (see Figs. 1 and 2).
Based on this, we show that the transaction–confirmation time of any transaction is the
first passage time of the Markov process with an absorbing state, hence we can discuss
the transaction–confirmation time (or the first passage time) by means of both the PH
distributions of infinite sizes and the _RG factorizations. Based on this, we propose an_
effective algorithm for computing the average transaction–confirmation time of any
transaction. We hope that our approach given in this paper can be applicable to deal
with the transaction–confirmation times in more general blockchain systems.
The structure of this paper is organized as follows. "Model description" section
describes a two-stage, Service-In-Random-Order and batch service queueing system,
[Fig. 2 (residue of the state-transition diagram): states (k, l) for k = 0, 1, ..., 2b, ... and l = 0, 1, ..., b; legend: the Markov arrival process (MAP) with irreducible representation (C, D); the PH blockchain-building times with irreducible representation T; the PH blockchain-generation times with irreducible representation S.]
where the transactions arrive at the blockchain system according to a Markovian
arrival process (MAP), the block-generation and blockchain-building times are all
of phase type (PH). "A Markov process of GI/M/1 type" section establishes a continuous-time Markov process of GI/M/1 type, derives a sufficient stable condition of
the blockchain system, and expresses the stationary probability vector of the blockchain system by means of the matrix-geometric solution. "The stationary transaction
numbers" section provides simple expressions for the average stationary number of
transactions in the queueing waiting room, the average stationary number of transactions in the block, and uses some numerical examples to verify computability of
our theoretical results. To compute the average transaction–confirmation time of any
transaction, "The transaction–confirmation time" section develops a computational
technique of the first passage times by means of both the PH distributions of infinite
sizes and the _RG factorizations. Finally, some concluding remarks are given in last_
section.
**Model description**
In this section, from a more general point of view of blockchain, we design a practical blockchain queueing system in which the transactions arrive at the blockchain system according to a Markovian arrival process (MAP), while the block-generation and blockchain-building times are all of phase type (PH).

From a practical background of blockchain, it is necessary to extend and generalize the blockchain queueing model given in Li et al. [75] to a more general case, not only with non-Poisson transaction inputs but also with non-exponential block-generation and blockchain-building times. At the same time, we abstract the block-generation and blockchain-building processes as a two-stage, Service-In-Random-Order and batch service queueing system by means of the MAP and the PH distribution. Such a blockchain queueing system is depicted in Fig. 1.

Based on Fig. 1, we now describe the model as follows:
**Arrival process**
Transactions arrive at the blockchain system according to a Markovian arrival process (MAP) with matrix representation (C, D) of order m0, where the matrix C + D is the infinitesimal generator of an irreducible Markov process; C indicates the state transition rates under which only the random environment changes, without any transaction arrival, while D denotes the arrival rates of transactions under the random environment C; (C + D)e = 0, where e is a column vector of suitable size in which each element is one. Obviously, the Markov process C + D with finitely many states is irreducible and positive recurrent. Let ω be the stationary probability vector of the Markov process C + D; it is clear that ω(C + D) = 0 and ωe = 1. Also, the stationary arrival rate of the MAP is given by λ = ωDe.

In addition, we assume that each arriving transaction must first enter a queueing waiting room of infinite size. See the lower left corner of Fig. 1.
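To make the MAP ingredients concrete, the following minimal Python sketch (the two-state MMPP parameters are illustrative assumptions, not values from this paper) computes the stationary vector ω of C + D and the stationary arrival rate λ = ωDe with numpy.

```python
import numpy as np

# Illustrative two-state MMPP, a special case of the MAP (C, D):
# arrivals at rate 2 in environment state 0 and rate 0.5 in state 1.
C = np.array([[-3.0, 1.0],
              [2.0, -2.5]])
D = np.array([[2.0, 0.0],
              [0.0, 0.5]])

Q = C + D                                 # generator of the random environment
M = np.vstack([Q.T, np.ones(2)])          # omega Q = 0 together with omega e = 1
omega, *_ = np.linalg.lstsq(M, np.array([0.0, 0.0, 1.0]), rcond=None)

lam = omega @ D @ np.ones(2)              # stationary arrival rate: lambda = omega D e
print(omega, lam)
```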
**A block‑generation process**
Each arriving transaction first needs to enter the waiting room. Then, it may be chosen into a block of maximal size b. This is regarded as the first stage of service, called a _block-generation process. Note that arriving transactions can continue to be chosen into the block until the block-generation process ends, at which point a nonce is appended to the block by a mining winner. See the lower middle part of Fig. 1 for more details._

The block-generation time lasts from the initial epoch of a mining process until a nonce of the block is found (i.e., the cryptographic puzzle is solved, sending a nonce to the block), at which point the mining process terminates immediately. We assume that all the block-generation times are i.i.d. and of phase type with an irreducible representation (β, S) of order m2, where βe = 1; the expected block-generation time is given by 1/µ2 = −βS^{−1}e.
**The block‑generation discipline**
A block can contain at most b transactions. Once the mining process begins, transactions in the queueing waiting room are chosen into a block, but not strictly in First Come First Served (FCFS) order of transaction arrivals. For example, several transactions at the back of the queue may be chosen into the block. Once the block is formed, it does not accept any newly arriving transactions. See the lower middle part of Fig. 1.
**A blockchain‑building process**
Once the mining process is over, the block with its group of transactions is pegged to the blockchain. Owing to network latency, this is regarded as the second stage of service, called a _blockchain-building process; see the lower right corner of Fig. 1. In addition, the upper part of Fig. 1 outlines the blockchain and the internal structure of every block._

In the blockchain system, we assume that the blockchain-building times are i.i.d. and have a common PH distribution with an irreducible representation (α, T) of order m1, where αe = 1; the expected blockchain-building time is given by 1/µ1 = −αT^{−1}e.
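As a quick numerical check of the two PH moment formulas above, the sketch below evaluates 1/µ2 = −βS^{−1}e and 1/µ1 = −αT^{−1}e; the Erlang and hyperexponential parameters are assumptions chosen only for illustration.

```python
import numpy as np

e2 = np.ones(2)

# Block-generation time: Erlang-2 with rate 4 per stage, PH(beta, S).
beta = np.array([1.0, 0.0])
S = np.array([[-4.0, 4.0],
              [0.0, -4.0]])

# Blockchain-building time: hyperexponential mixture, PH(alpha, T).
alpha = np.array([0.6, 0.4])
T = np.diag([-1.0, -5.0])

mean_generation = -beta @ np.linalg.inv(S) @ e2   # 1/mu2 = -beta S^{-1} e = 0.5
mean_building = -alpha @ np.linalg.inv(T) @ e2    # 1/mu1 = -alpha T^{-1} e = 0.68
print(mean_generation, mean_building)
```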
**The maximum block size**
To avoid spam attacks, the maximum size of each block is limited. We assume that there are at most b transactions in each block. If there are more than b transactions in the queueing waiting room, then b of them are chosen to form a full block; the remaining transactions stay in the queueing waiting room and wait for a chance to be chosen into a later block. In addition, the block size b determines the maximal batch service capacity of the blockchain system.
**Independence**
We assume that all the random variables defined above are independent of each other.
_Remark 1 This paper is the first to consider a blockchain system with non-Poisson transaction arrivals (MAPs) and with non-exponential block-generation and blockchain-building times (PH distributions), and it also provides a detailed analysis of this blockchain queueing model by means of block-structured Markov processes and the RG factorizations. However, the analysis of blockchain queues with a renewal arrival process or with general service time distributions remains an interesting open problem in the queueing research of blockchain systems._
_Remark 2 In a blockchain system, there are some key factors, including the maximum block size, mining reward, transaction fee, mining strategy, security of the blockchain and so on. Based on these, we may develop reward queueing models, decision queueing models, and game queueing models in the study of blockchain systems. Therefore, analysis of these key factors will be not only theoretically necessary but also practically important for the development of blockchain technologies._
**A Markov process of GI/M/1 type**
In this section, to analyze the blockchain queueing system, we first establish a continuous-time Markov process of GI/M/1 type. Then, we derive a stability condition for the system and express the stationary probability vector of this Markov process by means of the matrix-geometric solution.

Let N1(t), N2(t), I(t), J1(t) and J2(t) be the number of transactions in the queueing waiting room, the number of transactions in the block, the phase of the MAP, the phase of a blockchain-building PH time, and the phase of a block-generation PH time at time t, respectively. We write **X = {(N1(t), N2(t), I(t), J1(t), J2(t)), t ≥** 0}. Then, it is easy to see that X is a continuous-time Markov process with block structure whose state space is given by
Ω = {(0, 0; i) : 1 ≤ i ≤ m0}
  ∪ {(0, l; i, j) : 1 ≤ l ≤ b, 1 ≤ i ≤ m0, 1 ≤ j ≤ m1}
  ∪ {(k, 0; i, r) : k ≥ 1, 1 ≤ i ≤ m0, 1 ≤ r ≤ m2}
  ∪ {(k, l; i, j) : k ≥ 1, 1 ≤ l ≤ b, 1 ≤ i ≤ m0, 1 ≤ j ≤ m1}.
From Fig. 1, it is easy to set up the state transition relations of the Markov process X; see Fig. 2 for more details. A key to understanding Fig. 2 is that the transitions from State (k, 0), for the block generation, differ from those from State (k, l) with 1 ≤ l ≤ b, for the blockchain building, because the block-generation and blockchain-building processes cannot exist simultaneously; specifically, a block must first be generated before it can enter the blockchain-building process.
Using Fig. 2, the infinitesimal generator of the Markov process X is given by

$$\mathbf{Q} = \begin{pmatrix}
B_1 & B_0 & & & & & & \\
B_2 & A_1 & A_0 & & & & & \\
B_3 & & A_1 & A_0 & & & & \\
\vdots & & & \ddots & \ddots & & & \\
B_{b+1} & & & & A_1 & A_0 & & \\
 & A_{b+1} & & & & A_1 & A_0 & \\
 & & A_{b+1} & & & & A_1 & A_0 \\
 & & & \ddots & & & & \ddots & \ddots
\end{pmatrix}, \quad (1)$$

where ⊗ and ⊕ are the Kronecker product and the Kronecker sum of two matrices, respectively,

$$A_0 = \mathrm{diag}\left(D\otimes I,\, D\otimes I,\, \ldots,\, D\otimes I\right),$$

$$A_1 = \begin{pmatrix}
C\oplus S & & & \\
I\otimes\left(T^0\beta\right) & C\oplus T & & \\
\vdots & & \ddots & \\
I\otimes\left(T^0\beta\right) & & & C\oplus T
\end{pmatrix}, \qquad
A_{b+1} = \begin{pmatrix}
0 & \cdots & 0 & I\otimes\left(S^0\alpha\right) \\
0 & \cdots & 0 & 0 \\
\vdots & & \vdots & \vdots \\
0 & \cdots & 0 & 0
\end{pmatrix},$$

and

$$B_1 = \begin{pmatrix}
C & & & \\
I\otimes T^0 & C\oplus T & & \\
\vdots & & \ddots & \\
I\otimes T^0 & & & C\oplus T
\end{pmatrix}, \qquad
B_0 = \mathrm{diag}\left(D\otimes\beta,\, D\otimes I,\, \ldots,\, D\otimes I\right),$$

$$B_2 = \begin{pmatrix}
0 & I\otimes\left(S^0\alpha\right) & 0 & \cdots & 0 \\
0 & 0 & 0 & \cdots & 0 \\
\vdots & \vdots & \vdots & & \vdots
\end{pmatrix}, \;\ldots,\;
B_{b+1} = \begin{pmatrix}
0 & \cdots & 0 & I\otimes\left(S^0\alpha\right) \\
0 & \cdots & 0 & 0 \\
\vdots & & \vdots & \vdots
\end{pmatrix}.$$
Clearly, the continuous-time Markov process X is of GI/M/1 type.
Now, we use the mean drift method to discuss the stability condition of the continuous-time Markov process X of GI/M/1 type. Note that the mean drift method for checking system stability is introduced in detail in Chapter 3 of Li [97].
From Chapter 1 of Neuts [98] or Chapter 3 of Li [97], for the Markov process of
GI/M/1 type, we write
$$\mathbf{A} = A_0 + A_1 + A_{b+1} = \begin{pmatrix}
D\otimes I + C\oplus S & & & & I\otimes\left(S^0\alpha\right) \\
I\otimes\left(T^0\beta\right) & D\otimes I + C\oplus T & & & \\
\vdots & & \ddots & & \\
I\otimes\left(T^0\beta\right) & & & D\otimes I + C\oplus T & \\
I\otimes\left(T^0\beta\right) & & & & D\otimes I + C\oplus T
\end{pmatrix}. \quad (2)$$
Clearly, the matrix A is the infinitesimal generator of an irreducible, aperiodic and positive-recurrent Markov process with two levels (i.e., levels 0 and b), together with b − 1 instantaneous levels (i.e., levels 1, 2, …, b − 1) which vanish as the time t goes to infinity. On the other hand, this special structure of the Markov process A does not affect the application of the matrix-geometric solution, because A is used only in the mean drift method for establishing the stability condition.
The following theorem discusses the invariant measure θ of the Markov process A, that is, the vector θ satisfying the system of linear equations θA = 0 and θe = 1.
**Theorem 1** _There exists a unique invariant measure_ θ = (θ0, 0, …, 0, θb) _of the Markov process_ A, _where_ (θ0, θb) _is the stationary probability vector of the irreducible positive-recurrent Markov process whose infinitesimal generator is_

$$R = \begin{pmatrix}
D\otimes I + C\oplus S & I\otimes\left(S^0\alpha\right) \\
I\otimes\left(T^0\beta\right) & D\otimes I + C\oplus T
\end{pmatrix}.$$
_Proof It follows from θA = 0 that_

$$\theta_0\left(D\otimes I + C\oplus S\right) + \sum_{k=1}^{b-1}\theta_k\left(I\otimes\left(T^0\beta\right)\right) + \theta_b\left(I\otimes\left(T^0\beta\right)\right) = 0, \quad (3)$$

$$\theta_k\left[D\otimes I + C\oplus T\right] = 0, \quad 1 \le k \le b-1, \quad (4)$$

$$\theta_0\left(I\otimes\left(S^0\alpha\right)\right) + \theta_b\left(D\otimes I + C\oplus T\right) = 0. \quad (5)$$
For Eq. (4), note that

D ⊗ I + C ⊕ T = D ⊗ I + C ⊗ I + I ⊗ T = (C + D) ⊗ I + I ⊗ T = (C + D) ⊕ T,

where C + D is the infinitesimal generator of an irreducible and positive-recurrent Markov process; thus its eigenvalue with maximal real part is zero, so that all the other eigenvalues have negative real parts. Meanwhile T, coming from the PH distribution with irreducible representation (α, T), is invertible, with every eigenvalue having a negative real part; this follows from the fact that Te ⪇ 0 and that all diagonal elements of T are negative while all off-diagonal elements are nonnegative. Note that each eigenvalue of the matrix (C + D) ⊕ T is the sum of an eigenvalue of C + D and an eigenvalue of T; thus, each eigenvalue of (C + D) ⊕ T has a negative real part (in particular, it is non-zero). This shows that the matrix (C + D) ⊕ T is invertible, since det((C + D) ⊕ T), the product of all the eigenvalues of (C + D) ⊕ T, is non-zero. Hence, from the equation θk [D ⊗ I + C ⊕ T] = 0 for 1 ≤ k ≤ b − 1, we obtain

θ1 = θ2 = ⋯ = θb−1 = 0.

This gives

θ = (θ0, 0, …, 0, θb).
It follows from (3) and (5) that

θ0(D ⊗ I + C ⊕ S) + θb(I ⊗ (T⁰β)) = 0,
θ0(I ⊗ (S⁰α)) + θb(D ⊗ I + C ⊕ T) = 0.

Thus, we have

$$(\theta_0, \theta_b)\begin{pmatrix}
D\otimes I + C\oplus S & I\otimes\left(S^0\alpha\right) \\
I\otimes\left(T^0\beta\right) & D\otimes I + C\oplus T
\end{pmatrix} = (0, 0).$$

Let

$$R = \begin{pmatrix}
D\otimes I + C\oplus S & I\otimes\left(S^0\alpha\right) \\
I\otimes\left(T^0\beta\right) & D\otimes I + C\oplus T
\end{pmatrix}.$$

Then the matrix R is the infinitesimal generator of an irreducible positive-recurrent Markov process. Thus, this Markov process has a stationary probability vector (θ0, θb); that is, there exists a unique solution to the system of linear equations (θ0, θb)R = 0 and θ0e + θbe = 1. This completes the proof.
The following theorem provides a necessary and sufficient condition under which the Markov process Q is positive recurrent.
**Theorem 2** _The Markov process_ **Q** _of GI/M/1 type is positive recurrent if and only if_
(θ0 + θb)(D ⊗ I)e < bθ0 (I ⊗ (S⁰α)) e. (6)
_Proof Using the mean drift method given in Chapter 3 of Li [97] (e.g., Theorem 3.19 and the continuous-time case on Page 172), it is easy to see that the Markov process Q of GI/M/1 type is positive recurrent if and only if_

θA0e < bθA_{b+1}e. (7)

Note that

θA0e = θ0(D ⊗ I)e + θb(D ⊗ I)e = (θ0 + θb)(D ⊗ I)e (8)

and

bθA_{b+1}e = bθ0 (I ⊗ (S⁰α)) e, (9)

thus we obtain

(θ0 + θb)(D ⊗ I)e < bθ0 (I ⊗ (S⁰α)) e.

This completes the proof.
It is worthwhile to consider the special case in which the transaction inputs are Poisson with arrival rate λ, and the blockchain-building and block-generation times are exponential with service rates µ1 and µ2, respectively. Note that this special case was studied in Li et al. [75]; here we only restate the stability condition as the following corollary.
**Corollary 3** _The Markov process_ **Q** _of GI/M/1 type is positive recurrent if and only if_

bµ1µ2 / (µ1 + µ2) > λ. (10)
By observing (10), it is easy to see that 1/(bµ1) + 1/(bµ2) < 1/λ; that is, the overall service speed of transactions is faster than the transaction arrival speed, under which the Markov process Q of GI/M/1 type is positive recurrent. Condition (6) is not as easy to interpret, however, because it is largely shaped by the matrix computations associated with the MAP and the PH distributions.
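Condition (6) can nevertheless be checked numerically once the Kronecker blocks are assembled. The following minimal Python sketch (the MAP and PH parameters are the same assumed illustrative values as before, with m0 = m1 = m2 = 2 so that the blockwise sum θ0 + θb is well defined) builds the two-level generator of Theorem 1, solves for (θ0, θb), and tests (6).

```python
import numpy as np

def kron_sum(A, B):
    # Kronecker sum: A ⊕ B = A ⊗ I + I ⊗ B
    return np.kron(A, np.eye(B.shape[0])) + np.kron(np.eye(A.shape[0]), B)

def stationary(Q):
    # Stationary vector of an irreducible generator: x Q = 0, x e = 1.
    n = Q.shape[0]
    M = np.vstack([Q.T, np.ones(n)])
    rhs = np.zeros(n + 1); rhs[-1] = 1.0
    return np.linalg.lstsq(M, rhs, rcond=None)[0]

# Illustrative parameters (assumptions for this sketch only).
C = np.array([[-3.0, 1.0], [2.0, -2.5]]); D = np.array([[2.0, 0.0], [0.0, 0.5]])
beta = np.array([1.0, 0.0]); S = np.array([[-4.0, 4.0], [0.0, -4.0]])
alpha = np.array([0.6, 0.4]); T = np.diag([-1.0, -5.0])
b = 10
S0, T0 = -S @ np.ones(2), -T @ np.ones(2)   # exit-rate vectors S^0, T^0
I0 = np.eye(2)                              # identity of order m0

# Two-level generator from Theorem 1 and its stationary vector (theta0, thetab).
G = np.block([
    [np.kron(D, np.eye(2)) + kron_sum(C, S), np.kron(I0, np.outer(S0, alpha))],
    [np.kron(I0, np.outer(T0, beta)), np.kron(D, np.eye(2)) + kron_sum(C, T)],
])
theta = stationary(G)
theta0, thetab = theta[:4], theta[4:]

# Stability condition (6): (theta0 + thetab)(D ⊗ I)e < b theta0 (I ⊗ (S^0 alpha)) e.
e = np.ones(4)
lhs = (theta0 + thetab) @ np.kron(D, np.eye(2)) @ e
rhs = b * theta0 @ np.kron(I0, np.outer(S0, alpha)) @ e
print("stable:", lhs < rhs)
```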
If the Markov process Q of GI/M/1 type is positive recurrent, we write its stationary probability vector as

π = (π0, π1, π2, …),

where for k = 0,

π0 = (π_{0,0}, π_{0,1}, …, π_{0,b}),
π_{0,0} = (π_{0,0}^{(i)} : 1 ≤ i ≤ m0),

and for 1 ≤ l ≤ b,

π_{0,l} = (π_{0,l}^{(i,j)} : 1 ≤ i ≤ m0, 1 ≤ j ≤ m1);

for k ≥ 1,

πk = (π_{k,0}, π_{k,1}, …, π_{k,b}),
π_{k,0} = (π_{k,0}^{(i,r)} : 1 ≤ i ≤ m0, 1 ≤ r ≤ m2),

and for 1 ≤ l ≤ b,

π_{k,l} = (π_{k,l}^{(i,j)} : 1 ≤ i ≤ m0, 1 ≤ j ≤ m1).

Note that in the above expressions, a vector a = (a^{(i,j)} : 1 ≤ i ≤ I, 1 ≤ j ≤ J) is ordered lexicographically in its elements, that is,

a = (a^{(1,1)}, a^{(1,2)}, …, a^{(1,J)}; a^{(2,1)}, a^{(2,2)}, …, a^{(2,J)}; …; a^{(I,1)}, a^{(I,2)}, …, a^{(I,J)}).
If (θ0 + θb)(D ⊗ I)e < bθ0(I ⊗ (S⁰α))e, then the Markov process Q of GI/M/1 type is irreducible and positive recurrent. Thus, the Markov process Q has a unique stationary probability vector, which is matrix-geometric. To express the matrix-geometric stationary probability vector, we first need to obtain the rate matrix R, which is the minimal nonnegative solution to the nonlinear matrix equation

R^{b+1}A_{b+1} + RA1 + A0 = 0. (11)

In general, it is very complicated to solve the nonlinear matrix equation (11) because of the term R^{b+1}A_{b+1} of degree b + 1. In fact, for the blockchain queueing system considered here, we cannot yet provide an explicit expression for the rate matrix R. In this case, we can use the iterative algorithms given in Neuts [98] to compute it numerically. For example, an effective iterative algorithm given in Neuts [98] is described as

R0 = 0,
R_{N+1} = (R_N^{b+1} A_{b+1} + A0)(−A1)^{−1}.

This algorithm converges quickly; that is, after a finite number of iterative steps, we obtain a numerical approximation of high precision to the rate matrix R.
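A minimal numpy sketch of this fixed-point iteration is given below; it assumes the level blocks A0, A1 and A_{b+1} have already been assembled as dense square arrays of equal size.

```python
import numpy as np
from numpy.linalg import matrix_power, inv

def rate_matrix(A0, A1, Ab1, b, tol=1e-12, max_iter=100_000):
    """Minimal nonnegative solution R of R^{b+1} A_{b+1} + R A_1 + A_0 = 0
    (Eq. (11)), computed by R_{N+1} = (R_N^{b+1} A_{b+1} + A_0)(-A_1)^{-1}."""
    R = np.zeros_like(A0)           # R_0 = 0
    neg_A1_inv = inv(-A1)
    for _ in range(max_iter):
        R_next = (matrix_power(R, b + 1) @ Ab1 + A0) @ neg_A1_inv
        if np.max(np.abs(R_next - R)) < tol:
            return R_next
        R = R_next
    raise RuntimeError("iteration for R did not converge")
```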
The following theorem comes directly from Theorem 1.2.1 of Chapter 1 in Neuts [98]. We restate it here; its proof is omitted except for a derivation of the boundary condition (13).

**Theorem 4** _If the Markov process_ **Q** _of GI/M/1 type is positive recurrent, then the stationary probability vector_ π = (π0, π1, π2, …) _is given by_

πk = π1R^{k−1}, k ≥ 2, (12)

_where the vector_ (π0, π1) _is the stationary probability vector of the Markov process_ **Q**^{(1,2)} _obtained by censoring_ **Q** _to levels 0 and 1, which is irreducible and positive recurrent. Thus, it is the unique solution to the following system of linear equations:_

(π0, π1)Q^{(1,2)} = 0,
π0e + π1(I − R)^{−1}e = 1, (13)

_where_

$$\mathbf{Q}^{(1,2)} = \begin{pmatrix}
B_1 & B_0 \\
\sum_{k=2}^{b+1} R^{k-2}B_k & A_1 + R^{b}A_{b+1}
\end{pmatrix}.$$

_Proof Here, we only derive the boundary condition (13). It follows from πQ = 0 that_

π0B1 + π1B2 + π2B3 + ⋯ + πbB_{b+1} = 0,
π0B0 + π1A1 + π_{b+1}A_{b+1} = 0.

Using the matrix-geometric solution πk = π1R^{k−1} for k ≥ 2, we have

π0B1 + π1(B2 + RB3 + ⋯ + R^{b−1}B_{b+1}) = 0,
π0B0 + π1(A1 + R^{b}A_{b+1}) = 0.

This gives the desired result and completes the proof.
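Given the rate matrix R and the boundary blocks, (π0, π1) can be obtained by solving the finite linear system (13). The sketch below is one possible realization; the list-of-blocks interface is an assumption of this sketch, not notation from the paper.

```python
import numpy as np
from numpy.linalg import matrix_power, inv

def boundary_vector(B1, B0, Bks, A1, Ab1, R, b):
    """Solve (pi0, pi1) Q^(1,2) = 0 together with the normalization
    pi0 e + pi1 (I - R)^{-1} e = 1 (Eq. (13)).
    Bks = [B2, ..., B_{b+1}] as dense blocks (assumed layout)."""
    n0, n1 = B1.shape[0], A1.shape[0]
    # lower-left block: sum_{k=2}^{b+1} R^{k-2} B_k
    lower_left = sum(matrix_power(R, k) @ Bks[k] for k in range(b))
    Q12 = np.block([[B1, B0],
                    [lower_left, A1 + matrix_power(R, b) @ Ab1]])
    # Append the normalization condition as an extra column; solve x M = (0,...,0,1).
    w = np.concatenate([np.ones(n0), inv(np.eye(n1) - R) @ np.ones(n1)])
    M = np.hstack([Q12, w[:, None]])
    rhs = np.zeros(n0 + n1 + 1); rhs[-1] = 1.0
    x, *_ = np.linalg.lstsq(M.T, rhs, rcond=None)
    return x[:n0], x[n0:]
```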
**The stationary transaction numbers**
In this section, we discuss two key performance measures, the average stationary numbers of transactions in the queueing waiting room and in the block, and give simple expressions for them by means of the vectors π0 and π1 and the rate matrix R. Finally, we use numerical examples to verify the computability of our theoretical results and to show how the performance measures depend on the main parameters of the system.
If (θ0 + θb)(D ⊗ I)e < bθ0(I ⊗ (S⁰α))e, then the blockchain system is stable. In this case, we write, with probability one,

N1 = lim_{t→+∞} N1(t), N2 = lim_{t→+∞} N2(t),

where N1(t) and N2(t) are the numbers of transactions in the queueing waiting room and in the block at time t ≥ 0, respectively.
a. The average stationary number of transactions in the queueing waiting room
It follows from (12) and (13) that

$$\begin{aligned}
E[N_1] &= \sum_{k=1}^{\infty} k\left[\sum_{i=1}^{m_0}\sum_{r=1}^{m_2}\pi_{k,0}^{(i,r)} + \sum_{l=1}^{b}\sum_{i=1}^{m_0}\sum_{j=1}^{m_1}\pi_{k,l}^{(i,j)}\right] \\
&= \sum_{k=1}^{\infty} k \sum_{l=0}^{b}\pi_{k,l}\,e \\
&= \sum_{k=1}^{\infty} k\,\pi_k e = \pi_1(I-R)^{-2}e.
\end{aligned}$$
Note that the vectors e above have different sizes; for example, in the first line e has size m0m2 for l = 0 and m0m1 for 1 ≤ l ≤ b, while in the second and third lines it has size m0(m2 + bm1). For simplicity of presentation, we use a single symbol e whose size can easily be inferred from the context.
b. The average stationary number of transactions in the block

Let h = (0, e, 2e, …, be)ᵀ. Then

$$\begin{aligned}
E[N_2] &= \sum_{l=0}^{b} l \sum_{k=0}^{\infty}\sum_{i=1}^{m_0}\sum_{j=1}^{m_1}\pi_{k,l}^{(i,j)} \\
&= \sum_{l=0}^{b} l \sum_{k=0}^{\infty}\pi_{k,l}\,e \\
&= \sum_{k=0}^{\infty}\pi_k h \\
&= \left(\pi_0 + \pi_1(I-R)^{-1}\right)h.
\end{aligned}$$
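Both expressions reduce to finite matrix computations once π0, π1 and R are available. A minimal sketch follows; the phase layout of h matches the state-space ordering described earlier, and the helper name is ours.

```python
import numpy as np
from numpy.linalg import inv

def stationary_numbers(pi0, pi1, R, b, m0, m1, m2):
    """E[N1] = pi1 (I - R)^{-2} e and E[N2] = (pi0 + pi1 (I - R)^{-1}) h,
    where h = (0, e, 2e, ..., be)^T weights the phase blocks l = 0..b by l."""
    n = R.shape[0]
    W = inv(np.eye(n) - R)
    EN1 = pi1 @ W @ W @ np.ones(n)
    # Level k >= 1: phase 0 has m0*m2 entries, phases l = 1..b have m0*m1 each.
    h1 = np.concatenate([np.zeros(m0 * m2)] +
                        [l * np.ones(m0 * m1) for l in range(1, b + 1)])
    # Boundary level 0: phase 0 has m0 entries, phases l = 1..b have m0*m1 each.
    h0 = np.concatenate([np.zeros(m0)] +
                        [l * np.ones(m0 * m1) for l in range(1, b + 1)])
    EN2 = pi0 @ h0 + pi1 @ W @ h1
    return EN1, EN2
```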
In the remainder of this section, we provide some numerical examples to verify the computability of our theoretical results and to analyze how the two performance measures E[N1] and E[N2] depend on some crucial parameters of the blockchain queueing system. In the two numerical examples, we take the following common parameters: the blockchain-building service rate µ1 ∈ [0.05, 1.5], the block-generation service rate µ2 = 2, the arrival rate λ = 0.3, and the maximum block size b = 40, 320, and 1000, respectively.

From Fig. 3, it is seen that E[N1] and E[N2] decrease as µ1 increases. At the same time, E[N1] decreases as b increases, whereas E[N2] increases as b increases.
**The transaction–confirmation time**
In this section, we provide a matrix-analytic method based on the _RG factorizations for computing the average transaction–confirmation time of any transaction. This is always an interesting but difficult topic, because of the batch service for a block of transactions and of the Service-In-Random-Order discipline for choosing transactions from the queueing waiting room into a block._
In the blockchain system, the transaction–confirmation time is the time interval from the epoch at which a transaction arrives at the queueing waiting room to the epoch at which the block containing this transaction is first confirmed and then built into the blockchain. Obviously, the transaction–confirmation time is the sojourn time of the transaction in the blockchain system, and it is the sum of the block-generation and blockchain-building times experienced by the transaction once it is taken into a block. Let I denote the transaction–confirmation time of any transaction when the blockchain system is stable.

To study the transaction–confirmation time, we need to introduce the stationary life time Ŵs of the PH blockchain-building time Ŵ with an irreducible representation (α, T).
Let ϖ be the stationary probability vector of the Markov process T + T⁰α. Then the stationary life time Ŵs also has a PH distribution, with irreducible representation (ϖ, T); see, e.g., Property 1.5 in Chapter 1 of Li [97]. Clearly, E[Ŵs] = −ϖT^{−1}e.
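Concretely, ϖ can be computed as the stationary vector of the generator T + T⁰α. The sketch below does this for the illustrative building-time parameters assumed earlier.

```python
import numpy as np

# Stationary life time (equilibrium version) of the PH building time (alpha, T):
# varpi solves varpi (T + T^0 alpha) = 0, varpi e = 1, and E[W_s] = -varpi T^{-1} e.
alpha = np.array([0.6, 0.4])          # illustrative parameters, as before
T = np.diag([-1.0, -5.0])
T0 = -T @ np.ones(2)                  # exit-rate vector T^0

Q = T + np.outer(T0, alpha)
M = np.vstack([Q.T, np.ones(2)])
varpi, *_ = np.linalg.lstsq(M, np.array([0.0, 0.0, 1.0]), rcond=None)

EWs = -varpi @ np.linalg.inv(T) @ np.ones(2)
print(varpi, EWs)
```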
Now, we introduce a Markov process {Y(t) : t ≥ 0} with an absorbing state, whose state transition relation is given in Fig. 4 according to Figs. 1 and 2. At the same time, we define the first passage time

ξ = inf{t ≥ 0 : Y(t) is the absorbing state}.

For k ≥ 0, 1 ≤ i ≤ m0 and 1 ≤ r ≤ m2, if Y(0) = (k, 0; i, r), then we write the first passage time as ξ|(k,0;i,r).
_Remark 3 It is necessary to explain the absorbing rates in the lower part of Fig. 4._

1. If Y(0) = (k, l) for 1 ≤ k ≤ b and 0 ≤ l ≤ b, then all k transactions can be chosen into a block once the previous block is pegged to the blockchain, so a tagged transaction among the k transactions is chosen into the block with probability 1.
2. If Y(0) = (k, l) for k ≥ b + 1 and 0 ≤ l ≤ b, then b of the k transactions are randomly chosen into a block once the previous block is pegged to the blockchain; thus, a tagged transaction among the k transactions is chosen into the block of maximal size b with probability b/k.
When a transaction arrives at the queueing waiting room, it can observe the blockchain system in two different kinds of states:

Case one: state (k, 0; i, r) for k ≥ 1, 1 ≤ i ≤ m0 and 1 ≤ r ≤ m2. In this case, with initial probability π_{k,0}^{(i,r)}, the transaction–confirmation time I is the first passage time ξ|(k,0;i,r) of the Markov process with an absorbing state, whose state transition relation is given in Fig. 4.

Case two: state (k, l; i, j) for k ≥ 1, 1 ≤ l ≤ b, 1 ≤ i ≤ m0 and 1 ≤ j ≤ m1. In this case, with initial probability π_{k,l}^{(i,j)}, the transaction–confirmation time I decomposes into the sum of the random variable Ŵs and the first passage time ξ|(k,0;i,r) given in Case one. It is easy to see from Fig. 4 that there is a stochastic decomposition I = Ŵs + ξ|(k,0;i,r).

From the above analysis, it is easy to see that computing the first passage time ξ|(k,0;i,r) is the key to analyzing the transaction–confirmation time.
Based on the state transition relation given in Fig. 4, we now write the infinitesimal generator of the Markov process {Y(t) : t ≥ 0} as

$$\mathbf{H} = \begin{pmatrix}
\widetilde{B}_1 & \widetilde{B}_0 & & & & & & \\
\widetilde{B}_2 & \widetilde{A}_1 & A_0 & & & & & \\
\widetilde{B}_3 & & \widetilde{A}_1 & A_0 & & & & \\
\vdots & & & \ddots & \ddots & & & \\
\widetilde{B}_{b+1} & & & & \widetilde{A}_1 & A_0 & & \\
 & A_{b+1} & & & & \widetilde{A}_1^{(b+1)} & A_0 & \\
 & & A_{b+1} & & & & \widetilde{A}_1^{(b+2)} & A_0 \\
 & & & \ddots & & & & \ddots & \ddots
\end{pmatrix}, \quad (14)$$

where

$$A_0 = \mathrm{diag}\left(D\otimes I,\, D\otimes I,\, \ldots,\, D\otimes I\right), \qquad
\widetilde{A}_1 = \mathrm{diag}\left(C\oplus S,\, C\oplus T,\, \ldots,\, C\oplus T\right),$$

$$A_{b+1} = \begin{pmatrix}
0 & \cdots & 0 & I\otimes\left(S^0\alpha\right) \\
0 & \cdots & 0 & 0 \\
\vdots & & \vdots & \vdots \\
0 & \cdots & 0 & 0
\end{pmatrix};$$

for k ≥ b + 1,

$$\widetilde{A}_1^{(k)} = \begin{pmatrix}
C\oplus S & & & \\
I\otimes\left(\frac{k-b}{k}\,T^0\beta\right) & C\oplus T & & \\
\vdots & & \ddots & \\
I\otimes\left(\frac{k-b}{k}\,T^0\beta\right) & & & C\oplus T
\end{pmatrix};$$

and

$$\widetilde{B}_0 = \begin{pmatrix}
0 & D\otimes I & & \\
\vdots & & \ddots & \\
0 & & & D\otimes I
\end{pmatrix}, \qquad
\widetilde{B}_1 = \mathrm{diag}\left(C\otimes I,\, C\oplus T,\, \ldots,\, C\oplus T\right),$$

$$\widetilde{B}_2 = \begin{pmatrix}
I\otimes\left(S^0\alpha\right) & 0 & \cdots & 0 \\
0 & 0 & \cdots & 0 \\
\vdots & \vdots & & \vdots
\end{pmatrix}, \;\ldots,\;
\widetilde{B}_{b+1} = \begin{pmatrix}
0 & \cdots & 0 & I\otimes\left(S^0\alpha\right) \\
0 & \cdots & 0 & 0 \\
\vdots & & \vdots & \vdots
\end{pmatrix}.$$
If the blockchain system is stable, then the probability that a transaction, immediately after its arrival, observes State (0, 0; i) is π_{1,0}^{(i,r)}; for 1 ≤ l ≤ b, the probability that it observes State (0, l; i, j) is π_{1,l}^{(i,j)}; for k ≥ 2, the probability that it observes State (k − 1, 0; i, r) is π_{k,0}^{(i,r)}; and for k ≥ 2, 1 ≤ l ≤ b, the probability that it observes State (k − 1, l; i, j) is π_{k,l}^{(i,j)}. Obviously, for 0 ≤ l ≤ b, States (0, 0; i) and (0, l; i, j) cannot be observed by a transaction immediately after its own arrival, and thus the stationary probabilities π_{0,0}^{(i)} and π_{0,l}^{(i,j)} should be omitted from the observation of any arriving transaction. Based on this, we introduce a new initial probability vector for the observation made by a transaction immediately after its arrival as follows:
γ = (γ1, γ2, γ3, …),

where for k ≥ 1,

γk = (γ_{k,0}, γ_{k,1}, …, γ_{k,b}),
γ_{k,0} = ( (1/(1 − π0e)) π_{k,0}^{(i,r)} : 1 ≤ i ≤ m0, 1 ≤ r ≤ m2 ),

and for 1 ≤ l ≤ b,

γ_{k,l} = ( (1/(1 − π0e)) π_{k,l}^{(i,j)} : 1 ≤ i ≤ m0, 1 ≤ j ≤ m1 ).

To emphasize the event that the transaction observes State (k − 1, 0; i, r) immediately after its arrival, we introduce a new initial probability vector

ϕ = (ϕ1, ϕ2, ϕ3, …),

where for k ≥ 1,

ϕk = (γ_{k,0}, 0, 0, …, 0).

In addition, we take

ψ = γ − ϕ.
**Theorem 5** _If the blockchain system is stable, then the first passage time_ ξ|(k,0;i,r) _is a PH distribution of infinite size with an irreducible representation_ (η(k, 0; i, r), H), _where_ **H** _is given in (14), and_

η(k, 0; i, r) = (0, 0, …, 0, (1/(1 − π0e)) π_{k,0}^{(i,r)}, 0, 0, …, 0).

_Also, we have_

$$\mathbf{H}^0 = -\mathbf{H}e = \left(e\otimes T^0,\; e\otimes T^0,\; \ldots,\; e\otimes T^0;\; \frac{b}{b+1}\,e\otimes T^0,\; \frac{b}{b+2}\,e\otimes T^0,\; \ldots\right).$$

_Proof If the blockchain system is stable, then ξ|(k,0;i,r) is the first passage time to the absorbing state of the Markov process H (or {Y(t) : t ≥ 0}) under the initial state Y(0) = (k, 0; i, r). Note that the original Markov process Q given in (1) is irreducible and positive recurrent; thus ξ|(k,0;i,r) is a PH distribution of infinite size with an irreducible representation (η(k, 0; i, r), H). At the same time, a simple computation gives_

$$\mathbf{H}^0 = \left(e\otimes T^0,\; e\otimes T^0,\; \ldots,\; e\otimes T^0;\; \frac{b}{b+1}\,e\otimes T^0,\; \frac{b}{b+2}\,e\otimes T^0,\; \ldots\right).$$

This completes the proof.
Based on Theorem 5, we now extend the first passage time ξ|(k,0;i,r) to ξ|(0,ϕ), the first passage time of the Markov process H with initial probability vector (0, ϕ). The following corollary shows that ξ|(0,ϕ) is a PH distribution of infinite size; its proof is easy and is omitted here.

**Corollary 6** _If the blockchain system is stable, then the first passage time_ ξ|(0,ϕ) _is a PH distribution of infinite size with an irreducible representation_ ((0, ϕ), H), _and_

E[ξ|(0,ϕ)] = −(0, ϕ)H^{−1}e,
Var[ξ|(0,ϕ)] = 2(0, ϕ)H^{−2}e − [(0, ϕ)H^{−1}e]².
The following theorem provides a simple expression for the average transaction–confirmation time E[I] by means of Corollary 6.

**Theorem 7** _If the blockchain queueing system is stable, then the average transaction–confirmation time_ E[I] _is given by_

E[I] = E[ξ|(0,ϕ)] + (1 − ϕe)E[Ŵs],

_where_ Ŵs _is the stationary life time of the PH blockchain-building time with an irreducible representation_ (α, T). _Further, we have_

E[I] = −(0, ϕ)H^{−1}e − (1 − ϕe)ϖT^{−1}e,

_where_ ϖ _is the stationary probability vector of the Markov process_ T + T⁰α.
_Proof We first introduce two basic events:_

Ω = {the transaction observes States (0, 0; i) and (k, 0; i, r), for 1 ≤ i ≤ m0, k ≥ 1, 1 ≤ r ≤ m2, immediately after its arrival}

and

Ω^c = {the transaction observes States (k, l; i, j), for k ≥ 1, 1 ≤ l ≤ b, 1 ≤ i ≤ m0, 1 ≤ j ≤ m1, immediately after its arrival}.

The two events are complementary, since an arriving transaction can observe only these states of the Markov process Q immediately after its arrival. If the blockchain system is stable, then it is easy to compute the probabilities of the two events as follows:

P{Ω} = (0, ϕ)e = ϕe

and

P{Ω^c} = 1 − P{Ω} = 1 − ϕe.

Using the law of total probability, we obtain

$$\begin{aligned}
E[I] &= P\{\Omega\}\,E[I \mid \Omega] + P\{\Omega^c\}\,E[I \mid \Omega^c] \\
&= \phi e\, E\left[\xi|_{(0,\phi)}\right] + (1-\phi e)\,E\left[\hat{W}_s + \xi|_{(0,\phi)}\right] \\
&= E\left[\xi|_{(0,\phi)}\right] + (1-\phi e)\,E[\hat{W}_s] \\
&= -(0,\phi)\mathbf{H}^{-1}e - (1-\phi e)\,\varpi T^{-1}e.
\end{aligned}$$

The proof is completed.
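Since H has infinitely many levels, any direct numerical evaluation of E[I] = −(0, ϕ)H^{−1}e − (1 − ϕe)ϖT^{−1}e must work with a finite approximation. The sketch below assumes the generator has been truncated at some large level K; the truncation itself and the function name are assumptions of this sketch, while the RG factorizations discussed next give the exact computable path.

```python
import numpy as np

def mean_confirmation_time(H_trunc, phi_trunc, varpi, T):
    """Approximate E[I] = -(0, phi) H^{-1} e - (1 - phi e) varpi T^{-1} e,
    where H_trunc is the transient part of the generator truncated at a
    finite level K, and phi_trunc is (0, phi) restricted to the same states."""
    e = np.ones(H_trunc.shape[0])
    mean_first_passage = -phi_trunc @ np.linalg.solve(H_trunc, e)
    # Probability mass of Case two, in which the stationary life time W_s is added.
    residual = 1.0 - phi_trunc.sum()
    return mean_first_passage - residual * (varpi @ np.linalg.solve(T, np.ones(T.shape[0])))
```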
As shown in Theorem 7, a key issue in the study of PH distributions of infinite sizes is whether we can compute the inverse matrix H^{−1} of infinite size. To this end, we use the RG factorizations, given in Li [97], to provide a computable path. In what follows, we give only a brief account of this computation; detailed discussion is left to a future paper.

In fact, it is often very difficult and challenging to compute the inverse of a matrix of infinite size, except for triangular matrices. Fortunately, using the RG factorizations, the infinitesimal generator H can be decomposed into the product of three matrices: two block-triangular matrices and a block-diagonal matrix. Therefore, the RG factorizations play a key role in generalizing PH distributions from finite to infinite dimensions.
Using Subsection 2.2.3 in Chapter 2 of Li [97] (see Pages 88 to 89), we now provide the UL-type _RG factorization of the infinitesimal generator H. It will be seen that the RG factorization of H has an elegant block structure, closely related to the special block characteristics that H inherits from the blockchain system. To this end, we define and compute the R-, U- and G-measures as follows._
**The R‑measure**
Let Rk for k ≥ 0 be the minimal nonnegative solution to the system of nonlinear matrix equations

R0 = B̃0 + R0Ã1 + R0R1⋯R_{b−1}RbA_{b+1},
R1 = A0 + R1Ã1 + R1R2⋯RbR_{b+1}A_{b+1},
R2 = A0 + R2Ã1 + R2R3⋯R_{b+1}R_{b+2}A_{b+1},
⋮
R_{b−1} = A0 + R_{b−1}Ã1 + R_{b−1}Rb⋯R_{2b−2}R_{2b−1}A_{b+1},

and

Rb = A0 + RbÃ1^{(b+1)} + RbR_{b+1}⋯R_{2b−1}R_{2b}A_{b+1},
R_{b+1} = A0 + R_{b+1}Ã1^{(b+2)} + R_{b+1}R_{b+2}⋯R_{2b}R_{2b+1}A_{b+1},
R_{b+2} = A0 + R_{b+2}Ã1^{(b+3)} + R_{b+2}R_{b+3}⋯R_{2b+1}R_{2b+2}A_{b+1},
⋮
**The U‑measure**
Based on the R-measure {Rk, k ≥ 0}, we have

U0 = B̃1 + R0B̃2 + R0R1B̃3 + ⋯ + R0R1⋯R_{b−2}R_{b−1}B̃_{b+1},
U1 = Ã1 + R1R2⋯R_{b−1}RbA_{b+1},
U2 = Ã1 + R2R3⋯RbR_{b+1}A_{b+1},
⋮
Ub = Ã1 + RbR_{b+1}⋯R_{2b−2}R_{2b−1}A_{b+1},

and

U_{b+1} = Ã1^{(b+1)} + R_{b+1}R_{b+2}⋯R_{2b−1}R_{2b}A_{b+1},
U_{b+2} = Ã1^{(b+2)} + R_{b+2}R_{b+3}⋯R_{2b}R_{2b+1}A_{b+1},
U_{b+3} = Ã1^{(b+3)} + R_{b+3}R_{b+4}⋯R_{2b+1}R_{2b+2}A_{b+1},
⋮
**The G‑measure**
Based on the R-measure {Rk, k ≥ 0} and the U-measure {Uk, k ≥ 0}, we have

G_{1,0} = (−U1)^{−1}[B̃2 + R1B̃3 + R1R2B̃4 + ⋯ + R1R2⋯R_{b−2}R_{b−1}B̃_{b+1}],
G_{2,0} = (−U2)^{−1}[B̃3 + R2B̃4 + R2R3B̃5 + ⋯ + R2R3⋯R_{b−2}R_{b−1}B̃_{b+1}],
⋮
G_{b−1,0} = (−U_{b−1})^{−1}[B̃b + R_{b−1}B̃_{b+1}],
G_{b,0} = (−Ub)^{−1}B̃_{b+1},

G_{2,1} = (−U2)^{−1}R2R3⋯R_{b−1}RbA_{b+1},
G_{3,1} = (−U3)^{−1}R3R4⋯R_{b−1}RbA_{b+1},
⋮
G_{b,1} = (−Ub)^{−1}RbA_{b+1},
G_{b+1,1} = (−U_{b+1})^{−1}A_{b+1},

and for k ≥ 3,

G_{k,k−1} = (−Uk)^{−1}RkR_{k+1}⋯R_{k+b−3}R_{k+b−2}A_{b+1},
G_{k+1,k−1} = (−U_{k+1})^{−1}R_{k+1}R_{k+2}⋯R_{k+b−3}R_{k+b−2}A_{b+1},
⋮
G_{k+b−2,k−1} = (−U_{k+b−2})^{−1}R_{k+b−2}A_{b+1},
G_{k+b−1,k−1} = (−U_{k+b−1})^{−1}A_{b+1}.
Based on the _R-, U- and G-measures, the UL-type RG factorization of the infinitesimal generator H is_

**H = (I −** **RU** )U(I − **GL),**

where

$$\mathbf{R}_U = \begin{pmatrix}
0 & R_0 & & & \\
 & 0 & R_1 & & \\
 & & 0 & R_2 & \\
 & & & 0 & R_3 \\
 & & & & \ddots & \ddots
\end{pmatrix}, \qquad
\mathbf{U} = \mathrm{diag}\left(U_0, U_1, U_2, U_3, \ldots\right),$$

and

$$\mathbf{G}_L = \begin{pmatrix}
0 & & & & & & \\
G_{1,0} & 0 & & & & & \\
G_{2,0} & G_{2,1} & 0 & & & & \\
\vdots & \vdots & \ddots & \ddots & & & \\
G_{b,0} & G_{b,1} & \cdots & G_{b,b-1} & 0 & & \\
 & G_{b+1,1} & \cdots & G_{b+1,b-1} & G_{b+1,b} & 0 & \\
 & & \ddots & & \ddots & \ddots & \ddots
\end{pmatrix}.$$

Based on the UL-type RG factorization H = (I − **RU** )U(I − **GL), we obtain**

**H[−][1]** = (I − **GL)[−][1]U[−][1](I −** **RU** )[−][1],

where expressions for the inverse matrices (I − **GL)[−][1], U[−][1] and (I −** **RU** )[−][1] are given in Appendix A.3 of Li [97] on inverses of matrices of infinite size (see Pages 654 to 658).
Once the inverse of the matrix H of infinite size is available, the PH distribution of infinite size can be handled within a computable and feasible framework. In fact, this is very important in the study of stochastic models; see also Li et al. [99] and Takine [100] for more details.
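To see the mechanics of the UL-type RG factorization on a case small enough to verify by hand, the sketch below carries it out for a two-level block matrix and checks H = (I − R_U)U(I − G_L) numerically. The toy generator is randomly generated and only illustrates the identity, not the blockchain model's H.

```python
import numpy as np

# UL-type RG factorization on a two-level block matrix H = [[H00, H01], [H10, H11]]:
#   U1 = H11,  R0 = H01 (-U1)^{-1},  U0 = H00 + R0 H10,  G10 = (-U1)^{-1} H10,
# and then H = (I - R_U) U (I - G_L).
rng = np.random.default_rng(1)
H01 = rng.random((2, 2)); H10 = rng.random((2, 2))
H00 = -np.diag(H01.sum(1) + 1.0)     # diagonally dominant, so blocks are invertible
H11 = -np.diag(H10.sum(1) + 1.0)

R0 = H01 @ np.linalg.inv(-H11)
U0, U1 = H00 + R0 @ H10, H11
G10 = np.linalg.inv(-H11) @ H10

Z, I2 = np.zeros((2, 2)), np.eye(2)
RU = np.block([[Z, R0], [Z, Z]])
U = np.block([[U0, Z], [Z, U1]])
GL = np.block([[Z, Z], [G10, Z]])
H = np.block([[H00, H01], [H10, H11]])
assert np.allclose((np.eye(4) - RU) @ U @ (np.eye(4) - GL), H)
```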
_Remark 4 In general, it is very difficult and challenging to discuss the transaction–confirmation time of any transaction in a blockchain system, for two key reasons: the block service is a class of batch service, and some transactions are chosen into a block by means of the Service-In-Random-Order discipline. For a more general blockchain system, this paper sets up a Markov process with an absorbing state and shows that the transaction–confirmation time is the first passage time of this Markov process. Therefore, the transaction–confirmation time can be discussed by means of the PH distribution of infinite size (corresponding to the first passage time), and an effective algorithm for computing the average transaction–confirmation time is obtained using the RG factorizations of block-structured Markov processes with infinitely many levels. We believe that the RG factorizations of block-structured Markov processes will play a key role in the queueing study of blockchain systems._
**Concluding remarks**
In this paper, we develop a more general framework of block-structured Markov processes in the queueing study of blockchain systems. To do this, we design a two-stage, Service-In-Random-Order and batch service queueing system with MAP transaction arrivals and two stages of PH service times, and discuss key performance measures such as the average stationary number of transactions in the queueing waiting room, the average stationary number of transactions in the block, and the average transaction–confirmation time of any transaction. Note that the study of such performance measures is key to improving blockchain technologies. On the other hand, an original aim of this paper is to generalize the two-stage batch-service queueing model studied in Li et al. [75] both "from exponential to phase-type" service times and "from Poisson to MAP" transaction arrivals. The MAP transaction arrivals and the two stages of PH service times make our queueing model more suitable for various practical conditions of blockchain systems with key factors such as the mining processes, the reward incentive, the consensus mechanism, the block generation, and the blockchain building.
Using the matrix-geometric solution, we first obtain a sufficient stability condition of the blockchain system. Then, we provide simple expressions for two key performance measures: the average stationary number of transactions in the queueing waiting room, and the average stationary number of transactions in the block. Finally, to deal with the transaction–confirmation time, we develop a computational technique for the first passage times by means of both the PH distributions of infinite sizes and the RG factorizations. In addition, we use numerical examples to verify the computability of our theoretical results. Along these lines, we will continue our future research in several interesting directions:
- Developing effective algorithms for computing the average transaction–confirmation times in terms of the RG factorizations.
- Analyzing multiple classes of transactions in blockchain systems, in which the transactions are processed in the block-generation and blockchain-building processes according to a priority service discipline.
- When the arrivals of transactions form a renewal process, and/or the block-generation times and/or the blockchain-building times follow general probability distributions, an interesting direction for future research is to focus on fluid and diffusion approximations of blockchain systems.
- Setting up reward functions with respect to cost structures, transaction fees, mining rewards, consensus mechanisms, security and so forth. It will be very interesting in our future study to develop stochastic optimization, Markov decision processes and stochastic game models for blockchain systems.
**Acknowledgements**
The authors are grateful to the editor and two anonymous referees for their constructive comments and suggestions, which helped the authors substantially improve the presentation of this manuscript. Q.L. Li was supported by the National Natural Science Foundation of China under grant No. 71671158, and the Natural Science Foundation of Hebei Province in China under Grant No. G2017203277.
**Authors’ contributions**
QL provided the main theoretical analysis, contributed ideas on content, and worked on the writing. JY and YX prepared the TeX file of the present version. YX ran the numerical experiments. JY, FQ and HB checked some mathematical derivations. All authors read and approved the final manuscript.
**Availability of data and materials**
Not applicable.
**Competing interests**
The authors declare that they have no competing interests.
**Author details**
1 School of Economics and Management, Beijing University of Technology, Beijing 100124, China. 2 School of Economics and Management, Yanshan University, Qinhuangdao 066004, China. 3 School of Science, Yanshan University, Qinhuangdao 066004, China.
Received: 23 April 2019 Accepted: 17 June 2019
**References**
1. Nakamoto S. Bitcoin: a peer-to-peer electronic cash system, working paper; 2008. p. 1–9.
2. Wattenhofer R. The science of the blockchain. California: CreateSpace Independent Publishing Platform; 2016.
3. Prusty N. Building blockchain projects. Birmingham: Packt Publishing Ltd; 2017.
4. Drescher D. Blockchain basics: a non-technical introduction in 25 steps. Berkely: Apress; 2017.
5. Bashir I. Mastering blockchain: distributed ledger technology, decentralization, and smart contracts explained.
Birmingham: Packt Publishing Ltd; 2018.
6. Parker JF. Blockchain technology simplified: the complete guide to blockchain management, mining, trading and
investing cryptocurrency. California: CreateSpace Independent Publishing Platform; 2018.
7. Zheng Z, Xie S, Dai H-N, Chen X, Wang H. Blockchain challenges and opportunities: a survey. Int J Web Grid Serv.
2018;14(4):352–75.
8. Constantinides P, Henfridsson O, Parker GG. Introduction-platforms and infrastructures in the digital age. Inf Syst
Res. 2018;29(2):381–400.
9. Yli-Huumo J, Ko D, Choi S, Park S, Smolander K. Where is current research on blockchain technology?—A system‑
atic review. PLoS ONE. 2016;11(10):0163477.
10. Plansky J, O’Donnell T, Richards K. A strategist’s guide to blockchain. PwC report; 2016. p. 1–12.
11. Lindman J, Tuunainen VK, Rossi M. Opportunities and risks of blockchain technologies—a research agenda. In:
Proceedings of the 50th Hawaii international conference on system sciences; 2017. p. 1533–42.
12. Risius M, Spohrer K. A blockchain research framework. Bus Inf Syst Eng. 2017;59(6):385–409.
13. Parker T. Smart contracts: the ultimate guide to blockchain smart contracts—learn how to use smart contracts for
cryptocurrency exchange!. California: CreateSpace Independent Publishing Platform; 2016.
14. Bartoletti M, Pompianu L. An empirical analysis of smart contracts: platforms, applications, and design patterns. In:
International conference on financial cryptography and data security. Springer: New York; 2017. p. 494–509.
[15. Alharby M, van Moorsel A. Blockchain-based smart contracts: a systematic mapping study. arXiv preprint arXiv](http://arxiv.org/abs/1710.06372)
[:1710.06372. 2017.](http://arxiv.org/abs/1710.06372)
16. Magazzeni D, McBurney P, Nash W. Validation and verification of smart contracts: a research agenda. Computer.
2017;50(9):50–7.
17. Diedrich H. Ethereum: blockchains, digital assets, smart contracts, decentralized autonomous organizations.
Sydney: Wildfire Publishing; 2016.
18. Dannen C. Introducing ethereum and solidity: foundations of cryptocurrency and blockchain programming for
beginners. Berkely: Apress; 2017.
19. Atzei N, Bartoletti M, Cimoli T. A survey of attacks on ethereum smart contracts (sok). In: International conference
on principles of security and trust. Springer: New York; 2017. p. 164–86.
20. Antonopoulos AM, Wood G. Mastering ethereum: building smart contracts and DApps. California: O’Reilly Media;
2018.
21. Wang W, Hoang DT, Xiong Z, Niyato D, Wang P, Hu P, Wen Y. A survey on consensus mechanisms and mining
[management in blockchain networks. arXiv preprint arXiv:1805.02707. 2018.](http://arxiv.org/abs/1805.02707)
22. Debus J. Consensus methods in blockchain systems. Frankfurt School of Finance & Management, Blockchain
Center, technical report; 2017. p. 1–58.
23. Pass R, Seeman L, Shelat A. Analysis of the blockchain protocol in asynchronous networks. In: Annual international
conference on the theory and applications of cryptographic techniques. Springer: New York; 2017. p. 643–73.
24. Pass R, Shi E. Hybrid consensus: efficient consensus in the permissionless model. In: 31st international symposium
on distributed computing (DISC 2017). Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik; 2017. p. 1–16.
[25. Cachin C, Vukolić M. Blockchain consensus protocols in the wild. arXiv preprint arXiv:1707.01873. 2017.](http://arxiv.org/abs/1707.01873)
26. Karame G, Audroulaki E. Bitcoin and blockchain security. Massachusetts: Artech House; 2016.
27. Lin I-C, Liao T-C. A survey of blockchain security issues and challenges. Int J Netw Secur. 2017;19(5):653–9.
28. Joshi AP, Han M, Wang Y. A survey on security and privacy issues of blockchain technology. Math Found Comput.
2018;1(2):121–47.
29. Swan M. Blockchain: blueprint for a new economy. California: O’Reilly Media; 2015.
30. Catalini C, Gans J. Some simple economics of the blockchain. Cambridge: National Bureau of Economic Research;
2016. p. 1–29.
31. Davidson S, De Filippi P, Potts J. Economics of blockchain. In: Public choice conference; 2016. p. 1–23.
32. Bheemaiah K. The blockchain alternative: rethinking macroeconomic policy and economic theory. Berkely: Apress;
2017.
33. Beck R, Müller-Bloch C, King JL. Governance in the blockchain economy: a framework and research agenda. J
Assoc Inf Syst. 2018;19(10):1020–34.
34. Biais B, Casamatta C, Bisire C, Bouvard M. The blockchain folk theorem. Rev Financ Stud. 2019;32(5):1662–715.
35. Kiayias A, Koutsoupias E, Kyropoulou M, Tselekounis Y. Blockchain mining games. In: Proceedings of the 2016 ACM
conference on economics and computation. ACM: New York; 2016. p. 365–82.
36. Abadi J, Brunnermeier M. Blockchain economics. New York: National Bureau of Economic Research; 2018. p. 1–82.
37. Foroglou G, Tsilidou A-L. Further applications of the blockchain. In: 12th student conference on managerial sci‑
ence and technology; 2015. p. 1–9.
38. Bahga A, Madisetti V. Blockchain applications: a hands-on approach. Blacksburg: VPT; 2017.
39. Xu X, Weber I, Staples M. Architecture for blockchain applications. Basel: Springer; 2019.
40. Tsai W-T, Blower R, Zhu Y, Yu L. A system view of financial blockchains. In: 2016 IEEE symposium on service-ori‑
ented system engineering (SOSE). IEEE: New York; 2016. p. 450–457.
41. Nguyen QK. Blockchain-a financial technology for future sustainable development. In: 2016 3rd international
conference on green technology and sustainable development (GTSD). IEEE: New York; 2016. p. 51–4.
42. Tapscott A, Tapscott D. How blockchain is changing finance. Harv Bus Rev. 2017;1(9):2–5.
43. Treleaven P, Brown RG, Yang D. Blockchain technology in finance. Computer. 2017;50(9):14–7.
44. Casey M, Crane J, Gensler G, Johnson S, Narula N. The impact of blockchain technology on finance: a catalyst for
change. Geneva: International Center for Monetary and Banking Studies (ICMB); 2018.
45. Mougayar W, Buterin V. The business blockchain: promise, practice, and application of the next internet technol‑
ogy. Hoboken: Wiley; 2016.
46. Morabito V. Business innovation through blockchain. Milan: Springer; 2017.
47. Fleming S. Blockchain technology and DevOps: introduction and its impact on business ecosystem. EU: Stephen
Fleming via PublishDrive; 2017.
48. Beck R, Avital M, Rossi M, Thatcher JB. Blockchain technology in business and information systems research. Bus Inf
Syst Eng. 2017;59(6):381–4.
49. Nowiński W, Kozma M. How can blockchain technology disrupt the existing business models? Entrep Bus Econ
Rev. 2017;5(3):173–88.
50. Mendling J, Weber I, Aalst WVD, Brocke JV, Cabanillas C, Daniel F, Debois S, Ciccio CD, Dumas M, Dustdar S,
et al. Blockchains for business process management-challenges and opportunities. ACM Trans Manag Inf Syst.
2018;9(1):4.
51. Hofmann E, Strewe UM, Bosia N. Supply chain finance and blockchain technology: the case of reverse securitisa‑
tion. Heidelberg: Springer; 2017.
52. Korpela K, Hallikas J, Dahlberg T. Digital supply chain transformation toward blockchain integration. In: Proceed‑
ings of the 50th Hawaii international conference on system sciences; 2017.
53. Kim HM, Laskowski M. Toward an ontology-driven blockchain design for supply-chain provenance. Intell Syst
Account Financ Manag. 2018;25(1):18–27.
54. Saberi S, Kouhizadeh M, Sarkis J, Shen L. Blockchain technology and its relationships to sustainable supply chain
management. Int J Prod Res. 2018;57(7):2117–35.
55. Petersen M, Hackius N, von See B. Mapping the sea of opportunities: blockchain in supply chain and logistics. IT
Inf Technol. 2018;60(5–6):263–71.
56. Sternberg H, Baruffaldi G. Chains in chains–logic and challenges of blockchains in supply chains. In: Proceedings
of the 51st annual Hawaii international conference on system sciences; 2018. p. 3936–43.
57. Dujak D, Sajter D. Blockchain applications in supply chain. In: SMART supply network. Springer: New York; 2019. p.
21–46.
58. Conoscenti M, Vetro A, Martin JD. Blockchain for the internet of things: a systematic literature review. In: 2016 IEEE/
ACS 13th international conference of computer systems and applications (AICCSA). IEEE: New Yrok; 2016. p. 1–6.
59. Bahga A, Madisetti VK. Blockchain platform for industrial internet of things. J Softw Eng Appl. 2016;9(10):533–46.
[60. Dorri A, Kanhere SS, Jurdak R. Blockchain in internet of things: challenges and solutions. arXiv preprint arXiv](http://arxiv.org/abs/1608.05187)
[:1608.05187. 2016.](http://arxiv.org/abs/1608.05187)
61. Christidis K, Devetsikiotis M. Blockchains and smart contracts for the internet of things. IEEE Access.
2016;4:2292–303.
62. Zhang Y, Wen J. The IoT electric business model: using blockchain technology for the internet of things. Peer-to
Peer Netw Appl. 2017;10(4):983–94.
63. Huckle S, Bhattacharya R, White M, Beloff N. Internet of things, blockchain and shared economy applications.
Procedia Comput Sci. 2016;98:461–6.
64. Hawlitschek F, Notheisen B, Teubner T. The limits of trust-free systems: a literature review on blockchain technol‑
ogy and trust in the sharing economy. Electron Commer Res Appl. 2018;29:50–63.
65. De Filippi P. What blockchain means for the sharing economy. Harvard business review digital articles,
[2017. pp 2-5. http://search.ebscohost.com.ezproxy.is.ed.ac.uk/login.aspx?direct=true&db=bth&AN=12208](http://search.ebscohost.com.ezproxy.is.ed.ac.uk/login.aspx?direct=true&db=bth&AN=122087609&site=ehost-live)
[7609&site=ehost-live](http://search.ebscohost.com.ezproxy.is.ed.ac.uk/login.aspx?direct=true&db=bth&AN=122087609&site=ehost-live)
66. Pazaitis A, De Filippi P, Kostakis V. Blockchain and value systems in the sharing economy: the illustrative case of
backfeed. Technol Forecast Soc Change. 2017;125:105–15.
67. Mettler M. Blockchain technology in healthcare: The revolution starts here. In: 2016 IEEE 18th international confer‑
ence on e-health networking, applications and services (Healthcom). IEEE: New York; 2016. p. 1–3.
68. Rabah K. Challenges & opportunities for blockchain powered healthcare systems: a review. Mara Res J Med Health
Sci. 2017;1(1):45–52 (ISSN 2523-5680).
69. Griggs KN, Ossipova O, Kohlios CP, Baccarini AN, Howson EA, Hayajneh T. Healthcare blockchain system using
smart contracts for secure automated remote patient monitoring. J Med Syst. 2018;42(7):130.
70. Wang S, Wang J, Wang X, Qiu T, Yuan Y, Ouyang L, Guo Y, Wang F-Y. Blockchain-powered parallel healthcare sys‑
tems based on the ACP approach. IEEE Trans Comput Soc Syst. 2018;99:1–9.
71. Oh S-C, Kim M-S, Park Y, Roh G-T, Lee C-W. Implementation of blockchain-based energy trading system. Asia Pac J
Innov Entrep. 2017;11(3):322–34.
72. Aitzhan NZ, Svetinovic D. Security and privacy in decentralized energy trading through multi-signatures, block‑
chain and anonymous messaging streams. IEEE Trans Dependable Secur Comput. 2018;15(5):840–52.
73. Noor S, Yang W, Guo M, van Dam KH, Wang X. Energy demand side management within micro-grid networks
enhanced by blockchain. Appl Energy. 2018;228:1385–98.
74. Wu J, Tran N. Application of blockchain technology in sustainable energy systems: an overview. Sustainability.
2018;10(9):3067.
75. Li Q-L, Ma J-Y, Chang Y-X. Blockchain queue theory. In: International conference on computational social networks.
Springer: New York; 2018. p. 25–40.
[76. Kasahara S, Kawahara J. Effect of bitcoin fee on transaction–confirmation process. arXiv preprint arXiv:1604.00103.](http://arxiv.org/abs/1604.00103)
2016.
77. Kawase Y, Kasahara S. Transaction–confirmation time for bitcoin: a queueing analytical approach to blockchain
mechanism. In: International conference on queueing theory and network applications. Springer: New York; 2017.
p. 75–88.
78. Ricci S, Ferreira E, Menasche DS, Ziviani A, Souza JE, Vieira AB. Learning blockchain delays: a queueing theory
approach. ACM SIGMETRICS Perform Eval Rev. 2019;46(3):122–5.
79. Memon RA, Li JP, Ahmed J. Simulation model for blockchain systems using queuing theory. Electronics.
2019;8(2):234.
[80. Bowden R, Keeler HP, Krzesinski AE, Taylor PG. Block arrivals in the bitcoin blockchain. arXiv preprint arXiv](http://arxiv.org/abs/1801.07447)
[:1801.07447. 2018.](http://arxiv.org/abs/1801.07447)
81. Papadis N, Borst S, Walid A, Grissa M, Tassiulas L. Stochastic models and wide-area network measurements for
blockchain design and analysis. In: IEEE INFOCOM 2018-IEEE conference on computer communications. IEEE: New
York; 2018. p. 2546–54.
[82. Jourdan M, Blandin S, Wynter L, Deshpande P. A probabilistic model of the bitcoin blockchain. arXiv preprint arXiv](http://arxiv.org/abs/1812.05451)
[:1812.05451. 2018.](http://arxiv.org/abs/1812.05451)
83. Bolch G, Greiner S, de Meer H, Trivedi KS. Queueing networks and Markov chains: modeling and performance
evaluation with computer science applications. New York: Wiley; 2006.
84. Eyal I, Sirer EG. Majority is not enough: Bitcoin mining is vulnerable. Commun ACM. 2018;61(7):95–102.
85. Nayak K, Kumar S, Miller A, Shi E. Stubborn mining: generalizing selfish mining and combining with an eclipse
attack. In: 2016 IEEE European symposium on security and privacy (EuroS&P). IEEE: New York; 2016. p. 305–20.
86. Carlsten M. The impact of transaction fees on bitcoin mining strategies. Ph.D. thesis, Princeton University; 2016.
87. Göbel J, Keeler HP, Krzesinski AE, Taylor PG. Bitcoin blockchain dynamics: the selfish-mine strategy in the presence
of propagation delay. Perform Eval. 2016;104:23–41.
88. Kiffer L, Rajaraman R, et al. A better method to analyze blockchain consistency. In: Proceedings of the 2018 ACM
SIGSAC conference on computer and communications security. ACM: New York; 2018. p. 729–44.
89. Huang D, Ma X, Zhang S. Performance analysis of the raft consensus algorithm for private blockchains. IEEE Trans
[Syst Man Cybern Syst. 2019;. https://doi.org/10.1109/TSMC.2019.2895471.](https://doi.org/10.1109/TSMC.2019.2895471)
90. Sapirshtein A, Sompolinsky Y, Zohar A. Optimal selfish mining strategies in bitcoin. In: International conference on
financial cryptography and data security. Springer: New York; 2016. p. 515–32.
[91. Sompolinsky Y, Zohar A. Bitcoin’s security model revisited. arXiv preprint arXiv:1605.09193. 2016.](http://arxiv.org/abs/1605.09193)
92. Gervais A, Karame GO, Wüst K, Glykantzis V, Ritzdorf H, Capkun S. On the security and performance of proof of
work blockchains. In: Proceedings of the 2016 ACM SIGSAC conference on computer and communications secu‑
rity. ACM: New York; 2016. p. 3–16.
[93. Goffard P-O. Fraud risk assessment within blockchain transactions. Working paper or preprint 2019. https://hal.](https://hal.archives-ouvertes.fr/hal-01716687)
[archives-ouvertes.fr/hal-01716687.](https://hal.archives-ouvertes.fr/hal-01716687)
[94. Jang J, Lee H-N. Profitable double-spending attacks. arXiv preprint arXiv:1903.01711. 2019.](http://arxiv.org/abs/1903.01711)
95. Frolkova M, Mandjes M. A bitcoin-inspired infinite-server model with a random fluid limit. Stoch Models. 2019;.
[https://doi.org/10.1080/15326349.2018.1559739.](https://doi.org/10.1080/15326349.2018.1559739)
[96. King C. The fluid limit of a random graph model for a shared ledger. arXiv preprint arXiv:1902.05050. 2019.](http://arxiv.org/abs/1902.05050)
97. Li Q-L. Constructive Computation in stochastic models with applications: the RG-factorizations. Berlin: Springer;
2010.
98. Neuts MF. Matrix-geometric solutions in stochastic models: an algorithmic approach. Maryland: Johns Hopkins
University; 1981.
99. Li Q-L, Lian Z, Liu L. An RG-factorization approach for a BMAP/M/1 generalized processor-sharing queue. Stoch
Models. 2005;21(2–3):507–30.
100. Takine T. Analysis and computation of the stationary distribution in a special class of Markov chains of level
dependent M/G/1-type and its application to BMAP/M/∞ and BMAP/M/c+M queues. Queueing Syst.
2016;84(1/2):49–77.
**Publisher’s Note**
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
-----
| 23,115
|
{
"disclaimer": "Notice: Paper or abstract available at https://arxiv.org/abs/1904.03598, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "CCBY",
"status": "HYBRID",
"url": "https://computationalsocialnetworks.springeropen.com/track/pdf/10.1186/s40649-019-0066-1"
}
| 2,019
|
[
"JournalArticle"
] | true
| 2019-04-07T00:00:00
|
[
{
"paperId": "f8816fa8eb220711cb627fe5333fcb2e7725a1f2",
"title": "Profitable Double-Spending Attacks"
},
{
"paperId": "0d2faeecae05271bda3f95ebd8201c0e71bb8483",
"title": "Architecture for Blockchain Applications"
},
{
"paperId": "7dce801b2b13001d0d3b0319c550ee1977e456df",
"title": "Simulation Model for Blockchain Systems Using Queuing Theory"
},
{
"paperId": "198547dd96f25aeb36a927f4b75d6badb888f797",
"title": "The fluid limit of a random graph model for a shared ledger"
},
{
"paperId": "4ac30cd8e931b1341e24e269f07c89615bcfa1dd",
"title": "Learning Blockchain Delays"
},
{
"paperId": "93632baddaea54a6d6c4f6e5f4377c6affa357d3",
"title": "Fraud risk assessment within blockchain transactions"
},
{
"paperId": "de27111fcabdf6cd639bb01d3fd9a79ebc87da04",
"title": "Blockchain: Basics"
},
{
"paperId": "76ddcbffd24d6431e08c5db823cc54220a3c1764",
"title": "Blockchain Queue Theory"
},
{
"paperId": "c1f603c60ee0c7b919563ba66a9f94cf6db2d0a6",
"title": "Blockchain Economics"
},
{
"paperId": "b525b9cc5a48cc5f485957edc65dce94207bb1d4",
"title": "A Probabilistic Model of the Bitcoin Blockchain"
},
{
"paperId": "2e82b8539af92b4af1f5c1c59dcce9d31dcefccc",
"title": "Blockchain technology and its relationships to sustainable supply chain management"
},
{
"paperId": "305edd92f237f8e0c583a809504dcec7e204d632",
"title": "Blockchain challenges and opportunities: a survey"
},
{
"paperId": "37b544f90be7595b757c914656932149f2c71d67",
"title": "Mapping the sea of opportunities: Blockchain in supply chain and logistics"
},
{
"paperId": "9fd59c2ae6bd74e4918a42d58845e87b7bcca9e5",
"title": "A Better Method to Analyze Blockchain Consistency"
},
{
"paperId": "f1abdb39568305750bf8682177c19079213d498f",
"title": "Energy Demand Side Management within micro-grid networks enhanced by blockchain"
},
{
"paperId": "9978f6a9ff6ebbbf6cf134ea9324da6271f3e07d",
"title": "Security and Privacy in Decentralized Energy Trading Through Multi-Signatures, Blockchain and Anonymous Messaging Streams"
},
{
"paperId": "b6b15293d4f0a36aa155671023062ea3fc22e64a",
"title": "Application of Blockchain Technology in Sustainable Energy Systems: An Overview"
},
{
"paperId": "81c643888f6b1d8447a917f69466d0a726711834",
"title": "Blockchain-Powered Parallel Healthcare Systems Based on the ACP Approach"
},
{
"paperId": "d5df33b8a6314b821d6820d8e96246f87451a87e",
"title": "Supply chain finance and blockchain technology – the case of reverse securitisation"
},
{
"paperId": "6ac716d55d74f4000a3b2f07f83c27a6ca5c21c6",
"title": "Performance Analysis of the Raft Consensus Algorithm for Private Blockchains"
},
{
"paperId": "adb3c2df0657579bf0f076eb1d564d6ab0e4032f",
"title": "Blockchain Applications in Supply Chain"
},
{
"paperId": "6d661299a8207a4bff536494cec201acee3c6c1c",
"title": "Healthcare Blockchain System Using Smart Contracts for Secure Automated Remote Patient Monitoring"
},
{
"paperId": "383a9c6e5cb52edd9f94b5844fac50fdce178bc1",
"title": "Introduction - Platforms and Infrastructures in the Digital Age"
},
{
"paperId": "01ea88051f84c77a386294fc715a71198d87a9b9",
"title": "A Survey on Consensus Mechanisms and Mining Strategy Management in Blockchain Networks"
},
{
"paperId": "3990db260b4ef0882a83304ac88d8787ab03aea3",
"title": "A Survey on Consensus Mechanisms and Mining Management in Blockchain Networks"
},
{
"paperId": "e68053f7e09e4d0a665fd03729f4a71c80d42538",
"title": "A survey on security and privacy issues of blockchain technology"
},
{
"paperId": "aa206d7565bbeb8798622ed311c24b19ed7a75d2",
"title": "The limits of trust-free systems: A literature review on blockchain technology and trust in the sharing economy"
},
{
"paperId": "e21d88158dde576ec45a220055caf912c6adb7d0",
"title": "Stochastic Models and Wide-Area Network Measurements for Blockchain Design and Analysis"
},
{
"paperId": "0eecbe4aef723dd7f35c2aabae623aebe87a038f",
"title": "Block arrivals in the Bitcoin blockchain"
},
{
"paperId": "00f40a8145d665c1c9d873079c7dfbae1ed1f30c",
"title": "Blockchain Technology Simplified: The Complete Guide to Blockchain Management, Mining, Trading and Investing Cryptocurrency"
},
{
"paperId": "c0009e9ab39616a8e04ecd05718efcf058637f86",
"title": "The Blockchain Folk Theorem"
},
{
"paperId": "ae3ca0b1b2db70a148d643c520d80f50f9220cc5",
"title": "Chains in Chains - Logic and Challenges of Blockchains in Supply Chains"
},
{
"paperId": "627ecbf3559bcf54914abcdd16a97ecb2987934f",
"title": "Implementation of blockchain-based energy trading system"
},
{
"paperId": "70d66233ba53dc4bc810970b172fb6deb4b080dc",
"title": "Blockchain and Value Systems in the Sharing Economy: The Illustrative Case of Backfeed"
},
{
"paperId": "d6b5a63557c890cf363528618512c8c9c261a41e",
"title": "A Blockchain Research Framework"
},
{
"paperId": "092b82b043985b053abc3d385794bab77ccf2af4",
"title": "Blockchain Technology in Business and Information Systems Research"
},
{
"paperId": "ab2c36d77953212f87b4671f53d0ec94bb80f0c3",
"title": "Challenges & Opportunities for Blockchain Powered Healthcare Systems: A Review"
},
{
"paperId": "a59bcf23eadb7fee1b8f3810402910e21fd3496a",
"title": "Blockchain Technology in Finance"
},
{
"paperId": "6c27af4cf83f2139a1ffdab9f9e26577ece2d965",
"title": "Validation and Verification of Smart Contracts: A Research Agenda"
},
{
"paperId": "cf1216d4421a4e4225fffd96505ec679376f6758",
"title": "How Can Blockchain Technology Disrupt the Existing Business Models"
},
{
"paperId": "f61edb500c023c4c4ef665bd7ed2423170773340",
"title": "A Survey of Blockchain Security Issues and Challenges"
},
{
"paperId": "5a0e7eebceec42686d39b42de17820f0781a61aa",
"title": "Blockchain-based Smart Contracts: A Systematic Mapping Study"
},
{
"paperId": "d5f18dbe2a05051f1ed16e2e68a6366ed4f8edef",
"title": "Transaction-Confirmation Time for Bitcoin: A Queueing Analytical Approach to Blockchain Mechanism"
},
{
"paperId": "dd6c955cd84524d6a11b634204a4fbe83acf6e83",
"title": "Supply Chain Finance and Blockchain Technology: The Case of Reverse Securitisation"
},
{
"paperId": "26a286e447cd78227eddc801a5d9816e0215834b",
"title": "Blockchain Consensus Protocols in the Wild"
},
{
"paperId": "161c24b98ce3af2c0f8a5e96d5055a367b81801e",
"title": "Analysis of the Blockchain Protocol in Asynchronous Networks"
},
{
"paperId": "eb4b5d75815b9ef53ea29fcbfbd6e248b63907c7",
"title": "Building Blockchain Projects"
},
{
"paperId": "aec843c0f38aff6c7901391a75ec10114a3d60f8",
"title": "A Survey of Attacks on Ethereum Smart Contracts (SoK)"
},
{
"paperId": "267ea5b34dcc3c3967a5e09baa3c659e5b2c2d10",
"title": "Blockchains for Business Process Management - Challenges and Opportunities"
},
{
"paperId": "e74dc90e465a07b53254311b5b6d1ae0488d3cc5",
"title": "A Bitcoin-inspired infinite-server model with a random fluid limit"
},
{
"paperId": "73fd4caae8b4ce04df63437fb99e17f2cc1b6b39",
"title": "Financial Cryptography and Data Security"
},
{
"paperId": "f4368a09a9e69e6777954a90468b7d7a21d16361",
"title": "Introducing Ethereum and Solidity: Foundations of Cryptocurrency and Blockchain Programming for Beginners"
},
{
"paperId": "d809b1a1c987e13a2fbc466b87d95002b6198526",
"title": "Blockchain Basics: A Non-Technical Introduction in 25 Steps"
},
{
"paperId": "4965b91fe4112a13e7d7dc3ce49c2860171509c7",
"title": "Blockchain Applications: A Hands-On Approach"
},
{
"paperId": "29c794cd8f1a38e7c089196065374149076f0aac",
"title": "Business Innovation Through Blockchain: The B³ Perspective"
},
{
"paperId": "6ff218ab03660c71e15b3225ee7a501c0fa5beb6",
"title": "Digital Supply Chain Transformation toward Blockchain Integration"
},
{
"paperId": "07079bc43ae724c864cb52458ef554d991c00977",
"title": "Opportunities and Risks of Blockchain Technologies (Dagstuhl Seminar 17132)"
},
{
"paperId": "70a2b62fc7bdcf8cd356f7ca29e8ac523e02caa9",
"title": "Some simple economics of the blockchain"
},
{
"paperId": "bfbce9ae7fd2828c7ca6ecbbe6c46ddc7d5e3e75",
"title": "Blockchain - A Financial Technology for Future Sustainable Development"
},
{
"paperId": "8b32309a7730de87a02e38c7262307245dca5274",
"title": "On the Security and Performance of Proof of Work Blockchains"
},
{
"paperId": "628c2bcfbd6b604e2d154c7756840d3a5907470f",
"title": "Blockchain Platform for Industrial Internet of Things"
},
{
"paperId": "35ce7c6e65f77e1b09dfc38243c10023642b9e46",
"title": "Where Is Current Research on Blockchain Technology?—A Systematic Review"
},
{
"paperId": "ac013d1d21a659da4873164c43d005416e1bce7a",
"title": "Internet of Things, Blockchain and Shared Economy Applications"
},
{
"paperId": "edee7fe2e384c54a97e48ec72c35c15bdd8a263b",
"title": "Analysis and computation of the stationary distribution in a special class of Markov chains of level-dependent M/G/1-type and its application to BMAP/M/$$\\infty $$∞ and BMAP/M/c+M queues"
},
{
"paperId": "3ffe97cf62c2b58b97d41664d30cee5ac12a7c57",
"title": "Smart Contracts: The Ultimate Guide To Blockchain Smart Contracts - Learn How To Use Smart Contracts For Cryptocurrency Exchange!"
},
{
"paperId": "310e677ce23004fdf0a549c2cfda2ef15420d6ec",
"title": "Blockchain technology in healthcare: The revolution starts here"
},
{
"paperId": "e6c3af91b191a496506b947c77fd28c836a5b31b",
"title": "Bitcoin and Blockchain Security"
},
{
"paperId": "451729b3faedea24771ac4aadbd267146688db9b",
"title": "Blockchain in internet of things: Challenges and Solutions"
},
{
"paperId": "6ec8dea8891e1d313b2640299acbadede6d77842",
"title": "Blockchain Mining Games"
},
{
"paperId": "7bed3507a73099b5bb55be17fe3d436c82e39550",
"title": "Bitcoin's Security Model Revisited"
},
{
"paperId": "c998aeb12b78122ec4143b608b517aef0aa2c821",
"title": "Blockchains and Smart Contracts for the Internet of Things"
},
{
"paperId": "f572bcaa97e36d79e0cd01fb18dadb2f58eebebd",
"title": "The IoT electric business model: Using blockchain technology for the internet of things"
},
{
"paperId": "79db23f865415dc8a7521e1225160f9be038cf9f",
"title": "Effect of Bitcoin fee on transaction-confirmation process"
},
{
"paperId": "40406aeaacc4a460ced249862147655b66ee00d2",
"title": "Stubborn Mining: Generalizing Selfish Mining and Combining with an Eclipse Attack"
},
{
"paperId": "13fca50483e298e931e1b804e95089c26a773cd9",
"title": "Economics of Blockchain"
},
{
"paperId": "eb041c6bdaac9cad03edbedfe896b2f3f443d155",
"title": "A System View of Financial Blockchains"
},
{
"paperId": "c3d861ed17f6a4440e5102bc45fb5d843620018d",
"title": "The Science of the Blockchain"
},
{
"paperId": "3e77e513b888a1486f8f563656b8db72021b44b7",
"title": "Optimal Selfish Mining Strategies in Bitcoin"
},
{
"paperId": "1bda4239308c6dcc7158c34204157d77f5f5b384",
"title": "Bitcoin blockchain dynamics: The selfish-mine strategy in the presence of propagation delay"
},
{
"paperId": "97fddbbfd681bce9eeb8e0a013353b4d5b2ba0db",
"title": "Blockchain: Blueprint for a New Economy"
},
{
"paperId": "7bf81e964a7c20d829c1225685ae138bf1489c99",
"title": "Majority is not enough"
},
{
"paperId": "473024d04ad23507761761d1591d1beb559862ef",
"title": "Constructive Computation in Stochastic Models with Applications: The RG-Factorizations"
},
{
"paperId": "f861d8e4bcda72cbd821454ffed2d20be8ff5e85",
"title": "Queueing Networks and Markov Chains (Modeling and Performance Evaluation With Computer Science Applications)"
},
{
"paperId": "f7778a9b1a0e460d38ba03f9e4958dc6abf6c474",
"title": "An RG-Factorization Approach for a BMAP/M/1 Generalized Processor-Sharing Queue"
},
{
"paperId": "2ed91e95d0ed5ec03d3ab28b7d784b7e544b3cca",
"title": "Queueing Networks and Markov Chains - Modeling and Performance Evaluation with Computer Science Applications, Second Edition"
},
{
"paperId": "d14d99312cd81321de0984cd8fd831b4b9336440",
"title": "Matrix-geometric solutions in stochastic models - an algorithmic approach"
},
{
"paperId": null,
"title": "YX: Blockchain queue theory. In: Computational Data and Social Networks, Lecture Notes in Computer Science book series (LNCS, volume 11280)"
},
{
"paperId": "78d24471d60cff29cd0421e5473d4cb31fc71555",
"title": "Governance in the Blockchain Economy: A Framework and Research Agenda"
},
{
"paperId": "44ee1bf827396f8a08f54be78e1b868c11de23bc",
"title": "Toward an ontology-driven blockchain design for supply-chain provenance"
},
{
"paperId": null,
"title": "M: Blockchain economics. National Bureau of Economic Research"
},
{
"paperId": null,
"title": "Mastering Blockchain: Distributed Ledger Technology, Decentralization, and Smart Contracts Explained"
},
{
"paperId": null,
"title": "Building Smart Contracts and Dapps"
},
{
"paperId": null,
"title": "The Impact of Blockchain Technology on Finance: A Catalyst for Change. International Center for Monetary and Banking Studies (2018)"
},
{
"paperId": "1a0ce68690342ce3427b768b46c11e8d9645d5d3",
"title": "The Blockchain Alternative"
},
{
"paperId": "a6f946fa13adac9a06f8354d94deb0fc201767dd",
"title": "Business Innovation Through Blockchain"
},
{
"paperId": "a7ec2e290c7d6a91229bf978d79a45134752fd7a",
"title": "Introducing Ethereum and Solidity"
},
{
"paperId": null,
"title": "Consensus methods in blockchain systems. Frankfurt School of Finance & Management, Blockchain Center, technical report"
},
{
"paperId": null,
"title": "P: What blockchain means for the sharing economy"
},
{
"paperId": "16578ec325c1139c85d35ace62d265420f1ae72f",
"title": "Hybrid Consensus: Efficient Consensus in the Permissionless Model"
},
{
"paperId": null,
"title": "A strategist’s guide to blockchain"
},
{
"paperId": null,
"title": "The impact of transaction fees on bitcoin mining strategies"
},
{
"paperId": null,
"title": "Ethereum: Blockchains, Digital Assets, Smart Contracts"
},
{
"paperId": null,
"title": "The Business Blockchain: Promise, Practice, and Application of the Next Internet Technology"
},
{
"paperId": "0f9a089adfdf5df3cee4e17b20a0d1f597f6bcac",
"title": "Constructive Computation in Stochastic Models with Applications"
},
{
"paperId": "4e9ec92a90c5d571d2f1d496f8df01f0a8f38596",
"title": "Bitcoin: A Peer-to-Peer Electronic Cash System"
},
{
"paperId": "b99a1aafa338ccec7e4d91d8861eb41db6928da0",
"title": "A Review on"
},
{
"paperId": "8cffe6a48b2c971c33c2aa373e174e79349c784b",
"title": "Matrix-Geometric Solutions in Stochastic Models"
},
{
"paperId": "249377e09f6da6eda933ed4f39b4dbe6aa74b592",
"title": "the Internet of Things: a Systematic Literature Review"
}
] | 23,115
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Engineering",
"source": "external"
},
{
"category": "Physics",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Engineering",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/00793bcd17c56940d437413c9078a76b07841f16
|
[
"Computer Science",
"Engineering",
"Physics"
] | 0.824612
|
Decentralizing Feature Extraction with Quantum Convolutional Neural Network for Automatic Speech Recognition
|
00793bcd17c56940d437413c9078a76b07841f16
|
IEEE International Conference on Acoustics, Speech, and Signal Processing
|
[
{
"authorId": "46962482",
"name": "Chao-Han Huck Yang"
},
{
"authorId": "145913380",
"name": "Jun Qi"
},
{
"authorId": "2107968379",
"name": "Samuel Yen-Chi Chen"
},
{
"authorId": "153191489",
"name": "Pin-Yu Chen"
},
{
"authorId": "1709878",
"name": "Sabato Marco Siniscalchi"
},
{
"authorId": "2116287993",
"name": "Xiaoli Ma"
},
{
"authorId": "9391905",
"name": "Chin-Hui Lee"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"Int Conf Acoust Speech Signal Process",
"IEEE Int Conf Acoust Speech Signal Process",
"ICASSP",
"International Conference on Acoustics, Speech, and Signal Processing"
],
"alternate_urls": null,
"id": "0d6f7fba-7092-46b3-8039-93458dba736b",
"issn": null,
"name": "IEEE International Conference on Acoustics, Speech, and Signal Processing",
"type": "conference",
"url": "http://ieeexplore.ieee.org/xpl/conhome.jsp?punumber=1000002"
}
|
We propose a novel decentralized feature extraction approach in federated learning to address privacy-preservation issues for speech recognition. It is built upon a quantum convolutional neural network (QCNN) composed of a quantum circuit encoder for feature extraction, and a recurrent neural network (RNN) based end-to-end acoustic model (AM). To enhance model parameter protection in a decentralized architecture, an input speech is first up-streamed to a quantum computing server to extract Mel-spectrogram, and the corresponding convolutional features are encoded using a quantum circuit algorithm with random parameters. The encoded features are then down-streamed to the local RNN model for the final recognition. The proposed decentralized framework takes advantage of the quantum learning progress to secure models and to avoid privacy leakage attacks. Testing on the Google Speech Commands Dataset, the proposed QCNN encoder attains a competitive accuracy of 95.12% in a decentralized model, which is better than the previous architectures using centralized RNN models with convolutional features. We conduct an in-depth study of different quantum circuit encoder architectures to provide insights into designing QCNN-based feature extractors. Neural saliency analyses demonstrate a high correlation between the proposed QCNN features, class activation maps, and the input Mel-spectrogram. We provide an implementation1 for future studies.
|
## DECENTRALIZING FEATURE EXTRACTION WITH QUANTUM CONVOLUTIONAL NEURAL NETWORK FOR AUTOMATIC SPEECH RECOGNITION
Chao-Han Huck Yang[1] Jun Qi[1] Samuel Yen-Chi Chen[2] Pin-Yu Chen[3] Sabato Marco Siniscalchi[1][,][4][,][5] Xiaoli Ma[1] Chin-Hui Lee[1]
1School of Electrical and Computer Engineering, Georgia Institute of Technology, USA
2Brookhaven National Laboratory, NY, USA and 3IBM Research, Yorktown Heights, NY, USA
4Faculty of Computer and Telecommunication Engineering, University of Enna, Italy
5Department of Electronic Systems, NTNU, Trondheim, Norway
**ABSTRACT**
We propose a novel decentralized feature extraction approach in
federated learning to address privacy-preservation issues for speech
recognition. It is built upon a quantum convolutional neural network (QCNN) composed of a quantum circuit encoder for feature
extraction, and a recurrent neural network (RNN) based end-to-end
acoustic model (AM). To enhance model parameter protection in a
decentralized architecture, an input speech is first up-streamed to
a quantum computing server to extract Mel-spectrogram, and the
corresponding convolutional features are encoded using a quantum
circuit algorithm with random parameters. The encoded features
are then down-streamed to the local RNN model for the final recognition. The proposed decentralized framework takes advantage of
the quantum learning progress to secure models and to avoid privacy leakage attacks. Testing on the Google Speech Commands
Dataset, the proposed QCNN encoder attains a competitive accuracy
of 95.12% in a decentralized model, which is better than the previous architectures using centralized RNN models with convolutional
features. We conduct an in-depth study of different quantum circuit
encoder architectures to provide insights into designing QCNNbased feature extractors. Neural saliency analyses demonstrate a
high correlation between the proposed QCNN features, class activation maps, and the input Mel-spectrogram. We provide an
implementation[1] for future studies.
**_Index Terms— Acoustic Modeling, Quantum Machine Learn-_**
ing, Automatic Speech Recognition, and Federated Learning.
**1. INTRODUCTION**
With the increasing concern about acoustic data privacy issues [1],
it is essential to design new automatic speech recognition (ASR) architectures satisfying the requirements of new privacy-preservation
regulations, e.g., GDPR [2]. Vertical federated learning (VFL) [3] is
one potential strategy for data protection: it decentralizes an end-to-end deep learning [4] framework and separates feature extraction from the ASR inference engine. With recent advances in commercial quantum technology [5], quantum machine learning (QML) [6] becomes an ideal building block for VFL owing to its advantages in parameter encryption and isolation. To do so, the input to QML, often represented by classical bits, must first be encoded into quantum states based on qubits. Next, approximation algorithms (e.g., quantum branching programs [7]) are applied to quantum devices based on a quantum circuit [8] with noise tolerance. To implement our proposed approach, we utilize a state-of-the-art noisy intermediate-scale quantum (NISQ) [9] platform (5 to 50 qubits) for academic and commercial applications [10]; it can be set up on accessible quantum servers from cloud-based computing providers [5].
**Fig. 1: Proposed quantum machine learning for acoustic modeling (QML-AM) architecture in a vertical federated learning process, including (a) a quantum convolution layer on Noisy Intermediate-Scale Quantum (NISQ) servers or a cloud API, and (b) a local model (e.g., a second-pass model [11, 12]) for speech recognition tasks.**
As shown in Fig. 1, we propose a decentralized acoustic modeling (AM) scheme to design a quantum convolutional neural network
(QCNN) [13] by combining a variational quantum circuit (VQC)
learning paradigm [6] and a deep neural network [14] (DNN). VQC refers to a quantum algorithm with flexible design accessibility, which is resistant to noise [6, 8] and adapted to NISQ hardware with light or no requirements for quantum error correction. Based on the advantages of VQC under VFL, a quantum-enhanced data processing scheme can be realized with few entangled encoded qubits [15, 7] to ensure model parameter protection and lower computational complexity. As shown in Table 1, to the best of the authors' knowledge, this is the first work to combine quantum circuits and DNNs to build a new QCNN [13] for ASR. To provide a secure data pipeline and reliable quantum computing, we introduce the VFL architecture for decentralized ASR tasks, where remote NISQ cloud
servers are used to generate quantum-based features, and ASR decoding is performed with a local model [12]. We refer to our decentralized quantum-based ASR system as QCNN-ASR. Evaluated on the Google Speech Commands dataset with machine noises incurred from quantum computers, the proposed QCNN-ASR framework attains a competitive 95.12% accuracy on word recognition.
[1https://github.com/huckiyang/QuantumSpeech-QCNN](https://github.com/huckiyang/QuantumSpeech-QCNN)
-----
**Table 1: An overview of machine learning approaches and related key properties. CQ stands for a hybrid classical-quantum (CQ) [15] model used in this paper. QA stands for quantum advantages [8], which are related to computational memory and parameter protection. VQC indicates the variational quantum circuit. VFL means vertical federated learning [3]. DNN stands for deep neural network [4].**
|Approach|Input|Learning Model|Output|Properties|
|---|---|---|---|---|
|Classical|bits|DNN and more|bits|Easy implementation|
|Quantum|qubits|VQC and more|qubits|QA but limited resources|
|Hybrid CQ|bits|VQC + DNN|bits|Accessible QA over VFL|
**2. RELATED WORK**
**2.1. Quantum Machine Learning for Signal Processing**
QML [6] has shown advantages in terms of lower memory storage, secure model parameter encryption, and good feature representation capabilities [8]. There are several variants (e.g., adiabatic quantum computation [9] and quantum circuit learning [16]). In this work, we use the hybrid classical-quantum algorithm [13], where the input signals are given in a purely classical (i.e., numerical) format, e.g., a digital image, and a quantum algorithm is employed in the feature learning phase. Quantum circuit learning is regarded as the most accessible and reproducible QML approach for signal processing [15], such as supervised learning in the design of the quantum support vector machine [8]. Indeed, it has been widely used, and it consists only of quantum logic gates, with the possibility of deferring error correction [6, 16].
**2.2. Deep Learning with Variational Quantum Circuit**
In the NISQ era [10], quantum computing devices are not error-corrected, and they are therefore not fault-tolerant. Such a constraint limits the potential applications of NISQ technology, especially for large quantum circuit depths and large numbers of qubits. However, Mitarai et al.'s seminal work [6] describes a framework to build machine learning models on NISQ. The key idea is to employ VQCs [17], which are subject to an iterative optimization process, so that the effects of noise in the NISQ devices can potentially be absorbed into the learned circuit parameters. Recent literature reports several successful machine learning applications based on VQC, for instance, deep reinforcement learning [18] and function approximation [6]. VQCs are also used to construct quantum machine learning models capable of handling sequential patterns, such as the dynamics of certain physical systems [19]. It should be noted that the input dimension in [19] is rather limited [18] because of the stringent requirements of currently available quantum simulators and real quantum devices.
**2.3. Quantum Learning and Decentralized Speech Processing**
Although quantum technology is quite new, there have been some
attempts at exploiting it for speech processing. For example, Li et al. [20] proposed a speech recognition system with quantum back-propagation (QBP) simulated by fuzzy logic computing. However, QBP does not use qubits directly on a real-world quantum device, and the approach hardly demonstrates the quantum advantages inherent in this computing scheme. Moreover, the QBP solution can be difficult to extend to large-scale ASR tasks with parameter protection.
From a system perspective, these accessible quantum advantages of VQC, including encryption and randomized encoding, are prominent requirements for federated learning systems, such as distributed ASR. Cloud computing-based federated architectures [3] have proven to be the most effective solutions for industrial applications, demonstrating quantum advantages using commercial NISQ servers [5]. More recent works on federated keyword spotting [1], distributed ASR [21], improved lite audio-visual processing for local inference [22], and federated n-gram language models [11] marked the importance of privacy-preserving learning under the requirement of acoustic and language data protection.
**3. DESIGNING QUANTUM CONVOLUTIONAL NEURAL**
**NETWORKS FOR SPEECH RECOGNITION**
In this section, we present our framework, showing how to design a federated-architecture-based QCNN that combines quantum computing and deep learning for speech recognition.
**3.1. Speech Processing under Vertical Federated Learning**
We consider a federated learning scenario for speech processing,
where the ASR system includes two blocks deployed between a local user and a cloud server or application interface (API), as shown in Fig. 1. An input speech signal, xi, is collected at the local user and up-streamed to a cloud server, where Mel spectrogram feature vectors, ui, are extracted. Mel spectrogram features are the input of
a quantum circuit layer, Q, that learns and encodes patterns:
_fi = Q(ui, e, q, d),_ where ui = Mel-Spectrogram(xi). (1)
In Eq. (1), the computation process of a quantum neural layer,
**Q, depends on the encoding initialization e, the quantum circuit pa-**
rameters, q, and the decoding measurement d. The encoded features, fi, are down-streamed back to the local user and used for training the ASR system, more specifically the acoustic model (AM). The proposed decentralized VFL speech processing model reduces the risk of parameter leakage [23, 1, 11] from attackers and avoids privacy issues under GDPR, thanks to its architecture-wise advantages [24] on encryption [16] and to the fact that the data are never accessed directly [3].
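To make the data flow of Eq. (1) concrete, the following is a minimal sketch of the server/client split; `extract_mel` and `quanv_filter` are stand-ins for the Mel-spectrogram extraction and the quantum layer **Q** (both sketched in Sections 3.2 and 4.1), and the Keras-style `train_on_batch` call is an illustrative assumption rather than the exact training loop of the released implementation[1].

```python
# A hedged sketch of the VFL split of Eq. (1): the NISQ server computes
# u_i = Mel-Spectrogram(x_i) and f_i = Q(u_i, e, q, d); the local user
# only ever receives the encoded features f_i, never the parameters of Q.
import numpy as np

def server_encode(x_i, extract_mel, quanv_filter):
    """Runs on the quantum computing server / cloud API."""
    u_i = extract_mel(x_i)        # u_i = Mel-Spectrogram(x_i)
    return quanv_filter(u_i)      # f_i = Q(u_i, e, q, d)

def client_train_step(model, f_i, label):
    """Runs at the local user: trains the local AM on encoded features only."""
    return model.train_on_batch(f_i[None, ...], np.array([label]))
```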
**Fig. 2: The proposed variational quantum circuit for a 2 × 2 QCNN: (a) the QCNN computing process, in which a 2 × 2 patch (u1, u2, u3, u4) of the Mel-spectrogram is encoded (ix = e(ux)), passed through the quantum circuit (ox = q(ix)), and decoded (fx = d(ox)) into the quanv-encoded feature f1; (b) the deployed quantum circuit, built from Ry, Rx and Rz rotations and CNOT gates acting on four |0⟩ qubits.**
**3.2. Quantum Convolutional Layer**
Motivated by using VQC as a convolution filter with a quantum kernel, QCNN [13] was recently proposed to extend CNN's properties to the quantum domain for image processing on a digital simulator; it requires only a few qubits to construct a convolution kernel during the QML process. A QCNN consists of several quantum convolutional filters, and each quantum convolutional filter transforms input data using a quantum circuit that can be designed in a structured or a randomized fashion.
-----
Figure 2 (a) shows our implementation of a quantum convolutional layer. The quantum convolutional filter consists of (i) the encoding function e(·), (ii) the decoding operation d(·), and (iii) the quantum circuit q(·). In detail, the following steps are performed to obtain the output of a quantum convolutional layer (a code sketch is given after the list):
- The 2D Mel-spectrogram input vectors are chunked into several 2 × 2 patches, and the n-th patch is fed into the quantum circuit and encoded into initial quantum states, Ix[n] = e(ui[n]).
- The initial quantum states go through the quantum circuit with the operator q(·), generating Ox[n] = q(Ix[n]).
- The outputs of the quantum circuit are measured by projecting the qubits onto a set of basis states that spans all possible quantum states and operations, yielding the desired output value fx,n = d(Ox[n]). More details are given in the implementation[1].
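As a minimal sketch of such a filter, the snippet below implements a 2 × 2 quanvolution with PennyLane [7]; the use of `qml.RandomLayers` and the helper names are our illustrative choices rather than the exact circuit of the released implementation[1].

```python
# A 2x2 quanvolutional filter sketch: encode each 2x2 patch into 4 qubits,
# apply a fixed random circuit, and decode via Pauli-Z expectation values.
import numpy as np
import pennylane as qml

n_qubits = 4                                    # one qubit per pixel of a 2x2 patch
dev = qml.device("default.qubit", wires=n_qubits)
rand_params = np.random.uniform(0, 2 * np.pi, size=(1, n_qubits))  # fixed per model

@qml.qnode(dev)
def quanv_circuit(patch):
    for i in range(n_qubits):                   # e(.): angle-encode the patch values
        qml.RY(np.pi * patch[i], wires=i)
    qml.RandomLayers(rand_params, wires=range(n_qubits))  # q(.): random circuit
    return [qml.expval(qml.PauliZ(i)) for i in range(n_qubits)]  # d(.): measure

def quanv_filter(mel):
    """Apply the quantum filter to non-overlapping 2x2 patches of a spectrogram."""
    h, w = mel.shape
    out = np.zeros((h // 2, w // 2, n_qubits))  # one output channel per qubit
    for r in range(0, h - 1, 2):
        for c in range(0, w - 1, 2):
            patch = [mel[r, c], mel[r, c + 1], mel[r + 1, c], mel[r + 1, c + 1]]
            out[r // 2, c // 2] = quanv_circuit(patch)
    return out
```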
**3.3. Random Quantum Circuit**
We deploy a random quantum circuit to realize a simple circuit U
in which the circuit design is randomly generated per QCNN model
for parameter protection. An example of a random quantum circuit is shown in Figure 2 (b), where the quantum gates Rx, Ry, Rz and CNOT are applied. The classical vectors are initially encoded into a quantum state Φ0 = |0000⟩, and the encoded quantum states go through the quantum circuit U in the following phases:
**Phase 1: Φ1 = Ry|0⟩Ry|0⟩Ry|0⟩Ry|0⟩.**
**Phase 2: Φ2 = (RxRy|0⟩)CNOT(Ry|0⟩)Ry|0⟩RzRy|0⟩.**
**Phase 3: Φ3 = CNOT((RxRy|0⟩))CNOT(Ry|0⟩)Ry|0⟩RzRy|0⟩.**
**Phase 4: Φ4 = RxRyΦ3.**
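Written out phase by phase, a hedged PennyLane transcription of this circuit could look as follows; the exact wire assignments of the rotations and CNOT gates are our reading of Fig. 2 (b) and should be treated as illustrative, with the angles theta playing the role of the randomly generated parameters.

```python
# An explicit four-qubit version of the random circuit U, phase by phase.
import numpy as np
import pennylane as qml

dev_u = qml.device("default.qubit", wires=4)

@qml.qnode(dev_u)
def circuit_U(theta):
    for w in range(4):              # Phase 1: Ry rotations on |0000>
        qml.RY(theta[w], wires=w)
    qml.RX(theta[4], wires=0)       # Phase 2: Rx/Rz rotations and a CNOT
    qml.RZ(theta[5], wires=3)
    qml.CNOT(wires=[2, 1])
    qml.CNOT(wires=[3, 0])          # Phase 3: a second CNOT entangler
    qml.RY(theta[6], wires=0)       # Phase 4: final Ry and Rx rotations
    qml.RX(theta[7], wires=0)
    return [qml.expval(qml.PauliZ(w)) for w in range(4)]

theta = np.random.uniform(0, 2 * np.pi, size=8)   # drawn once per QCNN model
```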
Since a random quantum circuit may involve many CNOT gates, which introduce unexpected noise on current non-error-corrected quantum devices and are constrained by the connectivity of physical qubits, we limit the number of qubits to a small value so as not to exceed the noise tolerance of the VQC. For CPU simulation we use PennyLane [7], an open-source library for differentiable programming of quantum computers, to generate the random quantum circuit; we also build the random quantum circuit with Qiskit [25] to simulate it with noise models from IBM quantum machines with 5 and 15 qubits, which goes beyond simulation-only results [13].
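A short sketch of this noisy-simulation setup follows, assuming the PennyLane-Qiskit plugin; `FakeVigo`, a 5-qubit mock backend shipped with Qiskit, is our stand-in for the IBM machines mentioned above.

```python
# Attach a recorded IBM-device noise model to the Aer simulator so that
# the random circuit runs under realistic (non-error-corrected) noise.
import pennylane as qml
from qiskit.test.mock import FakeVigo
from qiskit.providers.aer.noise import NoiseModel

noise_model = NoiseModel.from_backend(FakeVigo())   # noise from a device snapshot
noisy_dev = qml.device("qiskit.aer", wires=4, noise_model=noise_model)
# Re-binding the circuit above to `noisy_dev`, e.g. via
# qml.QNode(circuit_U.func, noisy_dev), evaluates it under this noise model.
```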
**Fig. 3: The proposed QCNN architecture for ASR tasks: (a) the feature encoder (input → quanv → U-Net → Conv2D → bi-LSTM → self-attention) followed by Dense 64 and Dense 32 layers, and (b) the loss layer producing the output.**
**3.4. Attention Recurrent Neural Networks**
We use a benchmark deep attention recurrent neural network (RNN) [14] model from [26] as our baseline architecture for a local model (e.g., second-pass models [11, 12]) in the VFL setting. The model is composed of two layers of bi-directional long short-term memory [14] and a self-attention encoder [27] (dubbed RNNAtt). In [26], this RNN model was reported to outperform other DNN-based solutions, including DS-CNN [28] and ResNet [29], for spoken word recognition.
To reduce architecture-wise variance in our experiments, we conduct ablation studies and propose an advanced attention RNN model with a U-Net encoder [30] (denoted as RNNUAtt). As shown in Fig. 3, a series of multi-scale convolution layers (with channel sizes of 8-16-8) is applied to quantum-encoded (quanv) or neural convolution-encoded (conv) features to improve acoustic generalization by learning scale-free representations [30]. We use RNNAtt and RNNUAtt in our experiments to evaluate the advantages of the proposed QCNN model. As shown in Fig. 3 (b), we place a loss calculation layer on the RNN backbone of our local model. For spoken word recognition, we use the cross-entropy loss for classification; the loss layer could further be replaced by the connectionist temporal classification (CTC) loss [31] for large-scale continuous speech recognition in our future study.
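A minimal Keras sketch of this local model follows; apart from the Dense 64 / Dense 32 layers, the 8-16-8 channel sizes, and the ten output classes stated in the text, the layer sizes and the particular `tf.keras.layers.Attention` form of self-attention are illustrative assumptions.

```python
# RNN(U)Att sketch: multi-scale conv front end, two bi-LSTM layers,
# self-attention, and a dense classification head with cross-entropy loss.
import tensorflow as tf
from tensorflow.keras import layers

def build_local_model(input_shape, n_classes=10):
    x_in = layers.Input(shape=input_shape)          # quanv- or conv-encoded features
    x = x_in
    for ch in (8, 16, 8):                           # multi-scale conv encoder
        x = layers.Conv2D(ch, 3, padding="same", activation="relu")(x)
    x = layers.Reshape((input_shape[0], -1))(x)     # (time, features) for the RNN
    x = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(x)
    x = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(x)
    x = layers.Attention()([x, x])                  # self-attention over time
    x = layers.GlobalAveragePooling1D()(x)
    x = layers.Dense(64, activation="relu")(x)
    x = layers.Dense(32, activation="relu")(x)
    out = layers.Dense(n_classes, activation="softmax")(x)
    model = tf.keras.Model(x_in, out)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```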
**4. EXPERIMENTS**
**4.1. Experimental Setup**
As an initial assessment of the viability of our proposed framework, we selected a limited-vocabulary yet reasonably challenging speech recognition task, namely Google Speech Commands V1 [29]. For spoken word recognition, we use the ten-class setting that includes the following frequent speech commands[2]: ['left', 'go', 'yes', 'down', 'up', 'on', 'right', 'no', 'off', 'stop'], with a total of 11,165 training examples and 6,500 testing examples under the background white-noise setup [29]. The Mel-scale spectrogram features are extracted from the input speech using the Librosa library; this step takes place on the NISQ server. The input feature is a 60-band Mel-scale spectrogram computed with 1024 discrete Fourier transform points, which is fed into the quantum circuit as required by the VFL setting. The experiments with the local model are carried out with TensorFlow, which is used to implement the DNNs and the visualizations.
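A short sketch of the server-side feature extraction follows, assuming Librosa; the 60 Mel bands and 1024 DFT points match the setup above, while the sampling rate is an illustrative default for the one-second Speech Commands clips.

```python
# Server-side Mel-spectrogram extraction (u_i in Eq. (1)).
import librosa
import numpy as np

def extract_mel(wav_path, sr=16000, n_mels=60, n_fft=1024):
    y, _ = librosa.load(wav_path, sr=sr)            # one-second command clip
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=n_fft, n_mels=n_mels)
    return librosa.power_to_db(mel, ref=np.max)     # log-scaled Mel spectrogram
```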
**Fig. 4: Visualization of the encoded features from different types of convolution layers: (a) input Mel-spectrogram; (b) 2×2 neural-conv encoded; (c) 2×2 quantum-conv encoded; (d) 3×3 quantum-conv encoded. The audio transcription of the input is ”yes”.**
**4.2. Encoded Acoustic Features from Quantum Device**
The IBM Qiskit quantum computing tool [25] is used to simulate
the quantum convolution. We first use Qiskit to collect compilation noise from two different quantum computers. We then load the recorded noise into the PennyLane-Qiskit extension in order to simulate noisy quantum circuit experiments. According to previous investigations [19, 18], the proposed noisy quantum device setup can be compiled to NISQ hardware directly and attains results close to those obtained using NISQ devices directly. The chosen setup preserves quantum advantages on randomization and parameter isolation.
**Visualization of Acoustic Features. To better understand the**
nature of the encoded representation of our acoustic speech features,
[2https://ai.googleblog.com/2017/08/launching-speech-commands-dataset.html](https://ai.googleblog.com/2017/08/launching-speech-commands-dataset.html)
-----
we visualize the encoded features and acoustic patterns extracted from different encoders. Fig. 4 shows QCNN-encoded results with a 2×2 kernel (panel (c)), which relate well to the acoustic pattern shown in the Mel spectrogram of panel (a), since they capture energy patterns in both high- and low-frequency regions. This becomes more evident when comparing panel (c) with the features encoded with a 3×3 kernel, given in panel (d). Finally, the neural network-based convolution layer reported in panel (b) shows results similar to those in panel (c), but with lower intensity in the high-frequency regions. We discuss the relationship with recognition performance in Section 4.4.
**4.3. Performance of Spoken-Word Recognition**
We conduct experiments on the spoken-word recognition task and compare, in Table 2, the performance gains from an additional quantum convolution layer with a 2×2 kernel (4 qubits) against a neural convolution layer with the same 2×2 kernel. From the experiments, the recognition models with an additional quantum convolution layer show better accuracy than the baseline models [26]. The modified model with a U-Net encoder, RNNUAtt, achieves the best performance of 95.12±0.18% on the evaluation data, which is better than the reproduced RNNAtt baseline (94.21±0.30%) for the same recognition setup.
**Table 2: Comparisons of spoken-term recognition on the Google Commands dataset with the noise setting [29]: classification accuracy (Acc) ± standard deviation. The additional convolution (conv) and quantum convolution (quanv) layers have the same 2×2 kernel size.**
|Model|Acc. (↑)|Parameters (Memory) (↓)|
|---|---|---|
|RNNAtt [26]|94.21±0.30|170,915 (32-bits)|
|Conv + RNNAtt|94.32±0.26|174,975 (32-bits)|
|Quanv + RNNAtt|94.75±0.17|174,955 (32-bits) + 4 (qubits)|
|RNNUAtt|94.72±0.23|176,535 (32-bits)|
|Conv + RNNUAtt|94.74±0.25|180,595 (32-bits)|
|Quanv + RNNUAtt|**95.12±0.18**|180,575 (32-bits) + 4 (qubits)|
**4.4. A Study on QCNN Architectures**
Next, we experiment with various new QCNN [13] architectures for ASR with different combinations of quantum encoders and neural acoustic models. First, we study the quantum convolution encoder with different kernel sizes. According to previous works [19, 18], current commercial NISQ devices struggle to provide reproducible and stable results with more than 15 qubits. We thus design our quantum convolutional encoders under this limitation, with kernel sizes of 1×1 (1 qubit), 2×2 (4 qubits), and 3×3 (9 qubits). We select two open-source neural AMs as the local model, DS-CNN [28] and ResNet [29], from previous works evaluated on the Google Speech Commands dataset. As shown in the bar charts in Fig. 5, QCNNs with the 2×2 kernel show better accuracy and lower deviations than all other models tested. QCNN attains 1.21% and 1.47% relative improvements over the DS-CNN and ResNet baselines, respectively. On the other hand, QCNNs with the 3×3 kernel show the worst accuracy among the tested configurations: increasing the kernel size does not always guarantee improved performance in the design of a QCNN. The encoded features obtained with a 3×3 quantum kernel and used to train AMs, as shown in Fig. 4(d), are often too sparse and not as discriminative as those obtained with 1×1 and 2×2 quantum kernels, as indicated in Fig. 4(b) and Fig. 4(c), respectively.
**Fig. 5: Performance studies of different quantum kernel sizes (dubbed kr) with DNN acoustic models for designing QCNN models.**
**Fig. 6: Interpretable neural saliency results by class activation mapping [32] over (a) Mel spectrogram features with audio transcription ”on”; (b) a 2×2 quantum convolution layer followed by RNNUAtt; (c) a well-trained 2×2 neural convolution layer followed by RNNUAtt; and (d) the baseline RNNUAtt.**
**4.5. A Saliency Study by Acoustic Class Activation Mapping**
We use a benchmark neural saliency technique, class activation mapping (CAM) [32], over different neural acoustic models to highlight the weighted features that activate the current output prediction. As shown in Fig. 6, the QCNN (b) learns much more correlated and richer acoustic features than the RNN with a convolution layer and the baseline model [26]. According to the CAM displays, the activated hidden neurons learn to identify related low-frequency patterns when making the ASR prediction from an utterance ”on.”
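For reference, the following is a hedged sketch of a CAM [32] computation in TensorFlow; it assumes a model variant in which global average pooling over the last convolutional feature maps directly feeds the classification layer (as CAM requires), and `conv_layer_name` together with the helper name are hypothetical.

```python
# CAM sketch: weight the last conv feature maps by the classifier weights
# of the chosen class, then normalize to obtain a saliency map.
import numpy as np
import tensorflow as tf

def compute_cam(model, mel_input, class_idx, conv_layer_name):
    conv_layer = model.get_layer(conv_layer_name)
    cam_model = tf.keras.Model(model.input, [conv_layer.output, model.output])
    feature_maps, _ = cam_model(mel_input[None, ...])
    feature_maps = feature_maps[0].numpy()          # (time, freq, channels)
    class_weights = model.layers[-1].get_weights()[0][:, class_idx]
    cam = np.tensordot(feature_maps, class_weights, axes=([-1], [0]))
    cam = np.maximum(cam, 0)                        # keep positive evidence only
    return cam / (cam.max() + 1e-8)                 # normalize to [0, 1]
```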
**5. CONCLUSION**
In this paper, we propose a new feature extraction approach to decentralized speech processing, to be used in vertical federated learning, that facilitates model parameter protection and preserves interpretable acoustic feature learning via quantum convolution. The proposed QCNN models show competitive spoken-term recognition results, with stable performance when learning on noisy quantum machines, compared with classical DNN-based AM models with the same convolutional kernel size. Our future work includes incorporating QCNN into continuous ASR. Although the proposed VFL-based ASR architecture fulfills some data protection requirements by decentralizing prediction models, more statistical privacy measures [24] will be deployed to strengthen the proposed QCNN models from other privacy perspectives [24, 1].
-----
**6. REFERENCES**
[1] D. Leroy, A. Coucke, T. Lavril, T. Gisselbrecht, and J. Dureau,
“Federated learning for keyword spotting,” in IEEE Interna_tional Conference on Acoustics, Speech and Signal Processing_
_(ICASSP)._ IEEE, 2019, pp. 6341–6345.
[2] P. Voigt and A. Von dem Bussche, “The EU General Data Protection Regulation (GDPR),” A Practical Guide, 1st Ed., Cham: _Springer International Publishing, 2017._
[3] Q. Yang, Y. Liu, T. Chen, and Y. Tong, “Federated machine
learning: Concept and applications,” ACM Transactions on In_telligent Systems and Technology (TIST), vol. 10, no. 2, pp._
1–19, 2019.
[4] L. Deng, G. Hinton, and B. Kingsbury, “New types of deep
neural network learning for speech recognition and related applications: An overview,” in IEEE International Conference on
_Acoustics, Speech and Signal Processing (ICASSP)._ IEEE,
2013, pp. 8599–8603.
[5] M. Mohseni, P. Read, H. Neven, S. Boixo, V. Denchev, R. Babbush, A. Fowler, V. Smelyanskiy, and J. Martinis, “Commercialize quantum technologies in five years,” Nature, vol. 543,
no. 7644, pp. 171–174, 2017.
[6] K. Mitarai, M. Negoro, M. Kitagawa, and K. Fujii, “Quantum
circuit learning,” Physical Review A, vol. 98, no. 3, p. 032309,
2018.
[7] V. Bergholm, J. Izaac, M. Schuld, C. Gogolin, C. Blank,
K. McKiernan, and N. Killoran, “Pennylane: Automatic differentiation of hybrid quantum-classical computations,” arXiv
_preprint arXiv:1811.04968, 2018._
[8] V. Havl´ıˇcek, A. D. C´orcoles, K. Temme, A. W. Harrow,
A. Kandala, J. M. Chow, and J. M. Gambetta, “Supervised
learning with quantum-enhanced feature spaces,” Nature, vol.
567, no. 7747, pp. 209–212, 2019.
[9] E. Farhi, J. Goldstone, S. Gutmann, and M. Sipser, “Quantum computation by adiabatic evolution,” arXiv preprint quant_ph/0001106, 2000._
[10] J. Preskill, “Quantum computing in the nisq era and beyond,”
_Quantum, vol. 2, p. 79, 2018._
[11] M. Chen, A. T. Suresh, R. Mathews, A. Wong, C. Allauzen,
F. Beaufays, and M. Riley, “Federated learning of n-gram language models,” arXiv preprint arXiv:1910.03432, 2019.
[12] C.-H. H. Yang, L. Liu, A. Gandhe, Y. Gu, A. Raju, D. Filimonov, and I. Bulyko, “Multi-task language modeling for
improving speech recognition of rare words,” arXiv preprint
_arXiv:2011.11715, 2020._
[13] M. Henderson, S. Shakya, S. Pradhan, and T. Cook, “Quanvolutional neural networks: powering image recognition with
quantum circuits,” Quantum Machine Intelligence, vol. 2,
no. 1, pp. 1–9, 2020.
[14] S. Hochreiter and J. Schmidhuber, “Long short-term memory,”
_Neural computation, vol. 9, no. 8, pp. 1735–1780, 1997._
[15] J. Biamonte, P. Wittek, N. Pancotti, P. Rebentrost, N. Wiebe,
and S. Lloyd, “Quantum machine learning,” Nature, vol. 549,
no. 7671, pp. 195–202, 2017.
[16] A. C.-C. Yao, “Quantum circuit complexity,” in Proceedings
_of 1993 IEEE 34th Annual Foundations of Computer Science._
IEEE, 1993, pp. 352–361.
[17] M. Benedetti, E. Lloyd, S. Sack, and M. Fiorentini, “Parameterized quantum circuits as machine learning models,” Quan_tum Science and Technology, vol. 4, no. 4, p. 043001, 2019._
[18] S. Y.-C. Chen, C.-H. H. Yang, J. Qi, P.-Y. Chen, X. Ma, and H.S. Goan, “Variational quantum circuits for deep reinforcement
learning,” IEEE Access, vol. 8, pp. 141 007–141 024, 2020.
[19] S. Y.-C. Chen, S. Yoo, and Y.-L. L. Fang, “Quantum long shortterm memory,” arXiv preprint arXiv:2009.01783, 2020.
[20] F. Li, S. Zhao, and B. Zheng, “Quantum neural network in
speech recognition,” in 6th International Conference on Signal
_Processing, 2002., vol. 2._ IEEE, 2002, pp. 1267–1270.
[21] J. Qi, C.-H. H. Yang, and J. Tejedor, “Submodular rank aggregation on score-based permutations for distributed automatic speech recognition,” in IEEE International Conference
_on Acoustics, Speech and Signal Processing (ICASSP). IEEE,_
2020, pp. 3517–3521.
[22] S.-Y. Chuang, H.-M. Wang, and Y. Tsao, “Improved
lite audio-visual speech enhancement,” _arXiv_ _preprint_
_arXiv:2008.13222, 2020._
[23] A. Duc, S. Dziembowski, and S. Faust, “Unifying leakage
models: From probing attacks to noisy leakage.” in Annual
_International Conference on the Theory and Applications of_
_Cryptographic Techniques._ Springer, 2014, pp. 423–440.
[24] C. Dwork, V. Feldman, M. Hardt, T. Pitassi, O. Reingold, and
A. Roth, “The reusable holdout: Preserving validity in adaptive
data analysis,” Science, vol. 349, no. 6248, pp. 636–638, 2015.
[25] G. Aleksandrowicz, T. Alexander, P. Barkoutsos, L. Bello, Y. Ben-Haim, D. Bucher, F. Cabrera-Hernández, J. Carballo-Franquis, A. Chen, C. Chen, et al., “Qiskit: An open-source framework for quantum computing,” accessed Mar. 16, 2019.
[26] D. C. de Andrade, S. Leo, M. L. D. S. Viana, and C. Bernkopf,
“A neural attention model for speech command recognition,”
_arXiv preprint arXiv:1808.08929, 2018._
[27] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones,
A. N. Gomez, L. Kaiser, and I. Polosukhin, “Attention is all
you need,” in Advances in neural information processing sys_tems, 2017, pp. 5998–6008._
[28] Y. Zhang, N. Suda, L. Lai, and V. Chandra, “Hello
edge: Keyword spotting on microcontrollers,” arXiv preprint
_arXiv:1711.07128, 2017._
[29] P. Warden, “Speech commands: A dataset for
limited-vocabulary speech recognition,” _arXiv_ _preprint_
_arXiv:1804.03209, 2018._
[30] C.-H. Yang, J. Qi, P.-Y. Chen, X. Ma, and C.-H. Lee, “Characterizing speech adversarial examples using self-attention u-net
enhancement,” in IEEE International Conference on Acous_tics, Speech and Signal Processing (ICASSP), 2020, pp. 3107–_
3111.
[31] A. Graves, S. Fern´andez, F. Gomez, and J. Schmidhuber, “Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks,” in Proceedings of
_the 23rd international conference on Machine learning, 2006,_
pp. 369–376.
[32] B. Zhou, A. Khosla, A. Lapedriza, A. Oliva, and A. Torralba, “Learning deep features for discriminative localization,”
in Proceedings of the IEEE conference on computer vision and
_pattern recognition, 2016, pp. 2921–2929._
-----
| 8,303
|
{
"disclaimer": "Notice: Paper or abstract available at https://arxiv.org/abs/2010.13309, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "other-oa",
"status": "GREEN",
"url": "https://arxiv.org/pdf/2010.13309"
}
| 2,020
|
[
"JournalArticle",
"Conference"
] | true
| 2020-10-26T00:00:00
|
[
{
"paperId": "8b2d3fe667c8fab7c8f992cbb82e7a9805651715",
"title": "Multi-Task Language Modeling for Improving Speech Recognition of Rare Words"
},
{
"paperId": "141acbba15c446cebf81801040851f8751dbe32d",
"title": "Quantum Long Short-Term Memory"
},
{
"paperId": "6aadb4a610ac507a37550a23df3e33153c1c3bb9",
"title": "Improved Lite Audio-Visual Speech Enhancement"
},
{
"paperId": "709fe6f12c40722d66b09b371a90b8a6acd263f1",
"title": "Characterizing Speech Adversarial Examples Using Self-Attention U-Net Enhancement"
},
{
"paperId": "37f7b0261170f60d3b5c33da660d7a6889037cfc",
"title": "The EU General Data Protection Regulation (GDPR)"
},
{
"paperId": "d8ca16c8f583b299348dcff952d0522062d301db",
"title": "Submodular Rank Aggregation on Score-Based Permutations for Distributed Automatic Speech Recognition"
},
{
"paperId": "81b0af7afc5b3ac9457191d3e73abf7a892ea3a7",
"title": "Federated Learning of N-Gram Language Models"
},
{
"paperId": "8dea407aab8dd9bbd664ad1f471f7c9b5fd17980",
"title": "Scalable Multi Corpora Neural Language Models for ASR"
},
{
"paperId": "486c93a171650cdf1fff68cbbe646393517fca36",
"title": "Variational Quantum Circuits for Deep Reinforcement Learning"
},
{
"paperId": "638e41912f314c74436205aa8d332dca963ab1dc",
"title": "Parameterized quantum circuits as machine learning models"
},
{
"paperId": "5672e42c3b1436f668b63287a8a4e6c96c8e69d9",
"title": "Quanvolutional neural networks: powering image recognition with quantum circuits"
},
{
"paperId": "38d8230a7aeae6554497b253848ad5bf677e4fb3",
"title": "PennyLane: Automatic differentiation of hybrid quantum-classical computations"
},
{
"paperId": "bf5e17dea36f4eef23d8399af62560d3134dba51",
"title": "Federated Learning for Keyword Spotting"
},
{
"paperId": "0146f909debff2c13e61261c70850ed438dd0171",
"title": "A neural attention model for speech command recognition"
},
{
"paperId": "0dfcec3139b2b52a3b6a144f323f89dd37de1fa4",
"title": "Supervised learning with quantum-enhanced feature spaces"
},
{
"paperId": "da6e404d8911b0e5785019a79dc8607e0b313dc4",
"title": "Speech Commands: A Dataset for Limited-Vocabulary Speech Recognition"
},
{
"paperId": "ff8eea01cbb5de505672cf9bbda3a6a91624cf52",
"title": "Quantum Machine Learning"
},
{
"paperId": "4d931ea98be69882f547ec6c1b42b78c3e13c36d",
"title": "Quantum circuit learning"
},
{
"paperId": "f3d594544126e202dbd81c186ca3ce448af5255c",
"title": "Quantum Computing in the NISQ era and beyond"
},
{
"paperId": "a3d4dbd03355d6b4972d7cb9257ccccdd6d33923",
"title": "Hello Edge: Keyword Spotting on Microcontrollers"
},
{
"paperId": "204e3073870fae3d05bcbc2f6a8e263d9b72e776",
"title": "Attention is All you Need"
},
{
"paperId": "753efd2f83955ab28e6654ff82ddc812a9261237",
"title": "Commercialize quantum technologies in five years"
},
{
"paperId": "31f9eb39d840821979e5df9f34a6e92dd9c879f2",
"title": "Learning Deep Features for Discriminative Localization"
},
{
"paperId": "edf27bb5272ea6fe244deb3bbc8da0429bfe3ac5",
"title": "The reusable holdout: Preserving validity in adaptive data analysis"
},
{
"paperId": "ae3276a8b2b0ba10d447df8c41c4690f8f377398",
"title": "Unifying Leakage Models: From Probing Attacks to Noisy Leakage"
},
{
"paperId": "eb9243a3b98a819539ad57b7b4f05b969510d075",
"title": "New types of deep neural network learning for speech recognition and related applications: an overview"
},
{
"paperId": "261a056f8b21918e8616a429b2df6e1d5d33be41",
"title": "Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks"
},
{
"paperId": "12a16a268dfeda1109540e0b98e07e65d30ebb02",
"title": "Quantum neural network in speech recognition"
},
{
"paperId": "3b75692ea8f3d29dfdd6eb250c6d5edae48c1f16",
"title": "Quantum Computation by Adiabatic Evolution"
},
{
"paperId": "2e9d221c206e9503ceb452302d68d10e293f2a10",
"title": "Long Short-Term Memory"
},
{
"paperId": "f64ed54b6d8e75ffeb422f94c14f12e07d57ad8e",
"title": "Quantum Circuit Complexity"
},
{
"paperId": null,
"title": "“Qiskit: An open-source framework for quantum computing,”"
}
] | 8,303
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Engineering",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/007dafe68d8cba5ce75ca6a253b864a2fb13a529
|
[
"Computer Science"
] | 0.891133
|
A Methodology Based on Computational Patterns for Offloading of Big Data Applications on Cloud-Edge Platforms
|
007dafe68d8cba5ce75ca6a253b864a2fb13a529
|
Future Internet
|
[
{
"authorId": "1790991",
"name": "B. D. Martino"
},
{
"authorId": "1770496",
"name": "S. Venticinque"
},
{
"authorId": "144658618",
"name": "A. Esposito"
},
{
"authorId": "2057172350",
"name": "Salvatore D'Angelo"
}
] |
{
"alternate_issns": null,
"alternate_names": null,
"alternate_urls": [
"http://nbn-resolving.de/urn/resolver.pl?urn=urn:nbn:ch:bel-156830",
"https://www.mdpi.com/journal/futureinternet"
],
"id": "c3e5f1c8-9ba7-47e5-acde-53063a69d483",
"issn": "1999-5903",
"name": "Future Internet",
"type": "journal",
"url": "http://www.e-helvetica.nb.admin.ch/directAccess?callnumber=bel-156830"
}
|
Internet of Things (IoT) is becoming a widespread reality, as interconnected smart devices and sensors have overtaken the IT market and invaded every aspect of the human life. This kind of development, while already foreseen by IT experts, implies additional stress to already congested networks, and may require further investments in computational power when considering centralized and Cloud based solutions. That is why a common trend is to rely on local resources, provided by smart devices themselves or by aggregators, to deal with part of the required computations: this is the base concept behind Fog Computing, which is becoming increasingly adopted as a distributed calculation solution. In this paper a methodology, initially developed within the TOREADOR European project for the distribution of Big Data computations over Cloud platforms, will be described and applied to an algorithm for the prediction of energy consumption on the basis of data coming from home sensors, already employed within the CoSSMic European Project. The objective is to demonstrate that, by applying such a methodology, it is possible to improve the calculation performances and reduce communication with centralized resources.
|
## future internet
_Article_
# A Methodology Based on Computational Patterns for Offloading of Big Data Applications on Cloud-Edge Platforms
**Beniamino Di Martino *** **, Salvatore Venticinque** **, Antonio Esposito** **and Salvatore D’Angelo**
Dipartimento di Ingegneria, Universita’ della Campania “Luigi Vanvitelli”, 81031 Aversa (CE), Italy;
[email protected] (S.V.); [email protected] (A.E.);
[email protected] (S.D.)
*** Correspondence: [email protected]**
Received: 10 November 2019; Accepted: 28 January 2020; Published: 7 February 2020
**Abstract: Internet of Things (IoT) is becoming a widespread reality, as interconnected smart devices**
and sensors have overtaken the IT market and invaded every aspect of the human life. This kind of
development, while already foreseen by IT experts, implies additional stress to already congested
networks, and may require further investments in computational power when considering centralized
and Cloud based solutions. That is why a common trend is to rely on local resources, provided by
smart devices themselves or by aggregators, to deal with part of the required computations: this is the
base concept behind Fog Computing, which is becoming increasingly adopted as a distributed
calculation solution. In this paper a methodology, initially developed within the TOREADOR
European project for the distribution of Big Data computations over Cloud platforms, will be
described and applied to an algorithm for the prediction of energy consumption on the basis of data
coming from home sensors, already employed within the CoSSMic European Project. The objective
is to demonstrate that, by applying such a methodology, it is possible to improve the calculation
performances and reduce communication with centralized resources.
**Keywords: fog computing; cloud computing; parallelization strategies; patterns**
**1. Introduction**
One of the trends followed by IT experts in recent years has been the “Cloudification” of most
of the existing software applications, and the consequent movement and storage of massive amount
of data on large, remote servers. While the Cloud model offers tangible advantages and benefits,
especially in terms of revenues, return on investments and better use of existing hardware structures,
it still shows weak points. First of all, as data are not in direct possession of the customer, as it is most
of the time stored remotely, security issues may arise. Second, but not less important, the simple fact
that you need to reach a remote server to start a calculation and receive a result, can hinder the actual
applications. Real time applications need to provide fast and immediate responses, which Cloud
Platforms cannot always guarantee. Furthermore, Cloud is strongly dependant on Internet connection
to operate: if there is a network failure, services simply cannot be reached. This represent a major
difficulty when dealing with real time and potentially critical applications.
The Internet of Things strongly relies on real time to deliver results. Just imagine smart robots
in factories: they need to analyse data coming from sensors immediately, to react to the environment
accordingly. If all the calculations were made in the Cloud, communication delays could slow down the work or result in potential safety threats. Also, from a more general perspective, the huge amount of data
to be transferred using current Internet networks could further aggravate local congestion and cause
communication failures.
-----
_Future Internet 2020, 12, 28_ 2 of 12
The answer to such issues can be found in the introduction of an intermediate layer, between local
smart devices and the Cloud, in which most (if not all) of the real-time calculations can be executed,
strongly reducing the impact on the network and delays. Fog Computing promises to act as such an
intermediate level, by bringing computational power to the very edge of the applications’ architecture,
in particular by increasing the computing capabilities of devices themselves or of local gateways and
aggregators: this would reduce the amount of data to be transferred, analysed and processed by Cloud servers, which could focus on storage and take care of the heaviest computations that cannot be handled
by local devices.
Fog and Edge computing shift most of the computing burden to peripheral devices and gateways,
which can communicate with each other as long as a local network is up. Such local networks are most of the time separated from the Internet; they are generally created ad hoc and are self-maintained and managed. Communication failures are then handled locally, and do not depend on the public network's status.
Having an infrastructure that handles real-time and critical computations locally and reduces data traffic represents a huge advantage of Fog Computing, but it is not enough: it is also necessary to accurately restructure the computation in order to take advantage of the infrastructure, and in particular to balance the computational burden weighing on the calculation nodes.
In this paper we present a data and computation distribution methodology, initially developed to distribute Big Data computation over Cloud services within the TOREADOR European project [1], and we apply it to the parallelization and distribution of algorithms created within the CoSSMic European project [2] for the calculation and prediction of energy consumption in households, by exploiting local smart devices and gateways. The application of the computation distribution methodology allows for the exploitation of computational resources available at the edge of the software network, and for the balancing of computational loads, which will be distributed in order to minimize the need for a central Cloud-based server. The remainder of this paper is organized as follows: Section 2 presents related works in the field of Fog Computing and computation distribution; Section 3 presents the methodology; Section 4 describes the Case Study used to test the approach; Section 5 provides experimental results obtained within the Case Study; Section 6 closes the paper with final considerations on the current work and future developments.
**2. Related Works**
Exploiting the mobile framework to develop distributed applications can open up interesting new scenarios [3]. Among such applications, Smart Grid related platforms can take great advantage of the application of Fog Computing paradigms. The work presented in [4] describes such advances, focusing on challenges such as latency issues, location awareness and the transmission of large amounts of data. In order to reduce the high latencies that may affect real-time applications exchanging high volumes of data with Cloud services, a shift in the whole computation paradigm is needed: Fog Computing moves the Cloud Computing paradigm to the edge of networks, in particular those that connect all the devices belonging to the IoT [5].
A commonly accepted definition of Fog Computing, provided in [6], describes the concept as
a scenario where a high number of wireless devices communicate with each other, relying on local
network services, to cooperate and support intense computational processes.
Fog can thus represent an extension of Cloud Computing, an intermediate dedicated level of
interconnections between the Cloud and end devices, bringing benefits like reduced traffic and
latencies, and better data protection. The work presented in [7] shows that applications residing on Fog nodes are not simply isolated, but are integrated into larger solutions that cover the Cloud and user levels. Fog nodes are fundamental for collecting and forwarding data for real-time processing, but Cloud resources are still necessary to run complex calculations, such as Big Data analytics.
The work presented in [8] provides an insight into the application of Fog computing to Smart Grids, focusing on the Fog Service Placement Problem, in order to investigate the optimal deployment of IoT applications on end devices and local gateways. Code-based approaches, that is, methodologies that start from an algorithm's source code and try to obtain a distributed version of it, have also been investigated in different studies. In [9] the authors describe an innovative auto-parallelizing approach, based on a compiler that implements a data-flow algorithm. Such an approach leverages domain knowledge as well as the high-level semantics of mathematical operations to find the best distribution of data and processing tasks over computing nodes.
Several studies have stressed the important role played by network communications when applications need to either transfer considerable amounts of data or rapidly exchange information to provide real-time responses, such as in [10]. Indeed, data transmission, especially when the volume becomes substantial, is more prone to bit errors, packet dropping and high latency. Also, access networks can contribute to the overall data transmission time, sometimes being the determining factor [11]. In [12] the authors provide an insight into the issues that transmission traffic can cause to mobile communications, even when the amount of data to be exchanged is relatively small, and also propose solutions to the problem in the specific case of Heartbeat Messages. However, since the transmission of considerable amounts of data is still problematic, the redistribution of workloads over the end devices and the consequent reduction of traffic seem to be the better option, provided that the different capabilities of Cloud and mobile resources are taken into consideration. The data-driven reallocation of tasks on Edge devices has been considered in [13], with a focus on machine-learning algorithms.
Edge devices generally come with limited computational power and need to tackle energy consumption issues, which also arise in hybrid mobile-Cloud contexts, as pointed out in [12], where the authors provide their own solution to the issue. Energy consumption is also the main issue considered in [14], where the authors propose an Online Market mechanism to favour the participation of distributed cloudlets in Emergency Demand Response (EDR) programs.
The aim of our work is to achieve data- and task-based reallocation of computation over Edge devices, by guiding the re-engineering of existing applications through computational patterns. The approach presented in Section 3 is indeed based on the annotation, via pre-defined parallelization primitives, of existing source code, in order to determine the pattern to be used. The use of patterns helps in automatically determining the best distribution algorithm to reduce data exchange and, depending on the user's final objective, also to minimize energy consumption.
**3. Description of the Methodology**
The methodology we exploit to distribute and balance the computation on edge nodes works through the annotation, via pre-defined parallelization directives, of the sequential implementation of an algorithm. The approach, designed within the research activities of the TOREADOR project, was specifically developed to distribute computational load among nodes hosted by different platforms/technologies in multi-platform Big Data and Cloud environments, using state-of-the-art orchestrators [15]. Use cases where edge computing nodes represented the main target have been considered and demonstrated the feasibility of the approach in Edge and Fog Computing environments [16].
The methodology requires that the user annotates her source code with a set of **Parallelization Primitives** or parallel directives, which are then analysed by a compiler. The compiler determines the exact operations to execute on the original sequential code, thanks to a set of transformation rules which are unique to each parallel directive, and employs Skeletons to create the final executable programs. Directives are modelled after well-known Parallelization Patterns, which are implemented and adapted according to the considered target. Figure 1 provides an overview of the whole parallelization process, with its three main steps:

1. **Code**: in the first step, we assume the user has a good knowledge of the original algorithm to be transformed from a sequential to a parallel version. The user annotates the original code with the provided Parallel Primitives.
2. **Transform**: the second step consists of the actual transformation of the sequential code that the user has annotated with the aforementioned primitives, operated by the Skeleton-Based Code Compiler (a source-to-source transformer). The compiler produces a series of parallel versions of the original code, each one customized for a specific platform/technology, according to a three-phase sub-workflow.

   (a) _Parallel Pattern Selection_: on the basis of the primitives used, the compiler selects (or asks/helps the user to select) a Parallel Paradigm.

   (b) _Incarnation of agnostic Skeletons_: this is the phase in which the transformation takes place. A parallel, platform-agnostic version of the original sequential algorithm is created via the incarnation of predefined code Skeletons. Transformation rules, part of the knowledge base of the compiler, guide the whole transformation and the operations the compiler performs on the Abstract Syntax Tree.

   (c) _Production of technology-dependent Skeletons_: the agnostic Skeletons produced in the previous phase are specialized, and multiple parallel versions of the code are created, considering different platforms and technologies as targets.

3. **Deployment**: production of the Deployment Scripts.
**Figure 1. The Code-Based Approach workflow.**
_3.1. The Code Phase_
As already stated, in our approach the user is considered an expert programmer who is aware of the specific parts of her code that can, or cannot, actually be parallelized. Once she has annotated the sequential code with parallel primitives, these allow her to distribute the sequential computation among processing nodes residing on remote platforms, even in multi-platform environments.
The parallel directives used to guide the subsequent transformation phase have a well-known meaning within the approach, and have been designed to adapt to most situations. Also, directives can be nested to achieve several levels of parallelization.

Primitives can be roughly divided into two main categories:

- **Data Parallel primitives**, which organize the distribution of the data to be consumed among processing nodes. General primitives exist, which do not refer to a specific Parallel Pattern, but most of the primitives are modelled against one.
- **Task Parallel primitives**, which instead focus on the process, and distribute computing loads according to a Pattern-based schema. General primitives also exist, but they need to be bound to a specific Pattern before the transformation phase.
Examples of the primitives are:

- The data_parallel_region(data, func, *params) primitive represents the most general data parallelization directive, which applies a generic set of functions to input data and optional parameters.
- The producer_consumer(data, prod_func, cons_func, *params) directive implies a Producer-Consumer parallelization approach. The data input is split into independent chunks which represent the input of the prod_func function. A set of computing nodes elaborates the received chunks and puts the results in a shared support data structure (Queue, List, Stack or similar), as also shown in Figure 2; a minimal sketch of this scheme follows the figure. Another set of nodes polls the shared data structure and executes cons_func on the contained data, until exhaustion.
- The pipeline(data, [order_func_list], *params) directive represents a well-known task parallel paradigm, in which the processing functions need to be run in the precise order in which they appear in the order_func_list input list.
**Figure 2. Producer Consumer.**
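As a concrete illustration of the scheme in Figure 2, the following is a minimal sketch of how a producer_consumer primitive could operate, written here with Python threads and a shared queue; the structure mirrors the description above but is an illustrative assumption, not the TOREADOR implementation.

```
# Minimal producer/consumer sketch (illustrative, not the TOREADOR code):
# producers apply prod_func to chunks of the input and push the results into
# a shared queue; consumers pop results and apply cons_func until a sentinel
# value signals exhaustion.
import threading
import queue

def producer_consumer(data, prod_func, cons_func, n_workers=2):
    q = queue.Queue()

    def produce(chunk):
        for item in chunk:
            q.put(prod_func(item))

    def consume():
        while True:
            item = q.get()
            if item is None:        # sentinel: no more work
                break
            cons_func(item)

    chunks = [data[i::n_workers] for i in range(n_workers)]
    producers = [threading.Thread(target=produce, args=(c,)) for c in chunks]
    consumers = [threading.Thread(target=consume) for _ in range(n_workers)]
    for t in producers + consumers:
        t.start()
    for t in producers:
        t.join()
    for _ in consumers:
        q.put(None)                 # one sentinel per consumer
    for t in consumers:
        t.join()
```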
For the pipeline directive, the input data is generally a list of elements, which are fed one by one to the first function, whose output is passed to the second one and so on, until the last result is obtained. While the i-th function is executing, the (i−1)-th can elaborate another chunk of data (if any), while the (i+1)-th needs to wait for the output of the i-th in order to proceed. In this way, at steady state, no computational nodes are idle and resources are fully exploited. Figure 3 reports an example of the execution of such a primitive, showing how functions are sequentially executed by computing nodes and fill the pipeline; a minimal sketch of this pattern follows the figure.
**Figure 3. Pipeline.**
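The following is a minimal sketch of the pipeline pattern just described, again using Python threads with one queue per stage; the names and the sentinel-based termination are illustrative assumptions, not the compiler-generated Skeleton.

```
# Minimal pipeline sketch (illustrative): each stage runs in its own thread,
# reading from the previous stage's queue and writing to the next one, so the
# i-th stage can process a new chunk while the (i+1)-th consumes its output.
import threading
import queue

def pipeline(data, order_func_list):
    queues = [queue.Queue() for _ in range(len(order_func_list) + 1)]
    results = []

    def stage(func, q_in, q_out):
        while True:
            item = q_in.get()
            if item is None:        # propagate the end-of-stream sentinel
                q_out.put(None)
                break
            q_out.put(func(item))

    threads = [threading.Thread(target=stage, args=(f, queues[i], queues[i + 1]))
               for i, f in enumerate(order_func_list)]
    for t in threads:
        t.start()
    for item in data:               # feed elements one by one (see above)
        queues[0].put(item)
    queues[0].put(None)
    while True:                     # drain the last stage's output queue
        item = queues[-1].get()
        if item is None:
            break
        results.append(item)
    for t in threads:
        t.join()
    return results

# Example: pipeline(range(10), [lambda x: x + 1, lambda x: x * 2])
```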
_3.2. Transformation Phase_
The Transformation phase represents the core of the entire approach. The annotated code is analysed and, when the parallelization directives are met, a series of transformation rules is applied accordingly.

Such rules are strictly dependent on the Pattern against which the specific directive has been modelled, so if the user has selected a general primitive she is asked to make a decision at this step.
The final Skeletons obtained after the filling operations can be roughly divided into three categories:

- **Main Scripts** contain the execution "main", that is, the entry point of the parallelized application, whose processing is managed by the Skeleton. All code that cannot be distributed is contained within the Main Script, which also takes care of calling the Secondary Scripts;
- **Secondary Scripts** contain the distributed code, which is directly called by the Main Script and then executed on different computational nodes, according to the selected Parallel Paradigm. The number of secondary scripts is not fixed, as it depends on the selected Pattern;
- **Deployment Templates** provide information regarding all the computational nodes that will be used to execute the filled Skeletons (both Main and Secondary).
The knowledge base of the compiler comprises a series of Skeleton filling rules, which are used to analyze and transform the original algorithm. The rules are bound to a specific Pattern, as the transformations needed on the sequential code change when a different Pattern is selected. However, since the Parser always treats the micro-functions contained in the algorithm definition and included in the analyzed primitives in the same way, regardless of the specific Pattern selected, the rules are completely independent from the algorithm.
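To make these categories more tangible, the following sketches a hypothetical shape of a filled Main Script; the node names, file names and the `secondary.py` script are illustrative assumptions, since the generated Skeletons themselves are not reproduced in the paper.

```
# Hypothetical shape of a filled Main Script (all names are illustrative):
# the non-distributable code stays here, while the distributed work is
# delegated to Secondary Scripts launched on the computational nodes listed
# in the Deployment Template.
import subprocess

NODES = ["node1.local", "node2.local"]   # taken from the Deployment Template

def main():
    # Non-distributable setup: one input partition per computing node.
    partitions = ["part-{}.csv".format(i) for i in range(len(NODES))]
    procs = [
        subprocess.Popen(["ssh", node, "python3", "secondary.py", part])
        for node, part in zip(NODES, partitions)
    ]
    for p in procs:                      # wait for every Secondary Script
        p.wait()

if __name__ == "__main__":
    main()
```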
_3.3. The Deployment Phase_
The Deployment step is the last one that needs to be performed in order to make the desired algorithm executable on the target platform. The user does not intervene during the Deployment phase, as it is completely transparent to her, unless she wants to monitor and dynamically act on the execution of the algorithm.
Different target deployment environments have been considered:

- Distributed and parallel frameworks, among which Apache Storm, Spark and Map-Reduce;
- Several Cloud Platforms, for instance Amazon and Azure;
- Container-based systems, with a focus on Docker, for which the approach considers two different parallelization strategies:
  - A centralized strategy, where a central server manages the Docker containers. The server can reside, but not necessarily, in a Cloud environment.
  - A distributed strategy, in which a central offloading algorithm takes care of allocating containers on remote devices, selected by following the execution schema. This second approach can be applied in the case of Edge and Fog Computing, as also reported in previous works [16] and further investigated in the present manuscript.

Automatic orchestrators can be employed, if the target environment allows it, as described in [15].
**4. The Case Study**
The CoSSMic project focuses on the creation and management of microgrids, that is, local power grids confined within smart homes or buildings (including those used as offices) embedding local generation and storage of solar energy, together with power-consuming devices. Such devices include electric vehicles that can connect and disconnect dynamically and therefore introduce variability in the storage capacity.

The CoSSMic user can configure all of her appliances according to a set of constraints and preferences: earliest or latest start time, priority with respect to other appliances, duration of the work and so on. Also, she can supervise the energy consumption and determine how much power is produced by the local solar facility and how much is shared with the community. All this information helps to determine, and ultimately to reduce, the overall costs. The user can also set specific goals: reduce battery use, or maximize the consumption of energy from solar panels.
In order to determine the best course of action, according to the constraints and goals of the user, a Multi Agent System (MAS) is exploited to deploy agents that actively participate in the energy distribution. Agents make use of the information coming from the user's plan, the weather forecast and the consumption profiles of the appliances to find the optimal schedule, which maximizes the neighborhood grid's self-consumption.
The main configuration of the CoSSMic platform is All-In-Home, in which all the software resides on a Home Gateway that is connected to the local power grid and to the Internet, and encapsulates the functions of device management, information system and MAS. The computation for the energy optimization is performed at each home, and the energy exchange occurs within the neighborhood. Cloud services can be used by agents to store information about energy consumption.

In order to optimize the energy management, the local nodes execute a series of computations to determine the consumption profiles of the several devices and appliances connected to the CoSSMic microgrid. The consumption data coming from each device are analysed and consumption profiles are built. The calculation of such profiles is fundamental to foresee the future behaviour of the devices and create an optimized utilization schedule.
In the original CoSSMic prototype, users need to manually set in advance which kind of program they are running, to allow the system to take into account the energy requirements of that program. This is a tedious task. Moreover, the same program of the appliance needs to be run many times before an average profile representing its energy requirements is available. K-means allows for implementing an extended functionality of the CoSSMic platform that automatically learns the energy profiles corresponding to the different working programs of an appliance, and can be used to predict, at device switch-on time, which program is actually going to run. K-means clustering is used to group similar measured energy consumption time series. Each cluster corresponds to a different working program. The centroid of each cluster is used to predict the energy consumption when the same program is starting. The clusters are updated after each run of the appliance in order to use the most recent and significant measures. Collecting and clustering measures coming from many instances of the same appliance could help to increase precision, but would require greater computational resources. Automatic prediction of which program is going to start is out of the scope here, but the interested reader can refer to [8].
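As a minimal sketch of this profile-learning step, assuming equal-length consumption time series stored row-wise in an array, the clustering could be expressed as follows; the function name and the use of scikit-learn are our illustrative choices, not the CoSSMic code.

```
# Sketch of the profile-learning step (illustrative): K-means groups the
# measured consumption curves, and each centroid serves as the predicted
# profile of the corresponding working program.
import numpy as np
from sklearn.cluster import KMeans

def learn_profiles(series, n_programs):
    """series: (n_runs, n_samples) array, one measured time series per row."""
    km = KMeans(n_clusters=n_programs, n_init=10).fit(np.asarray(series))
    # km.cluster_centers_[i] is the average profile predicted for program i.
    return km.cluster_centers_
```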
The K-means algorithm can be parallelized and distributed over Fog nodes in order to achieve better performance and lessen the load on each node. Indeed, in order to fully exploit the Fog stratum, we need to rethink the distribution of the computation to also determine the best hardware allocation: this is where the application of our approach comes into play.
In our case study we focus on the parallelization of the clustering algorithms, with particular attention to the k-means implementation. As will be shown in Section 5 through code examples, the task_parallel_region primitives are mainly used, together with a distributed container approach, as seen in Section 3.3. The Bag of Tasks Parallel Pattern is used in our test case.
**5. Application of the Approach and Experimental Results**
In this Section we show how we have applied our approach by using a specific parallel primitive, and we compare the results obtained by running the parallelized code on two Raspberry Pis against the sequential code running on a centralized server acting as a Gateway.

In particular, we have focused on the parallelization of a clustering algorithm, which is executed on a single device (the Home Gateway) in the current CoSSMic scenario. In the following, we use Python as the reference programming language.
The sequential program run on the Gateway is simply started through the execution of a **compute_cluster** function, whose signature is as follows: **_compute_cluster(run, data, history)_**, where run is the maximum number of consecutive iterations the clusterization algorithm can run before stopping and giving a result, data is a reference to the data to be analyzed, and history reports the cluster configuration obtained at the previous run.
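For reference, the following is a schematic, self-contained sketch of such an entry point; the body (a plain Lloyd-style k-means iteration warm-started from history) is our illustrative assumption, since the actual CoSSMic implementation is not shown in the paper.

```
# Schematic sketch of the sequential entry point (illustrative body):
import numpy as np

def compute_cluster(run, data, history):
    """run: max number of iterations; data: points to analyse (here an
    array); history: centroids obtained at the previous run (warm start)."""
    centroids = np.array(history, dtype=float)
    points = np.asarray(data, dtype=float)
    for _ in range(run):
        # Assign each point to its nearest centroid, then recompute means.
        d = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for c in range(len(centroids)):
            if np.any(labels == c):
                centroids[c] = points[labels == c].mean(axis=0)
    return centroids
```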
The algorithm has been built to be embarrassingly parallel, so a data or task parallel region can be immediately adopted.

As shown in Listing 1, we first provide a definition of the Task Parallel primitive, then we pass the arguments which should be fed to the clusterization function to a parser, in order to format and prepare them for the parallel execution. Such arguments are necessary to execute the parallel function and to correctly store the input/output data. Finally, we simply invoke the Task Parallel Region primitive using the function to be parallelized as one of its arguments.
Listing 1: Task Parallel Primitive: definition and application.
```
usage = "usage:␣%prog␣[options]␣filename"
parser = ArgumentParser(usage)
parser. add_argument("filename", metavar="FILENAME")
parser. add_argument("-r", "--runs", default =0,
dest="runs", metavar="RUNS", type=int,
help="number␣of␣runs")
parser. add_argument("-d", "--docker", default =0,
dest="docker", metavar="DOCKER", type=int,
help="number␣of␣dockers")
parser. add_argument("-n", "--history", default =0,
dest="history", metavar="RUNID", type=int,
help="number␣of␣timeseries␣for␣clustering")
data_dir = "./ paper_data"
args = parser.parse_args ()
history = args.history
filename = args.filename
docker = args.docker
task_parallel_docker (list(range(args.runs)), docker, compute_cluster,
filename, history)
```
The YAML configuration used to set up the Docker containers running on the final device is provided in Listing 2. In the proposed configuration, one master and 4 slaves have been taken into consideration. The provided code only reports the configuration shared by the master and one of the slaves, as they are all identical. In particular, the instructions that will be executed by the master and the slaves are included in two Python files, which, in the future, will be automatically produced by a parser.
Listing 2: Master and Slave configurations.
```
# Service-level fragment shared by the master and the slave containers
# (the service names and image definitions are omitted here).
volumes:
  - type: bind
    source: ./
    target: /fog
networks:
  - redis-network
stdin_open: true
tty: true
```
The Docker containers run simultaneously on the target environment, be it a Raspberry Pi or the centralized server. On the Raspberry Pis, each container runs on a different virtual CPU. Observing the measurements reported in Figure 4, it is clear that CPU001 is in charge of the master container and of one of the slaves. Also, from Figure 5 it is possible to determine that not many process switches take place during the execution. Overall, the Raspberry Pis are not overwhelmed by the computations, so they can still be exploited for other concurrent tasks.
**Figure 4. CPU utilization in one of the Raspberries.**
**Figure 5. Process Switching during execution.**
The execution times are, of course, quite different if we compare the Raspberry Pis with the centralized server.

As shown in Figure 6, the centralized server is far more efficient than the single Raspberry Pis, whose performances differ slightly from one another. However, this last fact is simply due to the small difference in the reading speed of the SD cards used in the two devices: 22.9 MB/s for Raspberry 1 and 23.1 MB/s for Raspberry 2.
**Figure 6. Comparison between Execution Times for different cluster dimensions.**
If we consider the medium size of a cluster, which in our case is taken to be 1000 points, the server takes 986 s on average to complete the computation. If we consider a configuration with N Raspberry Pis, each of them being given a portion of the points to be clustered, we would roughly obtain an execution time of 800 s with 10 Raspberry Pis working in parallel and not completely dedicated to the specific clusterization task.

We are not taking data transmission times into consideration at the moment, as all data are transmitted within the same local network, with small to negligible delays.

Furthermore, the Raspberry Pis would be available to host the computation of data coming from different households in the neighborhood, provided they can access a common communication network, as in the current CoSSMic scenario.
**6. Conclusions and Future Work**
In this paper, an approach for the parallelization of algorithms on distributed devices residing in a Fog Computing layer has been presented. In particular, the approach has been tested against a real-case scenario, provided by the CoSSMic European Project, regarding the clusterization and classification of data coming from sensors previously installed in households. Such clusters are then used to predict energy consumption and plan the use of the devices to maximize the use of solar energy.

What we wanted to achieve was to demonstrate that, through the application of the approach, it is possible to obtain an improvement in the algorithm's execution time.

The initial results seem promising, as with the opportune configuration of data and tasks it is possible to obtain a sensible enhancement of the algorithm's performance, provided that a sufficient number of devices (in the test case we used Raspberry Pis) is available.

However, the approach needs to be polished and completely automated, in order to reduce possible setbacks in the selection of the right configuration and to support the auto-tuning of the data and task distribution. Furthermore, in the future there will be the possibility to automatically detect parallelizable sections of code to support the user in the annotation phase, or possibly even to completely automate the whole annotation step.
**Author Contributions: Conceptualization and supervision: B.D.M.; methodology: B.D.M., S.V. and A.E.; software: S.D.; data curation: S.D. and A.E.; writing–original draft preparation, writing–review and editing: B.D.M., S.V., A.E., S.D. All authors have read and agreed to the published version of the manuscript.**
**Funding: This work has received funding from the European Union’s Horizon 2020 research and innovation**
programme under the TOREADOR project, grant agreement Number 688797 and the CoSSMic project
(Collaborating Smart Solar powered Micro grids - FP7 SMARTCITIES 2013 - Project ID: 608806).
**Conflicts of Interest: The authors declare no conflict of interest.**
**References**
1. Toreador: TrustwOrthy model-awaRE Analytics Data platfORm. Available online: http://www.toreador-project.eu/ (accessed on 30 October 2019).
2. Collaborating Smart Solar-Powered Micro-Grids. Available online: https://cordis.europa.eu/project/rcn/110134/en/ (accessed on 24 September 2019).
3. Liu, F.; Shu, P.; Jin, H.; Ding, L.; Yu, J.; Niu, D.; Li, B. Gearing resource-poor mobile devices with powerful clouds: Architectures, challenges, and applications. IEEE Wirel. Commun. 2013, 20, 14–22.
4. Gia, T.N.; Jiang, M.; Rahmani, A.M.; Westerlund, T.; Liljeberg, P.; Tenhunen, H. Fog computing in healthcare internet of things: A case study on ECG feature extraction. In Proceedings of the 2015 IEEE International Conference on Computer and Information Technology; Ubiquitous Computing and Communications; Dependable, Autonomic and Secure Computing; Pervasive Intelligence and Computing, Liverpool, UK, 26–28 October 2015.
5. Bonomi, F.; Milito, R.; Zhu, J.; Addepalli, S. Fog Computing and Its Role in the Internet of Things. In Proceedings of the First Edition of the MCC Workshop on Mobile Cloud Computing, Helsinki, Finland, 17 August 2012. [CrossRef](http://dx.doi.org/10.1145/2342509.2342513)
6. Vaquero, L.M.; Rodero-Merino, L. Finding Your Way in the Fog: Towards a Comprehensive Definition of Fog Computing. SIGCOMM Comput. Commun. Rev. 2014, 44, 27–32. [CrossRef](http://dx.doi.org/10.1145/2677046.2677052)
7. Skarlat, O.; Nardelli, M.; Schulte, S.; Borkowski, M.; Leitner, P. Optimized IoT service placement in the fog. Serv. Oriented Comput. Appl. 2017, 11, 427–443. [CrossRef](http://dx.doi.org/10.1007/s11761-017-0219-8)
8. Venticinque, S.; Amato, A. A methodology for deployment of IoT application in fog. J. Ambient Intell. Hum. Comput. 2019, 10, 1955–1976. [CrossRef](http://dx.doi.org/10.1007/s12652-018-0785-4)
9. Totoni, E.; Anderson, T.A.; Shpeisman, T. HPAT: High performance analytics with scripting ease-of-use. In Proceedings of the International Conference on Supercomputing, Chicago, IL, USA, 14–16 June 2017.
10. Yi, X.; Liu, F.; Liu, J.; Jin, H. Building a network highway for big data: Architecture and challenges. IEEE Netw. 2014, 28, 5–13. [CrossRef](http://dx.doi.org/10.1109/MNET.2014.6863125)
11. Gao, B.; Zhou, Z.; Liu, F.; Xu, F. Winning at the Starting Line: Joint Network Selection and Service Placement for Mobile Edge Computing. In Proceedings of the IEEE INFOCOM 2019—IEEE Conference on Computer Communications, Paris, France, 29 April–2 May 2019. [CrossRef](http://dx.doi.org/10.1109/INFOCOM.2019.8737543)
12. Jin, Y.; Liu, F.; Yi, X.; Chen, M. Reducing Cellular Signaling Traffic for Heartbeat Messages via Energy-Efficient D2D Forwarding. In Proceedings of the 2017 IEEE 37th International Conference on Distributed Computing Systems (ICDCS), Atlanta, GA, USA, 5–8 June 2017. [CrossRef](http://dx.doi.org/10.1109/ICDCS.2017.236)
13. Chen, Q.; Zheng, Z.; Hu, C.; Wang, D.; Liu, F. Data-driven Task Allocation for Multi-task Transfer Learning on the Edge. In Proceedings of the 2019 IEEE 39th International Conference on Distributed Computing Systems (ICDCS), Dallas, TX, USA, 7–10 July 2019. [CrossRef](http://dx.doi.org/10.1109/ICDCS.2019.00107)
14. Chen, S.; Jiao, L.; Wang, L.; Liu, F. An Online Market Mechanism for Edge Emergency Demand Response via Cloudlet Control. In Proceedings of the IEEE INFOCOM 2019—IEEE Conference on Computer Communications, Paris, France, 29 April–2 May 2019. [CrossRef](http://dx.doi.org/10.1109/INFOCOM.2019.8737574)
15. Di Martino, B.; D'Angelo, S.; Esposito, A.; Martinez, I.; Montero, J.; Pariente Lobo, T. Parallelization and Deployment of Big Data algorithms: The TOREADOR approach. In Proceedings of the 2018 32nd International Conference on Advanced Information Networking and Applications Workshops (WAINA), Krakow, Poland, 16–18 May 2018.
16. Di Martino, B.; Esposito, A.; D'Angelo, S.; Maisto, S.A.; Nacchia, S. A Compiler for Agnostic Programming and Deployment of Big Data Analytics on Multiple Platforms. IEEE Trans. Parallel Distrib. Syst. 2019, 30, 1920–1931. [CrossRef](http://dx.doi.org/10.1109/TPDS.2019.2901488)
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
# sensors
_Article_
## Iterative Diffusion-Based Distributed Cubature Gaussian Mixture Filter for Multisensor Estimation
**Bin Jia** **[1], Tao Sun** **[2]** **and Ming Xin** **[2,]***
1 Intelligent Fusion Technology, Germantown, MD 20876, USA; [email protected]
2 Department of Mechanical and Aerospace Engineering, University of Missouri, Columbia, MO 65211, USA;
[email protected]
***** Correspondence: [email protected]; Tel.: +1-573-882-7933
Academic Editor: Xue-Bo Jin
Received: 30 July 2016; Accepted: 15 October 2016; Published: 20 October 2016
**Abstract: In this paper, a distributed cubature Gaussian mixture filter (DCGMF) based on an iterative**
diffusion strategy (DCGMF-ID) is proposed for multisensor estimation and information fusion.
The uncertainties are represented as Gaussian mixtures at each sensor node. A high-degree cubature
Kalman filter provides accurate estimation of each Gaussian mixture component. An iterative
diffusion scheme is utilized to fuse the mean and covariance of each Gaussian component obtained
from each sensor node. The DCGMF-ID extends the conventional diffusion-based fusion strategy by
using multiple iterative information exchanges among neighboring sensor nodes. The convergence
property of the iterative diffusion is analyzed. In addition, it is shown that the convergence of the
iterative diffusion can be interpreted from the information-theoretic perspective as minimization
of the Kullback–Leibler divergence. The performance of the DCGMF-ID is compared with the
DCGMF based on the average consensus (DCGMF-AC) and the DCGMF based on the iterative
covariance intersection (DCGMF-ICI) via a maneuvering target-tracking problem using multiple
sensors. The simulation results show that the DCGMF-ID has better performance than the DCGMF
based on noniterative diffusion, which validates the benefit of iterative information exchanges.
In addition, the DCGMF-ID outperforms the DCGMF-ICI and DCGMF-AC when the number of
iterations is limited.
**Keywords: sensor networks; distributed estimation; Gaussian mixture; diffusion**
**1. Introduction**
With the rapid progress of the sensing and computing technologies, multiple sensors have been
widely used in estimation applications, such as target tracking, wireless sensor networks, guidance
and navigation, and environmental monitoring. Effective information fusion from multiple sensors is
of utmost importance. It can be done in a centralized or distributed manner. For the centralized fusion,
the information obtained by all sensors is collected and processed by the central node. This approach
enables the global solution but requires a large amount of power and resources in communication
and computation. The failure or delay on the central node may significantly degrade the estimation
performance. For the distributed estimation, the information at each sensor node is processed locally
and then fused to establish the global information by well-designed distributed fusion algorithms using
only the local information. In contrast to the centralized estimation, the distributed estimation offers
a number of advantages, such as scalability, robustness to a single point of failure, low communication
load, and low operation cost.
When the estimation is processed at each local sensor, it is a regular filtering problem, which has
been intensively researched for decades. In many practical estimation problems, the system dynamics
and measurement equations are nonlinear and the uncertainties or noises are non-Gaussian. To address
this challenging filtering problem, Gaussian mixture-based filters [1] and sequential Monte Carlo-based
filters [2] are two classes of widely used approaches. The rationale behind the Gaussian mixture-based
filters is that any probability density function (pdf) can be approximated by the summation of a finite
number of Gaussian distributions. The Monte Carlo-based filters or particle filters use a large number
of particles to represent the pdf. Although some solutions have been proposed to alleviate the
curse of the dimensionality problem for application of particle filters in high-dimensional problems,
the computation complexity is still prohibitive. Therefore, from the computation efficiency perspective
in the sensor network setting, the Gaussian mixture filter is a better alternative and will be used in
this paper for multiple sensor estimation. The mean and covariance of each Gaussian component are
predicted and updated using the cubature Kalman filtering (CKF) algorithm [3,4]. The fifth-degree
CKF [4] is used because it is more accurate than the conventional third-degree CKF in [3] and other
well-known nonlinear Gaussian filters such as the extended Kalman filter (EKF) [5] and the unscented
Kalman filter (UKF) [6], which is a third-degree Gaussian filter as well.
After the local estimation is obtained at each sensor node, information fusion of the estimates
from multiple sensors is conducted using the distributed estimation algorithm. Distributed estimation
has been a research subject of considerable interest in the past few years [7–17]. Olfati-Saber [7,8]
first addressed the distributed estimation problem by reducing it to two average consensus filters,
one for weighted measurement and the other for information form of the covariance matrix. Because
each sensor node only communicates with its immediate neighbors, the average consensus strategy
is effective to obtain the average of each node’s initial value. In each iteration, each node updates
its state by weighting its prior state and its neighbors’ prior states. When the number of iterations
approaches infinity, average consensus can be achieved. In the consensus-based distributed estimation
framework, certain requirement on the network topology is usually necessary. In [9,10], information
from an individual node is propagated through the entire network via a new information-weighted
consensus scheme. Although each node has limited observability of the states, even including naive
agents (not having measurement), the proposed information-weighted consensus filter for distributed
maximum a posterior parameter estimation and state estimation is capable of obtaining a final estimate
comparable to that obtained from the centralized filter. However, it only considered the scenario that
all local estimates and measurement errors are independent or uncorrelated. Sun et al. [11] proposed
a batch covariance intersection technique combined with average consensus algorithms to address
the correlation issue. However, the Gaussian assumption is made on all estimation processes, which may be inadequate for highly nonlinear and/or non-Gaussian systems. On the other hand, due to
the constraints on energy and communication frequency, a large number of iterations in consensus
algorithms is not feasible in practice, especially for systems in which the time interval between
two consecutive measurements is very small.
Diffusion strategies for distributed estimation proposed in [12] overcome the disadvantage of
excessive energy and communication requirements in the average consensus-based estimation. There
are two steps between consecutive filtering cycle in the diffusion algorithm: incremental and diffusion.
The incremental step runs a local filtering at each node with a regular time update and multiple
measurement updates by incrementally incorporating measurements from every neighboring node.
The diffusion step computes the ultimate fused estimate by convex combination of all estimates from
the present node and its neighbors. Each node only communicates with its direct neighbors twice in
each filtering cycle. The first communication collects the innovation information from its neighbors.
The second communication exchanges the state estimate among neighbors from the incremental step
to do the diffusion update. The estimate obtained through the diffusion strategy has been proved
unbiased for linear systems. The paper [12] also provides the mean, mean square, and convergence
analysis and shows that the estimate is stable under the assumption that the state space model is time
invariant and each local system (joint measurement model of one node and its immediate neighbors)
is detectable and stabilizable. As long as the individual node satisfies the assumption, this diffusion
strategy does not have any requirement for the network topology. Diffusion recursive least-squares
(RLS) algorithm was developed in [13] to deal with the distributed estimation problem and achieved
the performance close to the global solution. It does not require transmission or inversion of matrices
and, therefore, reduces computational complexity. It was shown that the distributed solution is
asymptotically unbiased and stable if the regressors are zero-mean and temporally independent,
and the inverse of covariance matrices at different time indexes can be replaced by its expected
value. A diffusion least-mean-squares (LMS) algorithm was proposed in [14] with two versions:
adapt-then-combine and combine-then-adapt. Mean and mean square performance were analyzed.
Besides, the scheme of optimizing the diffusion LMS weights was discussed. The work of [15] extended
the work in [12] by using the covariance intersection to yield a consistent estimate and relaxing the
assumption made in [12]. It only requires partial local uniform observability rather than all local
systems’ observability assumed in [12]. The case of no local uniform observability was discussed
in [15] as well but relied on the consensus filter. Hlinka et al. [16] proposed the distributed estimation
scheme using the iterative covariance intersection (ICI). Like the consensus strategy, the ICI needs
recursive update of each node’s state and covariance until they converge. Each iteration can guarantee
a consistent estimate. However, the ICI does not include the incremental update as the diffusion does.
Most of the aforementioned work assumes a linear dynamic process and measurement with
Gaussian noise or initial uncertainty with Gaussian pdf. For highly nonlinear dynamic systems
with non-Gaussian statistics, the performance of those distributed estimation methods may degrade.
In this paper, we propose a new distributed Gaussian mixture filtering based on an iterative diffusion
strategy to handle the distributed nonlinear estimation. There is limited literature on the distributed
Gaussian mixture filtering. In [17], the likelihood consensus strategy was used in the design of
a distributed Gaussian mixture filter in a sensor network that was not fully connected. Unlike the
original consensus-based distributed estimation, the Gaussian mixture weight cannot be updated
through the consensus filter directly since it needs to evaluate a product term of the likelihood function.
By the natural logarithm transformation, the product term is transformed to a summation to which
the consensus algorithm can be applied. The contributions of the proposed approach in this paper
are: (1) a new distributed Gaussian mixture filtering framework with an embedded cubature rule
can more accurately handle nonlinear and non-Gaussian distributed estimation problems; (2) the
iterative diffusion strategy provides better fusion performance than the original diffusion method,
the average consensus, and the ICI; (3) it does not need intensive communications as required in the
consensus-based estimation; (4) the convergence analysis and information theoretic interpretation of
the proposed approach are given.
The remainder of this paper is organized as follows. In Section 2, a centralized cubature Gaussian
mixture filter is introduced. The distributed cubature Gaussian mixture filter using iterative diffusion
is proposed in Section 3. In Section 4, the performance demonstration via a target-tracking problem is
presented. Concluding remarks are given in Section 5.
**2. Centralized Cubature Gaussian Mixture Filter**
Consider a class of nonlinear discrete-time dynamic systems described by
$$\mathbf{x}_k = f(\mathbf{x}_{k-1}) + \mathbf{v}_{k-1} \tag{1}$$

$$\mathbf{y}_{k,j} = h_j(\mathbf{x}_k) + \mathbf{n}_{k,j} \tag{2}$$

where $\mathbf{x}_k \in \mathbb{R}^n$ is the state vector and $\mathbf{y}_{k,j} \in \mathbb{R}^m$ is the measurement by the $j$th sensor, where the subscript "$j$" denotes the sensor index. $\mathbf{v}_{k-1}$ and $\mathbf{n}_{k,j}$ are the process noise and measurement noise, respectively, and their probability density functions (pdf) are represented by the Gaussian mixtures (GM)

$$p(\mathbf{v}_k) = \sum_{p=1}^{N_p} \alpha^p\, \mathcal{N}\big(\mathbf{v}_k;\ \bar{\mathbf{v}}_k^p,\ \mathbf{Q}_k^p\big) \quad \text{and} \quad p(\mathbf{n}_{k,j}) = \sum_{q=1}^{N_q} \alpha_j^q\, \mathcal{N}\big(\mathbf{n}_{k,j};\ \bar{\mathbf{n}}_{k,j}^q,\ \mathbf{R}_{k,j}^q\big),$$

respectively, where $\mathcal{N}\big(\mathbf{n}_{k,j}^q;\ \bar{\mathbf{n}}_{k,j}^q,\ \mathbf{R}_{k,j}^q\big)$ denotes a normal distribution with mean $\bar{\mathbf{n}}_{k,j}^q$ and covariance $\mathbf{R}_{k,j}^q$, and $\alpha$ is the weight of the Gaussian component. The superscripts "$p$" and "$q$" denote the $p$th and $q$th component of the GM; $N_p$ and $N_q$
denote the number of Gaussian components. Due to the non-Gaussian noise and nonlinear dynamics, the estimated state will have a non-Gaussian pdf, which can be modeled as a GM as well.

_2.1. Cubature Gaussian Mixture Kalman Filter_

Assume that the initial state pdf at the beginning of each filtering cycle can be represented by the GM $p(\mathbf{x}) = \sum_{l=1}^{N_l} \alpha^l\, \mathcal{N}\big(\mathbf{x};\ \hat{\mathbf{x}}^l, \mathbf{P}^l\big)$. In Figure 1, one cycle of the cubature Gaussian mixture filter (CGMF) is illustrated. The cubature Kalman filter (CKF) [3,4] runs on each component of the GM to predict and update the component's mean and covariance. The prediction step of the CKF is first used for each of the $N_l$ GM components. Note that after the prediction step, there are $N_l \times N_p$ Gaussian components, contributed by the GM of the initial state pdf and the GM of the process noise. After that, the update step of the CKF is used for each Gaussian component and leads to $N_l \times N_p \times N_q$ Gaussian components, added by the GM of the measurement noise. It can be seen that the number of Gaussian components increases after each filtering cycle. To limit the computational complexity, the number of Gaussian components has to be reduced after the update step. In the following, the prediction step and the update step for each Gaussian component using the CKF framework [3,4] are introduced.

**Figure 1. One filtering cycle of the cubature Gaussian mixture filter (CGMF).**
2.1.1. Prediction Step

Given the initial estimate of the mean $\hat{\mathbf{x}}^l_{k-1|k-1}$ and covariance $\mathbf{P}^l_{k-1|k-1}$ at time $k-1$ for the $l$th Gaussian component, the predicted mean and covariance can be computed by the quadrature approximation [3,4]

$$\hat{\mathbf{x}}^{l,p}_{k|k-1} = \sum_{i=1}^{N_u} W_i\, f\big(\boldsymbol{\xi}^l_{k-1,i}\big) + \bar{\mathbf{v}}^p_{k-1} \tag{3}$$

$$\mathbf{P}^{l,p}_{k|k-1} = \sum_{i=1}^{N_u} W_i \Big[f\big(\boldsymbol{\xi}^l_{k-1,i}\big) - \big(\hat{\mathbf{x}}^{l,p}_{k|k-1} - \bar{\mathbf{v}}^p_{k-1}\big)\Big] \Big[f\big(\boldsymbol{\xi}^l_{k-1,i}\big) - \big(\hat{\mathbf{x}}^{l,p}_{k|k-1} - \bar{\mathbf{v}}^p_{k-1}\big)\Big]^T + \mathbf{Q}^p_{k-1} \tag{4}$$

where $N_u$ is the total number of cubature points, $l = 1, \cdots, N_l$, $p = 1, \cdots, N_p$. The superscript "$l,p$" denotes the value using the $l$th Gaussian component of the GM of the initial state pdf and the $p$th component of the GM of the process noise. $\bar{\mathbf{v}}^p_{k-1}$ is the mean of the $p$th Gaussian component of the GM representation of the process noise; $\boldsymbol{\xi}^l_{k-1,i}$ is the transformed cubature point given by

$$\boldsymbol{\xi}^l_{k-1,i} = \mathbf{S}^l_{k-1}\boldsymbol{\gamma}_i + \hat{\mathbf{x}}^l_{k-1|k-1}, \qquad \mathbf{P}^l_{k-1|k-1} = \mathbf{S}^l_{k-1}\big(\mathbf{S}^l_{k-1}\big)^T \tag{5}$$

The cubature points $\boldsymbol{\gamma}_i$ and weights $W_i$ of the third-degree cubature rule [3] are given by

$$\boldsymbol{\gamma}_i = \begin{cases} \sqrt{n}\,\mathbf{e}_i & i = 1, \cdots, n \\ -\sqrt{n}\,\mathbf{e}_{i-n} & i = n+1, \cdots, 2n \end{cases} \tag{6a}$$

$$W_i = 1/(2n), \quad i = 1, \cdots, 2n \tag{6b}$$

where $\mathbf{e}_i$ is a unit vector with the $i$th element being 1.
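For illustration, the third-degree prediction step (Equations (3)–(6)) can be transcribed numerically as follows; this is our own sketch of the rule, not code from the paper.

```
# Numerical sketch of the third-degree rule (Eqs. (3)-(6)) for one Gaussian
# component: generate the 2n cubature points, propagate them through f, and
# form the predicted mean and covariance.
import numpy as np

def third_degree_points(n):
    # gamma_i = +/- sqrt(n) e_i and W_i = 1/(2n)  (Eqs. (6a)-(6b))
    gammas = np.sqrt(n) * np.vstack([np.eye(n), -np.eye(n)])
    weights = np.full(2 * n, 1.0 / (2 * n))
    return gammas, weights

def ckf_predict(f, x_hat, P, v_bar, Q):
    """Prediction of one component (Eqs. (3)-(5))."""
    n = len(x_hat)
    gammas, W = third_degree_points(n)
    S = np.linalg.cholesky(P)                 # P = S S^T  (Eq. (5))
    xi = gammas @ S.T + x_hat                 # transformed cubature points
    fx = np.array([f(p) for p in xi])
    x_pred = W @ fx + v_bar                   # Eq. (3)
    dev = fx - (x_pred - v_bar)               # deviations about the mean of f
    P_pred = (W[:, None] * dev).T @ dev + Q   # Eq. (4)
    return x_pred, P_pred
```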
In this paper, the fifth-degree cubature rule [4] is also used to improve the estimation accuracy. The weights $W_i$ and points $\boldsymbol{\gamma}_i$ of the fifth-degree rule are given by

$$W_1 = 2/(n+2) \tag{7a}$$

$$W_i = \frac{n^2(7-n)}{2(n+1)^2(n+2)^2}, \quad i = 2, \cdots, 2n+3 \tag{7b}$$

$$W_i = \frac{2(n-1)^2}{(n+1)^2(n+2)^2}, \quad i = 2n+4, \cdots, n^2+3n+3 \tag{7c}$$

$$\boldsymbol{\gamma}_1 = \mathbf{0} \tag{8a}$$

$$\boldsymbol{\gamma}_i = \sqrt{n+2} \times \mathbf{s}_{i-1}, \quad i = 2, \cdots, n+2 \tag{8b}$$

$$\boldsymbol{\gamma}_i = -\sqrt{n+2} \times \mathbf{s}_{i-n-2}, \quad i = n+3, \cdots, 2n+3 \tag{8c}$$

$$\boldsymbol{\gamma}_i = \sqrt{n+2} \times \hat{\mathbf{s}}_{i-2n-3}, \quad i = 2n+4, \cdots, 2n+3+n(n+1)/2 \tag{8d}$$

$$\boldsymbol{\gamma}_i = -\sqrt{n+2} \times \hat{\mathbf{s}}_{i-(2n+3+n(n+1)/2)}, \quad i = 2n+4+n(n+1)/2, \cdots, n^2+3n+3 \tag{8e}$$

where the points $\mathbf{s}_i$ are given by

$$\mathbf{s}_i = [p_{i,1}, p_{i,2}, \cdots, p_{i,n}], \quad i = 1, 2, \cdots, n+1 \tag{9}$$

$$p_{i,j} \triangleq \begin{cases} -\sqrt{\dfrac{n+1}{n(n-j+2)(n-j+1)}} & j < i \\[2ex] \sqrt{\dfrac{(n+1)(n-i+1)}{n(n-i+2)}} & i = j \\[2ex] 0 & j > i \end{cases} \tag{10}$$

and

$$\{\hat{\mathbf{s}}_i\} \triangleq \left\{ \sqrt{\frac{n}{2(n-1)}}\, (\mathbf{s}_k + \mathbf{s}_l) : k < l;\ k, l = 1, 2, \cdots, n+1 \right\} \tag{11}$$
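As a cross-check of Equations (7)–(11), the following sketch builds the fifth-degree point set and weights with NumPy (valid for $n \geq 2$); it is our own transcription of the formulas above.

```
# Sketch of the fifth-degree point set (Eqs. (7)-(11)), n >= 2: the total of
# 1 + 2(n+1) + n(n+1) = n^2 + 3n + 3 points matches the index ranges above.
import numpy as np
from itertools import combinations

def fifth_degree_points(n):
    # Simplex vectors s_1, ..., s_{n+1}  (Eqs. (9)-(10))
    s = np.zeros((n + 1, n))
    for i in range(1, n + 2):
        for j in range(1, n + 1):
            if j < i:
                s[i - 1, j - 1] = -np.sqrt((n + 1.0) /
                                           (n * (n - j + 2) * (n - j + 1)))
            elif j == i:
                s[i - 1, j - 1] = np.sqrt((n + 1.0) * (n - i + 1) /
                                          (n * (n - i + 2)))
    # Pairwise combinations s_hat  (Eq. (11))
    s_hat = np.array([np.sqrt(n / (2.0 * (n - 1))) * (s[k] + s[l])
                      for k, l in combinations(range(n + 1), 2)])
    r = np.sqrt(n + 2.0)
    points = np.vstack([np.zeros((1, n)),
                        r * s, -r * s, r * s_hat, -r * s_hat])
    # Weights (Eqs. (7a)-(7c)); they sum to one.
    w1 = 2.0 / (n + 2)
    w2 = n ** 2 * (7.0 - n) / (2.0 * (n + 1) ** 2 * (n + 2) ** 2)
    w3 = 2.0 * (n - 1) ** 2 / ((n + 1) ** 2 * (n + 2) ** 2)
    weights = np.concatenate([[w1],
                              np.full(2 * (n + 1), w2),
                              np.full(2 * len(s_hat), w3)])
    return points, weights
```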
2.1.2. Update Step

Given the measurement $\mathbf{y}_k$, the mean and covariance of each Gaussian component are updated by

$$\hat{\mathbf{x}}^{l,p,q}_{k|k} = \hat{\mathbf{x}}^{l,p}_{k|k-1} + \mathbf{L}^{l,p,q}_k \big(\mathbf{y}_k - \bar{\mathbf{z}}^{l,p,q}_k\big) \tag{12}$$

$$\mathbf{P}^{l,p,q}_{k|k} = \mathbf{P}^{l,p}_{k|k-1} - \mathbf{L}^{l,p,q}_k \big(\mathbf{P}^{l,p,q}_{xz}\big)^T \tag{13}$$

$$\mathbf{L}^{l,p,q}_k = \mathbf{P}^{l,p,q}_{xz} \big(\mathbf{R}^q_k + \mathbf{P}^{l,p,q}_{zz}\big)^{-1} \tag{14}$$

$$\bar{\mathbf{z}}^{l,p,q}_k = \sum_{i=1}^{N_u} W_i\, h\big(\tilde{\boldsymbol{\xi}}^{l,p}_{k,i}\big) + \bar{\mathbf{n}}^q_k \tag{15}$$

$$\mathbf{P}^{l,p,q}_{xz} = \sum_{i=1}^{N_u} W_i \Big[\tilde{\boldsymbol{\xi}}^{l,p}_{k,i} - \big(\hat{\mathbf{x}}^{l,p}_{k|k-1} - \bar{\mathbf{v}}^p_k\big)\Big] \Big[h\big(\tilde{\boldsymbol{\xi}}^{l,p}_{k,i}\big) - \big(\bar{\mathbf{z}}^{l,p,q}_k - \bar{\mathbf{n}}^q_k\big)\Big]^T \tag{16}$$

$$\mathbf{P}^{l,p,q}_{zz} = \sum_{i=1}^{N_u} W_i \Big[h\big(\tilde{\boldsymbol{\xi}}^{l,p}_{k,i}\big) - \big(\bar{\mathbf{z}}^{l,p,q}_k - \bar{\mathbf{n}}^q_k\big)\Big] \Big[h\big(\tilde{\boldsymbol{\xi}}^{l,p}_{k,i}\big) - \big(\bar{\mathbf{z}}^{l,p,q}_k - \bar{\mathbf{n}}^q_k\big)\Big]^T \tag{17}$$

$\bar{\mathbf{n}}^q_k$ is the mean of the $q$th Gaussian component of the GM representation of the measurement noise; $\tilde{\boldsymbol{\xi}}^{l,p}_{k,i}$ is the transformed cubature point given by

$$\tilde{\boldsymbol{\xi}}^{l,p}_{k,i} = \tilde{\mathbf{S}}^{l,p}_k \boldsymbol{\gamma}_i + \hat{\mathbf{x}}^{l,p}_{k|k-1}, \qquad \mathbf{P}^{l,p}_{k|k-1} = \tilde{\mathbf{S}}^{l,p}_k \big(\tilde{\mathbf{S}}^{l,p}_k\big)^T \tag{18}$$
�
**Remark 1: The weight for the Gaussian component N** **x; ˆx[l][,][p][,][q]**
_k|k_ [,][ P]k[l][,]|[p]k[,][q]
_Nl_ _Np_ _Nq_ � �
_represented by_ _l∑=1_ _p∑=1_ _q∑=1_ _α[l]α[p]α[q]_ _N_ **x; ˆx[l]k[,]|[p]k[,][q][,][ P]k[l][,]|[p]k[,][q]** _._
�
_is α[l]_ _α[p]_ _α[q]. The final GM can be_
_·_ _·_
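The update equations above can be exercised with a short sketch. Everything below (the range/angle function h, the numbers, and the zero noise means) is an illustrative assumption of ours, not the paper's code; the sign conventions for the noise means follow Equations (15)-(17) as printed:

```python
# Minimal sketch of the cubature measurement update (Eqs. 12-18) for one
# Gaussian-component triple (l, p, q).
import numpy as np

def cubature_update(x_pred, P_pred, y, h, v_mean, n_mean, R, pts, wts):
    """One measurement update; pts/wts are cubature points and weights."""
    S = np.linalg.cholesky(P_pred)              # P_{k|k-1} = S S^T (Eq. 18)
    Xi = x_pred + pts @ S.T                     # transformed cubature points
    Z = np.array([h(xi) for xi in Xi])          # propagated points h(xi~)
    z_pred = wts @ Z + n_mean                   # Eq. 15
    dX = Xi - x_pred - v_mean                   # deviations as printed in Eq. 16
    dZ = Z - z_pred - n_mean
    Pxz = (dX * wts[:, None]).T @ dZ            # Eq. 16
    Pzz = (dZ * wts[:, None]).T @ dZ            # Eq. 17
    L = Pxz @ np.linalg.inv(R + Pzz)            # gain, Eq. 14
    x_upd = x_pred + L @ (y - z_pred)           # Eq. 12
    P_upd = P_pred - L @ Pxz.T                  # Eq. 13
    return x_upd, P_upd

if __name__ == "__main__":
    # Third-degree points for n = 2; range/angle measurement as a stand-in h.
    n = 2
    pts = np.vstack([np.sqrt(n) * np.eye(n), -np.sqrt(n) * np.eye(n)])
    wts = np.full(2 * n, 1.0 / (2 * n))
    h = lambda x: np.array([np.hypot(x[0], x[1]), np.arctan2(x[1], x[0])])
    x, P = cubature_update(np.array([100.0, 50.0]), 25.0 * np.eye(2),
                           np.array([112.0, 0.47]), h,
                           np.zeros(2), np.zeros(2), np.diag([1.0, 1e-3]),
                           pts, wts)
    print(x, np.diag(P))
```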
Note that the number of Gaussian components increases significantly as time evolves. To avoid an excessive computational load, some Gaussian components can be removed or merged. There are many GM reduction algorithms [18–20], such as pruning Gaussian components with negligible weights, merging nearby Gaussian components, and regenerating the GM via a Kullback–Leibler approach. In this paper, nearby Gaussian components are merged to reduce the number of Gaussian components. The detailed description of this method is omitted since it is not the focus of this paper; it can be found in [20]. Note that, to preserve the estimation accuracy, the GM reduction procedure is not necessary if the number of Gaussian components is less than a specified threshold. For the convenience of implementing the diffusion update step in the proposed distributed estimation algorithm, the number of reduced Gaussian components at each sensor node is specified a priori to be the same.
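A minimal sketch of the kind of moment-preserving pairwise merging referenced from [20] is given below; the greedy loop and the merge cost used to pick the closest pair are illustrative assumptions, not the paper's exact procedure:

```python
# Minimal sketch of Gaussian-mixture reduction by pairwise merging.
import numpy as np

def merge_pair(a1, m1, P1, a2, m2, P2):
    """Merge two weighted components, preserving the mixture's weight,
    mean, and covariance."""
    a = a1 + a2
    m = (a1 * m1 + a2 * m2) / a
    d1, d2 = m1 - m, m2 - m
    P = (a1 * (P1 + np.outer(d1, d1)) + a2 * (P2 + np.outer(d2, d2))) / a
    return a, m, P

def reduce_mixture(alphas, means, covs, max_components):
    """Greedily merge the closest pair (an illustrative Mahalanobis-type
    cost on the merged covariance) until at most max_components remain."""
    comps = list(zip(alphas, means, covs))
    while len(comps) > max_components:
        best = None
        for i in range(len(comps)):
            for j in range(i + 1, len(comps)):
                a, m, P = merge_pair(*comps[i], *comps[j])
                d = comps[i][1] - comps[j][1]
                cost = comps[i][0] * comps[j][0] / a * (d @ np.linalg.solve(P, d))
                if best is None or cost < best[0]:
                    best = (cost, i, j, (a, m, P))
        _, i, j, merged = best
        comps = [c for k, c in enumerate(comps) if k not in (i, j)] + [merged]
    return comps

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    alphas = np.full(6, 1.0 / 6.0)
    means = [rng.normal(size=2) for _ in range(6)]
    covs = [np.eye(2) for _ in range(6)]
    print(len(reduce_mixture(alphas, means, covs, 3)))   # -> 3
```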
_2.2. Centralized Cubature Gaussian Mixture Filter_
The centralized cubature Gaussian mixture filter (CCGMF) can be more conveniently expressed in the information filtering form. In the information filter, the information state and the information matrix of the Gaussian component with index $l,p,q$ at time $k-1$ are defined as $\hat{\mathbf{y}}^{l,p,q}_{k-1|k-1} = \left(\mathbf{P}^{l,p,q}_{k-1|k-1}\right)^{-1}\hat{\mathbf{x}}^{l,p,q}_{k-1|k-1}$ and $\mathbf{Y}^{l,p,q}_{k-1|k-1} = \left(\mathbf{P}^{l,p,q}_{k-1|k-1}\right)^{-1}$, respectively. The prediction of the information state and information matrix can be obtained via Equations (3) and (4). Using the information from multiple sensors, the information state and the information matrix can be updated by [4,21]
$$\hat{\mathbf{y}}^{l,p,q}_{k|k} = \hat{\mathbf{y}}^{l,p}_{k|k-1} + \sum_{j=1}^{N_{sn}} \mathbf{i}^{l,p,q}_{k,j} \qquad (19)$$

$$\mathbf{Y}^{l,p,q}_{k|k} = \mathbf{Y}^{l,p}_{k|k-1} + \sum_{j=1}^{N_{sn}} \mathbf{I}^{l,p,q}_{k,j} \qquad (20)$$

where $N_{sn}$ is the number of sensor nodes. $\hat{\mathbf{y}}^{l,p}_{k|k-1}$ and $\mathbf{Y}^{l,p}_{k|k-1}$ can be obtained from the results of Equations (3) and (4). The information state contribution $\mathbf{i}^{l,p,q}_{k,j}$ and the information matrix contribution $\mathbf{I}^{l,p,q}_{k,j}$ of the $j$th sensor are given by [4,21]
$$\mathbf{i}^{l,p,q}_{k,j} = \left(\mathbf{P}^{l,p}_{k|k-1}\right)^{-1}\mathbf{P}^{l,p}_{k|k-1,xz_j}\left(\mathbf{R}^{q}_{k,j}\right)^{-1}\left[\left(\mathbf{y}_{k,j} - \mathbf{z}^{l,p,q}_{k,j}\right) + \left(\mathbf{P}^{l,p}_{k|k-1,xz_j}\right)^T\left(\mathbf{P}^{l,p}_{k|k-1}\right)^{-T}\hat{\mathbf{x}}^{l,p}_{k|k-1}\right] \qquad (21)$$

$$\mathbf{I}^{l,p,q}_{k,j} = \left(\mathbf{P}^{l,p}_{k|k-1}\right)^{-1}\mathbf{P}^{l,p}_{k|k-1,xz_j}\left(\mathbf{R}^{q}_{k,j}\right)^{-1}\left(\mathbf{P}^{l,p}_{k|k-1,xz_j}\right)^T\left(\mathbf{P}^{l,p}_{k|k-1}\right)^{-T} \qquad (22)$$

Note that $\mathbf{z}^{l,p,q}_{k,j}$ and $\mathbf{P}^{l,p}_{k|k-1,xz_j}$ can be calculated by the cubature rules of Equations (15) and (16), respectively, given in Section 2.1.2.
**Remark 2:** From Equations (19) and (20), it can be seen that the local information contributions $\mathbf{i}^{l,p,q}_{k,j}$ and $\mathbf{I}^{l,p,q}_{k,j}$ are computed only at sensor $j$, and the total information contribution is simply the sum of the local contributions. Therefore, the information filter is more convenient for multiple-sensor estimation than the original Kalman filter.
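A small sketch of Equations (19)-(22) follows. The inputs (prediction covariance, cross-covariance, and predicted measurement) are assumed to come from the cubature machinery of Section 2.1, and the toy numbers in the demo are arbitrary:

```python
# Minimal sketch of per-sensor information contributions (Eqs. 21, 22)
# and the centralized fusion (Eqs. 19, 20).
import numpy as np

def info_contribution(P_pred, x_pred, Pxz, R, y, z_pred):
    """i_{k,j}, I_{k,j} for one sensor and one (l,p,q) component."""
    Pinv = np.linalg.inv(P_pred)
    Rinv = np.linalg.inv(R)
    H_like = Pinv @ Pxz                 # pseudo measurement matrix factor
    i_j = H_like @ Rinv @ ((y - z_pred) + Pxz.T @ Pinv.T @ x_pred)   # Eq. 21
    I_j = H_like @ Rinv @ Pxz.T @ Pinv.T                             # Eq. 22
    return i_j, I_j

def centralized_fusion(y_pred, Y_pred, contribs):
    """Eqs. (19), (20): add up all sensors' contributions."""
    y_upd = y_pred + sum(i for i, _ in contribs)
    Y_upd = Y_pred + sum(I for _, I in contribs)
    return y_upd, Y_upd

if __name__ == "__main__":
    n, d = 4, 2
    P = np.eye(n); Pxz = 0.1 * np.ones((n, d)); R = np.eye(d)
    i1, I1 = info_contribution(P, np.zeros(n), Pxz, R, np.ones(d), np.zeros(d))
    y_f, Y_f = centralized_fusion(np.zeros(n), np.linalg.inv(P), [(i1, I1)])
    print(y_f, np.diag(Y_f))
```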
The CCGMF requires the information from all sensor nodes and thus demands a large amount of communication energy, which is prohibitive for large-scale sensor networks. In the next section, an iterative diffusion-based distributed cubature Gaussian mixture filter is proposed to provide more efficient multisensor estimation.
**3. Iterative Diffusion-Based Distributed Cubature Gaussian Mixture Filter**
In distributed estimation, each sensor node processes its local estimate and then fuses the information from its neighboring nodes via distributed estimation algorithms to establish the global estimate. In this paper, a new distributed cubature Gaussian mixture filter based on iterative diffusion (DCGMF-ID) is introduced.
The diffusion strategy is more feasible in practice when the measurements need to be processed in a timely manner, without the many iterations required by consensus algorithms. The ordinary diffusion Kalman filter (DKF) [12–15] was designed for linear estimation problems. In this paper, the new DCGMF-ID integrates the cubature rule as well as the GM into the DKF framework to address the nonlinear distributed estimation problem. The prediction step of the DCGMF-ID at each sensor node uses the cubature rule given in Section 2.1.1. The update steps of the DCGMF-ID include the incremental update and the diffusion update, which are described as follows.
_3.1. Incremental Update_
Each node broadcasts its prediction information to its immediate neighbors and receives the prediction information from its immediate neighbors at the same time step. For every node $j$, once the information is received, the information state and the information matrix are updated by
$$\hat{\mathbf{y}}^{l,p,q}_{k|k,j} = \hat{\mathbf{y}}^{l,p}_{k|k-1,j} + \sum_{j'\in N_j} \mathbf{i}^{l,p,q}_{k,j'} \qquad (23)$$

$$\mathbf{Y}^{l,p,q}_{k|k,j} = \mathbf{Y}^{l,p}_{k|k-1,j} + \sum_{j'\in N_j} \mathbf{I}^{l,p,q}_{k,j'} \qquad (24)$$
where Nj denotes the set of sensor nodes containing node j and its immediate neighbors.
_3.2. Diffusion Update_
As mentioned in Section 2.1.2, the number of Gaussian components after the GM reduction at
each node is specified a priori to be the same, for the convenience of implementing the diffusion
update. The covariance intersection algorithm can be utilized for the diffusion update. The covariance
for node j can be updated by
$$\left(\mathbf{P}^{l,p,q}_{k,j}\right)^{-1} = \sum_{j'\in N_j} w^{l,p,q}_{j,j'}\left(\mathbf{P}^{l,p,q}_{k|k,j'}\right)^{-1} \qquad (25)$$

or in the information matrix form

$$\mathbf{Y}^{l,p,q}_{k,j} = \sum_{j'\in N_j} w^{l,p,q}_{j,j'}\,\mathbf{Y}^{l,p,q}_{k|k,j'} \qquad (26)$$

where $\mathbf{P}^{l,p,q}_{k|k,j'}$ denotes the covariance of the $j'$th sensor associated with the $l,p,q$th Gaussian component, and $w^{l,p,q}_{j,j'}$ is the covariance intersection weight.
The state estimation for node j can be updated by
$$\left(\mathbf{P}^{l,p,q}_{k,j}\right)^{-1}\hat{\mathbf{x}}^{l,p,q}_{k,j} = \sum_{j'\in N_j} w^{l,p,q}_{j,j'}\left(\mathbf{P}^{l,p,q}_{k|k,j'}\right)^{-1}\hat{\mathbf{x}}^{l,p,q}_{k|k,j'} \qquad (27)$$
or in the information state form

$$\hat{\mathbf{y}}^{l,p,q}_{k,j} = \sum_{j'\in N_j} w^{l,p,q}_{j,j'}\,\hat{\mathbf{y}}^{l,p,q}_{k|k,j'} \qquad (28)$$

The weights $w^{l,p,q}_{j,j'}$ are calculated by [22]

$$w^{l,p,q}_{j,j'} = \frac{1/\mathrm{tr}\left[\left(\mathbf{Y}^{l,p,q}_{k|k,j'}\right)^{-1}\right]}{\sum_{j'\in N_j} 1/\mathrm{tr}\left[\left(\mathbf{Y}^{l,p,q}_{k|k,j'}\right)^{-1}\right]}, \quad j' \in N_j; \qquad w^{l,p,q}_{j,j'} = 0, \quad j' \notin N_j \qquad (29)$$

where $\mathrm{tr}(\cdot)$ denotes the trace operation.
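The trace-based weights of Equation (29) and one diffusion update (Equations (26) and (28)) can be written compactly. The following is a minimal sketch of ours, not the authors' implementation:

```python
# Minimal sketch of the CI weights (Eq. 29) and one diffusion update
# (Eqs. 26, 28) in information form, for a single node's neighborhood N_j.
import numpy as np

def ci_weights(Y_neighbors):
    """w_{j,j'} proportional to 1/tr(Y_{j'}^{-1}) over the neighborhood."""
    inv_traces = np.array([1.0 / np.trace(np.linalg.inv(Y)) for Y in Y_neighbors])
    return inv_traces / inv_traces.sum()

def diffusion_update(y_neighbors, Y_neighbors):
    """Fused information state/matrix for node j from its neighborhood."""
    w = ci_weights(Y_neighbors)
    Y_fused = sum(wi * Yi for wi, Yi in zip(w, Y_neighbors))   # Eq. 26
    y_fused = sum(wi * yi for wi, yi in zip(w, y_neighbors))   # Eq. 28
    return y_fused, Y_fused

if __name__ == "__main__":
    Ys = [c * np.eye(2) for c in (1.0, 2.0, 4.0)]
    ys = [c * np.ones(2) for c in (1.0, 2.0, 4.0)]
    print(diffusion_update(ys, Ys))
```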
**Remark 3:** Different from conventional diffusion-based distributed estimation algorithms, the DCGMF-ID performs the diffusion update multiple times iteratively, rather than updating only once. The advantage of the iterative diffusion update is that the estimates from different sensors eventually converge.
The DCGMF-ID algorithm (Algorithm 1) can be summarized as follows:
**Algorithm 1**

**Step 1:** Each sensor node calculates the local prediction using Equations (3) and (4) and the cubature rule, and transforms it to the information state $\hat{\mathbf{y}}^{l,p}_{k|k-1,j}$ and the information matrix $\mathbf{Y}^{l,p}_{k|k-1,j}$.

**Step 2:** When new measurements are available, each node evaluates the information state contribution $\mathbf{i}^{l,p,q}_{k,j}$ and the information matrix contribution $\mathbf{I}^{l,p,q}_{k,j}$ using Equations (21) and (22).

**Step 3:** Each node communicates with its immediate neighbors to update its information state and information matrix through the incremental update (i.e., Equations (23) and (24)).

**Step 4:** Each node runs the diffusion update of Equations (26) and (28) multiple times. Let $t$ denote the $t$th iteration of the diffusion update. The iterative diffusion updates can be given by

$$\hat{\mathbf{y}}^{l,p,q}_{k|k,j}(t+1) = \sum_{j'\in N_j} w^{l,p,q}_{j,j'}(t)\,\hat{\mathbf{y}}^{l,p,q}_{k|k,j'}(t) \qquad (30a)$$

$$\mathbf{Y}^{l,p,q}_{k|k,j}(t+1) = \sum_{j'\in N_j} w^{l,p,q}_{j,j'}(t)\,\mathbf{Y}^{l,p,q}_{k|k,j'}(t) \qquad (30b)$$

When $t = t_{\max}$, the final estimates are $\mathbf{Y}^{l,p,q}_{k,j} = \mathbf{Y}^{l,p,q}_{k|k,j}(t_{\max})$ and $\hat{\mathbf{y}}^{l,p,q}_{k,j} = \hat{\mathbf{y}}^{l,p,q}_{k|k,j}(t_{\max})$. Calculate the mean $\hat{\mathbf{x}}^{l,p,q}_{k|k}$ and covariance $\mathbf{P}^{l,p,q}_{k|k}$ of each Gaussian component. The final GM can be represented by $\sum_{l=1}^{N_l}\sum_{p=1}^{N_p}\sum_{q=1}^{N_q} \alpha^l\alpha^p\alpha^q\, N\left(\mathbf{x}; \hat{\mathbf{x}}^{l,p,q}_{k|k}, \mathbf{P}^{l,p,q}_{k|k}\right)$.

**Step 5:** Conduct GM reduction.

**Step 6:** Let $k = k + 1$; continue with Step 1.
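A compact, runnable skeleton of one filtering cycle is sketched below for a single Gaussian-component triple; predict() and local_contribution() are hypothetical placeholders standing in for Equations (3)-(4) and (21)-(22), respectively, and the four-node topology in the demo is arbitrary:

```python
# Skeleton of one DCGMF-ID cycle (Algorithm 1, Steps 1-4) for one component.
import numpy as np

def predict(x, P):
    # Hypothetical placeholder: identity dynamics plus process noise.
    return x, P + 0.01 * np.eye(len(x))

def local_contribution(node, y_meas):
    # Hypothetical placeholder: zero information; a real node uses Eqs. 21, 22.
    n = len(node["x"])
    return np.zeros(n), np.zeros((n, n))

def ci_weights(Ys):
    inv_tr = np.array([1.0 / np.trace(np.linalg.inv(Y)) for Y in Ys])
    return inv_tr / inv_tr.sum()

def dcgmf_id_cycle(nodes, neighbors, measurements, t_max):
    # Step 1: local prediction, converted to information form.
    for node in nodes:
        node["x"], node["P"] = predict(node["x"], node["P"])
        node["Y"] = np.linalg.inv(node["P"])
        node["y"] = node["Y"] @ node["x"]
    # Steps 2-3: incremental update over N_j (neighbors[j] includes j itself).
    contribs = [local_contribution(nd, z) for nd, z in zip(nodes, measurements)]
    for j, node in enumerate(nodes):
        for jp in neighbors[j]:
            node["y"] = node["y"] + contribs[jp][0]
            node["Y"] = node["Y"] + contribs[jp][1]
    # Step 4: iterative diffusion update (Eqs. 30a, 30b), synchronous.
    for _ in range(t_max):
        w = [ci_weights([nodes[jp]["Y"] for jp in neighbors[j]])
             for j in range(len(nodes))]
        new = [(sum(wi * nodes[jp]["y"] for wi, jp in zip(w[j], neighbors[j])),
                sum(wi * nodes[jp]["Y"] for wi, jp in zip(w[j], neighbors[j])))
               for j in range(len(nodes))]
        for node, (y_n, Y_n) in zip(nodes, new):
            node["y"], node["Y"] = y_n, Y_n
    # Recover mean/covariance; Steps 5-6 (GM reduction, k -> k+1) follow.
    for node in nodes:
        node["P"] = np.linalg.inv(node["Y"])
        node["x"] = node["P"] @ node["y"]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    nodes = [{"x": rng.normal(size=2), "P": np.eye(2)} for _ in range(4)]
    neighbors = [[0, 1, 3], [0, 1, 2], [1, 2, 3], [0, 2, 3]]  # ring + self
    dcgmf_id_cycle(nodes, neighbors, [None] * 4, t_max=20)
    print([n["x"].round(3) for n in nodes])  # near-identical after diffusion
```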
The iterative diffusion update is identical to the iterative covariance intersection (ICI) algorithm [16]. Thus, the proposed distributed estimation has the same unbiasedness and consistency properties as the ICI. For linear systems, if the initial estimate at each sensor node is unbiased, the estimate obtained through the incremental update and the diffusion update in each filtering cycle remains unbiased. For nonlinear systems, however, unbiasedness may not be preserved. The same holds for the analysis of consistency. When the covariance intersection (CI) method is used for data fusion, consistency is ensured under the assumption that the estimate at each sensor node is consistent [23]. If it is assumed that each node's local estimate after the incremental step is consistent, i.e.,
$$\mathbf{P}_{k|k,j} \geq E\left[\left(\hat{\mathbf{x}}_{k|k,j} - \mathbf{x}_k\right)\left(\hat{\mathbf{x}}_{k|k,j} - \mathbf{x}_k\right)^T\right]$$

then, by the diffusion update, the fused estimate is still consistent because the CI is applied. Without this assumption, consistency is not guaranteed by the CI technique. For linear systems, this assumption can easily be met and consistency can be guaranteed. For nonlinear systems, the high-degree (fifth-degree) cubature rule based filtering is utilized in this paper for the local estimation at each node. It can provide more accurate estimates of $\hat{\mathbf{x}}_{k|k,j}$ and $\mathbf{P}_{k|k,j}$ than the third-degree cubature Kalman filter (CKF) and the unscented Kalman filter (UKF). Therefore, although unbiasedness and consistency cannot be guaranteed for nonlinear systems, they are better approached by the proposed distributed estimation scheme than by other distributed nonlinear filters.
It is necessary to compare the DCGMF-ID with consensus-based distributed estimation. For the iterative diffusion strategy in the DCGMF-ID, if the local estimate obtained at each node after the incremental update is consistent, the fused estimate given by the diffusion update is also consistent, regardless of how many iterations of the iterative diffusion update are performed, since the CI is applied. In addition, it was shown in [16] that the covariance and estimate from each node converge to a common value (i.e., $\lim_{t\to\infty}\mathbf{P}_{k,j}(t) = \mathbf{P}_k$ and $\lim_{t\to\infty}\hat{\mathbf{x}}_{k,j}(t) = \hat{\mathbf{x}}_k$). Recall that "$t$" represents the $t$th diffusion iteration, not the time. However, for consensus-based distributed estimation [24], even if the local estimate obtained at each node is consistent, the consistency of the fused estimate cannot be preserved unless the number of consensus iterations is infinite [24]. Because average consensus cannot be achieved in a few iterations, a multiplication by $|N|$, the cardinality of the network, will lead to an overestimate of the information, which is not desirable. Although another approach, which fuses the information from each node in order to preserve consistency, was proposed in [24], that new consensus algorithm results in higher computational complexity.
We next provide a more complete convergence analysis via the following two propositions.
**Proposition 1:** The iterative diffusion update Equations (30a) and (30b) can be represented in the general form $\boldsymbol{\eta}(t+1) = \mathbf{A}(t)\boldsymbol{\eta}(t)$, where each $(j, j')$ entry of the transition matrix $\mathbf{A}(t)$, denoted by $a_{j,j'}(t)$, corresponds to the weight $w^{l,p,q}_{j,j'}(t)$. Assume that the sensor network is connected. If there exists a positive constant $\alpha < 1$ and the following three conditions are satisfied:

(a) $a_{j,j}(t) \geq \alpha$ for all $j, t$;
(b) $a_{j,j'}(t) \in \{0\} \cup [\alpha, 1]$, $j \neq j'$;
(c) $\sum_{j'=1}^{N_{sn}} a_{j,j'}(t) = 1$ for all $j, t$;

then the estimates using the proposed DCGMF-ID reach a consensus value.
**Proof:** The proof uses Theorem 2.4 in [25]. If the connected sensor network satisfies these three conditions, then $\boldsymbol{\eta}(t)$, under the algorithm

$$\boldsymbol{\eta}(t+1) = \mathbf{A}(t)\boldsymbol{\eta}(t) \qquad (31)$$

converges to a consensus value. For the scalar case (the dimension of the state is one), $a_{j,j'}(t)$ corresponds to $w^{l,p,q}_{j,j'}(t)$, and the $j$th element of $\boldsymbol{\eta}(t)$ corresponds to the information state $\hat{\mathbf{y}}^{l,p,q}_{k|k,j}(t)$. For the vector case, the transition matrix $\mathbf{A}(t) \otimes \mathbf{I}_n$ should be applied, where $\otimes$ denotes the Kronecker product and $n$ is the dimension of the state. For the matrix case, each column of the matrix can be treated as in the vector case. □
As seen from Equation (29), the weight $w^{l,p,q}_{j,j'}(t)$ depends only on the covariance matrix. Here we assume that the covariance in the first iteration is upper bounded and that, for any $t$, no covariance matrix is equal to $\mathbf{0}$ (no uncertainty). As long as node $j$ and node $j'$ are connected, $w^{l,p,q}_{j,j'}(t) \in (0, 1)$. Thus,
condition (b) is satisfied. In addition, from Equation (29), $\sum_{j'=1}^{N_{sn}} w^{l,p,q}_{j,j'}(t) = 1$ always holds; that is, the transition matrix $\mathbf{A}(t)$ is always row-stochastic. Therefore, condition (c) is satisfied.

For any arbitrarily large $t$, say $t_{\max}$, the set of non-zero weights $\left\{w^{l,p,q}_{j,j'}(t),\ t = 1, \cdots, t_{\max}\right\}$ over all $j, j'$ is a finite set, since the number of nodes and the number of iterations are finite. There always exists a minimum value in this finite set. Thus, $\alpha$ can be chosen such that $0 < \alpha \leq \min\left\{w^{l,p,q}_{j,j'}(t)\right\}$, so that conditions (a) and (b) are satisfied.

According to Theorem 2.4 in [25] for the agreement algorithm Equation (31), the estimate $\boldsymbol{\eta}(t)$ reaches a consensus value.
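A small numerical illustration of this agreement property: with row-stochastic transition matrices whose non-zero entries are bounded away from zero, the iteration $\boldsymbol{\eta}(t+1) = \mathbf{A}(t)\boldsymbol{\eta}(t)$ drives all nodes to a common value. The ring topology and uniform weights below are illustrative assumptions:

```python
# Consensus demo for the agreement algorithm eta(t+1) = A(t) eta(t).
import numpy as np

rng = np.random.default_rng(0)
N = 6
# Ring network: each node averages over itself and its two ring neighbors,
# so A is row-stochastic with a_{jj} = 1/3 >= alpha.
A = np.zeros((N, N))
for j in range(N):
    for jp in (j - 1, j, j + 1):
        A[j, jp % N] = 1.0 / 3.0

eta = rng.normal(size=N)          # initial local "information states"
for t in range(50):
    eta = A @ eta
print(eta.round(6))               # all entries are (numerically) equal
```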
**Proposition 2: If the assumption and conditions in Proposition 1 are satisfied, the consensus estimate using the**
_DCGMF-ID is unique._
**Proof:** Let $\mathbf{U}_{0,t} = \mathbf{A}(t)\mathbf{A}(t-1)\cdots\mathbf{A}(0)$ be the backward product of the transition matrices, with $\lim_{t\to\infty}\mathbf{U}_{0,t} = \mathbf{U}^*$ according to Proposition 1. On the other hand, when consensus is achieved, the covariance matrix, or the information matrix $\mathbf{Y}^{l,p,q}_{k|k,j'}$, associated with each node becomes the same. According to Equation (29), the weights $w^{l,p,q}_{j,j'}(t)$ then converge to the same value. Thus, $\lim_{t\to\infty}\mathbf{A}(t) = \mathbf{A}^*$ and $\mathbf{A}^*\mathbf{1} = \mathbf{1}$, since $\mathbf{A}^*$ is a row-stochastic matrix, where $\mathbf{A}^* = [\mathbf{a}_1\ \mathbf{a}_2\ \cdots\ \mathbf{a}_n]^T$ with $\mathbf{a}_j$ being the $j$th row vector of the matrix $\mathbf{A}^*$. Furthermore, because $\mathbf{Y}^{l,p,q}_{k|k,j'}$ converges to the same value, from Equation (29) all the non-zero weights $w^{l,p,q}_{j,j'}(t)$, i.e., all non-zero entries of the row vector $\mathbf{a}_j$, are identical and equal to the reciprocal of the degree of the $j$th node, $\frac{1}{\delta_j}$ (where $\delta_j \triangleq$ degree of the $j$th node $\triangleq$ cardinality of $N_j$). Hence, $\mathbf{A}^*$ is deterministic given the connected sensor network. □
$\mathbf{A}^*$ is irreducible since the sensor network is connected. Moreover, the diagonal elements of $\mathbf{A}^*$ are all positive (equal to the reciprocal of the degree of each node). Hence, 1 is the unique maximum eigenvalue of $\mathbf{A}^*$ [26] and, in fact, $\mathbf{A}^*$ is a primitive matrix [26].

In the sense of consensus, $\lim_{t\to\infty}\boldsymbol{\eta}(t) = \mathbf{U}^*\boldsymbol{\eta}(0)$, and we have $\mathbf{A}^*\mathbf{U}^* = \mathbf{U}^*$, or $(\mathbf{A}^* - \mathbf{I})\mathbf{U}^* = \mathbf{0}$ (note that it is not possible for $\mathbf{U}^*$ to be $\mathbf{0}$, since it is the backward product of non-negative matrices). Each column of $\mathbf{U}^*$ belongs to the null space of $\mathbf{A}^* - \mathbf{I}$. Since 1 is the unique maximum eigenvalue of $\mathbf{A}^*$, 0 is the unique eigenvalue of $\mathbf{A}^* - \mathbf{I}$ and the dimension of the null space of $\mathbf{A}^* - \mathbf{I}$ is 1. Thus, $\mathbf{1}$ (or any scalar multiple of $\mathbf{1}$) is the unique vector belonging to the null space of $\mathbf{A}^* - \mathbf{I}$. Therefore, $\mathbf{U}^*$ is ergodic, i.e., $\mathbf{U}^* = \mathbf{1}[\alpha_1, \alpha_2, \cdots, \alpha_n]$, where $\alpha_i$ is a scalar constant. According to Theorem 4.20 in [27], $[\alpha_1, \alpha_2, \cdots, \alpha_n]$ and the consensus value of $\boldsymbol{\eta}(t)$ are unique.
The iterative diffusion update in the DCGMF-ID can be interpreted, from the information-theoretic perspective, as the process of minimizing the Kullback–Leibler divergence (KLD) [28]. In information theory, a measure of distance between different pdfs is given by the KLD. Given the local pdfs $p_i$ with weights $\pi_i$, the fused pdf $p_f$ can be obtained by minimizing the KLD:

$$p_f = \arg\min_{p} \sum_{i=1}^{N_{sn}} \pi_i\,D(p\,\|\,p_i) \qquad (32)$$

with $\sum_{i=1}^{N_{sn}} \pi_i = 1$ and $\pi_i \geq 0$. $D(p\,\|\,p_i)$ is the KLD, defined as

$$D(p\,\|\,p_i) = \int p(\mathbf{x})\log\frac{p(\mathbf{x})}{p_i(\mathbf{x})}\,d\mathbf{x} \qquad (33)$$

The KLD is always non-negative, and equals zero only when $p(\mathbf{x}) = p_i(\mathbf{x})$.
The solution to Equation (32) turns out to be [28]
$$p_f(\mathbf{x}) = \frac{\prod_{i=1}^{N_{sn}} p_i(\mathbf{x})^{\pi_i}}{\int \prod_{i=1}^{N_{sn}} p_i(\mathbf{x})^{\pi_i}\,d\mathbf{x}} \qquad (34)$$

The above equation is also the Chernoff fusion [29]. Under the Gaussian assumption, which holds for each component of the GM model in this paper, it was shown in [29] that the Chernoff fusion yields update equations identical to the covariance intersection Equations (25)–(28).
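This equivalence is easy to check numerically for two Gaussian components: completing the square in the exponent of $p_1^{\pi} p_2^{1-\pi}$ gives exactly the CI information-form update, so the log-ratio between the weighted geometric mean and the resulting Gaussian is constant in $\mathbf{x}$. The numbers below are arbitrary:

```python
# Numerical check that Chernoff fusion (Eq. 34) reduces to CI for Gaussians.
import numpy as np

P1 = np.array([[4.0, 1.0], [1.0, 3.0]])
P2 = np.array([[2.0, -0.5], [-0.5, 5.0]])
m1, m2 = np.array([1.0, 0.0]), np.array([0.0, 2.0])
w = 0.3                                   # pi_1 = w, pi_2 = 1 - w

# CI-type fusion in information form (as in Eqs. 25, 27).
Y = w * np.linalg.inv(P1) + (1 - w) * np.linalg.inv(P2)
y = w * np.linalg.solve(P1, m1) + (1 - w) * np.linalg.solve(P2, m2)
P_f, m_f = np.linalg.inv(Y), np.linalg.solve(Y, y)

def logN(x, m, P):
    d = x - m
    return -0.5 * (d @ np.linalg.solve(P, d)
                   + np.log(np.linalg.det(2 * np.pi * P)))

# The log-ratio  w*log p1 + (1-w)*log p2 - log N(x; m_f, P_f)  must be
# constant in x, i.e., p1^w * p2^(1-w) is proportional to N(x; m_f, P_f).
xs = np.random.default_rng(1).normal(size=(4, 2))
ratios = [w * logN(x, m1, P1) + (1 - w) * logN(x, m2, P2) - logN(x, m_f, P_f)
          for x in xs]
print(np.ptp(ratios))                     # ~0 up to floating-point error
```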
Therefore, from the information-theoretic perspective, the iterative diffusion update Equation (30) is equivalent to minimizing the KLD repeatedly. For instance, the diffusion update at the $t$th iteration is equivalent to

$$p_{f,j}(t+1) = \arg\min_{p_j(t+1)} \sum_{j'\in N_j} \omega_{j,j'}(t)\,D\left(p_j(t+1)\,\|\,p_{j'}(t)\right), \quad j = 1, \ldots, N_{sn} \qquad (35)$$

When $t$ approaches $t_{\max}$, by the convergence property of the iterative diffusion (i.e., Propositions 1 and 2), the cost of the minimization problem in Equation (35) approaches 0, since $p_j(t_{\max}) = p$ for all $j = 1, \ldots, N_{sn}$ and $D(p\,\|\,p) = 0$, where $p$ is the final convergent pdf.
**4. Numerical Results and Analysis**
In this section, the performance of the DCGMF based on different fusion strategies is demonstrated via a benchmark target-tracking problem using multiple sensors: tracking a target executing a maneuvering turn in a two-dimensional space with an unknown and time-varying turn rate [3]. The target dynamics are highly nonlinear due to the unknown turn rate. This scenario has been used as a benchmark problem to test the performance of different nonlinear filters [3,30].
The discrete-time dynamic equation of the target motion is given by

$$\mathbf{x}_k = \begin{bmatrix} 1 & \frac{\sin(\omega_{k-1}\Delta t)}{\omega_{k-1}} & 0 & \frac{\cos(\omega_{k-1}\Delta t)-1}{\omega_{k-1}} & 0 \\ 0 & \cos(\omega_{k-1}\Delta t) & 0 & -\sin(\omega_{k-1}\Delta t) & 0 \\ 0 & \frac{1-\cos(\omega_{k-1}\Delta t)}{\omega_{k-1}} & 1 & \frac{\sin(\omega_{k-1}\Delta t)}{\omega_{k-1}} & 0 \\ 0 & \sin(\omega_{k-1}\Delta t) & 0 & \cos(\omega_{k-1}\Delta t) & 0 \\ 0 & 0 & 0 & 0 & 1 \end{bmatrix} \mathbf{x}_{k-1} + \mathbf{v}_{k-1} \qquad (36)$$

where $\mathbf{x}_k = \left[x_k, \dot{x}_k, y_k, \dot{y}_k, \omega_k\right]^T$; $[x_k, y_k]$ and $[\dot{x}_k, \dot{y}_k]$ are the position and velocity at time $k$, respectively; $\Delta t$ is the time interval between two consecutive measurements; $\omega_{k-1}$ is the unknown turn rate at time $k-1$; and $\mathbf{v}_{k-1}$ is white Gaussian noise with mean zero and covariance $\mathbf{Q}_{k-1}$,

$$\mathbf{Q}_{k-1} = \begin{bmatrix} \frac{\Delta t^3}{3} & \frac{\Delta t^2}{2} & 0 & 0 & 0 \\ \frac{\Delta t^2}{2} & \Delta t & 0 & 0 & 0 \\ 0 & 0 & \frac{\Delta t^3}{3} & \frac{\Delta t^2}{2} & 0 \\ 0 & 0 & \frac{\Delta t^2}{2} & \Delta t & 0 \\ 0 & 0 & 0 & 0 & 1.75\times 10^{-4}\,\Delta t \end{bmatrix} \qquad (37)$$
The measurements are the range and angle, given by

$$\mathbf{y}_k = \begin{bmatrix} \sqrt{x_k^2 + y_k^2} \\ \mathrm{atan2}\left(y_k, x_k\right) \end{bmatrix} + \mathbf{n}_k \qquad (38)$$

where atan2 is the four-quadrant inverse tangent function, and $\mathbf{n}_k$ is the measurement noise with an assumed non-Gaussian distribution $\mathbf{n}_k \sim 0.5\,N(\mathbf{n}_1, \mathbf{R}_1) + 0.5\,N(\mathbf{n}_2, \mathbf{R}_2)$, where $\mathbf{n}_1 = \left[5\ \mathrm{m}, -2\times 10^{-6}\ \mathrm{mrad}\right]^T$ and $\mathbf{n}_2 = \left[-5\ \mathrm{m}, 0\ \mathrm{mrad}\right]^T$. The covariances $\mathbf{R}_1$ and $\mathbf{R}_2$ are assumed
to be $\mathbf{R}_1 = \mathrm{diag}\left(\left[100\ \mathrm{m^2}, 10\ \mathrm{mrad^2}\right]\right)$ and

$$\mathbf{R}_2 = \begin{bmatrix} 80\ \mathrm{m^2} & 10^{-1}\ \mathrm{m\cdot mrad} \\ 10^{-1}\ \mathrm{m\cdot mrad} & 10\ \mathrm{mrad^2} \end{bmatrix}$$

The sampling interval is $\Delta t = 1$ s. The simulation results are based on 100 Monte Carlo runs. The initial estimate $\hat{\mathbf{x}}_0$ is generated randomly from the normal distribution $N(\hat{\mathbf{x}}_0; \mathbf{x}_0, \mathbf{P}_0)$, with $\mathbf{x}_0$ being the true initial state $\mathbf{x}_0 = \left[1000\ \mathrm{m}, 300\ \mathrm{m/s}, 1000\ \mathrm{m}, 0, -3\ \mathrm{deg/s}\right]^T$ and $\mathbf{P}_0$ being the initial covariance $\mathbf{P}_0 = \mathrm{diag}\left(\left[100\ \mathrm{m^2}, 10\ \mathrm{m^2/s^2}, 100\ \mathrm{m^2}, 10\ \mathrm{m^2/s^2}, 100\ \mathrm{mrad^2/s^2}\right]\right)$. Sixteen sensors are used in the simulation. The topology of the sensor network is shown in Figure 2. Note that the "circle" denotes the sensor node. It is assumed that the target is always in the range and field of view of all sensors.
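A minimal sketch of this benchmark scenario (Equations (36)-(38)) is given below; it propagates the coordinated-turn dynamics and generates one sensor's range/angle measurements with the two-component Gaussian-mixture noise. Angle units are converted from mrad to rad for the simulation:

```python
# Minimal sketch of the maneuvering-turn benchmark (Eqs. 36-38).
import numpy as np

rng = np.random.default_rng(42)
dt = 1.0

def F(w):
    """Coordinated-turn transition matrix of Eq. (36)."""
    s, c = np.sin(w * dt), np.cos(w * dt)
    return np.array([[1, s / w,       0, (c - 1) / w, 0],
                     [0, c,           0, -s,          0],
                     [0, (1 - c) / w, 1, s / w,       0],
                     [0, s,           0, c,           0],
                     [0, 0,           0, 0,           1]])

Q = np.array([[dt**3 / 3, dt**2 / 2, 0, 0, 0],          # Eq. (37)
              [dt**2 / 2, dt,        0, 0, 0],
              [0, 0, dt**3 / 3, dt**2 / 2, 0],
              [0, 0, dt**2 / 2, dt,        0],
              [0, 0, 0, 0, 1.75e-4 * dt]])

# GM measurement noise; angles in rad (10 mrad^2 = 1e-5 rad^2, etc.).
n1, R1 = np.array([5.0, -2e-9]), np.diag([100.0, 1e-5])
n2, R2 = np.array([-5.0, 0.0]), np.array([[80.0, 1e-4], [1e-4, 1e-5]])

x = np.array([1000.0, 300.0, 1000.0, 0.0, np.deg2rad(-3.0)])
Lq = np.linalg.cholesky(Q)
traj, meas = [], []
for k in range(100):
    x = F(x[4]) @ x + Lq @ rng.normal(size=5)                      # Eq. (36)
    z = np.array([np.hypot(x[0], x[2]), np.arctan2(x[2], x[0])])   # Eq. (38)
    comp = (n1, R1) if rng.random() < 0.5 else (n2, R2)
    z = z + rng.multivariate_normal(*comp)
    traj.append(x.copy()); meas.append(z)
print(np.array(traj)[-1].round(2), meas[-1].round(3))
```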
**Figure 2. The network of sensors.**

The metric used to compare the performance of different filters is the root mean square error (RMSE). The RMSEs of the position, velocity, and turn rate using different filters with the third-degree cubature rule are shown in Figures 3–5, respectively. The cubature Gaussian mixture filter (CGMF) using a single sensor, the distributed cubature Gaussian mixture filters based on the iterative covariance intersection [16] (DCGMF-ICI), average consensus (DCGMF-AC), and iterative diffusion strategies (DCGMF-ID), and the centralized cubature Gaussian mixture filter (CCGMF) are compared. Since DCGMF-ICI, DCGMF-AC, and DCGMF-ID all involve iterations, it is more illustrative to use the number of iterations as a parameter to compare their performance. "M" in the figures is the iteration number.

**Figure 3. Root mean square errors (RMSEs) of the position estimation.**
**Figure 4. RMSEs of the velocity estimation.**

**Figure 5. RMSEs of the turn-rate estimation.**
It can be seen from the figures that (1) the DCGMFs and the CCGMF exhibit better performance than the CGMF using a single sensor, since more information from multiple sensors can be exploited; (2) with the increase of iterations, the performance of all DCGMFs is improved; (3) the DCGMF-ICI is less accurate than the DCGMF-AC and the DCGMF-ID, since the ICI algorithm does not perform the incremental update; (4) both the DCGMF-AC (M = 10) and the DCGMF-ID (M = 10) achieve performance very close to the CCGMF. However, fewer iterations have a more negative effect on the performance of the DCGMF-AC than on the DCGMF-ID. The DCGMF-ID is more effective in terms of iterations, since the DCGMF-ID with M = 1 has performance close to the DCGMF-AC with M = 5. Hence, when the allowable number of information exchanges is limited, the DCGMF-ID would be the best filter. It is also worth noting that the DCGMF-AC requires less computational effort at each node, but more communication expense than the DCGMF-ID. If the communication capability of the sensor network is not a main constraint, the DCGMF-AC would be a competitive approach.
Next, we compare the performance of the DCGMFs using the third-degree cubature rule and the DCGMFs using the fifth-degree cubature rule. The metric is the averaged cumulative RMSE (CRMSE). The CRMSE for the position is defined by

$$\mathrm{CRMSE}_{pos} = \frac{1}{N_{mc}} \sum_{m=1}^{N_{mc}} \sqrt{\frac{1}{N_{sim}} \sum_{i=1}^{N_{sim}} \sum_{j=1,3} \left(x^{j}_{i,m} - \hat{x}^{j}_{i,m}\right)^2} \qquad (39)$$
where Nsim = 100 s is the simulation time and Nmc = 100 is the number of Monte Carlo runs.
The superscript “j” denotes the jth state variable and the subscripts “i” and “m” denote the ith
simulation time step and the mth simulation, respectively. The CRMSE for the velocity and CRMSE for
the turn rate can be similarly defined.
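Equation (39) translates directly into a few lines of code. The array layout below (Monte Carlo runs by time steps by state dimension) and the synthetic demo data are assumptions of this sketch:

```python
# Minimal sketch of the cumulative RMSE of Eq. (39).
import numpy as np

def crmse(x_true, x_est, idx):
    """Per-run RMSE accumulated over time and over the state indices idx
    (0-based), averaged over the Monte Carlo runs; arrays have shape
    (N_mc, N_sim, n_states)."""
    err2 = (x_true[:, :, idx] - x_est[:, :, idx]) ** 2   # (N_mc, N_sim, |idx|)
    per_run = np.sqrt(err2.sum(axis=2).mean(axis=1))     # sqrt of time-mean
    return per_run.mean()                                # average over runs

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    xt = rng.normal(size=(100, 100, 5))                  # synthetic demo data
    xe = xt + rng.normal(scale=0.5, size=xt.shape)
    print(crmse(xt, xe, [0, 2]),    # position: x at index 0, y at index 2
          crmse(xt, xe, [1, 3]),    # velocity
          crmse(xt, xe, [4]))       # turn rate
```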
The results of the DCGMF-AC using the third-degree cubature rule and the fifth-degree cubature rule show no distinguishable difference. Similar results can be observed for the CCGMF. The DCGMF-ID and DCGMF-ICI using the fifth-degree cubature rule, however, show better performance than those using the third-degree cubature rule. The reason is that the DCGMF-ID and DCGMF-ICI depend heavily on the local measurements to perform estimation, while the DCGMF-AC and CCGMF update the estimate based on global observations from all sensors. Specifically, for the DCGMF-AC, although each sensor communicates measurements only with its neighbors, after convergence of the consensus iterations each sensor has effectively obtained fused measurement information from all sensors. Because the high-degree numerical rule affects the accuracy of the estimates extracted from the observations, the fifth-degree cubature rule can more noticeably improve the performance of the DCGMF-ID and DCGMF-ICI, which are based on only local observations. However, the benefit of using the high-degree numerical rule is mitigated if more information from more sensors is available, as for the DCGMF-AC and CCGMF. Hence, we only compare the results of the DCGMF-ID and DCGMF-ICI using the third-degree and the fifth-degree cubature rules, in Table 1. In order to isolate the effect of the cubature rules of different accuracy degrees on the performance, we want to minimize the effect of different iteration counts on the performance of the different filters. Therefore, a sufficiently large iteration number, M = 20, is used to ensure that the different filters have already converged after the iterations. It can be seen from Table 1 that the DCGMF-ID and DCGMF-ICI using the fifth-degree cubature rule achieve better performance than those using the third-degree cubature rule.
**Table 1. Cumulative root mean square errors (CRMSEs) of different filters.**
**Filters** **CRMSE (Position)** **CRMSE (Velocity)** **CRMSE (Turn Rate)**
DCGMF-ID (third-degree, M = 20) 5.85892 5.60166 0.019392
DCGMF-ID (fifth-degree, M = 20) 5.78748 5.57730 0.019375
DCGMF-ICI (third-degree, M = 20) 8.81274 7.22025 0.020837
DCGMF-ICI (fifth-degree, M = 20) 8.02142 7.11939 0.020804
Distributed cubature Gaussian mixture filter based on an iterative diffusion strategy (DCGMF-ID) and DCGMF
based on the iterative covariance intersection (DCGMF-ICI).
**5. Conclusions**
A new iterative diffusion-based distributed cubature Gaussian mixture filter (DCGMF-ID) was
proposed for the nonlinear non-Gaussian estimation using multiple sensors. The convergence property
of the DCGMF-ID was analyzed. It has been shown via a target-tracking problem that the DCGMF-ID
can successfully approximate the performance of the centralized cubature Gaussian mixture filter and
has all the advantages of the distributed filters. Among the iterative distributed estimation strategies,
the DCGMF-ID exhibits more accurate results than the iterative covariance intersection based method
(i.e., DCGMF-ICI). It also shows better performance than the average consensus-based method given
the same number of iterations. In addition, the fifth-degree cubature rule can improve the accuracy of
the DCGMF-ID.
**Acknowledgments: This work was supported by the US National Science Foundation (NSF) under the**
grant NSF-ECCS-1407735.
**Author Contributions: All authors contributed significantly to the work presented in this manuscript. Bin Jia**
conceived the original concept, conducted the numerical simulations, and wrote the initial draft of the paper.
Tao Sun and Ming Xin provided the theoretical analysis and proofs of the results. Ming Xin contributed to the
detailed writing for the initial submission, the revision of the manuscript, and the response to the reviewers’
comments. All authors actively participated in valuable technical discussions in the process of completing
the paper.
**Conflicts of Interest: The authors declare no conflict of interest.**
**References**
1. Alspach, D.L.; Sorenson, H.W. Nonlinear Bayesian estimation using Gaussian sum approximation.
_[IEEE Trans. Autom. Control 1972, 17, 439–448. [CrossRef]](http://dx.doi.org/10.1109/TAC.1972.1100034)_
2. Arulampalam, M.S.; Maskell, S.; Gordon, N.; Clapp, T. A tutorial on particle filters for online
[nonlinear/non-Gaussian Bayesian tracking. IEEE Trans. Signal Proc. 2002, 50, 174–188. [CrossRef]](http://dx.doi.org/10.1109/78.978374)
3. Arasaratnam, I.; Haykin, S. Cubature Kalman filters. IEEE Trans. Autom. Control 2009, 54, 1254–1269.
[[CrossRef]](http://dx.doi.org/10.1109/TAC.2009.2019800)
4. Jia, B.; Xin, M. Multiple sensor estimation using a new fifth-degree cubature information filter. Trans. Inst.
_[Meas. Control 2015, 37, 15–24. [CrossRef]](http://dx.doi.org/10.1177/0142331214523032)_
5. Gelb, A. Applied Optimal Estimation; MIT Press: Cambridge, MA, USA, 1974; pp. 182–202.
6. Julier, S.J.; Uhlmann, J.K.; Durrant-Whyte, H.F. A new method for the nonlinear transformation of means and
covariances in filters and estimators. IEEE Trans. Autom. Control 2000, 45, 477–482.
7. Olfati-Saber, R. Distributed Kalman filter with embedded consensus filters. In Proceedings of the 44th IEEE
Conference on Decision and Control, and 2005 European Control Conference, Seville, Spain, 15 December
2005; pp. 8179–8184.
8. Olfati-Saber, R. Distributed Kalman filter for sensor networks. In Proceedings of the 46th IEEE Conference on Decision
and Control, New Orleans, LA, USA, 12–14 December 2007; pp. 5492–5498.
9. Kamal, A.T.; Farrell, J.A.; Roy-Chodhury, A.K. Information weighted consensus. In Proceedings of the 51st
IEEE Conference on Decision and Control, Maui, HI, USA, 10–13 December 2012; pp. 2732–2737.
10. Kamal, A.T.; Farrell, J.A.; Roy-Chodhury, A.K. Information weighted consensus filters and their application
[in distributed camera networks. IEEE Trans. Autom. Control 2013, 58, 3112–3125. [CrossRef]](http://dx.doi.org/10.1109/TAC.2013.2277621)
11. Sun, T.; Xin, M.; Jia, B. Distributed estimation in general directed sensor networks based on batch covariance
intersection. In Proceedings of the 2016 American Control Conference, Boston, MA, USA, 6–8 July 2016;
pp. 5492–5497.
12. Cattivelli, F.S.; Sayed, A.H. Diffusion strategies for distributed Kalman filtering and smoothing. IEEE Trans.
_[Autom. Control 2010, 55, 2069–2084. [CrossRef]](http://dx.doi.org/10.1109/TAC.2010.2042987)_
13. Cattivelli, F.S.; Lopes, C.G.; Sayed, A.H. Diffusion recursive least-squares for distributed estimation over
[adaptive networks. IEEE Trans. Signal Proc. 2008, 56, 1865–1877. [CrossRef]](http://dx.doi.org/10.1109/TSP.2007.913164)
14. Cattivelli, F.S.; Sayed, A.H. Diffusion LMS strategies for distributed estimation. IEEE Trans. Signal Proc.
**[2010, 58, 1035–1048. [CrossRef]](http://dx.doi.org/10.1109/TAC.2010.2042987)**
15. Hu, J.; Xie, L.; Zhang, C. Diffusion Kalman filtering based on covariance intersection. IEEE Trans. Signal Proc.
**[2012, 60, 891–902. [CrossRef]](http://dx.doi.org/10.1109/TSP.2011.2175386)**
16. Hlinka, O.; Sluciak, O.; Hlawatsch, F.; Rupp, M. Distributed data fusion using iterative covariance intersection.
In Proceedings of the 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Florence, Italy, 4–9 May 2014;
pp. 1880–1884.
17. Li, W.; Jia, Y. Distributed consensus filtering for discrete-time nonlinear systems with non-Gaussian noise.
_[Signal Proc. 2012, 92, 2464–2470. [CrossRef]](http://dx.doi.org/10.1016/j.sigpro.2012.03.009)_
18. Runnalls, A.R. Kullback-Leibler approach to Gaussian mixture reduction. IEEE Trans. Aerosp. Electron. Syst.
**[2007, 43, 989–999. [CrossRef]](http://dx.doi.org/10.1109/TAES.2007.4383588)**
19. Salmond, D. Mixture reduction algorithms for target tracking in clutter. In Proceedings of the SPIE 1305
Signal and Data Processing of Small Targets, Los Angeles, CA, USA, 16–18 April 1990; pp. 434–445.
20. Williams, J.L. Gaussian Mixture Reduction for Tracking Multiple Maneuvering Targets in Clutter.
Master’s Thesis, Air Force Institute of Technology, Dayton, OH, USA, 2003.
21. Jia, B.; Xin, M.; Cheng, Y. Multiple sensor estimation using the sparse Gauss-Hermite quadrature information
filter. In Proceedings of the 2012 American Control Conference, Montreal, QC, Canada, 27–29 June 2012;
pp. 5544–5549.
22. Niehsen, W. Information fusion based on fast covariance intersection. In Proceedings of the 2002 5th
International Conference on Information Fusion, Annapolis, MD, USA, 8–11 July 2002.
23. Julier, S.; Uhlmann, J. General decentralized data fusion with covariance intersection (CI). In Handbook of
_Multisensor Data Fusion; CRC Press: Boca Raton, FL, USA, 2009._
24. Battistelli, G.; Chisci, L. Consensus-based linear and nonlinear filtering. IEEE Trans. Autom. Control 2015, 60,
[1410–1415. [CrossRef]](http://dx.doi.org/10.1109/TAC.2014.2357135)
25. Olshevsky, A.; Tsitsiklis, J.N. Convergence speed in distributed consensus and averaging. SIAM Rev. 2011, 53,
747–772.
26. Horn, R.; Johnson, C. Matrix Analysis; Cambridge University Press: New York, NY, USA, 1985.
27. Seneta, E. Nonnegative Matrices and Markov Chains; Springer: New York, NY, USA, 2006.
28. Battistelli, G.; Chisci, L. Kullback-Leibler average, consensus on probability densities, and distributed state
[estimation with guaranteed stability. Automatica 2014, 50, 707–718. [CrossRef]](http://dx.doi.org/10.1016/j.automatica.2013.11.042)
29. Hurley, M.B. An information theoretic justification for covariance intersection and its generalization.
In Proceedings of the 5th International Conference on Information Fusion, Annapolis, MD, USA,
8–11 July 2002.
30. [Jia, B.; Xin, M.; Cheng, Y. High degree cubature Kalman filter. Automatica 2013, 49, 510–518. [CrossRef]](http://dx.doi.org/10.1016/j.automatica.2012.11.014)
© 2016 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution [(CC-BY) license (http://creativecommons.org/licenses/by/4.0/).](http://creativecommons.org/licenses/by/4.0/)
-----
| 18,738
|
{
"disclaimer": "Notice: Paper or abstract available at https://pmc.ncbi.nlm.nih.gov/articles/PMC5087526, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "CCBY",
"status": "GOLD",
"url": "https://www.mdpi.com/1424-8220/16/10/1741/pdf?version=1476961188"
}
| 2,016
|
[
"JournalArticle"
] | true
| 2016-10-01T00:00:00
|
[
{
"paperId": "b8d9b10cf54629364523ec065e6307ab87f7d4f0",
"title": "iRun: Horizontal and Vertical Shape of a Region-Based Graph Compression"
},
{
"paperId": "973f639b468e5a672631937dda3231c172a26290",
"title": "Distributed estimation in general directed sensor networks based on batch covariance intersection"
},
{
"paperId": "bd52592f98ca0677f4419eb63df13eb0363dfffe",
"title": "Consensus-Based Linear and Nonlinear Filtering"
},
{
"paperId": "80cfca6dd40ddfef1b2fe1c7a19fb4c7656fd270",
"title": "Multiple sensor estimation using a new fifth-degree cubature information filter"
},
{
"paperId": "da408d0879eb84c402fac41688768fc86e5fc344",
"title": "Distributed data fusion using iterative covariance intersection"
},
{
"paperId": "7f07bd0f2af6882bdb24c763e9fc0e9d762c4f55",
"title": "Kullback-Leibler average, consensus on probability densities, and distributed state estimation with guaranteed stability"
},
{
"paperId": "a88f73634f30b3326aa0481d3757e58224c29d35",
"title": "Information Weighted Consensus Filters and Their Application in Distributed Camera Networks"
},
{
"paperId": "cd2097cbec3eebe69f5280ee315e9064e2676dd3",
"title": "High-degree cubature Kalman filter"
},
{
"paperId": "329bcf42f5c47d04c35f5699420802605c24fc73",
"title": "Information weighted consensus"
},
{
"paperId": "db64b8978a4afd39575b010ca20b37703e146fec",
"title": "Distributed consensus filtering for discrete-time nonlinear systems with non-Gaussian noise"
},
{
"paperId": "fc2745c47f2a79ee4d5d416d435f6ae1941a5e94",
"title": "Multiple sensor estimation using the sparse Gauss-Hermite quadrature information filter"
},
{
"paperId": "8d851384a52d00b06842a7d40dc791ca6aacdff6",
"title": "Diffusion Kalman Filtering Based on Covariance Intersection"
},
{
"paperId": "16bcb9f8b2742aa0b2222fec7fddf19157ff806d",
"title": "Diffusion LMS Strategies for Distributed Estimation"
},
{
"paperId": "baf83fa0a09085c7b0ed76c99f6e9a5993f05035",
"title": "Diffusion Strategies for Distributed Kalman Filtering and Smoothing"
},
{
"paperId": "b4226bae9693793ae9eb41c9886ead455f5f2cd1",
"title": "Cubature Kalman Filters"
},
{
"paperId": "4c9042c447e5afcceb2104a7b1463eb83ba403b0",
"title": "Convergence Speed in Distributed Consensus and Averaging"
},
{
"paperId": "af83ff4ce14b1ac0e2ac5ebe392a84a850714110",
"title": "Diffusion recursive least-squares for distributed estimation over adaptive networks"
},
{
"paperId": "abca3c16c31d20c9d2796c07c5b24b06dee076b2",
"title": "Distributed Kalman Filter with Embedded Consensus Filters"
},
{
"paperId": "c03011e33ad097a5a59c82399eca4065f704b656",
"title": "Gaussian Mixture Reduction for Tracking Multiple Maneuvering Targets in Clutter"
},
{
"paperId": "422b80195d24c410e9f516026d7cf6df75781a7d",
"title": "Information fusion based on fast covariance intersection filtering"
},
{
"paperId": "84e20d4b5a8b7eb603e3dc785f5a3c054b71f8be",
"title": "An information theoretic justification for covariance intersection and its generalization"
},
{
"paperId": "7f0bbe9dd4aa3bfb8a355a2444f81848b020b7a4",
"title": "A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking"
},
{
"paperId": "76c52206888eedef8d8dead3007992e53e3c4ae8",
"title": "A new method for the nonlinear transformation of means and covariances in filters and estimators"
},
{
"paperId": "1c0f03b080708e07f043032d64e0da9fed732ba8",
"title": "Mixture reduction algorithms for target tracking in clutter"
},
{
"paperId": "721f54f6fa32f5f02c5124a2b73ce5f4280b4eaf",
"title": "Matrix analysis"
},
{
"paperId": "d6fb0f67eb2cc76bc88b8422918c165d6dd55890",
"title": "Applied optimal estimation"
},
{
"paperId": "e13c11b94f6013bb2be4e205493cf082df4fc97d",
"title": "Nonlinear Bayesian estimation using Gaussian sum approximations"
},
{
"paperId": "091e029bf3e1a01a93efbe8587e34494642004f8",
"title": "A Kullback-Leibler Approach to Gaussian Mixture Reduction"
},
{
"paperId": "482c92f4467abfd8a9e35809f689e2bb9ecb84ea",
"title": "General Decentralized Data Fusion With Covariance Intersection (CI)"
},
{
"paperId": null,
"title": "Nonnegative Matrices and Markov Chains"
}
] | 18,738
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Environmental Science",
"source": "s2-fos-model"
},
{
"category": "Engineering",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/00809ca8de63a1e09b87fb5926230de931cb36ca
|
[
"Computer Science"
] | 0.903489
|
Integration of Cyber-Physical Systems in EScience Environment: State-of-the-Art, Problems and Effective Solutions
|
00809ca8de63a1e09b87fb5926230de931cb36ca
|
International Journal of Modern Education and Computer Science
|
[
{
"authorId": "9433726",
"name": "Tahmasib Fataliyev"
},
{
"authorId": "65966848",
"name": "Shakir A. Mehdiyev"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"Int J Mod Educ Comput Sci"
],
"alternate_urls": [
"http://www.mecs-press.org/ijmecs/v5n4.html"
],
"id": "0e4b4b47-1f45-49d9-bef6-cecb7c0a66d6",
"issn": "2075-0161",
"name": "International Journal of Modern Education and Computer Science",
"type": "journal",
"url": "http://www.mecs-press.org/ijmecs/"
}
|
The implementation of the concept of building an information society implies a widespread introduction of IT in all areas of modern society, including in the field of science. Here, the further progressive development and deepening of scientific research and connections presuppose a special role of e-science. E-science is closely connected with the innovative potential of IT, including the Internet technologies, the Internet of things, cyber-physical systems, which provide the means and solutions to the problems associated with the collection of scientific data, their storage, processing, and transmission. The integration of cyber-physical systems is accompanied by the exponential growth of scientific data that require professional management, analysis for the acquisition of new knowledge and the qualitative development of science. In the framework of e-science, cloud technologies are now widely used, which represent a centralized infrastructure with its inherent characteristic that is associated with an increase in the number of connected devices and the generation of scientific data. This ultimately leads to a conflict of resources, an increase in processing delay, losses, and the adoption of ineffective decisions. The article is devoted to the analysis of the current state and problems of integration of cyber-physical systems in the environment of e-science and ways to effectively solve key problems. The environment of e-science is considered in the context of a smart city. It presents the possibilities of using the cloud, fog, dew computing, and blockchain technologies, as well as a technological solution for decentralized processing of scientific data.
|
Published Online September 2019 in MECS (http://www.mecs-press.org/)
DOI: 10.5815/ijmecs.2019.09.04
# Integration of Cyber-Physical Systems in E-Science Environment: State-of-the-Art, Problems and Effective Solutions
## Tahmasib Kh. Fataliyev*
Institute of Information Technology of ANAS, Baku, Azerbaijan
Email: *[email protected]
## Shakir A. Mehdiyev
Institute of Information Technology of ANAS, Baku, Azerbaijan
Email: [email protected]
Received: 24 June 2019; Accepted: 14 August 2019; Published: 08 September 2019
**_Abstract—The implementation of the concept of building_**
an information society implies a widespread introduction
of IT in all areas of modern society, including in the field
of science. Here, the further progressive development and
deepening of scientific research and connections
presuppose a special role of e-science. E-science is
closely connected with the innovative potential of IT,
including the Internet technologies, the Internet of things,
cyber-physical systems, which provide the means and
solutions to the problems associated with the collection of
scientific data, their storage, processing, and transmission.
The integration of cyber-physical systems is accompanied
by the exponential growth of scientific data that require
professional management, analysis for the acquisition of
new knowledge and the qualitative development of
science. In the framework of e-science, cloud
technologies are now widely used, which represent a
centralized infrastructure with its inherent characteristic
that is associated with an increase in the number of
connected devices and the generation of scientific data.
This ultimately leads to a conflict of resources, an
increase in processing delay, losses, and the adoption of
ineffective decisions. The article is devoted to the
analysis of the current state and problems of integration
of cyber-physical systems in the environment of e-science
and ways to effectively solve key problems. The
environment of e-science is considered in the context of a
smart city. It presents the possibilities of using the cloud,
fog, dew computing, and blockchain technologies, as well
as a technological solution for decentralized processing of
scientific data.
**_Index_** **_Terms—E-science,_** cyber-physical systems,
integration, big scientific data, smart city, cloud
computing, blockchain.
I. INTRODUCTION
The term electronic science (e-science), introduced in 1999 by Dr. John Taylor, then Director General of the UK Research Councils, refers to revolutionary methods of conducting collaborative experimental research on a networked research infrastructure. This infrastructure has allowed scientists to use, in a coordinated way, technical resources that are usually distributed, maintained by various organizations, and belonging to different scientific disciplines, and it simplifies the use of and access to basic research tools, such as computing resources and databases. E-science, providing modern solutions in the areas of online education, virtual laboratories, global research networks, computational tools, etc., has helped and continues to help science make rapid progress. Modern digital technologies form new partners in science, such as cyberinfrastructure, e-science, citizen science, data science, and big data. Digital science, in turn, has led to a significant increase in the volume of scientific data, which is one of the main trends in the development of science.
Further development of information technologies has
generated such new paradigms as the Internet of things
(IoT), cyber-physical systems (CPS), industry 4.0, cloud
computing, blockchain, etc., which have brought and will
bring many advantages and potential opportunities in the
structure of e-science. Modern ideas of automation in
current research projects are also based on CPS, used as
intelligent control systems.
A wide range of CPS applications includes
transportation, agriculture, healthcare, aerospace, etc.
Science is one of its interesting applications. The
integration of this system into the e-science environment
leads to multiple increases in the flow of scientific data.
As a result, the problems of organizing and processing large scientific data become relevant alongside others [1].
CPS is especially focused on the accurate measurement, storage, processing, analysis, and presentation of data. On the one hand, there is the problem of archiving information; on the other hand, it is necessary to organize, distribute, and provide the information requested by the user through an information retrieval service. Along with this, the discovery of hidden knowledge in the collected scientific data is also relevant and important.

Thus, solving the problems of big data is important for the modern digital age. This article explores these problems and reveals the prospects for developing technological solutions for processing the scientific data generated by the integration of CPS into the e-science environment.
II. PRECONDITIONS FOR CREATING CPS
The work of CPS is based on the principle of integrating computational and physical processes; that is, a CPS is part of a system of physical objects. The term CPS was coined in 2006 by the US National Science Foundation [2]. A CPS is a system that consists of various subsystems in which control at the physical level is carried out on the basis of processing signals from multiple sensors and making decisions at the cyber level [3,4]. Ref. [5] introduces a new analysis framework for classifying cyber-physical production system applications with respect to various criteria, including their cognitive abilities, their application extent, the interaction with human operators, the distribution of intelligence, and the network technologies that are used. CPSs are defined
as systems with decentralized control, resulting from a
merger between the real and virtual worlds, having
autonomous behavior and depending on the context in
which they are located. They are capable of forming complex hierarchical CPSs, where close cooperation with humans is also assumed. To this end, the embedded software in a CPS uses sensors and actuators, connects to other systems and to human operators via communication interfaces, and has the capability to store and process data from the sensors or the network.
From a technical point of view, CPS is a system
implemented via the IoT, Industry 4.0, Industrial Internet
of Things (IIoT), Machine-to-Machine Interaction
(M2M), wireless sensor networks (WSN), cloud
computing. Essentially, WSN, M2M, IoT, and CPS are
made up of similar components. Both IoT and CPS are
aimed at expanding the connection between the
cyberspace and the physical world through information
perception and interactive technologies. But they have
obvious differences: IoT focuses on the network and aims
to unite all things in the physical world. Thus, it is an
open network platform and infrastructure. CPS
emphasizes the exchange of information and feedback,
where the system should control the physical world in
addition to the perception of the physical world, forming
a closed system [6].
The similar aspects of these technologies complement
each other and extend their functionality:
- WSN, M2M, and CPS belong to the IoT.
- WSN is the basic IoT and CPS scenario.
- WSN, regarded as the complement of M2M, is the foundation of CPS.
- CPS is an evolution of M2M towards intelligent information processing.
- M2M is the main IoT model at the present stage.
- CPS will be an important technical form of IoT in the future [7].
Wireless technologies such as Bluetooth, Wi-Fi, ZigBee, LoRa, etc. make it possible to extract information directly from sensors installed in previously inaccessible areas, measuring the parameters of various technological processes [8,9]. In this context, a
significant role belongs to such network technologies as
cloud, fog and dew computing, which help to store large
amounts of information and allow the use of complex
analytical tools such as big data, data mining, OLAP,
pattern recognition, etc. [10].
The emergence of CPS became possible because of the rapid development of information technology. The expansion of Internet coverage and availability, the emergence of the progressive IPv6 technology, which removed the restrictions on the number of connected sensors and devices, and the appearance of functionally new mini-, micro-, and nano-sensors for primary information created comprehensive technical capabilities for monitoring and managing physical processes, experiments, and production directly via the Internet, and became the basis for integrating CPS into these processes.
III. RELATED WORKS
Analysis of the published works shows that, due to the unique functions of CPS, technical solutions on their platform are not limited to specific areas. Ref. [11] surveys common applications of CPS, among which the most interesting for our case are discussed below.
CPS in transport takes the form of integrated transportation management systems aimed at achieving safer and more intelligent traffic [12]. Such a system collects, processes, analyzes, and visualizes data from sensors located on mobile devices. The results of this CPS can include traffic optimization, road-surface monitoring, hazard detection, vehicular networking, and so on.
CPS in medicine is a classic example of closed-loop feedback control systems. Application scenarios for such CPSs vary from patient monitoring and analgesic infusion pumps to implanted sensory devices [13]. Any change of an object in the physical world can be directly modeled and improved on its counterpart in the cyber world, and in the physical world actions will be taken based on instructions from cyberspace.
CPS for controlling wind turbines [14] is used to reduce energy costs and increase profits. Data from wind generators tend to be very large, and therefore, to dynamically represent the behavior of the CPS, instead of
traditional statistical data analysis methods, genetic
algorithms are used.
Integration of CPS in the library can improve the quality and quantity of traditional library services, for example, intelligent inventory, intelligent inquiry, self-checkout and self-return, searching for misplaced or incorrectly delivered books or materials, automatically combating counterfeit products, providing contextual prompts and information, signaling the availability of tools and resources, and streamlining internal library processes. The CPS will also be able to control temperature and humidity, energy consumption, and fire safety, eliminate hidden security risks, and create comfortable conditions both for visitors and for the preservation of ancient manuscripts and valuable books [15].
It should be noted that there are examples of the
integration of CPS for solving problems and tasks in the
field of science.
In the field of earth sciences, there is a CPS for monitoring volcanic activity. It is designed to collect data from remote sensors in a WSN. The collected data are processed to analyze and monitor hazards by assessing the level of volcanic unrest and understanding the physical processes occurring in the volcano, such as the mechanisms of magma migration and eruption [16].
Ref. [17] describes a CPS for environmental monitoring, which collects large multi-dimensional and multi-temporal data about the global atmosphere. For these purposes, space-based and airborne sensors are used for remote observation of the Earth and measurement of background radiation. This is the physical level of the system. At the cyber level, special technologies are used to process and interpret the data, making it possible to obtain images of the Earth's surface. Later, on the basis of these images, traditional or thematic maps, resource summaries, etc. are developed. Then, at the data analysis level, decisions or recommendations are made for further actions in certain areas of activity.
Another known CPS monitors the environmental or ambient conditions in indoor spaces at remote locations. Communication between the system's components is performed using the existing wireless infrastructure based on the IEEE 802.11 b/g standards. The resulting solution provides the possibility of logging measurements from locations all over the world and of visualizing and analyzing the gathered data from any device connected to the Internet [18].
A system of adaptive control of a radio telescope on a CPS platform is also known. Adaptive control is carried out on the basis of preliminary calculations on data received from sensors. In this case, it is important to provide high computational performance, because otherwise the reaction time may be too long [19].
Solutions are currently available that use various well-known structures for creating remote laboratories with automation technologies. These solutions provide IoT structures that can be used to build and operate functional systems in a web browser for different areas. They can also be considered a platform for integrating CPSs when creating virtual laboratories.
Ref. [20] discusses the IIoT Web-Oriented Automation System (WOAS) platform, a prototype of a web platform that allows the integration of CPS services, including components of distributed devices, into a functional system. In this functional system, it is not necessary to have a technical process automation system or a remote scientific or academic laboratory. The IIoT WOAS platform allows browser-based functional systems, consisting of technical devices and systems such as CPS components and related services, to be fully configured and used. This platform was designed around automation technologies and can also be used to create and operate a laboratory for remote experiments over the Internet using technical equipment and systems. As a rule, the type of technical device does not matter here; the only requirement is that the device is connected to the Internet as a CPS component and is available. Many user-oriented WOAS portals allow a virtually unlimited number of virtual laboratories to be created and managed.
To ensure the sustainable functioning of the e-science infrastructure, a complex of measures is needed to solve the problems arising in it. To support decision making at this level, well-organized maintenance can provide substantial assistance in maintaining system safety, reducing failure rates and preventing malfunctions. Ref. [21] analyzes the performance of the electronic scientific infrastructure as a CPS, presents its conceptual model, and addresses the problems of ensuring its security and creating electronic maintenance.
The integration of CPS into the e-science environment also provides a wide range of opportunities for implementing interdisciplinary research principles. As an example, consider bioinformatics as an interdisciplinary activity in cyber-physical space. It is known that bioinformatics combines computer science, statistics and mathematical methods for analyzing and interpreting biological data. Here, the integration of CPS leads to the automatic execution of various biological analyses that are very difficult to carry out manually, to an exponential increase in data analysis, and to improved accuracy of the results [22].
A Virtual Observatory (VO) may be another example of CPS in science. A VO is a collection of interoperable data archives, tools, and applications that together form an environment in which original astronomical research can be carried out. The VO is opening up new ways of exploiting the huge amount of data provided by the ever-growing number of ground-based and space facilities, as well as by computer simulations [23].
Ref. [24] presents a complete CPS solution, from the physical level, comprising sensors, a processor and the communication protocol, to data management and storage at the cyber level. The test outcomes indicate that the suggested framework represents a feasible and straightforward solution for economical monitoring applications.
An important component of the e-science cyberinfrastructure is the Datacenter, which must be immune to incidents and unforeseen circumstances causing system failures. The Datacenter can also be viewed as a CPS, in which approaches to managing its IT and cooling technologies are classified according to the degree to which they take both cyber and physical factors into account [25].
Thus, the questions raised in this article are highly relevant. A comprehensive solution to them in a single environment requires continued research and the development of effective methods.
IV. PROBLEMS OF INTEGRATING CPS INTO THE E-SCIENCE ENVIRONMENT
Based on the studies in the previous sections, it can be
concluded that cyber-physical integration in scientific
research can be conducted in the following aspects.
_A. Integration of CPS into the e-science infrastructure_
Unlike the traditional definition, we consider e-science
in a broader sense. This implies the introduction of ICT in
all areas of research enterprises and organizations,
including management.
The basis of e-science is physical infrastructure, which
may include telecommunications networks, data centers,
research jobs, research laboratories, buildings, electricity,
logistics, etc.
This physical infrastructure can be viewed in the context of integrating CPS on a smart city platform. Definitions of a smart city are interpreted in different variations: the term is used throughout the world under different names and in different circumstances, and a number of conceptual variants have therefore been created by replacing the adjective “smart” with alternative adjectives [26,27]. In general, the concept of the smart city implies widespread informatization: a multitude of sensors for retrieving information, primary devices for collecting, processing and storing data, intelligent analytics, and the presence of smart inhabitants (in our case, e-scientists) interested in applying the above solutions.
The technological infrastructure of the smart city is a
platform of CPS, which can be applied to the
infrastructure of the National Academy of Sciences of
Azerbaijan (ANAS) (Fig. 1).
From Fig. 1 it follows that the structure of ANAS unites six scientific divisions, within which specialized research institutes function. In our approach, this structure is perceived as a smart city, its divisions as smart areas, and its research institutions as smart buildings. As follows from Fig. 1, the smart city scope corresponds to ANAS as a whole, the smart area level to its scientific divisions, and the smart building level to the specialized institutes.
In such a smart structure, CPS can, both globally and locally, solve the following problems:
- Uninterrupted power supply;
- Materials and equipment management;
- Equipment monitoring;
- Maintenance;
- Building security;
- Video observation;
- Detection and warning of danger;
- User identification;
- Tracking and identification of hazardous materials;
- Environmental monitoring;
- Creating a comfortable working environment for
researchers;
- Climate control;
- Waste management, etc.
For example, it is shown in [28] that a modern building automation system collects data ranging from temperature and humidity values to motor status, and often includes capabilities for optimizing energy consumption. For instance, with optimal start/stop, the building automation system knows when to turn on the air conditioning for a specific area of the building.
Fig. 1. Integration of CPS into the e-science environment viewed as a smart city.
_B. Integration of CPS into the e-science research environment_
Further, in the context of integrating CPS into the e-science research environment, its generalized architecture consists of several levels, whose characteristics are listed below (a minimal code sketch of this data flow follows the list):
- At the level of physical objects, data is collected
from sensors installed to measure various physical
parameters.
- At the cyber platform and computing level, data is processed and converted into operational information about the performance of individual components, or feedback signals are generated.
- At the CPS application level, complex calculations
are performed based on data processed at lower
levels, and various types of physical object models
are created.
- The big data analysis level is performed on a
compute node, such as the cloud. At this level, new
knowledge is gained, feedback from cyberspace
can be transferred to the physical space in order to
apply corrections and preventive effects on the
system.
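To make the four levels concrete, the following minimal Python sketch models how a single reading could flow from sensing through cyber processing and modeling to analysis with feedback. It is illustrative only: all function names, fields and thresholds are hypothetical.

```python
# Minimal sketch of the four-level CPS data flow described above.
# Illustrative only: all names, fields and thresholds are hypothetical.

def physical_level() -> dict:
    """Level 1: collect a raw reading from a sensor."""
    return {"sensor_id": "temp-01", "raw": 23.7}

def cyber_level(reading: dict) -> dict:
    """Level 2: convert raw data into operational information."""
    reading["status"] = "ok" if reading["raw"] < 30.0 else "alert"
    return reading

def application_level(readings: list) -> float:
    """Level 3: build a simple model of the physical object (here, a mean)."""
    return sum(r["raw"] for r in readings) / len(readings)

def analysis_level(model_value: float) -> str:
    """Level 4: derive new knowledge and a feedback action for the physical space."""
    return "lower_setpoint" if model_value > 25.0 else "no_action"

readings = [cyber_level(physical_level()) for _ in range(3)]
print(analysis_level(application_level(readings)))  # feedback to the physical level
```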
According to the presented model, CPS integration into the e-science environment can be interpreted as follows. Scientific data may come from different sensors during physical and chemical experiments, from biological measurements, spectral analysis, telescope photographs, sociological surveys in the social sciences, historical works, documents, manuscripts, etc. These data can also be transmitted to remote units, virtual collectives, and academic laboratories. The collected data is processed and converted into new knowledge. At the next levels, a full view of the research subject (physical events, chemical reactions, the structure of matter, historical events, etc.) is formed, after which the scientific community becomes acquainted with the research results. Thus, as a result of integrating CPS into the e-science environment, the process spans from the research stage, through the conduct of experiments and the processing of the obtained data, to acquainting the scientific community with the proposed theory, hypothesis or recommendations, and to the replication and verification of results. The
territorial distribution of multidisciplinary research and
interdependence of the heterogeneous devices used
should be taken into account. Each device can be used
within the IoT concept and can be fully managed with
web technologies.
Based on the above, a five-level architecture of tasks related to the integration of CPS into the e-science environment is proposed (Fig. 2).
Fig. 2. Five-level task architecture
As noted, CPS is a complex system that combines
computational, communication, and physical processes.
From Fig. 2 it follows that these CPS components are
also present in the architecture of e-science, which is
essential in solving integration problems.
V. PROCESSING OF SCIENTIFIC DATA
E-science plays a special role in the development and
expansion of scientific research and connections. It
covers all stages of solving problems in the research
process, including the creation, collection, search, storage,
processing, and analysis of scientific data, as well as
science management issues. Existing IT for these
purposes has created ample opportunity. In addition, the
exponential growth of scientific data requires
professional management of them as an essential
condition for the acquisition of new knowledge and the
rapid development of science. For these purposes, the
Internet infrastructure is used, through which users get
remote access to large-scale information and more
efficiently use their computing resources. One of the
main problems in the e-science environment is the
problem of big data. When considering e-science as a single system, we see that it solves problems from
different subsystems. Information support of science,
scientometric analysis, intelligent analysis, and scientific
data lead to the generation of big data [29]. The
integration of CPS into this environment also plays the
role of a generator for quickly increasing big data.
It should be noted that the types, volume, frequency of
use, life cycle and other characteristics of scientific data
are different. The following data is especially important
for research:
- Observation data – obtained from telescopes,
satellites, sensor networks, demographic studies,
historical information, or one-time event recording.
In most cases, this data cannot be repeated and,
therefore, must be saved.
- Experimental data – obtained from high-productivity devices through clinical, biomedical and pharmaceutical experiments or other controlled experiments. It is especially important to store data that it would be inappropriate to recollect for ethical or other reasons, such as data regarding human subjects and endangered species.
- Computing data – generated as a result of large-scale computation in supercomputers, data centers, etc.; stored for a certain period and processed with intelligent analysis technologies.
- Informational data – used by scientific communities for different purposes. Such data include the human genome, proteins, seismology, oceanography, clinical research, and endangered species data.
To these categories of scientific data is added the big data generated by integrating CPS into the e-science environment. Various methods and approaches are used to solve this problem.
CPS interacts with physical systems through networks. The end CPS is usually a traditional centralized, tightly coupled embedded computer system that contains a large number of physical subsystems consisting of intelligent wireless sensor networks [30]. At the same time, CPS is a product of the integration of heterogeneous systems: these are heterogeneous distributed systems with deep integration and interaction of information systems and physical systems, which must deal with the problems of time synchronization and the spatial arrangement of various components.
_A._ _Cloud, fog, and dew computing_
In the e-science environment, data from different
sources is often characterized by a lack of structuring,
various formats, rapid generation and a sharp increase in
volume. Processing such a data flow using existing
technologies is very complex and requires new
technological solutions. Studies show that cloud
technologies are preferable for processing big data [31].
Cloud computing provides users with remote access to
services, computing resources, and software over the
Internet [32]. Cloud technologies allow us to collect and
store big data, on the one hand, and, on the other hand,
provide the necessary processor power for data
processing. A cloud analytics service that uses statistical
analysis and machine learning helps reduce big data to an
acceptable size so that we can get information, test
hypotheses, and draw conclusions. Data can be imported
from the cloud, and users can run cloud data analysis
algorithms for big data sets, after which data can be saved
back to the cloud.
Nevertheless, the further development of the e-science
platform is accompanied by an increase in the number of
installed devices and a tendency to increase the amount of
scientific data generated in this environment, which leads
to an overload of the Internet infrastructure. In addition,
there is a significant increase in data traffic due to the
widespread use of smartphones, tablets and video
streaming. Users experience a decrease in network
bandwidth, which, in turn, leads to resource conflicts,
increased processing delays, losses, and inefficient
decision making. In some cases, it may be necessary to
move large data sets between multiple clouds — for
example, if a single cloud is not enough for
computational purposes or if the user or employees must
use several cloud resources [33].
The edge computing paradigm was initially proposed to address the problems described above. Here, reducing network load and making more timely decisions based on the data are the key requirements, and the problem is solved by bringing processing close to the data source: edge computing and memory resources are used for local storage and initial data processing. However, such peripheral computing has very limited capabilities, which leads to resource conflicts and increased processing delays. For this reason, a new paradigm called fog computing was developed, which integrates peripheral clouds with cloud resources in order to eliminate the deficiencies of edge computing [34]. Thus, in contrast to sending data directly from end devices to a central server, fog computing processes data close to the devices and sends only the necessary parts to the central server; its main objective is to increase productivity by processing data directly in the network. The first fog computing architecture is described in [35], where the fog level is defined as distributed intelligence between the core network and the sensor devices.
A fog system has relatively small computing resources (memory, processing, and storage), but these resources can be increased on demand. However, a significant shortcoming of cloud and fog computing is their dependence on the availability of Internet access. The level of development of ICT tools and methods indicates that the most promising direction in the e-science infrastructure is dew computing, which allows access to data without constant use of the Internet. In this context, “Dew computing is an on-premises computer software-hardware organization paradigm in the cloud computing environment where the on-premises computer provides functionality that is independent of cloud services and is also collaborative with cloud services. The goal of dew computing is to fully realize the potentials of on-premises computers and cloud services” [36]. Dew computing was proposed in 2015 [37]. This technology ensures that the services offered do not depend on Internet access and has two main features: first, local computers (desktop, laptop, tablet, and smartphone) offer rich micro-services that do not depend on cloud services; second, these services mainly collaborate with cloud services. A dew server is a small local server that keeps locally generated data quickly accessible with or without an Internet connection and synchronizes with the cloud server when the connection becomes available again. This architecture can be used to keep websites accessible in offline mode, and it can reduce the cost of data transfer in organizations with an interrupted or limited Internet connection. Thus, the above justifies the effectiveness and promise of using these technologies, separately or jointly, in accordance with the specific characteristics of the problems being solved, for processing big data, including in CPS integration.
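As an illustration of the dew-computing behavior just described, the sketch below models a dew server that stores data locally and synchronizes pending records with a cloud service only when connectivity returns. The `CloudStub` class, the `is_online` flag and all record fields are hypothetical placeholders, not an existing dew-computing API.

```python
# Illustrative sketch of a dew server: store measurements locally and
# synchronize with a cloud service only when the Internet is reachable.
# CloudStub, is_online and the record fields are hypothetical placeholders.

class CloudStub:
    def __init__(self):
        self.store = []
    def upload(self, records):
        self.store.extend(records)

class DewServer:
    def __init__(self, cloud):
        self.cloud = cloud
        self.local = []        # on-premises storage, usable offline
        self.pending = []      # records not yet synchronized

    def record(self, measurement):
        self.local.append(measurement)
        self.pending.append(measurement)

    def sync(self, is_online: bool):
        # Collaboration with the cloud resumes once connectivity returns.
        if is_online and self.pending:
            self.cloud.upload(self.pending)
            self.pending = []

dew = DewServer(CloudStub())
dew.record({"site": "lab-1", "temp": 21.4})
dew.sync(is_online=False)   # offline: data stays local
dew.sync(is_online=True)    # online: pending data is pushed to the cloud
```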
_B._ _Blockchain_
To solve the problems of decentralized data processing, the innovative blockchain technology can also be used; along with the computing technologies discussed earlier, it can be another integration platform for e-science [38-40]. A blockchain is a distributed database of records that contains all performed operations and is shared among network members; this database is called a distributed ledger. Each operation is stored in the distributed registry and is approved by agreement of the majority of participants. All executed operations are saved in the blockchain. Thus, the blockchain provides a decentralized model of processing operations.
Consider some of the characteristics of blockchain that make it suitable for both CPS and IoT (a minimal sketch of a hash-linked ledger follows the list).
- Decentralization: network transactions are
supported by various decentralized nodes.
- Scalability: the computing power of the network
increases as the number of nodes in the network
increases.
- Reliability: transactions are verified and confirmed
by consensus between peers.
- Security: all transactions on the Blockchain
network are protected by reliable cryptography.
- Sustainability: records cannot be changed or deleted after consensus has been reached.
- Autonomy: devices can communicate with each
other directly since each device has its own account.
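The sustainability and reliability properties listed above stem from hash-linking of records. The following minimal Python sketch (standard library only; not a full consensus protocol, and all field names are illustrative) shows why an approved record cannot be silently altered: changing any block breaks the hash links that follow it.

```python
# Minimal sketch of a hash-linked ledger. Illustrative only: no consensus,
# no networking, and the record fields are hypothetical.
import hashlib
import json

def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, data: dict) -> None:
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "data": data})

def verify(chain: list) -> bool:
    # Each block must reference the hash of its predecessor.
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain: list = []
append_block(chain, {"experiment": "e-1", "result_digest": "abc123"})
append_block(chain, {"experiment": "e-2", "result_digest": "def456"})
print(verify(chain))                                  # True
chain[0]["data"]["result_digest"] = "tampered"
print(verify(chain))                                  # False: the link is broken
```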
Blockchain technology is constantly evolving and can make important contributions in areas such as protecting authors’ rights in the e-science environment, personnel management, collective decision-making, expert assessment, and information security.
Ref. [41] showed that the blockchain
technology can make scientific activity open at all stages
of its implementation. As noted, research areas begin
with the collection or discovery of baseline data. The
results of studies conducted according to a certain method
become available only at the time of publication.
Everything that happens before, for example, data
collection and analysis, review, etc., is not transparent.
This lack of transparency leads to problems associated
with reproducibility, that is, the inability of researchers to
reproduce experiments to confirm the findings of
scientific papers. For example, ref. [42] investigated the possibility of “notarial” blockchain registration of research results, tied to the time of their generation. This application makes it impossible to
change the approved registration data, prevents their
manipulation and can be used to publish research results.
Thus, on the basis of the foregoing, it can be concluded
that the use of decentralized computing technologies in
the processing of large scientific data obtained in various
fields of science in the e-science environment should be a
promising direction.
VI. PROPOSED MODEL
The existing scientific data processing model in ANAS, on the AzScienceNet scientific computer network platform, is implemented as a cloud structure (Fig. 3).
Fig. 3. Existing data processing model on the AzScienceNet platform
Currently, it unites about 40 scientific institutions.
There are over 7000 computers and mobile devices in this
infrastructure, and there is a steady upward trend in
connectivity [43]. This implies an extensive development path, requiring increases in computing power, storage, and the bandwidth of communication channels. Within the framework of this architecture,
models were also proposed for the rational distribution of
computing resources and memory, for example, in
[44,45].
However, for more rational use of AzScienceNet
resources, taking into account the above, it is proposed to
use the architecture of decentralized processing of
scientific data in a network environment as in Fig. 4.
Fig.4. The generalized architecture of decentralized processing of
scientific data.
As seen in Fig. 4, when integrating CPS into the e-science environment, an important factor is the physical level, at which scientific experiments are conducted and a large flow of scientific data is generated. Experiments are conducted in different scientific profiles and are spatially separated (Fig. 1). Data is processed directly in dew computing clusters.
The proposed model of decentralized processing could be used, for example, in a system of round-the-clock monitoring of the stress-strain state of the earth’s crust in seismogenic zones. At present, these studies are carried out using hand-held proton magnetometers at 70 rigidly fixed points [46]. A large stream of raw data must be collected at a central location for further processing, so the main process is time-consuming and does not allow the time frame for effective warning of natural disasters to be optimized.
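Under the proposed decentralized model, each local dew cluster could reduce its raw magnetometer stream to a compact anomaly report before anything travels to the central site. The sketch below is a hypothetical illustration only: the baseline, threshold and point identifiers are invented for the example and are not taken from [46].

```python
# Hypothetical sketch of the proposed decentralized scheme: each local
# dew cluster reduces its raw magnetometer stream to anomaly reports,
# so only a small summary travels to the central site.
# Baseline, threshold and identifiers are invented for illustration.

def detect_anomalies(samples, baseline: float, threshold: float):
    """Keep only readings that deviate from the local baseline."""
    return [s for s in samples if abs(s - baseline) > threshold]

def process_at_cluster(point_id: str, samples):
    anomalies = detect_anomalies(samples, baseline=48_000.0, threshold=50.0)
    # Only the compact report, not the raw stream, is forwarded centrally.
    return {"point": point_id, "anomalies": anomalies, "n_raw": len(samples)}

report = process_at_cluster("pt-07", [48_010.2, 47_930.5, 48_120.9])
print(report)
```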
VII. CONCLUSION
This article discusses the current trends in the
integration of CPS into the e-science environment. This integration encompasses all phases from research data collection, storage, processing, and analysis to science management problems. In addition, the information generated by CPS in the e-science infrastructure can be used for other purposes, such as uninterrupted power supply, materials and equipment management, and maintenance planning, with optimized management to achieve higher overall performance and reliability of the e-science environment, considered in the context of a smart city. As
an alternative to the centralized principle of organizing
data processing, the prospects for decentralized data
processing were presented and the possibilities of using
the cloud, fog, dew and blockchain technologies for this
purpose were considered. Decentralized computing
covers a wide range of technical problems in the field of
e-science, including equipment, operating systems,
networks, databases, browsers, servers, etc.
In the future, practical work is planned on integrating CPS into the e-science environment within the framework of the solutions studied and presented in this article.
REFERENCES
[1] R. M. Alguliyev, R. G. Alakbarov, T. Kh. Fataliyev,
“Electronic science: current status, problems and
perspectives,” Problems of information technology, 2015,
No. 2, pp. 4–14. DOI: 10.25045/jpit.v06.i2.01.
[2] V. Gunes, et al., “A survey on concepts, applications, and
challenges in cyber-physical systems,” KSII Transactions
_on Internet and Information Systems, 2014, vol. 8, No. 12,_
pp. 4242-4268.
[3] E. A. Lee, “Cyber-physical systems: Design challenges,”
_11th IEEE international symposium on object-oriented_
_real-time distributed computing, 2008, pp. 363-369._
[4] E. A. Lee, and S. A. Seshia, _Introduction to embedded_
_systems: A cyber-physical systems approach. MIT Press,_
2016.
[5] O. Cardin, “Classification of cyber-physical production
systems applications: Proposition of an analysis
framework,” _Computers in Industry, Elsevier, 2019, 104,_
pp.11 - 21. DOI: 10.1016/j.compind.2018.10.002.
[6] C. Greer, Cyber-Physical Systems and Internet of Things,
NIST Special Publication 1900-202, 2019, p. 52.
[7] M. Chen, J. Wan, F. Li, “Machine-to-machine
communications: Architectures, standards and
applications,” _KSII_ _transactions_ _on_ _internet_ _and_
_information systems, vol. 6, No. 2, 2012, pp. 480-497._
[8] T. Kh. Fataliyev, Sh. A. Mehdiyev. “Analysis and New
Approaches to the Solution of Problems of Operation of
Oil and Gas Complex as Cyber-Physical System,”
_International Journal of Information Technology and_
_Computer Science (IJITCS), 2018, vol.10, No.11, pp.67-_
76, 2018. DOI: 10.5815/ijitcs.2018.11.07.
[9] A. Zanni. “Cyber-physical systems and smart cities,” IBM
_Big data and analytics, 2015, vol. 20, pp. 1-8._
[10] R. Alguliyev, T. Fataliyev, and Sh. Mehdiyev, “Some
issues of application of internet of things in the oil and gas
complex,” _6th International Conference on Control and_
_Optimization with Industrial Applications, 2018, vol.1, pp._
46-48.
[11] S. K. Khaitan, and J. D. McCalley, “Design techniques
and applications of cyber-physical systems: A survey,”
_IEEE Systems Journal, vol. 9, no. 2, 2014, pp. 350-365._
[12] S. H. Ahmed, G. Kim, and D. Kim, “Cyber-Physical
System: Architecture, Applications and Research
Challenges,” _2013 IFIP Wireless Days (WD). IEEE,_
2013.pp 1-5.
[13] N. Dey, et al., “Medical cyber-physical systems: A
survey,” Journal of medical systems, vol. 42, No. 4, 2018,
p. 74.
[14] P. Hehenberger, et al., “Design, modeling, simulation and
integration of cyber-physical systems: Methods and
applications,” _Computers in Industry, vol 82, 2016, pp._
273-279. DOI: 10.1016/j.compind.2016.05.006.
[15] X. Liang, H. Chen, “The application of CPS in library
management: a survey,” _Library Hi Tech, 2018, DOI:_
10.1108/LHT-11-2017-0234.
[16] G. Werner-Allen, et al., “Fidelity and yield in a volcano
monitoring sensor network,” _Proceedings of the 7th_
_symposium_ _on_ _Operating_ _systems_ _design_ _and_
_implementation. USENIX Association, 2006, pp. 381-396._
[17] M. M. Rathore, et al., “Real-Time Big Data Analytical
Architecture for Remote Sensing Application,” _IEEE_
_Journal of Selected Topics in Applied Earth Observations_
_and Remote Sensing, 2015, vol. 10, No. 8, pp. 4610–4621._
[18] G. Mois, S. Teodora, and C. F. Silviu, “A cyber-physical
system for environmental monitoring,” _IEEE_
_Transactions on Instrumentation and Measurement, 2016,_
vol. 65, No.6, pp. 1463-1471. DOI:
10.1109/TIM.2016.2526669
[19] V. A. Onufriev, A. S. Sulerova, and V. V. Potekhin,
“Cyber-physical systems application for the radio
telescope’s adaptive surface control task,” _Symp. on_
_Automated Systems and Technologies. Hannover: PZH_
Verlag. 2016, pp. 51-56.
[20] R. A. Langmann, “A CPS Integration Platform as a
Framework for Generic Remote Labs in Automation
Engineering,” _Cyber-Physical_ _Laboratories_ _in_
_Engineering and Science Education, 2018, pp. 305–329._
[21] T. Kh. Fataliyev, Sh. А. Mehdiyev, “Problems of
organization of e-maintenance in a network environment
(in azerb.),” _I republic scientific-practical conference, of_
_actual problems of software engineering, Baku, 2017, pp._
291-293.
[22] H. S. Ning, H. Liu, “Cyber-physical-social-thinking
space-based science and technology framework for the
Internet of Things,” _Science China Information Sciences,_
2015, vol. 58, No. 3, pp. 1-19.
[23] E. Hatziminaoglou, “Virtual observatory: science
capabilities and scientific results,” arXiv preprint arXiv:
0911.1878. 2009, Nov 10.
[24] P. Padher, V. M. Rohokale, “A Cyber-Physical System for
Environmental Monitoring,” _Int J Sensor Networks and_
_Data Communications, 2018, vol.7, issue 2, pp. 154-158._
DOI: 10.4172/2090-4886.1000154.
[25] L. Parolini, et al., “A cyber-physical systems approach to
data center modeling and control for energy efficiency,”
_Proceedings of the IEEE, vol. 100, no.1, 2011, pp. 254-_
268.
[26] T. Nam, T. A. Pardo, “Conceptualizing smart city with
dimensions of technology, people, and institutions,”
_Proceedings of the 12th annual international digital_
_government research conference: digital government_
_innovation in challenging times, ACM, 2011, pp. 282-291._
[27] V. Albino, U. Berardi, and R. M. Dangelico, “Smart
Cities: Definitions, Dimensions, Performance, and
Initiatives,” Journal of Urban Technology, 2015, Vol. 22,
No. 1, 3–21.
[28] “Improving Performance with Integrated Smart
Buildings,” http://www.usa.siemens.com/intelligent-infrastructure/assets/pdf/smart-building-white-paper.pdf
[29] R. Atat, et. al., “Big data meet cyber-physical systems: A
panoramic survey,” IEEE Access, 2018, vol. 6, pp. 73603-73636. DOI: 10.1109/ACCESS.2018.2878681.
[30] Y. Liu, et al., “Review on cyber-physical systems,”
_IEEE/CAA Journal of Automatica Sinica, vol. 4, No. 1,_
2017, pp. 27-40.
[31] E. Althagafy, M .R. J. Qureshi, “Novel Cloud
Architecture to Decrease Problems Related to Big Data,”
_International_ _Journal_ _of_ _Computer_ _Network_ _and_
_Information Security (IJCNIS), Vol.9, No.2, pp.53-60,_
2017. DOI: 10.5815/ijcnis.2017.02.07.
[32] R. G. Alakbarov, F. H. Pashaev, O. R. Alakbarov,
“Forecasting Cloudlet Development on Mobile
Computing Clouds,” International Journal of Information
_Technology and Computer Science (IJITCS), 2017, vol. 9,_
No. 11, pp.23-34. DOI: 10.5815/ijitcs.2017.11.03.
[33] E. S. Jung, R. Kettimuthu, “Challenges and opportunities
for data-intensive computing in the cloud,” _IEEE_
_Computer, 2014, vol. 47, No 12, pp. 82-85._
[34] D. Linthicum. “Edge Computing vs. Fog Computing:
Definitions and Enterprise Uses,” 2018. Online:
www.cisco.com/c/en/us/solutions/enterprise-networks/edge-computing.html
[35] F. Bonomi, et al., “Fog computing and its role in the
internet of things,” Proceedings of the first edition of the
_MCC workshop on Mobile cloud computing, 2012, pp. 13-_
16.
[36] Y. Wang. “Definition and Categorization of Dew
Computing,” Open Journal of Cloud Computing (OJCC),
2016, vol. 3, issue 1, pp. 1-7.
[37] Y. Wang, “Cloud-dew architecture,” _International_
_Journal of Cloud Computing, 2015, vol. 4, No. 3, pp. 199-_
210.
[38] Z. Zheng, et al., Blockchain challenges and opportunities:
_a survey, Work Pap. 2016. pp. 1-25_
[39] R. S. Abdullah, M. A. Faizal, “BlockChain:
Cryptographic Method in Fourth Industrial Revolution,”
_International_ _Journal_ _of_ _Computer_ _Network_ _and_
_Information Security (IJCNIS), Vol.10, No.11, pp.9-17,_
2018. DOI: 10.5815/ijcnis.2018.11.02
[40] J. Lee, M. Azamfar, J. Singh, “A Blockchain-Enabled
Cyber-Physical System Architecture for Industry 4.0
Manufacturing Systems,” _Manufacturing Letters, 2019._
doi: https://doi.org/10.1016/j.mfglet.2019.05.003
[41] J. Van Rossum, Blockchain for research: Perspectives on
_a new paradigm for scholarly communication, Digital_
Science, November 2017.
[42] S. Bartling, “Blockchain for science and knowledge
creation,” Gesundheit digital, Springer, Berlin, Heidelberg,
2019, pp.159-180.
[43] https://azsciencenet.az/en/service/1
[44] R. Alakbarov, F. Pashayev, M. Hashimov, “A Model of
Computational Resources Distribution among Data Center
Users,” IJACT, 2015, vol. 7, No. 2, pp. 01-06.
[45] R. G. Alakbarov, F. H. Pashaev, M. A. Hashimov,
“Development of the Model of Dynamic Storage
Distribution in Data Processing Centers,” International
_Journal of Information Technology and Computer Science_
_(IJITCS),_ 2015, vol.7, no.5, pp.18-24. DOI:
10.5815/ijitcs.2015.05.03.
[46] A. G. Rzayev, et al “Reflection of the geodynamic regime
of the Shamakhi-Ismayilli seismogenic zone in local
anomalies of the geomagnetic field,” _Seismoprognosis_
_observations in the territory of Azerbaijan, 2019, vol. 16,_
No. 1, pp.7-16.
**Authors’ Profiles**
**Tahmasib** **Khanahmad** **Fataliyev**
graduated from Automation and Computer
Engineering faculty of Azerbaijan
Polytechnic University. His primary
research interests include various areas in
e-science, data processing and computer
networks.
He is head of the department at the
Institute of Information Technology of ANAS, Azerbaijan. He
is the author of about 120 scientific papers.
**Shakir Agajan Mehdiyev graduated from**
Automation and Computer Engineering
faculty of Azerbaijan Polytechnic
University. His primary research interests
include various areas in e-science,
computer networks, and maintenance.
He is head of the department at the
Institute of Information Technology of
ANAS, Azerbaijan. He is the author of about 25 scientific
papers.
**How to cite this paper:** Tahmasib Kh. Fataliyev, Shakir A. Mehdiyev, "Integration of Cyber-Physical Systems in E-Science Environment: State-of-the-Art, Problems and Effective Solutions", International Journal of Modern Education and Computer Science (IJMECS), Vol.11, No.9, pp. 35-43, 2019. DOI: 10.5815/ijmecs.2019.09.04
| 11,456
|
{
"disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.5815/ijmecs.2019.09.04?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.5815/ijmecs.2019.09.04, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": null,
"status": "GOLD",
"url": "http://www.mecs-press.org/ijmecs/ijmecs-v11-n9/IJMECS-V11-N9-4.pdf"
}
| 2,019
|
[] | true
| 2019-09-08T00:00:00
|
[
{
"paperId": "9a5b986f9919b3023d9d496a83bd668d72cfa60d",
"title": "A blockchain enabled Cyber-Physical System architecture for Industry 4.0 manufacturing systems"
},
{
"paperId": "363beb463313987711007f1fa8c431c92f3cb5f6",
"title": "Cyber-physical systems and internet of things"
},
{
"paperId": "1305cc52fb5513585dd850d188de9d8d277e1085",
"title": "Analysis and New Approaches to the Solution of Problems of Operation of Oil and Gas Complex as Cyber-Physical System"
},
{
"paperId": "f5423a27841feb068fa7603b634c076898fe6690",
"title": "Block Chain: Cryptographic Method in Fourth Industrial Revolution"
},
{
"paperId": "e311eab55d98e0624e39872b91aa17a065e65822",
"title": "Big Data Meet Cyber-Physical Systems: A Panoramic Survey"
},
{
"paperId": "a10f46b6a4339faa2bc457dc732da93f28ca2947",
"title": "Classification of cyber-physical production systems applications: Proposition of an analysis framework"
},
{
"paperId": "305edd92f237f8e0c583a809504dcec7e204d632",
"title": "Blockchain challenges and opportunities: a survey"
},
{
"paperId": "bde1f9dc915dbdb4b3c1dd9e6899fd89010a5a5e",
"title": "The application of CPS in library management: a survey"
},
{
"paperId": "66cd996e9c959dfd2ebd92733a922e21c036d9f2",
"title": "Medical cyber-physical systems: A survey"
},
{
"paperId": "5b95901e08407e4b871c6f654b81b79c8bac0da1",
"title": "Blockchain for Research"
},
{
"paperId": "18f6938d9fafe949c20385ab41d5c27099dbe259",
"title": "Forecasting Cloudlet Development on Mobile Computing Clouds"
},
{
"paperId": "1ac636dba9a25eb7fd98fd6379a9c17246793fa6",
"title": "Novel Cloud Architecture to Decrease Problems Related to Big Data"
},
{
"paperId": "1c95aaee295b30b05bffbe871ccacea8dab54af8",
"title": "Review on cyber-physical systems"
},
{
"paperId": "24a154e9bb8c5567263e63c8760b8f2bbe44e2b8",
"title": "Design, modelling, simulation and integration of cyber physical systems: Methods and applications"
},
{
"paperId": "96912d79a8893b0adb527275bf57f8a4c9e8a79f",
"title": "Cloud-dew architecture"
},
{
"paperId": "61f77be49703d9b50ce04320c5fe78df8fe5bc14",
"title": "Design Techniques and Applications of Cyberphysical Systems: A Survey"
},
{
"paperId": "a57a9ae2f3853ca56e5134ab4d92fe693ff3dcf2",
"title": "Real-Time Big Data Analytical Architecture for Remote Sensing Application"
},
{
"paperId": "d8ea54cf8c8a05dc179003240a96391c941d2b3f",
"title": "Development of the Model of Dynamic Storage Distribution in Data Processing Centers"
},
{
"paperId": "90facb5df8fd9de8bf25ab4f9292a9a251ba01f1",
"title": "Cyber-physical-social-thinking space based science and technology framework for the Internet of Things"
},
{
"paperId": "ce737826200f813de4ccf54d8054538adaa78fd3",
"title": "Smart Cities: Definitions, Dimensions, Performance, and Initiatives"
},
{
"paperId": "6a1a065335d325279467b5110f98f76ef2440f82",
"title": "A Survey on Concepts, Applications, and Challenges in Cyber-Physical Systems"
},
{
"paperId": "7e127b0aca3e41465e8654ee5f30253a5f6a3c3f",
"title": "Challenges and Opportunities for Data-Intensive Computing in the Cloud"
},
{
"paperId": "71a44ea7d05b0e6fd30f15435380a69560728b05",
"title": "Cyber Physical System: Architecture, applications and research challenges"
},
{
"paperId": "c7c878013390b5b79cbcf8f83eaa7d5e6f108f6a",
"title": "Introduction to Embedded Systems - A Cyber-Physical Systems Approach"
},
{
"paperId": "421969a771b310ad8aa5861f1488ef9bc5ef17b5",
"title": "Machine-to-Machine Communications: Architectures, Standards and Applications"
},
{
"paperId": "356cc2d5f81d872caeb80840de87be2ebfdbacc9",
"title": "Conceptualizing smart city with dimensions of technology, people, and institutions"
},
{
"paperId": "adf7c243567b540d873188fcfd8034191507f386",
"title": "Virtual Observatory: Science capabilities and scientific results"
},
{
"paperId": "059e776cacf87b3ed3f6eb9aa87968247fa68be5",
"title": "Cyber Physical Systems: Design Challenges"
},
{
"paperId": "7b310a9fbb3cd9b07a35c6208b297076823e20f1",
"title": "Fidelity and yield in a volcano monitoring sensor network"
},
{
"paperId": "e61f14ad83976e8dda184bbdd4317376c08ca32b",
"title": "A Cyber-Physical System for Environmental Monitoring"
},
{
"paperId": "895fe96ee0c3182f7aea597534a3caac94e43a40",
"title": "A CPS Integration Platform as a Framework for Generic Remote Labs in Automation Engineering"
},
{
"paperId": "942492c729e84a01a3ef85d63fec117feb9e4362",
"title": "Definition and Categorization of Dew Computing"
},
{
"paperId": "20d92085910dab766fcf6e5897f01d734db7c23d",
"title": "Cyber-physical systems and smart cities Learn how smart devices , sensors , and actuators are advancing Internet of Things implementations"
},
{
"paperId": "2e15eac922143e30cdde0ee2ab8c3e729cbd3cf7",
"title": "A Cyber–Physical Systems Approach to Data Center Modeling and Control for Energy Efficiency"
},
{
"paperId": "43e4de140e63453f2f9364da208c92b0963270e5",
"title": "Blockchain for Science and Knowledge Creation"
},
{
"paperId": null,
"title": "Fataliyev , Sh . A . Mehdiyev . “ Analysis and New Approaches to the Solution of Problems of Operation of Oil and Gas Complex as Cyber - Physical System"
},
{
"paperId": null,
"title": "Edge Computing vs. Fog Computing: Definitions and Enterprise Uses"
},
{
"paperId": null,
"title": "Cyber-physical systems application for the radio telescope’s adaptive surface control task"
},
{
"paperId": null,
"title": "Some issues of application of internet of things in the oil and gas complex"
},
{
"paperId": null,
"title": "Problems of organization of e-maintenance in a network environment (in azerb.)"
},
{
"paperId": null,
"title": "Fog computing and its role in the internet of things"
},
{
"paperId": null,
"title": "Reflection of the geodynamic regime of the Shamakhi-Ismayilli seismogenic zone in local anomalies of the geomagnetic field,"
},
{
"paperId": null,
"title": "Electronic science: current status, problems and perspectives"
},
{
"paperId": null,
"title": "A Model of Computational Resources Distribution among Data Center Users"
}
] | 11,456
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Engineering",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/0080a2d96bf02ab60e07fa6b3de72a34012cdc80
|
[
"Computer Science"
] | 0.896618
|
Private Routing in the Internet
|
0080a2d96bf02ab60e07fa6b3de72a34012cdc80
|
International Conference on High Performance Switching and Routing
|
[
{
"authorId": "1803990",
"name": "F. Tusa"
},
{
"authorId": "2066440535",
"name": "David Griffin"
},
{
"authorId": "2056743184",
"name": "Miguel Rio"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"High Performance Switching and Routing",
"Int Conf High Perform Switch Routing",
"HPSR",
"High Perform Switch Routing"
],
"alternate_urls": null,
"id": "9440cc3d-d905-402e-a274-8e60da64789f",
"issn": null,
"name": "International Conference on High Performance Switching and Routing",
"type": "conference",
"url": "http://www.ieee-hpsr.org/"
}
|
Despite the breakthroughs in end-to-end encryption that keeps the content of Internet data confidential, the fact that packet headers contain source and IP addresses remains a strong violation of users’ privacy. This paper describes a routing mechanism that allows for connections to be established where no provider, including the final destination, knows who is connecting to whom. The system makes use of inter-domain source routing with public key cryptography to establish connections and simple private symmetric encryption in the data path that allows for fully stateless packet transmission. We discuss the potential implications of real deployment of our routing mechanism in the Internet.
|
# Private Routing in the Internet
## Miguel Rio
_Department of Electronic and_
_Electrical Engineering_
_University College London_
London, United Kingdom
[email protected]
## Francesco Tusa
_Department of Electronic and_
_Electrical Engineering_
_University College London_
London, United Kingdom
[email protected]
## David Griffin
_Department of Electronic and_
_Electrical Engineering_
_University College London_
London, United Kingdom
[email protected]
**_Abstract—Despite the breakthroughs in end-to-end encryption that keep the content of Internet data confidential, the fact that packet headers contain source and destination IP addresses remains a strong violation of users' privacy. This paper describes a routing mechanism that allows for connections to be established where no provider, including the final destination, knows who is connecting to whom. The system makes use of inter-domain source routing with public key cryptography to establish connections and simple private symmetric encryption in the data path that allows for fully stateless packet transmission. We discuss the potential implications of real deployment of our routing mechanism in the Internet._**

**_Index Terms—privacy, routing security and privacy, source routing, network security_**

I. INTRODUCTION

Although end-to-end encryption has proved considerably effective at protecting the confidentiality of data, the fact that IP headers are transmitted as plaintext through the network incurs a significant lack of privacy. Every network provider that forwards the packets knows who the source and the destination are and can potentially perform traffic analysis, based on IP addresses, in order to track down the usage of a particular service and the entities (users) involved in the communication.

If users want to protect the confidentiality of their connections they have a limited set of choices. They can use a virtual private network service, with added performance and financial costs. They can also use onion routing services like ToR, which also have a serious impact on performance.

This paper presents Private Routing (PR), a novel routing mechanism that allows users to establish private connections using inter-domain source routing, making it extremely hard for any given provider to identify the communicating entities. The paper is organised as follows: section II presents an overview of the system. Section III details how map dissemination works. Section IV explains in detail how sessions are established. Section V describes some related work. We finish with a discussion of open questions in section VI and conclusions in section VII.

II. OVERVIEW

End points establish sessions for private communication across a sequence of domains in the Internet. Packets to initialise the session and to exchange data during the session do not identify the end points by public IP address. This makes it impossible for intermediate domains or eavesdroppers to identify who is communicating with whom, or the full details of the sequence of domains forming the path between end points.

Source hosts select the path to the destination that meets the required characteristics of the session, e.g. to meet performance targets such as throughput or latency, to increase resilience to failures by avoiding shared paths for critical connections, or to avoid or include certain domains in the path for policy/administrative reasons. Once the path to the desired destination host has been determined by the source host, it is encrypted so that neither the destination host nor the full path can be reverse engineered, but so that each domain can easily identify the next hop for forwarding packets towards the destination.

Our private routing scheme uses two types of encryption in the two main phases of a session. During session initialisation, strong public-key cryptography [1] is used for the Encrypted Packet Route (EPR) created by the source host, which contains the encrypted sequence of domain hops and the final destination host identifier. This form of encryption is secure but has two main drawbacks: there is a computational overhead for decrypting the next hop, and a large minimum length of ciphertext per hop, which potentially makes the number of bits required in the EPR impracticable for a reasonable overhead to be conveyed in every packet header during the data transfer phase of the session.

To reduce the performance impact, a lighter form of encryption for the path and destination is used during the data transfer phase rather than using the full EPR. Each domain uses its own secret encryption method with a symmetric private key to encrypt/decrypt the next hop. The resulting Encrypted Source-Destination Path (ESDP) and Encrypted Destination-Source Path (EDSP) are constructed hop-by-hop during the session initialisation phase and are then used in the headers of each packet during the data transfer phase.

PR uses inter-domain source routing based on inter-domain connectivity maps provided by extensions to BGP similar to BGP-LS [2]. These maps allow the calculation of the best routes, which then trigger the establishment of sessions with a given destination. Each domain only knows the preceding and next domain and not the full path. The entire workflow
involves three stages:
1) Inter-domain map dissemination.
2) Path calculation and session establishment using public
key cryptography. These sessions assume the same inter-domain path in both directions and do not create any
state in the routers. This session establishment message
needs to be interpreted and updated by one router in
each domain.
3) Data transfer using ESDP/EDSPs based on private symmetric keys per domain.
Private routing allows for private connections without the
disadvantages of VPNs or onion routing. Users do not need
to subscribe to a, possibly costly, third party service and
there will not be performance penalties caused by detouring
through off-path servers. The session establishment part is
similar to onion routing but it is done without any network
detouring. The use of source routing actually allows for
improved performance as source hosts can select paths for the
connections according to service performance requirements. It
also does not rely on public key cryptography for every data packet, only for the first one.
III. MAP DISSEMINATION
The first step in PR is domain map propagation. The global
connectivity map of PR domains is built using a link-state
protocol and is sent to every device and updated accordingly,
as illustrated in Figure 1. Our assumption is that this map is
pushed periodically to users whenever there are inter-domain
topological updates. Although this task may seem challenging
we think it is perfectly feasible even today (see discussion
section).
Fig. 1. Map Dissemination. (The figure shows domains B, C and F exchanging link policies and QoS updates.)
The map consists of three parts (a sketch of a possible map record follows the list):
1) Static information about the domains: country, administration, contact, and the domain’s public keys signed by a certificate authority. Note that some edge domains that are not offering publicly-available services may not want to propagate this information to the public and may choose instead to disseminate it selectively, through other means, only to authorised users. Note also that an edge domain can prune this map before giving it to its users, although it cannot guarantee that the removed links will not be used if discovered another way.
2) Policies. The map also includes the type of each link (customer-provider, peer-to-peer) to provide sufficient information for users to avoid building routes that are not valley-free [3]. A user should never build a route that uses customer-provider peering as transit; if this is violated by a user’s path selection, the domain will reject the connection. Each domain may tag a particular link with given policies, which may or may not be enforceable. Enforceable policies include, for example: do not forward packets from domain A to domain B. Non-enforceable policies include: do not use link 1 to reach final domain C.
3) Performance information about links and domains (e.g.
link load) may be obtained or collected through parallel
information systems. PR does not require domains to
volunteer this information themselves which may be
difficult to trust anyway. Providers like Thousand Eyes
[4] should be able to provide this information on a
domain-neutral basis.
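For illustration, one entry of such a map could be represented as below; the field names are hypothetical and this is a sketch of the information content, not a proposed wire format.

```python
# Sketch of one entry in the inter-domain connectivity map, combining the
# three parts described above. Field names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class DomainRecord:
    domain_id: str
    country: str                  # static administrative information
    public_key_pem: bytes         # public key, signed by a certificate authority
    # Per-link policy: link id -> link type ("customer-provider" | "peer-to-peer")
    link_types: dict = field(default_factory=dict)
    # Per-link performance hints, possibly from third-party measurement systems
    link_load: dict = field(default_factory=dict)

rec = DomainRecord("A", "GB", b"-----BEGIN PUBLIC KEY-----...",
                   link_types={2: "peer-to-peer"}, link_load={2: 0.3})
```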
We foresee that our domains are roughly equivalent to
today’s Autonomous Systems (ASes). Nevertheless in future
developments, edge domains can establish new domains with
less overhead than today’s ASes, provided organizations like IANA allow it.
IV. SESSION ESTABLISHMENT
The second step in PR involves bidirectional route construction. A sender will use an algorithm, e.g. shortest-path or a variant to maximise throughput or to improve resilience with latency guarantees [5]. It may also apply its own specific policies to avoid, e.g., certain domains or geographical regions. Note that because this is a source-routing system, it is not necessary for all users to use the same routing algorithm. After calculating the desired path, the source host will construct the EPR to be used in session initialisation. In the example of Figure 2, the path to be encrypted from the source host is ⟨domain A, link 2, domain B, link 6, domain C, link 7, domain D, destination-host id⟩. Note that although this example uses globally unique link identifiers, this is not necessary in practice. Each domain only needs to identify which of its outgoing links should be used for the next hop, and these locally unique identifiers will be conveyed in the domain map used by the source host to construct the EPR.
The EPR is constructed as follows: the source host encrypts the outgoing link id of domain A using the public key of domain A (the public key having been disseminated to the host through the domain map), which we denote as E_A^p(2). E^p denotes that we are using public key cryptography, the subscript A indicates that the public key of domain A is used, and we are encrypting outgoing link identifier “2” with that key. This is repeated for each domain hop to construct the sequence of encrypted hops, with the final element of the sequence being the destination host identifier encrypted with the public key of the destination domain, D: E_D^p(dest). Note that the destination host identifier does not need to be publicly addressable; an identifier local to the destination domain can be used provided
that it has been conveyed in the map and used by the source node in the construction of the final element of the EPR.

Fig. 2. Session Establishment. (The figure shows the INIT message carrying the EPR ⟨E_A^p(2), E_B^p(6), E_C^p(7), E_D^p(dest)⟩ across domains A-D; the ESDP ⟨E_A(2), E_B(6), E_C(7), E_D(dest)⟩ and EDSP ⟨E_D(7), E_C(6), E_B(2), E_A(src)⟩ built hop by hop as the INIT is forwarded (steps 4 and 5); the INIT-ACK sent from dest to src over the EDSP with no need for processing at routers (step 5b); and data packets sent in both directions carrying the ESDP/EDSP (step 7).)
Once encrypted, no party can decrypt the entire path and
destination host identifier without access to the private keys
of all domains in the path. Domain B, for example, will know
the identity of the preceding domain (A) because the session
initialisation request will arrive from domain A on incoming
link 2, and it will discover the identity of the outgoing link
(6), and hence the next hop domain (C), after it has decrypted
EB^p(6) using its own private key, but it will not be able to
discover the identity of further downstream domains (domain
D in this example) and it will be unable to decipher the
destination host identifier.
As plaintext domain identifiers are not used anywhere in
the EPR, a hop counter is required to be conveyed in the
initialisation packet along with the EPR. The hop counter is
zero when initiated by the source host and it is incremented
by each domain as it processes and forwards the EPR. When
a domain receives an INIT message it uses the hop counter
as the index into the sequence of hops in the EPR to identify
which element it should decrypt to discover the outgoing link
identifier.
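To make the mechanics concrete, the following minimal sketch (ours, not part of the protocol specification) simulates EPR construction and the hop-counter-indexed decryption described above. The Domain class and its per-domain private store are illustrative stand-ins for real public-key cryptography such as RSA [1].

```python
# Minimal sketch (illustrative only): building and consuming an Encrypted
# Path Representation (EPR). Public-key encryption is simulated with a
# private per-domain store so the example is self-contained.

class Domain:
    def __init__(self, name):
        self.name = name
        self._private_store = {}           # stands in for the private key

    def encrypt_pub(self, plaintext):
        """Anyone may encrypt for this domain; only it can decrypt."""
        token = f"enc[{self.name}]#{len(self._private_store)}"
        self._private_store[token] = plaintext
        return token

    def decrypt_priv(self, token):
        return self._private_store[token]  # needs this domain's private state

def build_epr(hops):
    """hops: ordered (domain, next-hop link or dest identifier) pairs
    chosen by the source from the disseminated connectivity maps."""
    return [dom.encrypt_pub(str(nxt)) for dom, nxt in hops]

A, B, C, D = Domain("A"), Domain("B"), Domain("C"), Domain("D")
epr = build_epr([(A, 2), (B, 6), (C, 7), (D, "dest")])

# A domain uses the INIT hop counter as the index into the EPR and can
# only open its own element:
hop_counter = 1                            # INIT arriving at domain B
print(B.decrypt_priv(epr[hop_counter]))    # -> "6": outgoing link, nothing more
```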
After the path is determined by the source host, the session
is established along the path using a first INIT message that
conveys the calculated EPR between domains. The elements
of the EPR are decrypted at each domain and two addresses
(ESDP and EDSP) are progressively calculated and built
as the INIT message traverses the path. These encrypted
paths/addresses are used in all subsequent packets of the data
transfer phase of the session.
ESDP and EDSP use a lighter form of encryption compared
to the public key cryptography used to construct the EPR.
Each domain uses its own secret encryption method and
private symmetric key to substitute the plaintext outgoing link
identifier with an encrypted version. Referring to Figure 2,
domain A substitutes its element of the EPR, EA^p(2), with
EA(2) in the ESDP, where EA denotes that it is using the private
symmetric key of domain A. At the same time the reverse
path is constructed - in this case domain A adds the encrypted
version of the source host identifier to the EDSP: EA(src).
Domain B adds the next elements of the ESDP and EDSP
and so on until the destination domain is reached. Finally
the destination domain forwards the INIT message to the
destination host with the fully constructed ESDP and EDSP.
The private symmetric key encryption method used in
each domain uses a session-specific identifier as a salt for
both encryption and decryption operations. The sessionID is
conveyed in the header of all data packets along with the ESDP
or EDSP during the packet transfer phase. The salt is required
-----
to make mappings between plaintext and ciphertext specific
to each session to avoid malicious domains or eavesdroppers
building up data across multiple sessions to potentially learn
plaintext to ciphertext mappings and to eventually guess the
private symmetric keys used by domains.
The sessionID is constructed from a deterministic hash of
the original EPR that each domain calculates when constructing the ESDP/EDSP during the session initialisation phase.
While it would be possible for the source host to use a
random number or nonce for the sessionID, tying it to the
EPR prevents malicious domains from exhaustively testing
arbitrary salt values to learn plaintext-to-ciphertext mappings
(as discussed further in Section VI).
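A minimal sketch of the EPR-derived sessionID and the salted symmetric substitution follows. The HMAC-derived XOR keystream is an assumed stand-in (and not production-secure), since the paper deliberately leaves each domain's secret method unspecified.

```python
# Minimal sketch (illustrative, not secure for production): sessionID is a
# deterministic hash of the full EPR; each domain applies its own secret,
# salted, reversible mapping to build ESDP/EDSP elements.
import hashlib, hmac

def session_id(epr):
    """Deterministic, well-known hash of the full EPR."""
    return hashlib.sha256("|".join(epr).encode()).hexdigest()[:16]

class DomainCipher:
    def __init__(self, secret_key: bytes):
        self._key = secret_key                        # never leaves the domain

    def _keystream(self, salt: str, n: int) -> bytes:
        return hmac.new(self._key, salt.encode(), hashlib.sha256).digest()[:n]

    def encrypt_element(self, plaintext: str, salt: str) -> str:
        ks = self._keystream(salt, len(plaintext))
        return bytes(a ^ b for a, b in zip(plaintext.encode(), ks)).hex()

    def decrypt_element(self, ciphertext: str, salt: str) -> str:
        data = bytes.fromhex(ciphertext)
        ks = self._keystream(salt, len(data))
        return bytes(a ^ b for a, b in zip(data, ks)).decode()

epr = ["encA(2)", "encB(6)", "encC(7)", "encD(dest)"]    # placeholder EPR
sid = session_id(epr)
cipher_A = DomainCipher(b"domain-A-secret")
esdp_elem = cipher_A.encrypt_element("link-2", sid)      # EA(2), this session only
assert cipher_A.decrypt_element(esdp_elem, sid) == "link-2"
```

Because the salt enters the keystream derivation, the same plaintext link identifier maps to a different ciphertext in every session, which is exactly the property the salting is meant to provide.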
The full process is illustrated in Figure 2 where a session
between source and destination hosts is being established.
_• Firstly the client prepares the INIT message containing_
the EPR, where each hop is encrypted with the public key
of the preceding domain. The hop counter is initialised
to zero.
_• In step 2 domain A decrypts the first element of the EPR_
to reveal that the next hop is over outgoing link 2. It
calculates the sessionID from the hash of the full EPR and
uses this as a salt for its private symmetric key encryption
of the outgoing link, which it adds as the first element of
the ESDP and its encryption of the source host identifier
which it adds as the first element of the reverse path in
the EDSP. Domain A increments the hop counter and
forwards the INIT message to domain B over link 2.
_• In step 3, domain B uses the hop counter as an index to_
see that it is responsible for the second element of the EPR. It
decrypts this element to learn that the next hop is domain C over outgoing link
6 and adds its encrypted elements to the ESDP and EDSP
using the calculated hash of the EPR as sessionID for the
salt of its encryption. It increments the hop counter and
forwards the INIT message to domain C.
_• In step 4, domain C adds to ESDP and EDSP components_
of the path similarly to step 3.
_• In step 5, domain D adds the destination host identifier_
to the ESDP and the final element of the reverse path
to the EDSP and forwards the INIT message to the
destination host. Now that the destination has both fully
constructed ESDP and EDSP addresses, it can already
send packets to the source using the EDSP. The first
packet returned is the INIT-ACK which is used to send
the fully constructed ESDP to the source. Note that as
they are fully constructed in step 5 the ESDP and EDSP
addresses in the INIT-ACK do not need to be further
processed by the domains. The EDSP used as the address
in the header of the INIT-ACK needs to be accompanied
with the sessionID, which will be used as the salt for the
decryption of the next hop for forwarding the INIT-ACK
in each of the domains along the reverse path.
Steps 6 and 7 represent the data transfer phase of the
session.
_• In step 6, the source sends packets using the ESDP_
and the sessionID as the address (a minimal forwarding sketch follows this list). At each hop the
corresponding part of the ESDP - as indexed by the hop
counter - is decrypted and the next hop domain calculated.
When arriving at the destination domain the destination
host identifier is decrypted and the packet is sent to the
destination host.
_• In step 7, the destination host sends packets to the_
source using the EDSP and sessionID. At each hop the
corresponding part of the EDSP is decrypted and the next
hop domain calculated.
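The step-6 data path can be sketched as follows (self-contained and illustrative; the XOR keystream again stands in for a domain's secret symmetric method, an assumption rather than the paper's cipher):

```python
# Minimal sketch of step-6 forwarding: the hop counter indexes the ESDP and
# each domain reverses only its own salted substitution to learn the next hop.
import hashlib, hmac

def _xor(key: bytes, salt: str, data: bytes) -> bytes:
    ks = hmac.new(key, salt.encode(), hashlib.sha256).digest()
    return bytes(a ^ b for a, b in zip(data, ks))

keys = {"A": b"kA", "B": b"kB", "C": b"kC", "D": b"kD"}
path = [("A", "link-2"), ("B", "link-6"), ("C", "link-7"), ("D", "dest-host")]
sid = "f00dbabe"   # sessionID carried in every data packet header

# The ESDP as progressively built during session establishment:
esdp = [_xor(keys[d], sid, nxt.encode()).hex() for d, nxt in path]

# Data-transfer phase: each domain decrypts only its own element.
for hop, (domain, _) in enumerate(path):
    next_hop = _xor(keys[domain], sid, bytes.fromhex(esdp[hop])).decode()
    print(f"domain {domain}: forward via {next_hop}")
```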
By using encryption we ensure that no domain in the path
knows the full list of domains in the path. Only the origin
domain will know who the sender of the packet is, and only the
destination domain can see the destination identifier/address. It
is important to note that no per-flow state is kept in the routers
per session at any time, even during session establishment.
V. RELATED WORK
Source routing has been defined for decades [6] and several
works proposed to build on it. Examples include the Nimrod
architecture [7], Pathlets [8], NIRA [9], MIRO [10] and [11].
In the last decade work on segment routing [12] has gained
popularity and has seen some deployments. Source routing
has also been deployed in data centres [13]. Adoption has
been limited by security concerns [14], but these do not really
apply to PR since we use domains as the unit in our source routes.
Our private source routing has similarities with Tor/onion
routing [15] in the way that the full path is hidden from other
routers. However, rather than implementing overlay routing as
in Tor, PR is designed as a network infrastructure protocol
that allows nodes to have even more efficient routes than today.
The initial session establishment borrows some ideas from
RSVP [16] and connection oriented protocols like ATM [17].
The INIT message needs to be intercepted and processed by
some routers. However, this needs to be done by only one
router per domain and, crucially, does not create any state in
the routers.
In previous work we defined a user centric framework [18]
that included the establishment of private connections but with
a significant impact on router performance due to the use of
per-flow state. In this paper we propose a completely different
method that does not require state to be maintained by routers.
VI. DISCUSSION AND OPEN QUESTIONS
_A. Security analysis_
We use two forms of private addressing in our Private
Routing scheme: EPRs are used in INIT messages during
session initialisation and ESDP/EDSPs used in the headers of
all packets during the data transfer phase of the established
sessions. EPR is based on strong public-key cryptography
where each element of the EPR sequence is the next hop
encrypted using the public key of the domain forwarding the
INIT message. Provided that the private keys of domains are
not revealed, no party is able to decrypt the entire path. Guessing private keys through brute-force attacks is computationally
-----
expensive and the security implications have been extensively
studied in the literature [1].
The encryption scheme used for ESDPs and EDSPs depends
entirely on a secret symmetric method kept private to each
domain. As both encryption and decryption are undertaken
by the same entity - the domain undertaking the next hop
forwarding of packets - there is no need for any key to be
revealed to the source or destination hosts or to any other
domain. This significantly improves security while allowing
for the size of ciphertext to be minimal. The algorithm for
mapping plaintext to ciphertext and vice versa is kept secret
and depends upon a salt - which is the sessionID in our case.
Different salts will result in different mappings.
One possible attack model is that a malicious domain
attempts to learn the secret mapping used by downstream
domains. If this were possible then the malicious domain
could observe the encrypted ESDP or EDSP and reverse the
encoding to reveal the domain path and destination identity of
sessions traversing its domain.
To undertake such an attack the malicious domain would
need to gather sufficient data samples of plaintext and ciphertext mappings. It could gather these by initiating false sessions
from its own domain and observing the encrypted next hops
returned by downstream domains. However, as a salt is needed
for every encryption/decryption the attacking domain would
need to explore false sessions using a significant proportion of
the salt range in order to guess the secret mapping algorithm.
We have opted to tie the sessionID/salt to the EPR, and hence to the destination address, to avoid the possibility of malicious attackers being
able to explore encodings using arbitrary salts. The sessionID/salt
is determined by a well-known deterministic hash method of
the full EPR. Although it is possible for attackers to craft
specific salts to probe the encryption method of downstream
domains, this will result in INIT messages to a very wide
range of destination hosts, making the attack only possible
if the attacker is able to collude with a very large number
of destination hosts that also represent the range of values of
sessionID/salt.
One possible approach to make such attacks even more
difficult would be to make the secret encryption algorithm used
in each domain time-dependent. When processing the INIT
messages, domains would mark the forwarded INIT message
with the time-to-live (TTL) of their encryption method, which
will be returned to the source in the INIT-ACK. Once the
TTL expires a source would need to initiate a new INIT
message to obtain the new ESDP/EDSP for the EPR. With
this approach attackers would need to restart their probing
and secret guessing from scratch in every TTL period.
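One possible epoch-based key rotation can be sketched as follows (the derivation below is our assumption; the text above only requires that the secret method change in every TTL period):

```python
# Minimal sketch (rotation mechanism assumed): each domain derives a fresh
# epoch key, so probing results from one TTL period are useless in the next.
import hashlib, time

TTL_SECONDS = 3600   # illustrative TTL advertised in the INIT-ACK

def epoch_key(master_secret: bytes, now=None) -> bytes:
    epoch = int((time.time() if now is None else now) // TTL_SECONDS)
    return hashlib.sha256(master_secret + epoch.to_bytes(8, "big")).digest()
```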
_B. Advantages and disadvantages of source routing_
At the core of PR is the ability to use inter-domain source
routing. This presents several additional advantages. Clients
can decide on specific paths given quality-of-service requirements; they can establish disjoint paths with the destination
to improve resilience. They can avoid particular untrusted
domains. However, despite source routing being defined previously for IPv4 and IPv6, its use has been historically
discouraged for security reasons. This opposition has faded in
recent years with the advent of segment routing. We believe
that adding privacy to the list of advantages will be a strong
incentive for providers to allow its use. We see ways for
providers to minimize security attacks as future work.
_C. Scalability of domain map propagation_
The size of the data used by PR for the inter-domain
routing link-state is an important aspect to be considered. The
connectivity maps need to be propagated to every client/end-host together with any future updates. Although at first glance
this might represent a challenge, some relevant facts should
be taken into account when analysing the scalability of this
approach in the long-term. First of all, those maps do not need
to be transmitted to all of the potential thousands of domains
in the system. Furthermore, studies on BGP suggest that the
required update frequency [11][12] is not very high. Finally,
the number of updates due to possible failures will tend to
reduce as networks become more reliable.
_D. Connections within the same domain_
The way packets are routed within domains is not prescribed
by PR. As such, providers will be offered full flexibility for
intra-domain traffic engineering.
_E. Connections traversing a small number of domains_
PR does not allow path privacy if both the source and the
destination within a packet belong to the same domain. Moreover, privacy is compromised when fewer than three domains are
specified within a PR path. As a workaround, for paths of two
domain hops, either the source or destination domain can be
duplicated in the source routing INIT message and the repeated
domain would just ignore the fake hop being introduced. This
will prevent the full domain path from being exposed to either
of the two involved domains.
As an example, let us consider a path that traverses only
domains D1 and D2, for which a user determined that the
PR path should be D1-D1-D2. After decrypting the first hop,
Domain 1 will find that the next domain in the list is itself
(i.e., again D1). Hence, it will also decrypt the second hop in
the list in order to retrieve the actual information about the
next domain, namely D2.
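The fake-hop handling can be sketched as follows (the Domain stub and element names are hypothetical; both duplicated elements are encrypted for D1, so D1 can open each in turn):

```python
# Minimal sketch (illustrative): handling a duplicated "fake" hop, as in the
# D1-D1-D2 workaround. A domain that finds itself listed as its own next hop
# simply consumes the following element as well.
class Domain:
    def __init__(self, name, secrets):
        self.name, self._secrets = name, secrets   # simulated private key
    def decrypt(self, element):
        return self._secrets[element]

def next_hop_skipping_fake(domain, epr, hop):
    nxt = domain.decrypt(epr[hop])
    while nxt == domain.name:          # fake hop pointing back to this domain
        hop += 1
        nxt = domain.decrypt(epr[hop])
    return nxt, hop + 1

d1 = Domain("D1", {"e0": "D1", "e1": "D2"})
print(next_hop_skipping_fake(d1, ["e0", "e1"], 0))   # -> ('D2', 2)
```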
Although Domain 2 can see that the path includes two prior
hops, it will not be able to access the encrypted information
and will not know that the first hop was Domain 1. As
already mentioned, a one-to-one mapping between ISPs/ASs
and domains is not expected. Therefore, as we anticipate
that cloud providers will have their own domains, at least
an additional hop would be added to the PR path enabling
a further level of privacy.
_F. Sticky routes may impact resilience_
The set of domains involved in a PR session is established
during the initial flow set-up and is only known to the
-----
originating node. Therefore, as all the intermediate network
domains are not aware of the final destination, it is not feasible
to reroute a connection when a network outage occurs. Sticky
routes can show low resilience to failures; however, within
each domain, PR allows resilience to be dealt with in the same
way as today. As for inter-domain resilience, end-users
are much more involved in path selection and can set up several routes, with minimal common links, for critical
applications. Since PR maintains and propagates inter-domain
link state to the users, they are able to react quickly to failures
that affect inter-domain paths.
_G. Multicast_
Multicast presents challenges from the point of view of
privacy. If one wants the network to play a role in replicating
packets for network efficiency, it is very hard to keep this
information entirely private. Given that, in practice, multicast
only works intra-domain, there is little we can do to apply the
principles of PR to multicast. In theory, the route definition
in PR can be extended to build an inter-domain tree, keeping
privacy violations limited to the user's domain, but this would
significantly change the way multicast works today, so we
leave this for future work.
_H. Path asymmetry_
One small limitation of our scheme is that it makes
inter-domain routing symmetry compulsory. Packets in
both directions can, however, use different links within each domain
and different links connecting any two domains. This is a
necessary implication of the destination not being aware of who
the source is. We believe this is not a strong limitation.
_I. Anycast_
Anycast as we know it becomes impossible because routing
choices are made by the final users. However, if the location of several replicas is exposed to the user somehow (e.g.,
through DNS), then the clients themselves can make the choice
of whom to connect to.
_J. Practical implementation_
Although the ideas in this paper can be implemented in a
clean-slate network, they can also be retrofitted into IPv6.
By reusing the source and destination addresses one can
use 256 bits to encode the ESDP and EDSP fields. This
will be more than enough to encrypt one final host identifier
and several domains. If, for example, one uses 64 bits for
the encrypted final host identifier (more than enough for any
domain in the future) we can still have 8 sets
of 24 bits to encode each domain. In the unlikely event that
one needs more domains this can be defined in an extension
header. The sessionID can be implemented in the flow label
field. The hop counter will only need a small number of bits
to indicate the number of domains and can be included in this
field. The INIT message does not have any constraints on size
since it is a PDU sent between applications in adjacent domains
using TCP.
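As an illustration of this layout, the following sketch packs an ESDP into 256 bits (the field sizes follow the example above; the packing order is our assumption):

```python
# Minimal sketch (illustrative packing order): encoding an ESDP into the
# 256 bits of the reused IPv6 source+destination address fields — a 64-bit
# encrypted host identifier followed by up to eight 24-bit domain elements.

def pack_esdp(host_id_64: int, domain_elems_24: list) -> bytes:
    assert host_id_64 < (1 << 64) and len(domain_elems_24) <= 8
    value = host_id_64
    for elem in domain_elems_24:
        assert elem < (1 << 24)
        value = (value << 24) | elem
    # pad unused domain slots so the result is always 256 bits (32 bytes)
    value <<= 24 * (8 - len(domain_elems_24))
    return value.to_bytes(32, "big")

packed = pack_esdp(0xDEADBEEFCAFEBABE, [0x000102, 0x030405, 0x060708])
print(packed.hex())
```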
Performance-wise, PR should add little overhead to packet
forwarding. Each INIT message needs to be processed by only
one router in each domain, potentially with the use of SDN
packet escalation. Data forwarding adds a single symmetric
decryption of one component of the EDSP/ESDP, which
should be negligible.
VII. CONCLUSIONS
This paper described a novel method to establish private
connections between two end points in the Internet. Using
this scheme, neither the final destination nor any domain in
the middle is able to obtain the full source/destination
pair and thereby reveal the identity of the communicating entities. The
scheme relies on inter-domain source routing, allowing sources
a general choice of the connections' path, which itself
has many other advantages. It relies on a soft connection
establishment message that needs to be processed by a single
router in each domain. Crucially, per-flow state is not needed
for the connection. We discuss the practical implications of
our scheme, concluding that there are no major roadblocks to
its implementation.
VIII. ACKNOWLEDGEMENTS
The authors would like to acknowledge the support of
Huawei Technologies Co., Ltd.
REFERENCES
[1] R. L. Rivest, A. Shamir, and L. Adleman. A Method for Obtaining
Digital Signatures and Public-Key Cryptosystems. _Commun. ACM,_
21(2):120–126, February 1978.
[2] L. Ginsberg, Ed., S. Previdi, Q. Wu, J. Tantsura, and C. Filsfils. BGP
– Link State (BGP-LS) Advertisement of IGP Traffic Engineering
Performance Metric Extensions, March 2019.
[3] Sophie Y Qiu, Patrick D McDaniel, and Fabian Monrose. Toward
Valley-free Inter-domain Routing. In IEEE International Conference
_on Communications, 2007._
[4] https://www.thousandeyes.com.
[5] J. Li, T. K. Phan, W. K. Chai, D. Tuncer, G. Pavlou, D. Griffin,
and M. Rio. DR-Cache: Distributed Resilient Caching with Latency
Guarantees. In IEEE INFOCOM, 2018.
[6] RFC 791 - Internet Protocol DARPA Internet Program Protocol Specification, 1981.
[7] I. Castineyra, N. Chiappa, and M. Steenstrup. RFC 1992 - The Nimrod
Routing Architecture, 1996.
[8] P. B. Godfrey, I. A. Ganichev, S. J. Shenker, and I. Stoica. Pathlet
Routing. In ACM SIGCOMM, 2009.
[9] X. Yang, D. Clark, and A. W. Berger. NIRA: a New Inter-domain
Routing Architecture. IEEE/ACM Trans. Networking, 2007.
[10] W. Xu and J. Rexford. MIRO: Multi-path Interdomain Routing. In ACM
_SIGCOMM, 2006._
[11] X. Yang and D. Wetherall. Source Selectable Path Diversity via Routing
Deflections. In ACM SIGCOMM, 2006.
[12] C. Filsfils, S. Previdi, B. Decraene, S. Litkowski, and R. Shakir. RFC
8402 - Segment Routing Architecture, July 2018.
[13] M. Kheirkhah, I. Wakeman, and G. Parisis. MMPTCP: A Multipath
Transport Protocol for Data Centers. In IEEE INFOCOM, 2016.
[14] David Hoelzer. The dangers of source routing. Technical report, Enclave
Forensics.
[15] https://www.torproject.org.
[16] R. Braden, L. Zhang, S. Berson, S. Herzog, and S. Jamin. Resource
ReSerVation Protocol (RSVP), 1997.
[17] Martin De Prycker. Asynchronous Transfer Mode: Solutions for Broadband ISDN. Prentice Hall, 1993.
[18] M. Kheirkhah, T. K. Phan, W. XinPeng, D. Griffin, and M. Rio.
UCIP: User Controlled Internet Protocol. In IEEE INFOCOM 2020 – IEEE Conference on Computer Communications Workshops (INFOCOM
WKSHPS), pages 279–284, 2020.
-----
| 8,926
|
{
"disclaimer": "Notice: The following paper fields have been elided by the publisher: {'references'}. Paper or abstract available at https://api.unpaywall.org/v2/10.1109/HPSR52026.2021.9481808?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1109/HPSR52026.2021.9481808, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": null,
"status": "GREEN",
"url": "https://discovery.ucl.ac.uk/10148318/1/Private_Routing__camera_ready_version_.pdf"
}
| 2,021
|
[
"JournalArticle",
"Conference"
] | true
| 2021-06-07T00:00:00
|
[] | 8,926
|
en
|
[
{
"category": "Computer Science",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/00825d6e42c35acca105f752afd57e1f593043a1
|
[] | 0.827777
|
Improve Quality Of Public Opinion In Website Using Blockchain Technology
|
00825d6e42c35acca105f752afd57e1f593043a1
|
Jurnal Sains dan Teknologi Industri
|
[
{
"authorId": "2273155831",
"name": "Galih Mahardika Munandar"
},
{
"authorId": "2273152235",
"name": "Imam Samsul Ma’arif"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"J Sains dan Teknol Ind"
],
"alternate_urls": null,
"id": "6a18a46f-6e4d-4eb3-bc8d-9b0e286dbdbb",
"issn": "2407-0939",
"name": "Jurnal Sains dan Teknologi Industri",
"type": "journal",
"url": "http://ejournal.uin-suska.ac.id/index.php/sitekin"
}
|
The unemployment rate in Indonesia is quite high, where the average value in Indonesia is 18%, the largest among Cambodia, Nigeria, and lower-middle-class countries, which show an average of 12%. The high unemployment rate is caused by the level of motivation of students to continue working, studying or participating in competency training with a lack of interest. To increase students' interest, it is necessary to have a large number of critical communications to arouse students' enthusiasm and motivation. The method used is qualitative and quantitative where to design the system design and system validation, the technique used is the System Development Life Cycle Waterfall. The results obtained by the Heuristic Evaluation stated that not many things needed to be improved for the system that was created and the SUS (System Usability Score) stated that it was good with a minimum score of 68 given by 5 experts. The Blockchain system can already be run or applied to the wider community. Keywords: Blockchain, Heuristic Evaluation, System Usability Scale, Siswa, System Development Cycle
|
ISSN 2407-0939 print/ISSN 2721-2041 online
# Improve Quality Of Public Opinion In Website Using Blockchain
Technology
## Galih Mahardika Munandar[1], Imam Samsul Ma’arif[ 2]
1.2Department of Industrial Engineering, Faculty of Science and Humaniora, Universitas Muhammadiyah
Gombong,
Jl. Yos Sudarso 461, Gombong, Kebumen, Jawa Tengah
[Email: [email protected], [email protected]](mailto:[email protected])
## ABSTRACT
The unemployment rate in Indonesia is quite high: the national average is 18%, higher than Cambodia, Nigeria, and other
lower-middle-income countries, which average 12%. The high unemployment rate is driven by students' lack of motivation
and interest in continuing to work, study, or participate in competency training. To increase students' interest, a large
amount of constructive communication is needed to arouse their enthusiasm and motivation. The methods used are
qualitative and quantitative; for the system design and validation, the System Development Life Cycle (Waterfall)
technique is used. The Heuristic Evaluation found that little needed to be improved in the system, and the SUS (System
Usability Scale) results were good, with a minimum score of 68 given by 5 experts. The Blockchain system can already be
run and applied in the wider community.
**_Keywords: Blockchain, Heuristic Evaluation, System Usability Scale, Students, System Development Life Cycle_**
## Introduction
Unemployment and employment remain a major concern in every country, especially in developing
countries such as Indonesia [1]. Both problems create a dualism of conflicting issues when the government fails
to minimize their impact. Indonesia's average unemployment rate is 18%, higher than India, Cambodia, Nigeria,
and low-middle-income countries with a rate of 12.2% [2]. The lack of motivation among young people aged 15
to 24, who are vulnerable, is one of the reasons they are reluctant to find a job, go to college, or take training to support
their careers. In Kebumen, there were 48,861 graduates in 2022 [3]. According to BPS Kebumen data from 2021,
the number of unemployed people was 37,408, indicating that the potential unemployment rate in Kebumen
Regency was 76.6%. Therefore, the Kebumen government is urged to provide guidance containing motivation
and basic knowledge to increase job searching, pursuing education, and seeking hard skills training.
The lack of job opportunities is one of the reasons why graduates are reluctant to find work, exacerbated
by the Covid-19 pandemic, which has forced companies to lay off employees to reduce costs. Many individuals
lost their jobs during the pandemic, causing a shortage of employment opportunities and difficulty in finding
business capital. Ahmad Alamsyah Saragih, a member of the Indonesian Ombudsman, suggests that the
government needs to go through an evaluation process and use digital approaches such as Blockchain technology,
which has been used since the beginning of the Covid-19 social assistance distribution program [4]. Blockchain
technology shows the potential for revolutionizing social practices, and its development has rapidly expanded
beyond the economic and banking sectors.
Blockchain technology was initially introduced by Satoshi Nakamoto in the E-Cash or Electronic Cash
Bitcoin system [5]. The Blockchain system began as a security measure for E-Cash users and has since been
applied in other areas, such as manufacturing, industry, social services, and health [6]. [7] combined Blockchain
technology with risk tracking in public opinion based on the NPO (Network Public Opinion) framework. This
technology is highly advanced and can improve public credibility and trust. Indonesians are highly active on the
internet, with [8] reporting that by early 2023, 212 million Indonesians, or 77% of the population, were using
the internet. Public opinion on the internet can vary, and decision-making can change when the public receives
information on the internet without verifying its risks or truthfulness [7], [9]–[15].
With the continuous development of science and technology and the progress of society, the spread of
network public opinion has serious consequences for society [7]. Ethics are used as a guideline for behavior and
have been expanded into etiquette, which is a guideline and determinant for individuals or groups to act following
the civilization of society or the nation [16]. Ethics (etiquette) is increasingly necessary in public relations tasks
to build positive corporate images, especially by forming public opinion.
-----
## Research Method
In this public opinion analysis, the FMEA method using the RPN (Risk Priority Number) is used to identify
potential hazards, which are then evaluated to determine the risk category. There are three risk categories:
low, medium, and high [17]. In the FMEA method, an opinion's severity level and appearance are determined
for opinion filtering. Opinions that receive a low score pass and are appropriate for students; opinions
with medium risk are considered before student viewing; and opinions with high risk are locked and
cannot be accessed by students.
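A minimal sketch of this filtering logic follows (the RPN thresholds are illustrative assumptions; the text above fixes only the two factors and the three risk bands):

```python
# Minimal sketch (assumed thresholds): classify an opinion's risk from an
# FMEA-style RPN and decide whether students may see it.
def rpn(severity: int, appearance: int) -> int:
    return severity * appearance           # Risk Priority Number

def filter_opinion(severity: int, appearance: int) -> str:
    score = rpn(severity, appearance)
    if score < 20:
        return "pass"        # low risk: appropriate for students
    if score < 50:
        return "review"      # medium risk: considered before viewing
    return "locked"          # high risk: inaccessible to students

print(filter_opinion(severity=2, appearance=3))   # -> "pass"
```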
[Figure 1. Conceptual diagram of the Blockchain system: public opinions enter the Blockchain Gate and pass through FMEA negative-word filtering before reaching students.]
The figure above shows that all collected public opinions are placed in the Blockchain Gate, where the security of
all data is guaranteed and the data cannot be accessed arbitrarily. The opinions are then
filtered and processed through the FMEA method. Failure Mode and Effect Analysis (FMEA) is a systematic tool that
identifies the consequences of system or process failures and reduces or eliminates the chances of failure
[18]–[23]. The function of FMEA in this study is to lock all opinions that negatively affect students. We treat
sentiment classification of words into Positive, Negative, and Neutral as a three-way classification problem instead
of a two-way Positive/Negative classification problem. By adding the third class, Neutral, we can prevent
classifiers from assigning positive or negative sentiment to words containing weak opinions [24]–[28]. After going
through FMEA, the data proceeds to the Blockchain system, where the first step is to secure all
data so that negative opinions cannot leak out. Only data considered neutral or positive is decentralized.

The neutral and positive data is decentralized to facilitate and accelerate the search for data
according to the needs of the students. After the data is decentralized, all data containing constructive opinions
can be searched by students. When opinion data is distributed, it goes through FMEA checking again so that
no opinions containing negative words reach the students who read them. After all opinions pass the FMEA
stage, they are placed in the Blockchain, where students can read them.
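A minimal sketch of the three-way word classification follows (the word lists are illustrative stand-ins, not the study's lexicon):

```python
# Minimal sketch (illustrative word lists): three-way word sentiment
# classification with a Neutral class, so weakly opinionated words are not
# forced into Positive or Negative.
POSITIVE = {"semangat", "bagus", "hebat"}      # "spirit", "good", "great"
NEGATIVE = {"malas", "buruk", "gagal"}         # "lazy", "bad", "failed"

def classify_word(word: str) -> str:
    w = word.lower()
    if w in POSITIVE:
        return "Positive"
    if w in NEGATIVE:
        return "Negative"
    return "Neutral"                           # default for weak opinions

print([classify_word(w) for w in ["bagus", "malas", "sekolah"]])
```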
The Waterfall SDLC (Software Development Life Cycle) method and Blockchain designs are used in the
system. The system design will be shown in the following figure.
-----
[Figure 2. Steps of the Waterfall model: data collection and analysis; planning and design; software implementation; integration and trial verification; running and maintenance.]
The system design is tailored to the situation in Kebumen. Each region has its own characteristics and
requires authentic data so that the results obtained correspond to the problems that arise. After the
system design, the next step is implementation, with testing and usability testing to identify failures and
errors that occur in the system. Once everything has been done, the next steps are system implementation, heuristic
evaluation, and System Usability Scale (SUS) testing.
[Figure 3. Flowchart of the research.]
-----
The research begins by collecting and processing data until the required amount of data is obtained. Once
enough data has been collected, the next step is to design and develop the Blockchain system. The design and
development of the system must address the existing problems before proceeding with program development. If
the program design is deemed to solve the problem, the next step is to proceed with developing the Blockchain
program. Once the program is created correctly, usability testing will be conducted to ensure the data is valid and
reliable before moving on to collaborating with the Kebumen government. After collaborating with the Kebumen
government, the research is completed and can be implemented by anyone.
## Result And Discussion
This study used heuristic evaluation and the System Usability Scale, involving 5 website experts. The
evaluation conducted by the 5 experts found several issues in accessing the website prototype that uses the blockchain
system. There were also satisfactory results that required no improvement. The heuristic evaluation
is displayed in Table 1.
Table 1. Heuristic Evaluation

| No | Heuristic | Information | Severity Rating | Fixed Rating |
|---|---|---|---|---|
| 1 | Visibility of System Status | Additional information is needed for the design parameter. | 1 | 0 |
| 2 | Match Between System and Real World | No information provided for the parameter. | 1 | 0 |
| 4 | Consistency and Standard | Non-standard icons used. | 2 | 1 |
| 5 | Error Prevention | None found. | 0 | 0 |
| 6 | Recognition Rather than Recall | Search engine has a suggestion history. | 2 | 1 |
| 7 | Flexibility and Efficiency | No notification provided when a search term is misspelled. | 2 | 1 |
| 8 | Aesthetic and Minimalist Design | Easy to go back to the previous page. | 2 | 2 |
| 10 | Help and Documentation | More attractive color selection. | 1 | 0 |
**b.** **_System Usability Scale_**
The System Usability Scale scores are displayed in Table 2 as follows:

Table 2. Scoring of the System Usability Scale

| Respondent | Q1 | Q2 | Q3 | Q4 | Q5 | Q6 | Q7 | Q8 | Q9 | Q10 | Total |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 4 | 0 | 4 | 3 | 3 | 3 | 4 | 1 | 4 | 3 | 72.5 |
| 2 | 3 | 1 | 3 | 3 | 3 | 3 | 4 | 1 | 3 | 3 | 68 |
| 3 | 3 | 0 | 4 | 3 | 3 | 3 | 4 | 0 | 4 | 3 | 68 |
| 4 | 3 | 0 | 4 | 4 | 3 | 3 | 4 | 0 | 3 | 4 | 70 |
| 5 | 3 | 0 | 4 | 3 | 4 | 4 | 3 | 1 | 3 | 4 | 72.5 |
Table 2 shows the System Usability Scale scores from the 5 experts. The first expert had a score of 72.5,
the second expert had a score of 67.5, the third expert had a score of 67.5, the fourth expert had a score of 70, and
the fifth expert had a score of 72.5.
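For reference, standard SUS scoring reproduces these totals if each table entry is read as a question's adjusted 0–4 contribution (a minimal sketch):

```python
# Minimal sketch: standard SUS scoring, assuming each entry in Table 2 is
# the already-adjusted per-question contribution on a 0-4 scale.
def sus_score(adjusted_items):
    assert len(adjusted_items) == 10 and all(0 <= s <= 4 for s in adjusted_items)
    return sum(adjusted_items) * 2.5   # maps the 0-40 sum onto 0-100

print(sus_score([4, 0, 4, 3, 3, 3, 4, 1, 4, 3]))   # respondent 1 -> 72.5
```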
-----
The experts answered 10 questions provided by the researcher to determine whether the website is usable
or not. These scores have classifications, which are shown in Table 3 below.

Table 3. Score Classification

| Score | Rating | Classification |
|---|---|---|
| > 80.3 | A | Excellent |
| 69 – 80.3 | B | Good |
| 68 | C | Okay |
| 51 – 67 | D | Poor |
According to this classification, 3 experts rated the website as good and 2
experts rated it as okay. The results show that the experts agreed about the system and that no significant
system errors appeared. The system is consistent with other references on the benefits of blockchain for
public-facing systems involving many users and technical features: [13],
[14] state that blockchain can be adopted in specific contexts such as major stakeholders, application areas, commercial benefits,
and technical features. The system is also in line with [15], which uses a blockchain-based efficient rescue network, here
applied to minimize the appearance of bad words on the website.
## Conclusion and Suggestion
Based on the results and discussion, the study concludes that using the Blockchain system on the website
minimizes negative words, sentences, and bad public opinions that can decrease the motivation and spirit of
students in Kebumen. The implementation has been good and is running well. However, there is still room for
improvement based on the heuristic evaluation, although this is not urgent because the previous improvements
have already been assessed through heuristic evaluation.

The heuristic evaluation conducted by 5 experts to test the website's usability using the System Usability Scale
found that the product's usability is in a good and sufficient category. The researchers suggest that future research
should add better interface tests beyond the functional features of the website, and add a bad-word state
based on the Blockchain system, so there is no need to weigh good and bad sentences, as the current system
must weigh good and bad sentences based on user feedback ratings.
## References
[1] A. Soleh, “Strategi Pengembangan Potensi Desa,” J. Sungkai, vol. 5, no. 1, pp. 32–52, 2017.
[2] R. A. Sulistiobudi and A. L. Kadiyono, “Employability of students in vocational secondary school: Role
of psychological capital and student-parent career congruences,” Heliyon, vol. 9, no. 2, Feb. 2023, doi:
10.1016/j.heliyon.2023.e13214.
[3] kemendikbudristek, “Data Peserta Didik Kab. Cirebon,” 2022.
[4] T. Fazreen and M. D. E. Munajat, “Solusi Pemanfaatan Teknologi Blockchain Untuk
MengatasiPermasalahan Penyaluran Dana Bantuan Sosial Covid-19,” JANE (Jurnal Adm. Negara), vol.
13, no. 2, pp. 264–268, 2022.
[5] I. Keshta _et al., “Blockchain aware proxy re-encryption algorithm-based data sharing scheme,”_ _Phys._
_Commun., vol. 58, p. 102048, 2023, doi: 10.1016/j.phycom.2023.102048._
[6] K. O. B. O. Agyekum, Q. Xia, E. B. Sifah, C. N. A. Cobblah, H. Xia, and J. Gao, “A Proxy Re-Encryption
Approach to Secure Data Sharing in the Internet of Things Based on Blockchain,” IEEE Syst. J., vol. 16,
no. 1, pp. 1685–1696, 2022, doi: 10.1109/JSYST.2021.3076759.
[7] Z. Wang, S. Zhang, Y. Zhao, C. Chen, and X. Dong, “Risk prediction and credibility detection of network
public opinion using blockchain technology,” Technol. Forecast. Soc. Change, vol. 187, no. July 2022, p.
122177, 2023, doi: 10.1016/j.techfore.2022.122177.
[8] M. A. Rizaty, “Indonesia Miliki 97,38 Juta Pengguna Instagram pada Oktober 2022,” _dataindonesia.id,_
2022.
[9] H. Sandila, M. Rizki, M. Hartati, M. Yola, F. L. Nohirza, and N. Nazaruddin, “Proposed Marketing
Strategy Design During the Covid-19 Pandemic on Processed Noodle Products Using the SOAR and AHP
Methods,” 2022.
[10] N. Saputri, F. S. Lubis, M. Rizki, N. Nazaruddin, S. Silvia, and F. L. Nohirza, “Iraise Satisfaction Analysis
Use The End User Computing Satisfaction (EUCS) Method In Department Of Sains And Teknologi UIN
Suska Riau,” 2022.
-----
[11] A. Nabila et al., “Computerized Relative Allocation of Facilities Techniques (CRAFT) Algorithm Method
for Redesign Production Layout (Case Study: PCL Company),” 2022.
[12] F. Lestari, “Vehicle Routing Problem Using Sweep Algorithm for Determining Distribution Routes on
Blood Transfusion Unit,” 2021.
[13] M. Rizky _et al., “Improvement Of Occupational Health And Safety (OHS) System Using Systematic_
Cause Analysis Technique (SCAT) Method In CV. Wira Vulcanized,” 2022.
[14] Afrido, M. Rizki, I. Kusumanto, N. Nazaruddin, M. Hartati, and F. L. Nohirza, “Application of Data
Mining Using the K-Means Clustering Method in Analysis of Consumer Shopping Patterns in Increasing
Sales (Case Study: Abie JM Store, Jaya Mukti Morning Market, Dumai City),” 2022.
[15] M. Yanti, F. S. Lubis, N. Nazaruddin, M. Rizki, S. Silvia, and S. Sarbaini, “Production Line Improvement
Analysis With Lean Manufacturing Approach To Reduce Waste At CV. TMJ uses Value Stream Mapping
(VSM) and Root Cause Analysis (RCA) methods,” 2022.
[16] S. Natawilaga, “Peran Etika Dalam Meningkatkan Efektivitas Pelaksanaan Public Relations,” WACANA,
_J. Ilm. Ilmu Komun., vol. 17, no. 1, p. 64, 2018, doi: 10.32509/wacana.v17i1.492._
[17] J. A. Rahadiyan and P. Adi, “Analisa Risiko Kecelakaan Kerja Di Pt. Xyz,” J. Titra, vol. 6, no. 1, pp. 29–
36, 2018.
[18] A. S. M. Absa and S. Suseno, “Analisis Pengendalian Kualitas Produk Eq Spacing Dengan Metode
Statistic Quality Control (SQC) Dan Failure Mode And Effects Analysis (FMEA) Pada PT. Sinar
Semesta,” J. Teknol. dan Manaj. Ind. Terap., vol. 1, no. III, pp. 183–201, 2022.
[19] A. Wicaksono and F. Yuamita, “Pengendalian Kualitas Produksi Sarden Mengunakan Metode Failure
Mode And Effect Analysis (FMEA) Dan Fault Tree Analysis (FTA) Untuk Meminimalkan Cacat Kaleng
Di PT XYZ,” J. Teknol. dan Manaj. Ind. Terap., vol. 1, no. III, pp. 145–154, 2022.
[20] A. Anastasya and F. Yuamita, “Pengendalian Kualitas Pada Produksi Air Minum Dalam Kemasan Botol
330 ml Menggunakan Metode Failure Mode Effect Analysis (FMEA) di PDAM Tirta Sembada,” _J._
_Teknol. dan Manaj. Ind. Terap., vol. 1, no. I, pp. 15–21, 2022, doi: https://doi.org/10.55826/tmit.v1iI.4._
[21] A. Dewangga and S. Suseno, “Analisa Pengendalian Kualitas Produksi Plywood Menggunakan Metode
Seven Tools, Failure Mode And Effect Analysis (FMEA), Dan TRIZ,” J. Teknol. dan Manaj. Ind. Terap.,
vol. 1, no. 3, pp. 243–253, 2022.
[22] A. Wicaksono and F. Yuamita, “Pengendalian Kualitas Produksi Sarden Mengunakan Metode Failure
Mode and Effect Analysis (FMEA) Untuk Meminimumkan Cacat Kaleng Di PT. Maya Food Industries,”
_J. Teknol. dan Manaj. Ind. Terap., vol. 1, pp. 1–6, 2022, doi: https://doi.org/10.55826/tmit.v1iI.6._
[23] T. Aprianto, I. Setiawan, and H. H. Purba, “Implementasi metode Failure Mode and Effect Analysis pada
Industri di Asia – Kajian Literature,” _Matrik, vol. 21, no. 2, p. 165, 2021, doi:_
10.30587/matrik.v21i2.2084.
[24] W. Amalia, D. Ramadian, and S. N. Hidayat, “Analisis Kerusakan Mesin Sterilizer Pabrik Kelapa Sawit
Menggunakan Failure Modes and Effect Analysis (FMEA),” J. Tek. Ind. J. Has. Penelit. dan Karya Ilm.
_dalam Bid. Tek. Ind., vol. 8, no. 2, pp. 369–377, 2022._
[25] I. A. B. Nirwana, A. W. Rizqi, and M. Jufryanto, “Implementasi Metode Failure Mode Effect and Analisys
(FMEA) Pada Siklus Air PLTU,” J. Tek. Ind. J. Has. Penelit. dan Karya Ilm. dalam Bid. Tek. Ind., vol.
8, no. 2, pp. 110–118, 2022.
[26] H. A. Yasin and R. P. Sari, “Pengembangan Sistem Inspeksi Digital Berbasis Macro VBA Excel Dengan
Metode Failure Mode And Effects Analysis (FMEA),” J. Tek. Ind. J. Has. Penelit. dan Karya Ilm. dalam
_Bid. Tek. Ind., vol. 7, no. 1, pp. 7–14._
[27] C. S. Bangun, “Application of SPC and FMEA Methods to Reduce the Level of Hollow Product Defects,”
_J. Tek. Ind. J. Has. Penelit. dan Karya Ilm. dalam Bid. Tek. Ind., vol. 8, no. 1, pp. 12–16, 2022._
[28] S. M. Kim and E. Hovy, “Identifying and analyzing judgment opinions,” HLT-NAACL 2006 - Hum. Lang.
_Technol. Conf. North Am. Chapter Assoc. Comput. Linguist. Proc. Main Conf., no. June, pp. 200–207,_
2006, doi: 10.3115/1220835.1220861.
[29] S. Pu and J. S. L. Lam, “The benefits of blockchain for digital certificates: A multiple case study analysis,”
_Technol. Soc., vol. 72, no. November 2022, p. 102176, 2023, doi: 10.1016/j.techsoc.2022.102176._
[30] J. Wang et al., “Building operation and maintenance scheme based on sharding blockchain,” Heliyon, vol.
9, no. 2, p. e13186, 2023, doi: 10.1016/j.heliyon.2023.e13186.
[31] B. Chen, W. Zhang, Y. Shi, D. Lv, and Z. Yang, “Reliable and efficient emergency rescue networks : A
blockchain and fireworks algorithm-based approach,” _Comput. Commun., vol. 206, no. May, pp. 172–_
177, 2023, doi: 10.1016/j.comcom.2023.05.005.
-----
| 5,547
|
{
"disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.24014/sitekin.v21i1.22678?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.24014/sitekin.v21i1.22678, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "CCBYSA",
"status": "GOLD",
"url": "https://ejournal.uin-suska.ac.id/index.php/sitekin/article/download/22678/9160"
}
| 2,023
|
[
"JournalArticle"
] | true
| 2023-06-07T00:00:00
|
[
{
"paperId": "666ad3b344071f7f4354d1909d92221509e72d6b",
"title": "Reliable and efficient emergency rescue networks: A blockchain and fireworks algorithm-based approach"
},
{
"paperId": "d26c0af8362877a6e9b654ac8c925da95e9eebaf",
"title": "Blockchain aware proxy re-encryption algorithm-based data sharing scheme"
},
{
"paperId": "6440c039b8d213a00f8a732ecf82701312313c0e",
"title": "Risk prediction and credibility detection of network public opinion using blockchain technology"
},
{
"paperId": "686a4ddee452cf02da74b3b52b2b9ba8eac39295",
"title": "Employability of students in vocational secondary school: Role of psychological capital and student-parent career congruences"
},
{
"paperId": "16effc56047d7bcc2740a12d190402067fba9c1b",
"title": "Building operation and maintenance scheme based on sharding blockchain"
},
{
"paperId": "e5267ad91cc1bf060b7383676f44f761e71808d1",
"title": "Analisis Kerusakan Mesin Sterilizer Pabrik Kelapa Sawit Menggunakan Failure Modes and Effect Analysis (FMEA)"
},
{
"paperId": "ff960567aa810cdd9ef785a2e2f05cfc1900664c",
"title": "Implementasi Metode Failure Mode Effect and Analisys (FMEA) Pada Siklus Air PLTU"
},
{
"paperId": "4ee8ce01d4f2240e3e50d535a1908c173a82eb8b",
"title": "The benefits of blockchain for digital certificates: A multiple case study analysis"
},
{
"paperId": "9423f0e542f854f19f07d4747f94b71790baff64",
"title": "Analisa Pengendalian Kualitas Produksi Plywood Menggunakan Metode Seven Tools, Failure Mode And Effect Analysis (FMEA), Dan TRIZ"
},
{
"paperId": "27e19b18204fd5f98c7fb774100969f6b665f255",
"title": "Pengendalian Kualitas Produksi Sarden Mengunakan Metode Failure Mode And Effect Analysis (FMEA) Dan Fault Tree Analysis (FTA) Untuk Meminimalkan Cacat Kaleng Di PT XYZ"
},
{
"paperId": "f262226b10ce1b8bdbc393e08168dd59d7bb9e9c",
"title": "Analisis Pengendalian Kualitas Produk Eq Spacing Dengan Metode Statistic Quality Control (SQC) Dan Failure Mode And Effects Analysis (FMEA) Pada PT. Sinar Semesta"
},
{
"paperId": "9211326d67a0fda02a3f7eb8bf0da1e6b7292e71",
"title": "Application of SPC and FMEA Methods to Reduce the Level of Hollow Product Defects"
},
{
"paperId": "f4f1b6d01ac2fcc67cbd1eeef01b5da6010e867d",
"title": "Pengendalian Kualitas Pada Produksi Air Minum Dalam Kemasan Botol 330 ml Menggunakan Metode Failure Mode Effect Analysis (FMEA) di PDAM Tirta Sembada"
},
{
"paperId": "57b82527a6f26e462dc623e229e385626cbea0a1",
"title": "SOLUSI PEMANFAATAN TEKNOLOGI BLOCKCHAIN UNTUK MENGATASI PERMASALAHAN PENYALURAN DANA BANTUAN SOSIAL COVID-19"
},
{
"paperId": "f190fcee19038da7a7b5133330d2aaf65ffd73ed",
"title": "Pengembangan Sistem Inspeksi Digital Berbasis Macro VBA Excel Dengan Metode Failure Mode And Effects Analysis (FMEA) (Studi Kasus di PT. Meidoh Indonesia)"
},
{
"paperId": "29e0eb8233c08e4eac205ed894c14e56993f7775",
"title": "A Proxy Re-Encryption Approach to Secure Data Sharing in the Internet of Things Based on Blockchain"
},
{
"paperId": "4fbb5a29835ad84964e05c7793970a3c7007aad9",
"title": "Implementasi metode Failure Mode and Effect Analysis pada Industri di Asia – Kajian Literature"
},
{
"paperId": "b68b1e3da5b781eb117b6407adea88e045d7033e",
"title": "PERAN ETIKA DALAM MENINGKATKAN EFEKTIVITAS PELAKSANAAN PUBLIC RELATIONS"
},
{
"paperId": "bc779d1a50fe64d8f4949bcea4d99239a3a39e64",
"title": "STRATEGI PENGEMBANGAN POTENSI DESA"
},
{
"paperId": "f5a8dc08379d61ec078995652c2f6019647ec137",
"title": "Identifying and Analyzing Judgment Opinions"
},
{
"paperId": null,
"title": "“Vehicle Routing Problem Using Sweep Algorithm for Determining Distribution Routes on Blood Transfusion Unit,”"
},
{
"paperId": null,
"title": "“Analisa Risiko Kecelakaan Kerja Di Pt. Xyz,”"
},
{
"paperId": "13afa73e82a1ea22cf9f14d0edb3dbac7a073f51",
"title": "Proposed Marketing Strategy Design During the Covid-19 Pandemic on Processed Noodle Products Using SOAR and AHP Methods"
},
{
"paperId": "c488acfecd2b95653ca870023093de03623024ee",
"title": "Iraise Satisfaction Analysis Use The End User Computing Satisfaction (EUCS) Method In Department Of Sains And Teknologi UIN Suska Riau"
},
{
"paperId": "533ebe10b4be6915a25fa7c94a1ba9af2fb1f251",
"title": "Production Line Improvement Analysis With Lean Manufacturing Approach To Reduce Waste At CV. TMJ uses Value Stream Mapping (VSM) and Root Cause Analysis (RCA) methods."
},
{
"paperId": "1b526b20d1ad1d0b4a37a6049c8aa2c42aa69c59",
"title": "Application of Data Mining Using the K-Means Clustering Method in Analysis of Consumer Shopping Patterns in Increasing Sales (Case Study: Abie JM Store, Jaya Mukti Morning Market, Dumai City"
},
{
"paperId": "2b620edc386973d2cf8ffe5e1b715ad80681cae6",
"title": "Improvement Of Occupational Health And Safety (OHS) System Using Systematic Cause Analysis Technique (SCAT) Method In CV. Wira Vulcanized"
},
{
"paperId": null,
"title": "“Computerized Relative Allocation of Facilities Techniques (CRAFT) Algorithm Method for Redesign Production Layout (Case Study: PCL Company),”"
},
{
"paperId": null,
"title": "“Indonesia Miliki 97,38 Juta Pengguna Instagram pada Oktober 2022,”"
},
{
"paperId": null,
"title": "“Data Peserta Didik Kab. Cirebon,”"
}
] | 5,547
|
en
|
[
{
"category": "Sociology",
"source": "external"
},
{
"category": "Geography",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/008291fb9581cf49b45ac2627bf749a3068f989e
|
[
"Sociology"
] | 0.968139
|
Mapping Change: Community Information Empowerment in Kibera (Innovations Case Narrative: Map Kibera)
|
008291fb9581cf49b45ac2627bf749a3068f989e
|
Innovations: Technology, Governance, Globalization
|
[
{
"authorId": "67216572",
"name": "E. Hagen"
}
] |
{
"alternate_issns": null,
"alternate_names": null,
"alternate_urls": null,
"id": null,
"issn": null,
"name": null,
"type": null,
"url": null
}
| null |
-----
GroundTruth Initiative was then established to build on the successful Map Kibera pilot by launching and advising on similar projects throughout
the world, and to initiate more experiments in participatory technology and
media. The GroundTruth[3] mission is to contribute to a culture in which digital storytelling, open data, and geographic information lead to greater influence and representation for marginalized communities.
A BIT OF HISTORY
Mikel Maron is a well-known specialist in digital mapping, particularly through
OpenStreetMap,[4] the “free editable map of the whole world.” A board member of
the OpenStreetMap Foundation, he has led projects contributing data to
OpenStreetMap in Palestine, India, and elsewhere. His experience with computer
programming and the open-source community has involved work on technology
projects within the United Nations and elsewhere, often involving digital community-building. Many in the open-source community believe in the Internet’s
potential to have a democratizing effect; for Mikel, increasing access to technology
for the greater social good became a guiding idea.
The Map Kibera project was the outgrowth of a discussion among mapping
enthusiasts in Africa, who realized Kibera was not included on OpenStreetMap,
Google Maps, or other such online maps. This project was especially interesting to
activists because of one central question: How can grassroots communities in
developing countries participate more fully in the open-source projects meant to
involve people around the globe in an egalitarian way? A small grant from
Jumpstart International set off the first phase of the project. We initially anticipated spending less than two months in Kenya, training local people in mapping and
editing the map online.
I came to this project rather unexpectedly, having met Mikel not long before
and learned of grassroots mapping. I already had a strong interest in the potential
of new media and the communications revolution to change the way development
was practiced by altering the information dynamic. Having worked in communications and evaluation for several international development agencies, I could easily see that the poor had no communication channels, and therefore no influence,
which often resulted in flawed, top-down development. I also had long been interested in supporting indigenous and marginalized people in creating and telling
their own stories, including Tibetan activists, Mexican immigrants in the United
States, and members of tribes in India. I realized that grassroots digital mapping
was another way communities could lay claim to their own narratives and collect
hard data to advocate for themselves. It could also form a wonderful anchor for
localized reporting.
However, we were only partly prepared for the Kenyan context. I had worked
briefly in Uganda but never in Kenya, and Mikel had only conducted one weeklong mapping training in Nairobi.
-----
knowledge imparted by others. We wanted the entire community to have a
resource that would harness their collective wisdom and intimate knowledge of
Kibera, so they could become the drivers of development. These aims became the
primary motivation for most of the activities we developed in Phase 2.
Sustainability and community impact were clearly much greater challenges than
the map production had ever been. The digital divide was also a complex challenge: although many residents of Kibera could access the Internet at cyber cafes,
they saw the web not as a participatory tool for change but as a way to seek information and chat with friends. Web 2.0 concepts of two-way and crowd-sourced
content hadn’t hit most of the Nairobi elites, much less Kiberans. We had a long
way to go.
However, we recognized that digital information need not be kept exclusively
online. Thus Map Kibera planned to make paper printouts of the map to post in
public places and distribute around the neighborhood. This idea grew into the
issue-based community meetings we added in Phase 2, where residents could add
information to paper maps that featured separate themes like health or education.
But perhaps the greatest challenge in Phase 1 was to inspire a sense of commitment to long-term skill building and volunteerism. This is a complex issue in a
place where few young people have any source of income and get by day-to-day
through small jobs and handouts. In wealthy countries, volunteering is the basis
-----
for the open-source technology community, but the developing world needs a different model if it wants to reach beyond the elites. In Kibera, particularly, many
NGOs come through briefly and hire residents to participate in their own information gathering and in hundreds of small projects, workshops, and events that
offer token payments but do not impart useful skills.
Whether the NGOs are conducting focus groups or user testing, needs assessments, household surveys, or impact assessments, many people participate in each
and every NGO opportunity and expect payment for it, without ever developing
any marketable skills or being hired permanently—much less receiving the results
of the assessments or considering what they mean for Kibera.
This is why we were determined that the training we offered would help the
participants go somewhere. We wanted the local youth to begin to network with
Nairobi’s technology community and start to bridge the digital divide: we wanted
them to see career possibilities in ICT, or information and communication technology.
PHASE 2: FEBRUARY-OCTOBER 2010
We started to plan for Phase 2 immediately following Phase 1, returning to Nairobi
in February 2010. In fact, it seemed like Phase 1 had just been laying the groundwork. While the map of Kibera had been created, the project would require more
work to become an information resource that was truly useful to the community.
As the project went on, we began to value more and more the intensive community-based work that would be needed to achieve our goals.
We also began to look at the entire communications environment within
Kibera. We wanted to push further, to develop a model for a comprehensive,
engaged community information project. We tried this out in two ways: by extending the mapping work to gather more in-depth, issue-based information and
engage people in the community through paper maps; and through citizen journalism, or reporting by non-professionals on important local issues and news.
Citizen reporting is an essential component of creating local, accessible information resources and a step toward Kiberans reclaiming knowledge about Kibera.
We defined citizen media by principles like independent editorial control, emphasis on content and creativity rather than professional production quality, and
opening up tools and resources to as many people as possible.
Our citizen journalism effort comprised an extensive program of online media
with two new projects: the Voice of Kibera at voiceofkibera.org, an online
community information and news platform, and the Kibera News Network, kiberanewsnetwork.org, a citizen video team. We also expanded the mapping into a program of GIS and issue-based mapping that included community participation.
The goals of these projects were to allow Kibera residents to speak for themselves
on current events and issues, and to create a digital community around local information.
-----
However, in order to produce media for Voice of Kibera, each media outlet
needed a site where they could produce an RSS feed.[10] So we initiated trainings in
Wordpress software[11] and helped these groups get their work online. Wordpress
allowed them to design their own sites quickly and to publish content without hiring a costly web designer. In the end, though they were supportive, they were not
ready to come together on the Voice site for a variety of reasons. Those who adopt
new technology are often not entire organizations or those who first show an interest—it is often not the leaders of organizations who have time to learn the new
tools, but unemployed youth. These youth ultimately then have to create the “proof
of concept” that convinces elders and others of the real value of a new idea.
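To make the aggregation step concrete: a standard Wordpress site exposes an RSS feed at its /feed path, and pulling several such feeds into one list takes only a few lines. The sketch below uses Python's feedparser library with illustrative feed URLs; it is our reconstruction of the general idea, not the actual Voice of Kibera ingestion code.

```python
# Minimal sketch of RSS aggregation across partner sites; the feed URLs
# are illustrative assumptions, not the project's actual endpoints.
import feedparser

FEEDS = [
    "http://kiberanewsnetwork.org/feed",      # hypothetical Wordpress feeds
    "http://example-kibera-outlet.org/feed",
]

def collect_stories():
    """Fetch every configured feed and flatten the entries into one list."""
    stories = []
    for url in FEEDS:
        parsed = feedparser.parse(url)
        for entry in parsed.entries:
            stories.append({
                "source": parsed.feed.get("title", url),
                "title": entry.get("title", ""),
                "link": entry.get("link", ""),
            })
    return stories
```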
SMS, or short message service, also presented a great opportunity for citizen
reporting. Most of Kibera’s approximately 250,000 residents either own or have
access to mobile phones through friends and family, which made almost every
Kiberan a potential reporter. Thus, in addition to Voice of Kibera itself, we were
able to use an SMS shortcode that our partner SODNET had secured from the
major mobile carriers.[12] SODNET’s SMS gateway filtered incoming SMS into other
applications according to keywords. Messages with the word “Kibera” fed directly
into the back end of Voice of Kibera; they then had to be mapped and approved by
an editor before appearing on the public site.
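The keyword filtering and moderation flow just described can be pictured with a short schematic. The sketch below is our illustration of the general pattern, not SODNET's gateway code; the phone number and helper names are invented.

```python
# Schematic keyword-based SMS routing with a moderation queue; an
# illustration of the pattern, not SODNET's actual gateway implementation.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Report:
    sender: str
    text: str
    approved: bool = False
    location: Optional[Tuple[float, float]] = None

moderation_queue = []  # reports waiting for an editor

def route_sms(sender, text):
    """Messages containing the keyword 'Kibera' feed the news queue."""
    if "kibera" in text.lower():
        moderation_queue.append(Report(sender, text))

def approve(report, location):
    """An editor locates a report on the map and approves it for the site."""
    report.location = location
    report.approved = True

route_sms("+2547XXXXXXX", "Kibera: fire near the railway line")
print(len(moderation_queue))  # -> 1
```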
Once we had the website and the SMS code, we considered helping individual
groups use SMS to report on services and our target issue areas. We thought using
community monitors would be a good way to take the pulse of the neighborhood.
We considered partnering with KCODA’s community monitoring program to create a site where people could comment on the activities of NGOs, so that useless
or “briefcase” organizations would be rooted out and citizens could request better
services. This met our basic goal: to alter the existing power and information
dynamic so Kibera residents could increasingly influence their own local development.
While we hoped Kibera residents would simply want to send SMS reports to
the site, we did not expect them to do so quickly. We needed to advertise and to
show that there would be some result from their 5 shilling expenditure (about 6
cents US). We first talked to groups that came to the participatory mapping meetings, which we describe below. After each map-drawing exercise, we explained to
the participants that they and other Kiberans could continue to report on the
issues we had discussed by sending an SMS, and that this information, along with
the maps and drawings, would all be available online. People seemed intrigued, sensing that something exciting was going on, and they wanted to be part of it. But
we didn’t get any SMS messages. We did, however, collect names of interested people and invite them to a focus group on the Voice of Kibera tools.
We demonstrated Voice of Kibera and the SMS function in detail to the focus
group. Several attendees suggested that it would be crucial to have a trustworthy
editorial board to approve the incoming material, citing manipulation by the
media during the recent post-election violence. So we invited them to form such
a board. A group of six young men volunteered, and over time five of them became
-----
the managers of the project. One, Douglas Namale, was editor of the _Kibera Journal_, and he suggested that they would need to function much like a newspaper
editor, including verifying reports. However, they agreed that reports could be
coded as “unverified” to allow them to post nearly all submissions. They primarily checked that material had not been intentionally falsified along political lines,
and to date this has not been an issue.
We left development of the concept up to the editors as much as possible. They
quickly became advocates for the site, submitting and approving reports. While
our goal was to have the general public aware of the shortcode so they would use
it to submit information, we felt these board members could kick it off most effectively and that they had enough enthusiasm to experiment with the site and
explore its potential. The five members split up the duties of submitting SMS
reports on news in Kibera, approving incoming reports and newsfeeds, and posting them to the site (when SMS come in they are not immediately visible to viewers but must be approved and located on the map). The SMS reporters operated
like roving journalists, posting notices on breaking news as well as events and
opportunities for residents, each in 160 characters. The site began to shift from
being an aggregator of other local media to a media channel in its own right.
One problem kept the site from being completely useful: residents could only
access it on a computer in a cyber cafe. While we did convince one café to make
Voice of Kibera its homepage as a means of advertising, it had only a small impact.
Another challenge was that even a 5 shilling fee for submissions seemed prohibitively high. We needed a mobile web tool, which we found with the release of
Ushahidi 2.0, which included a plug-in architecture. One of the first plug-ins
developed was a mobile phone browser version, which promised to resolve some
of the problems with access and cost. This made it possible for Kibera residents to
both submit and view reports by phone, provided the phone was web-enabled,
which made the cost of accessing a website minimal, even negligible.[13] We estimated that about a quarter of young people in Kibera had this type of phone. The
group then began a broader outreach campaign and a media launch plan that used
traditional media channels (local radio, print, banners, and posters) to advertise
that Voice of Kibera was available as a platform for sharing community information. Voice of Kibera also began developing an SMS alert system for residents.
We found that the early adopters of Voice of Kibera were either interested in
promoting their own work or had an exceptional interest in technology and the
Internet. For instance, one board member ran a football NGO and posted locations of upcoming matches. These organizations generally had no other online
presence and were interested in marketing their activities to Kiberans and to the
greater Kenyan and global community.
Other organizations sometimes showed support, but for anything to move forward they had to have an internal “champion.” People interested in the Voice of
Kibera site tended to use technology more than the average Kibera resident and
seemed excited by the potential for Internet communication. Reporting, however,
-----
focused mostly on a few regions of Kibera. Clearly we had not yet reached our goal
of broadly crowdsourcing[14] news.
After a few months, the editorial board developed the following definition of
Voice of Kibera. We think this demonstrates the fact that the group has embraced
and expanded on the vision we set out with; the longer-term task is to share the
site with others in Kibera and engage as many people as possible.
1. It is a nonprofit and independent community information-sharing platform
by, for, and about Kibera.
2. It uses (a) articles, photos, videos, and SMS; (b) a unique information mapping tool; and (c) moderation of content to ensure accurate reporting.
3. It is a unifying and catalytic agent to contribute to positive change in Kibera
and Kenya.
4. It is a citizen journalist website sharing the real story of what Kibera is.
5. It aims to fill current information gaps in terms of emergency and accurate
information, adding location data when relevant.
**Citizen Video Journalism: Kibera News Network**
Based on the lessons we learned in Phase 1, we started a new and more extensive
video news project. I began training a video news team called Kibera News
Network (KNN), initially hosted by KCODA. KNN is also linked with Voice of
Kibera via an RSS newsfeed and is a major source of geo-located content for that
project.
The first group of videographers, Kibera Worldwide, had little institutional or
programmatic support; it primarily reported on the many activities of its host
organization, CFK. Our concept was to train various youth in video news production to create an asset for the entire community, and to establish a platform that
could be non-proprietary. Efforts to establish Kibera Worldwide as a cross-organizational, collaborative group faced challenges too great to overcome.
So, we decided to start KNN as a collaborative community video news channel—or, as it is called in Kibera, “TV online.” We engaged KCODA because of its
commitment to community media as the publisher of Kibera Journal. KCODA also
had intentions of becoming a “digital village”—a Kenyan ICT board designation
for community Internet resource centers. They had several donated computers and
were planning to open a cyber cafe, and two promising Kibera youth with filmmaking experience were already interested in starting a TV news project, so this
was a natural place to start. I started working with these two youth in April 2010
to train about 18 young people to use the Flip cameras and Flipshare software,
which would help them cover features and news events of their own choosing. In
fact, they chose the name KNN. They soon started to publish videos on YouTube.[15]
Initially planned as a small, once-a-week class at KCODA, the activity quickly
grew into a project. KCODA and the two leaders recruited the trainees. We started
with 6 young women and 12 young men aged approximately 19 to 25; 5 of them
came from the group of mappers. On the first day they came up with story ideas.[16]
-----
trol, beyond making certain editing suggestions to improve the stories and sometimes correcting spelling. I stressed the special value of their point of view in the
community, their unique perspective as Kibera residents, and the overall importance of local media. But I hardly needed to do so: they already had a strong drive
to present Kibera’s positive side while also covering negative incidents more accurately. They had great pride in their community, which was essential for providing
a social service like local news coverage.
The drive to provide video coverage persists in spite of the challenges to filming in Kibera. The community at large is resistant to being on camera, having been
filmed and photographed repeatedly over the years by visiting foreigners. They see
no benefit in having their image taken and often believe (sometimes rightly) that
the videographer is selling their likeness for profit, whether in a movie or by pretending to do charity work and then pocketing the donations, and
they want a share of it. They either hide from the camera, demand money, or
threaten the photographer.[17] I’ve accompanied documentary crews followed by
jeering people asking for money.
The KNN team has managed to overcome most such resistance, largely
because they are students and volunteers from Kibera. However, serious problems
have arisen on a few occasions. One KNN member was arrested for filming near a
police station, and we had to pay the police to release him. During violent events,
such as riots, filmers have had to flee angry residents, narrowly escaping. One
member’s phone was picked from his pocket while he was filming a challenging
scene.
In spite of these challenges, the group succeeded in covering current events in
Kibera from a perspective that no media house outside the community could ever
achieve. The videos included everything from a story about a Muslim girl who
found a prophetic mark on a small frog to Kiberans’ views on the new constitution. Between April and September 2010, KNN covered 101 stories that included
the following headlines:
- Talent Show in Kibera
- Rose’s Orphanage in Kibera
- Biogas Center in Kibera Investigated
- Power Disconnections Leads to Riot in Kibera
- Fire in Kibera Claims 18 Houses
- Community Clean-up along Railroad in Kibera
- Ugandan Circumcision Ritual in Kibera
- Former Residents of Soweto East Give Mixed Reviews of Slum Upgrading
- Pascal—Bone Jewelry Maker
- Frog Decorated with Name of God Found by Kibera Girl
The potential subjects in Kibera are endless, and the team began to recognize
and seek out interesting news. They tried advertising themselves to get news tips
by creating small flyers that could be handed out like business cards. We also
thought KNN could use news tips directly from Voice of Kibera, so when someone
reported in, KNN received that information immediately via SMS or another alert
-----
various clinics charged, the address of the best midwife, and the proliferation of
low-quality chemists who prescribed inappropriate remedies. We also noted that
chemists who had unlicensed examination rooms sometimes played critical roles,
that people with acute emergencies often had to be carried several kilometers along
mud paths to the government hospital, and that Kibera had no mental health services, dentists, or opticians.
We found a strong interest in using technology to support each issue, but the
challenge was to help the participants use these tools for their own advocacy and
planning. Since our goal was to be non-extractive—to avoid using the community to collect data without enhancing its own ability to use the information for
impact—we had to support small, technology-challenged groups and share information in ways that would move policy toward their objectives most effectively.
One approach we tried for this was mobile reporting, as discussed above. Another
was to engage those who stood out as innovation leaders to use our websites and
maps themselves. Certain people seemed to understand how technology and storytelling could support their objectives, and we tried to continue working with
them and to help them make a clear link to their own goals. This was a slow
process, however; while the majority of people recognized the power of the Internet,
very few in Kibera understood even the basics of how it works. Therefore, we
became interested in how to engage average residents while maintaining a core
group of Internet-savvy activists to translate information into action.
During the map-drawing exercises, the participants often were initially under
the impression that we were either researchers or experts on the issue we were presenting. What we were doing was actually quite unusual. We were talking about
specific issues, but it was not a focus group. We had no expertise in education, but
we believed that having strong information about education in a shared information commons could be useful to citizens in marginalized communities whose
children needed to go to school. In practice, we needed to establish clearer follow-up routes so that people could meet specific goals by using the maps. It was quite
easy to show the value of citizen-generated information on schools to larger organizations like UNICEF, since it is so hard to collect accurate data on things like the
number and quality of informal schools in Kibera. But to translate this into a community resource and tool was more difficult. We began to develop a printed atlas
to hand out with specific information on each issue.
We found that it was important to meet individually with networks of groups
involved in thematic areas to help support them—a lengthy process. People often
asked us what concrete results we could see after less than a year from the start of
the project. It’s simply not practical to expect policy shifts or large-scale results in
such a time frame. However, we also learned that access to information alone does
not lead to action, nor does it support ongoing advocacy and development. Groups
must be empowered to make use of information, which requires a tailored
approach. For instance, we helped develop a website to locate government-funded
projects and share information about their quality and budget.[22] The Map Kibera
-----
Trust has now been established in part to work toward greater community impact
in the longer term.
Since it is not possible to support each and every group in Kibera, it is also
important to create general awareness about the open information that is available
and about our toolset, and to continue to train interested individuals in using these
tools to support others. We hope to slowly counter the misuse and temporary
nature of tools that come with limiting factors, such as proprietary licensing and
expensive software and devices, as well as the practice of collecting data that is simply impossible to share, online or otherwise.
CONCLUSION AND UPDATE
The techniques I’ve described in this article have the potential to represent the
multiple realities of a community, and to aggregate their subjective opinions into
a collective version of truth. The facts on the ground about location, which are visible and objectively verifiable, can be layered with the lived experiences and news
reports that residents want to include. This process comes closer to local truth than
a simplistic survey methodology used to “gather” information, but the information
collected can also be combined with external data to make the case for reforms. It
provides much-needed communication tools for the community itself on a hyperlocal level, which allows Kiberans to discuss and report on what matters most to
them.
Since winding up the activities in Phase 2, we have undertaken an ambitious
scaling-up of the project from one slum to two. We chose to work next in Mathare
Valley, the second largest slum in Nairobi, because several groups requested help
there in creating projects like Map Kibera, and because we were able to partner
with Plan Kenya, which already had a participatory development project under
way there. Concurrent training in mapping and video, along with a blog and a
Voice of Mathare website, enables us to test the replicability of the concept and
allows participants from Kibera to train and support others to accomplish what
they have.[23]
We are not huge fans of the bigger-is-better concept in development; we like to
think more like artisanal craftspeople, choosing high quality of attention and
depth over breadth. So we did not attempt to go very large right away, though others hoped we would. Larger organizations and institutions were eager to see map
data for other informal or unmapped areas, particularly the type of data we were
collecting on public infrastructure and informal services. But we felt the need to
plan carefully for the next project in order to maintain community involvement—
or better yet, increase it. There is a very subtle point here about building community ownership over something so new: if we aren’t serious about listening deeply
to each community, the entire purpose of the project is lost. If there is one thing I
could stress to those who wish to do a project like this, it is that community data
collection risks being an extractive process, just like traditional surveying. A great
deal of work must be done to create something that does not just layer on top of a
-----
community but actually serves them. Unfortunately, this brings us back to an old
lesson we in the development field still seem reluctant to learn: technology is easy;
real social change is still the most difficult—and most important—part.
Our primary challenge thus becomes how to truly empower residents of
Kibera in very complicated processes that have traditionally been exclusionary.
Luckily, we have a great weapon. By virtue of being attractive, new, and global in
reach, digital technology can help Kibera youth (and others) gain a level of respect
that they have never been granted before. The fact that larger institutions want the
information they have collected means that Kibera residents could have new leverage among stakeholders, which could ultimately lead to having a greater say in
decisions that affect them most. Achieving this goal is what the Trust is undertaking as part of its mission.
In terms of methodology, we’d like to encourage dissemination of ideas, rather
than overly planned, top-down development; we believe that if an idea is good
enough, it will spread naturally. GroundTruth’s current role is to continue to train
and initiate projects, and to help support others in designing their own projects.
This is primarily because the process is tricky, whereas the products you can see
online seem deceptively easy. Technology cannot be adopted wholesale but must
be tailored to each context; thus it is never clear at the start what will end up being
useful in each local context. This is an area where experimentation and willingness
to fail, adapt, and iterate (values from the technology field) are needed to avoid the
pitfalls of overly ambitious and large-scale replication of something that was successful halfway around the world. Starting small may confound donor structures,
but it allows communities to learn and adapt and try things out.
We have also had the opportunity to reflect, to learn from Kibera, and to
restructure the program. One major development is to include more participatory development theory in our program plans. In late 2010, we collaborated with
the University of Sussex Institute for Development Studies on research that
allowed all members of Map Kibera to discuss and evaluate the program to date.[24]
We had difficult group discussions with participants on subjects like their expectations for livelihoods and community engagement. Following this process, we
incorporated many techniques from participatory development by working closely with Plan Kenya on the Mathare project, along with their local partner,
Community Cleaning Services. This included holding a large community meeting
to determine needs, and beginning the new project with key Mathare people taking on major organizational and leadership roles. The context in Mathare is very
different from that in Kibera, but we continue to evolve a methodology that is at
the intersection of participatory technology and participatory development.
The long process of incorporating in Kenya is also now complete, and the Map
Kibera Trust is official. The trust is proceeding through organizational development processes with support from Hivos,[25] and each of the three programs—mapping, Voice of Kibera, and KNN—will be represented in a leadership body. These
programs have a great deal of autonomy, and therefore responsibility, and have
been working to create their own strategic plans, including budgets and fundrais
-----
12. Shortcodes are four-digit phone numbers, usually expensive and difficult to obtain; they are
often used commercially because they’re very easy to remember—for instance, to let people send
votes and opinions to companies and TV shows, such as www.bigbrotherafrica.com. However,
after several months we abandoned the shortcode for a full-length telephone number, for a few
reasons: the interface from SODNET kept breaking down, and after a price war the shortcode's cost lagged
behind, with shortcode messages still costing 5 Ksh while regular SMS cost 1 Ksh.
13. About $0.25 for 25 MB; see http://www.safaricom.co.ke/index.php?id=1011.
14. A process of inviting large numbers of people to participate in creating a single resource.
15. www.youtube.com/kiberanewsnetwork
16. http://www.mapkibera.org/blog/2010/04/09/kibera-news-network-list-of-story-ideas/
17. See the _New York Times_ op-ed by Kennedy Odede on the subject at
http://www.nytimes.com/2010/08/10/opinion/10odede.html; also a blog post by Brian Ekdale at
http://www.brianekdale.com/?p=62.
18. See www.ppgis.net
19. http://mapkibera.org/wiki/index.php?title=File:Health_services_data_collection_form_FINAL2.doc
20. http://www.flickr.com/photos/mapkibera/map
21. One international NGO told us they give money for “lunch and transport” worth about three
times the value of lunch in Kibera. Since funders would often pay for program costs but not
actual wages, these payments are euphemistically referred to as “appreciation,” “reward,” “transport,” or “lunch.” This is interesting in light of the frequency of bribery referred to as “tea” (chai);
organizations are ostensibly in favor of transparency, but they perpetuate a shadow economy. Of
course, we too gave out airtime and lunch money and sometimes small stipends.
22. http://cdf.apps.mapkibera.org/pages/home.php
23. http://matharevalley.wordpress.com/
24. See DFID, “Mediating voices and communicating realities: Using information crowdsourcing
tools, open data initiatives and digital media to support and protect the vulnerable and marginalized,” http://www.dfid.gov.uk/r4d/SearchResearchDatabase.asp?projectID=60805
25. The Dutch development agency, http://www.hivos.nl/english.
-----
| 7,059
|
{
"disclaimer": "Notice: The following paper fields have been elided by the publisher: {'references', 'abstract'}. Paper or abstract available at https://api.unpaywall.org/v2/10.1162/INOV_A_00059?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1162/INOV_A_00059, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": null,
"status": "BRONZE",
"url": "https://direct.mit.edu/itgg/article-pdf/6/1/69/1626156/inov_a_00059.pdf"
}
| 2,011
|
[] | true
| 2011-07-18T00:00:00
|
[] | 7,059
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Economics",
"source": "external"
},
{
"category": "Physics",
"source": "external"
},
{
"category": "Economics",
"source": "s2-fos-model"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/00836d8450a7d3b71bf3ee858941bff3b198df66
|
[
"Computer Science",
"Economics",
"Physics"
] | 0.81882
|
Contagion in Bitcoin Networks
|
00836d8450a7d3b71bf3ee858941bff3b198df66
|
Business Information Systems
|
[
{
"authorId": "2080827843",
"name": "C'elestin Coquid'e"
},
{
"authorId": "102361400",
"name": "J. Lages"
},
{
"authorId": "144887536",
"name": "D. Shepelyansky"
}
] |
{
"alternate_issns": [
"2662-1797",
"2303-2537"
],
"alternate_names": [
"BIS",
"Bis-a",
"Bus Inf Syst"
],
"alternate_urls": null,
"id": "63c93d6a-fe61-4453-8e9c-b8c7aecb3ff6",
"issn": "2747-9986",
"name": "Business Information Systems",
"type": "conference",
"url": "http://bis.kie.ae.poznan.pl/"
}
|
We construct the Google matrices of bitcoin transactions for all year quarters during the period of January 11, 2009 till April 10, 2013. During the last quarters the network size contains about 6 million users (nodes) with about 150 million transactions. From PageRank and CheiRank probabilities, analogous to trade import and export, we determine the dimensionless trade balance of each user and model the contagion propagation on the network assuming that a user goes bankrupt if its balance exceeds a certain dimensionless threshold $\kappa$. We find that the phase transition takes place for $\kappa < \kappa_c \approx 0.1$ with almost all users going bankrupt. For $\kappa > 0.55$ almost all users remain safe. We find that even at a distance from the critical threshold $\kappa_c$ the top PageRank and CheiRank users, like a house of cards, rapidly drop into bankruptcy. We attribute this effect to strong interconnections between these top users, which we determine with the reduced Google matrix algorithm. This algorithm allows one to establish efficiently the direct and indirect interactions between top PageRank users. We argue that this study models the contagion on real financial networks.
|
###### **Contagion in Bitcoin networks**
Célestin Coquidé [1], José Lages [1], and Dima L. Shepelyansky [2]
1 Institut UTINAM, OSU THETA, Université de Bourgogne Franche-Comté, CNRS, Besançon, France
`{celestin.coquide,jose.lages}@utinam.cnrs.fr`
2 Laboratoire de Physique Théorique, IRSAMC, Université de Toulouse, CNRS, UPS, 31062 Toulouse, France
`[email protected]`
**Abstract.** We construct the Google matrices of bitcoin transactions for
all year quarters during the period of January 11, 2009 till April 10,
2013. During the last quarters the network size contains about 6 million
users (nodes) with about 150 million transactions. From PageRank and
CheiRank probabilities, analogous to trade import and export, we determine the dimensionless trade balance of each user and model the con
tagion propagation on the network assuming that a user goes bankrupt
if its balance exceeds a certain dimensionless threshold *κ* . We find that
the phase transition takes place for *κ < κ* *c* *≈* 0 *.* 1 with almost all users
going bankrupt. For *κ >* 0 *.* 55 almost all users remain safe. We find that
even at a distance from the critical threshold *κ* *c*, the top PageRank and
CheiRank users, like a house of cards, rapidly drop into bankruptcy. We
attribute this effect to strong interconnections between these top users,
which we determine with the reduced Google matrix algorithm. This
algorithm allows one to establish efficiently the direct and indirect interactions between top PageRank users. We argue that this study models the
contagion on real financial networks.
**Keywords:** Markov chains *·* Google matrix *·* Financial networks.
**1** **Introduction**
The financial crisis of 2007-2008 had an enormous impact at financial, social and political levels for many world countries (see e.g. [1,2]). After this crisis,
contagion in financial networks gained practical importance
and generated serious academic research, with various models proposed for the
description of this phenomenon (see e.g. the reviews [3,4]). Interbank contagion is of special interest due to the possible vulnerability of banks during periods
of crisis (see e.g. [5,6]). Bank networks have relatively small size, with about
*N ≈* 6000 bank units (nodes) for the whole US Federal Reserve [7] and about
*N ≈* 2000 for bank units of Germany [8]. However, access to these bank networks is highly protected, which makes academic research on real bank networks essentially impossible.
However, at present the transactions in cryptocurrency are open to the public,
and the analysis of the related networks is accessible for academic research.
-----
The first cryptocurrency is bitcoin, launched in 2008 [9]. The first steps in the
network analysis of bitcoin transactions are reported in [10,11], and an overview of
bitcoin system development is given in [12]. The Google matrix analysis of the
bitcoin network (BCN) was pushed forward in [13], demonstrating that the
main part of the wealth of the network is captured by a small fraction of users. The
Google matrix *G* describes the Markov transitions on directed networks and is at
the foundations of the Google search engine [14,15]. It also finds useful applications
for a variety of directed networks described in [16]. The ranking of network nodes
is based on the PageRank and CheiRank probabilities of the *G* matrix, which are on
average proportional to the numbers of ingoing and outgoing links, similar to
import and export in the world trade network [17,18]. We use these probabilities
to determine the balance of each user (node) of the bitcoin network and model
the contagion of users using the real data of bitcoin transactions from January
11, 2009 till April 10, 2013. We also analyze the direct and hidden (indirect)
links between top PageRank users of the BCN using the recently developed reduced
Google matrix (REGOMAX) algorithm [19,20,21,22].
**Table 1.** List of Bitcoin transfer networks. The BC *yy* Q *q* Bitcoin network corresponds
to transactions between active users during the *q*th quarter of year 20 *yy*. *N* is the
number of users and *N* *l* is the total number of transactions in the corresponding quarter.
| Network | N | N_l |
|---|---|---|
| BC10Q3 | 37818 | 57437 |
| BC10Q4 | 70987 | 111015 |
| BC11Q1 | 204398 | 333268 |
| BC11Q2 | 696948 | 1328505 |
| BC11Q3 | 1546877 | 2857232 |
| BC11Q4 | 1884918 | 3635927 |
| BC12Q1 | 2186107 | 4395611 |
| BC12Q2 | 2645039 | 5655802 |
| BC12Q3 | 3742174 | 8381654 |
| BC12Q4 | 4671604 | 11258315 |
| BC13Q1 | 5997717 | 15205087 |
| BC13Q2 | 6297009 | 16056427 |
**2** **Datasets, algorithms and methods**
We use the bitcoin transaction data described in [13]. However, there the network
was constructed from the transactions performed from the very beginning up to
a given moment of time (bounded by April 2013). Instead, here we construct
the network only for time slices formed by quarters of the calendar year. Thus we
obtain 12 networks, with *N* users and *N* *l* directed links for each quarter, given in
Table 1. We present our main results for BC13Q1.
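As a rough illustration of this slicing step (not the authors' pipeline), timestamped transfers can be grouped into quarterly edge lists with pandas; the column names are our assumptions:

```python
# Illustrative sketch: build one directed, weighted edge list per calendar
# quarter from timestamped transfers. Column names are assumed, not from [13].
import pandas as pd

tx = pd.DataFrame({
    "sender":   ["u1", "u2", "u1"],
    "receiver": ["u2", "u3", "u3"],
    "amount":   [1.5, 0.7, 2.0],
    "time":     pd.to_datetime(["2013-01-15", "2013-02-03", "2013-04-20"]),
})

quarterly = {
    str(q): g.groupby(["sender", "receiver"])["amount"].sum().reset_index()
    for q, g in tx.groupby(tx["time"].dt.to_period("Q"))
}
print(quarterly["2013Q1"])  # aggregated transfers for the BC13Q1 network
```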
The Google matrix *G* of the BCN is constructed in the standard way, described in detail in [13]: all bitcoin transactions from a given user (node)
to other users are normalized to unity, and the columns of dangling nodes (users with
zero outgoing transactions) are replaced by a column with all elements equal to 1/N. This
forms the matrix *S* of Markov transitions, which is multiplied by the damping factor
*α* = 0.85, so that finally *G* = *αS* + (1 *−* *α*) *E/N*, where the matrix *E* has all
elements equal to unity. We also construct the matrix G* for the inverted direction
of transactions, following the same procedure as for *G*. The PageRank
-----
vector P is the right eigenvector of G, GP = λP, with the largest eigenvalue λ = 1 (Σ_j P(j) = 1). Each component P_u, with u ∈ {u_1, u_2, ..., u_N}, is positive and gives the probability to find a random surfer at the given node u (user u). In a similar way the CheiRank vector P* is defined as the right eigenvector of G* with eigenvalue λ* = 1, i.e., G*P* = P*. Each component P*_u of P* gives the CheiRank probability to find a random surfer on the given node u (user u) of the network with inverted direction of links (see [16,23]). We order all users {u_1, u_2, ..., u_N} by decreasing PageRank probability P_u. We define the PageRank index K by assigning K = 1 to the user with the maximal P_u, K = 2 to the user with the second largest PageRank probability, and so on, down to K = N for the user with the lowest PageRank probability. Similarly we define the CheiRank indexes K* = 1, 2, ..., N using the CheiRank probabilities {P*_{u_1}, ..., P*_{u_N}}; K* = 1 (K* = N) is assigned to the user with the maximal (minimal) CheiRank probability.
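For concreteness, here is a minimal dense sketch of this construction and of the rank vectors via power iteration. It is our toy illustration (the actual networks of millions of users require sparse methods), not the authors' code; the helper names are ours.

```python
# Minimal dense sketch of G = alpha*S + (1-alpha)*E/N and of the PageRank /
# CheiRank vectors via power iteration; a toy illustration, not production code.
import numpy as np

def google_matrix(A, alpha=0.85):
    """A[i, j] = amount transferred from user j to user i."""
    N = A.shape[0]
    col_sums = A.sum(axis=0)
    # Normalize each column to unity; dangling columns become uniform 1/N.
    S = np.where(col_sums > 0, A / np.where(col_sums == 0, 1, col_sums), 1.0 / N)
    return alpha * S + (1 - alpha) / N

def leading_vector(G, n_iter=200):
    """Power iteration for the right eigenvector with eigenvalue 1."""
    P = np.full(G.shape[0], 1.0 / G.shape[0])
    for _ in range(n_iter):
        P = G @ P
        P /= P.sum()
    return P

A = np.array([[0, 2, 0], [1, 0, 3], [4, 0, 0]], dtype=float)  # toy transfers
P = leading_vector(google_matrix(A))        # PageRank
Pstar = leading_vector(google_matrix(A.T))  # CheiRank (inverted links)
K = np.argsort(-P)        # PageRank ordering: K[0] is the top user
Kstar = np.argsort(-Pstar)
```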
The reduced Google matrix *G* R is constructed for a selected subset of *N* *r*
nodes. The construction is based on methods of scattering theory used in different fields including mesoscopic and nuclear physics, and quantum chaos. It
describes, in a matrix of size *N* *r* *×* *N* *r*, the full contribution of direct and indirect
pathways, happening in the global network of *N* nodes, between *N* *r* selected
nodes of interest. The PageRank probabilities of the *N* *r* nodes are the same as
for the global network with *N* nodes, up to a constant factor taking into account
that the sum of PageRank probabilities over *N* *r* nodes is unity. The ( *i, j* )-element
of *G* R can be viewed as the probability for a random seller (surfer) starting at
node *j* to arrive at node *i* using direct and indirect interactions. Indirect interactions describe pathways composed in part of nodes different from the *N* *r* ones of
interest. The computation steps of *G* R offer a decomposition into matrices that
clearly distinguish direct from indirect interactions, *G* R = *G* rr + *G* pr + *G* qr [20].
Here *G* rr is generated by the direct links between the selected *N* *r* nodes in the global
*G* matrix with *N* nodes. The matrix *G* pr is usually rather close to the matrix in
which each column is given by the PageRank vector *P* *r*. For this reason *G* pr does not
bring much information about direct and indirect links between selected nodes.
An interesting role is played by *G* qr . It takes into account all indirect links
between selected nodes appearing due to multiple pathways via the *N* global
network nodes (see [19,20]). The matrix *G* qr = *G* qrd + *G* qrnd has diagonal ( *G* qrd )
and non-diagonal ( *G* qrnd ) parts where *G* qrnd describes indirect interactions between nodes. The explicit mathematical formulas and numerical computation
methods of all three matrix components of *G* R are given in [19,20,21,22].
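A toy dense version of this construction can be written from the block formula given in [19,20]; our reading of that formula, and the variable names, should be treated as assumptions. The further splitting of the indirect term into G_pr and G_qr, detailed in [20], is omitted here.

```python
# Toy dense sketch of the reduced Google matrix following our reading of
# [19,20]: with nodes split into the N_r selected ones (r) and the rest (s),
# G_R = G_rr + G_rs (1 - G_ss)^{-1} G_sr. At realistic network sizes [19]
# evaluates the inverse by series expansion rather than a direct solve.
import numpy as np

def reduced_google_matrix(G, selected):
    sel = np.asarray(selected)
    rest = np.setdiff1d(np.arange(G.shape[0]), sel)
    Grr = G[np.ix_(sel, sel)]   # direct links among selected nodes
    Grs = G[np.ix_(sel, rest)]
    Gsr = G[np.ix_(rest, sel)]
    Gss = G[np.ix_(rest, rest)]
    # (1 - G_ss) is invertible because part of the probability flow
    # leaks from the rest of the network back to the selected nodes.
    indirect = Grs @ np.linalg.solve(np.eye(len(rest)) - Gss, Gsr)
    return Grr + indirect

# Usage with the toy matrix from the previous sketch:
# GR = reduced_google_matrix(google_matrix(A), selected=[0, 1])
```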
Following [18,21,22], we recall that the PageRank (CheiRank) probability
of a user *u* is related to its ability to buy (sell) bitcoins. We therefore determine
the balance of a given user as B_u = (P*(u) − P(u)) / (P*(u) + P(u)). We consider
that a user *u* goes bankrupt if B_u ≤ −κ. In that case the ingoing
flow of bitcoins to user *u* is stopped. This is analogous to the world trade case, where
countries with unbalanced trade stop their import in case of crisis [17,18]. Here
*κ* has the meaning of a bankruptcy or crisis threshold. The contagion model
is thus defined as follows: at iteration *τ*, the PageRank and CheiRank probabilities
-----
**Fig. 1.** Twenty most present users in the top100s of the BCyyQq networks (see Tab. 1) computed with the PageRank (left panel) and CheiRank (right panel) algorithms. On the horizontal axis, the twenty users, labeled from 1 to 20, are ranked according to the number of occurrences in the time-slice top100s. The color ranges from red (user ranked at the 1st position, K = 1 or K* = 1) to blue (user ranked at the 100th position, K = 100 or K* = 100). Black indicates a user absent from the top100 of the corresponding time slice.
are computed taking into account that all ingoing bitcoin transactions to users
who went bankrupt at previous iterations are stopped (i.e., these transactions
are set to zero). Using these new PageRank and CheiRank probabilities we
compute again the balance of each user, determining which additional users go
bankrupt at iteration *τ*. Initially, at the first iteration *τ* = 1, the PageRank and
CheiRank probabilities, and thus the user balances, are computed using the Google
matrices G and G* constructed from the global network of bitcoin transactions ( *a*
*priori* no bankrupt users). A user who went bankrupt remains in bankruptcy
at all future iterations. In this way we obtain the fraction, W_c(τ) = N_u(τ)/N,
of users in bankruptcy, or in crisis, at different iteration times *τ*.
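The iteration just described is compact enough to sketch in full. The code below is a schematic reconstruction using networkx (whose pagerank implementation treats dangling nodes slightly differently from the construction above), not the authors' implementation; on the real quarterly networks one would use sparse power iteration as in the earlier sketch.

```python
# Schematic reconstruction of the contagion iteration (not the authors' code).
# G is a networkx DiGraph of aggregated transfers with edge attribute "amount".
import networkx as nx

def contagion(G, kappa, n_iter=10, alpha=0.85):
    """Return the set of users bankrupt after at most n_iter iterations."""
    bankrupt = set()
    for _ in range(n_iter):
        H = G.copy()
        # Stop all ingoing transfers to users already in bankruptcy.
        H.remove_edges_from([(u, v) for u, v in G.edges() if v in bankrupt])
        P = nx.pagerank(H, alpha=alpha, weight="amount")
        Pstar = nx.pagerank(H.reverse(), alpha=alpha, weight="amount")
        newly = {u for u in G if u not in bankrupt
                 and (Pstar[u] - P[u]) / (Pstar[u] + P[u]) <= -kappa}
        if not newly:   # no new bankruptcies: the iteration has converged
            break
        bankrupt |= newly
    return bankrupt
```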
**3** **Results**
The PageRank and CheiRank algorithms have been applied to the bitcoin networks BCyyQq presented in Tab. 1. An illustration showing the rank of the
twenty most present users in the top 100s of these bitcoin networks is given in
Fig. 1. We observe that the most present user (#1 in Fig. 1) was, from the third
quarter of 2011 to the fourth quarter of 2012, at the very top positions of both
the PageRank and the CheiRank rankings. Consequently, this user was
very central in the corresponding bitcoin networks, with a very influential activity as both a bitcoin seller and buyer. Except for the most present user (#1
in Fig. 1), the other users are (depending on the year quarter considered) either
-----
top sellers (well ranked according to CheiRank algorithm, *K* *[∗]* *∼* 1 *−* 100) or top
buyers of bitcoins (well ranked according to PageRank algorithm, *K ∼* 1 *−* 100).
In other words, except for the first column, associated with user #1, there is almost
no overlap between the left and right panels of Fig. 1.
From now on we concentrate our study on the BC13Q1 network. For this
bitcoin network, the density of users on the PageRank-CheiRank plane ( *K, K* *[∗]* )
is shown in Fig. 2a. At low *K, K* *[∗]*, users are centered near the diagonal *K* = *K* *[∗]*
which corresponds to the fact that on average users try to keep a balance between
ingoing and outgoing bitcoin flows. A similar effect has also been seen for world
trade networks [17].
The dependence of the fraction of bankrupt users *W* *c* = *N* *u* */N* on the
bankruptcy threshold *κ* is shown in Fig. 2b at different iterations *τ*. At low
*κ < κ* *c* *≈* 0 *.* 1, almost 100% of users have gone bankrupt by the large time *τ* = 10.
**Fig. 2.** Panel a: density of users, *dN* ( *K, K* *[∗]* ) */dKdK* *[∗]*, in PageRank–CheiRank plane
( *K, K* *[∗]* ) for BC13Q1 network; density is computed with 200 *×* 200 cells equidistant in
logarithmic scale; the colors are associated to the decimal logarithm of the density;
the color palette is a linear gradient from green color (low user densities) to red color
(high user densities). Black color indicates absence of users. Panel b: fraction *N* *u* */N* of
BC13Q1 users in bankruptcy shown as a function of *κ* for *τ* = 1 *,* 3 *,* 5 *,* and 10.
Indeed, Fig. 3 shows that the transition to bankruptcy is similar to a phase
transition: at large *τ* we have *W* *c* = *N* *u* */N ≈* 1 for *κ < κ* *c* *≈* 0 *.* 1; in the
range *κ* *c* *≈* 0 *.* 1 *< κ <* 0 *.* 55 only about 50%–70% of users are in bankruptcy;
while for *κ >* 0 *.* 55 almost all users remain safe at large times.
The distribution of bankrupt and safe users on the PageRank–CheiRank plane
( *K, K* *[∗]* ) is shown in Fig. 4 at different iteration times *τ*. For crisis thresholds
*κ* = 0 *.* 15 and *κ* = 0 *.* 3, we see that users at top indexes *K, K* *[∗]* *∼* 1 go bankrupt
very quickly, and with growth of *τ* more and more users go bankrupt even if they
are located below the diagonal *K* = *K* *[∗]*, thus having an initially positive balance
-----
**Fig. 3.** Fraction *N* *u* */N* of BC13Q1 users in bankruptcy as a function of *κ* and *τ* .
*B* *u*. However, the links with other users lead to propagation of contagion, so that
even below the diagonal many users fall into bankruptcy. These features are similar
for *κ* = 0 *.* 15 and *κ* = 0 *.* 3, but of course the number of safe users is larger for
*κ* = 0 *.* 3. For a crisis threshold *κ* = 0 *.* 6, the picture is stable at all iterations *τ*:
the contagion is very moderate and concerns only the white region, comprising
roughly the same number of safe and bankrupt users. This white region broadens
moderately as *τ* increases. We note that even some of the users above *K* = *K* *[∗]*
remain safe. We also observe that for *κ* = 0 *.* 6 about a third of the top *K, K* *[∗]* *∼* 1
users remain safe.
Fig. 5 presents the integrated fraction, W_c(K) = N_u(K)/N, of users which
have a PageRank index below or equal to K and which went bankrupt at τ ≤ 10.
We define in a similar manner the integrated fraction W_c(K*) = N_u(K*)/N of
bankrupt users as a function of the CheiRank index. From Fig. 5 we observe
W_c(K) ≈ K/N and W_c(K*) ≈ K*/N. Formal fits W_c(K) = μ^{-1} K^β of the data
in the range 10 < K < 10^5 give (μ = 5.94557×10^6 ± 95, β = 0.998227 ± 1×10^{-6}) for
-----
**Fig. 4.** BC13Q1 users in bankruptcy (red) and safe (blue) for *κ* = 0 *.* 15 (top row), for
*κ* = 0 *.* 3 (middle row), and for *κ* = 0 *.* 6 (bottom row). For each panel the horizontal
(vertical) axis corresponds to PageRank (CheiRank) indexes *K* ( *K* *[∗]* ). In logarithmic
scale, the ( *K, K* *[∗]* ) plane has been divided in 200 *×* 200 cells. Defining *N* cell as the total
number of users in a given cell and *N* *u,* cell as the number of users who went bankrupt
in the cell until iteration *τ*, we compute, for each cell, the value (2 *N* *u,* cell *−* *N* cell ) */N* cell
giving +1 if every user in the cell went bankrupt (dark red), 0 if the number of
bankrupt users equals the number of safe users, and *−* 1 if no user went bankrupt
(dark blue). Black cells indicate cells without any users.
κ = 0.15 and (μ = 5.65515×10^6 ± 231, β = 0.99002 ± 4×10^{-6}) for κ = 0.3.
Formal fits W_c(K*) = μ^{-1} K*^β of the data in the range 10 < K* < 10^5 give
(μ = 1.03165×10^7 ± 3956, β = 1.02511 ± 3×10^{-5}) for κ = 0.15 and (μ =
1.67775×10^7 ± 1.139×10^4, β = 1.05084 ± 6×10^{-5}) for κ = 0.3.
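Fits of this form reduce to linear regression in log-log coordinates. The short sketch below illustrates the method on synthetic data; it does not reproduce the paper's fit values.

```python
# Sketch of the power-law fit W_c(K) = K^beta / mu via linear regression in
# log-log coordinates; the data here are synthetic, only the method is shown.
import numpy as np

K = np.arange(10, 10**5)
Wc = K / 5.9e6 * (1 + 0.01 * np.random.default_rng(0).standard_normal(K.size))

beta, intercept = np.polyfit(np.log(K), np.log(Wc), 1)
mu = np.exp(-intercept)
print(f"beta = {beta:.4f}, mu = {mu:.3e}")  # close to beta = 1, mu = 5.9e6
```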
The results of the contagion modeling show that the top PageRank and CheiRank
users *K, K* *[∗]* *∼* 1 enter the contagion phase very rapidly. We suppose that this happens due to the strong interlinks existing between these users. It is thus interesting
to see what the effective links and interactions are between these top PageRank
-----
**Fig. 5.** Integrated fractions, *W* *c* ( *K* ) and *W* *c* ( *K* *[∗]* ), of BC13Q1 users which went
bankrupt at *τ ≤* 10 for *κ* = 0 *.* 15 (solid lines) and for *κ* = 0 *.* 3 (dashed lines) as a
function of PageRank index *K* (black lines) and CheiRank index *K* *[∗]* (red lines). The
inset shows *W* *c* ( *K* ) *N/K* as a function of *K* and *W* *c* ( *K* *[∗]* ) *N/K* *[∗]* as a function of *K* *[∗]* .
and top CheiRank users. With this aim we construct the reduced Google matrix
*G* R for the top 20 PageRank users of the BC13Q1 network. This matrix *G* R and
its three components *G* pr, *G* rr and *G* qrnd are shown in Fig. 6. We characterize
each matrix component by its weight defined as the sum of all matrix elements
divided by *N* *r* = 20. By definition the weight of *G* R is *W* R = 1. The weights
of all components are given in the caption of Fig. 6. We see that *W* pr has the
weight of about 50% while *W* rr and *W* qr have the weight of about 25%. These
values are significantly higher comparing to the cases of Wikipedia networks (see
e.g. [20]). The *G* rr matrix component (Fig. 6 bottom left panel) is similar to the
bitcoin mass transfer matrix [13] and the ( *i, j* )-element of *G* rr is related to direct
bitcoin transfer from user *j* to user *i* . As *W* rr = 0 *.* 29339, the PageRank top20
-----
**Fig. 6.** Reduced Google matrix *G* R associated to the top 20 PageRank users of BC13Q1
network. The reduced Google matrix *G* R (top left) has a weight *W* R = 1, its components
*G* rr (bottom left), *G* pr (top right), and *G* qrnd (bottom right) have weights *W* rr =
0 *.* 29339, *W* pr = 0 *.* 48193, and *W* qr = 0 *.* 22468 ( *W* qrnd = 0 *.* 11095). Matrix entries are
ordered according to BC13Q1 top 20 PageRank index.
users directly transfer among them on average about 30% of the total of bitcoins
exchanged by these 20 users. In particular, about 70% of the bitcoin transfers
from users *K* = 5 and *K* = 14 are directed toward user *K* = 2. Also, user
*K* = 5 buys about 30% of the bitcoins sold by user *K* = 2. We observe a closed
loop between users *K* = 2 and *K* = 5, which highlights an active bitcoin trade
between them during the period 2013 Q1. Also, 30% of the bitcoins transferred from
user *K* = 19 were bought by user *K* = 1. The 20 *×* 20 reduced Google matrix
*G* R (Fig. 6 top left panel) gives a synthetic picture of bitcoin direct and indirect
transactions taking into account direct transactions between the *N ∼* 10 [6] users
encoded in the global *N ×* *N* Google matrix *G* . We clearly see that many bitcoin
transfers converge toward user *K* = 1 since this user is the most central in the
-----
bitcoin network. Although the *G* rr matrix component indicates that user *K* = 1
obtains about 10% to 30% of the bitcoins transferred from its direct partners,
the *G* pr matrix component indicates that the effective amount transferred, from
direct and indirect partners combined, is greater, ranging from about 10% to more than
45%. In particular, although no direct transfer exists from users *K* = 11 and
*K* = 16 to user *K* = 1, about 45% of the bitcoins transferred in the network
from users *K* = 11 and *K* = 16 converge indirectly to user *K* = 1. Looking
at the diagonal of the *G* R matrix we observe that about 60% of the bitcoins
transferred from user *K* = 1 effectively return to user *K* = 1; the same happens,
e.g., with users *K* = 2 and *K* = 15, for which about 30% of the transferred bitcoins
go back. The *G* qr matrix component (Fig. 6 bottom right panel) gives an interesting picture of hidden bitcoin transactions, i.e., transactions which are not
encoded in the *G* rr matrix component since they are not direct transactions, and
which are not captured by the *G* pr matrix component as they do not necessarily
involve transaction paths with the most central users. Here we clearly observe
that 25% of the total transferred bitcoins from user *K* = 15 converge indirectly
toward user *K* = 2. We note that this indirect transfer is the result of many
indirect transaction pathways involving many users other than the PageRank
top20 users. We observe also a closed loop of hidden transactions between users
*K* = 17 and *K* = 18.
**4** **Discussion**
We performed a Google matrix analysis of Bitcoin networks for transactions
from the very start of bitcoin till April 10, 2013. The transactions are divided
by year quarters and the Google matrix is constructed for each quarter. We
present the results for the first quarter of 2013, which are typical of the other
quarters of 2011 and 2012. We determine the PageRank and CheiRank vectors of the Google
matrices of direct and inverted bitcoin flows. These probabilities characterize
import (PageRank) and export (CheiRank) exchange flows for each user (node)
of the network. In this way we obtain the dimensionless balance *B* *u* of each user
( *−* 1 *< B* *u* *<* 1) and model the contagion propagation on the network assuming
that a user goes bankrupt if its dimensionless balance drops below a certain bankruptcy
threshold *κ* ( *B* *u* *≤−κ* ). We find that the phase transition takes place in the vicinity
of the critical threshold *κ* = *κ* *c* *≈* 0 *.* 1, below which almost 100% of users go
bankrupt. For *κ >* 0 *.* 55 almost all users remain safe, and for 0 *.* 1 *< κ <* 0 *.* 55 about
60% of users go bankrupt. It is interesting that, like a house of cards, almost all
top PageRank and CheiRank users rapidly drop into bankruptcy even for *κ* = 0 *.* 3,
which is not very close to the critical threshold *κ* *c* *≈* 0 *.* 1. We attribute this effect
to the strong interconnectivity between top users, which makes them very vulnerable.
Using the reduced Google matrix algorithm we determine the effective direct
and indirect interactions between the top 20 PageRank users, revealing their
preferential interlinks, including long pathways via the global network of almost
6 million users.
-----
We argue that the obtained results model the real situation of contagion
propagation in financial and interbank networks.
*Acknowledgments:* We thank L. Ermann for useful discussions. This work was
supported by the French “Investissements d'Avenir” program, project ISITE-BFC (contract ANR-15-IDEX-0003), and by the Bourgogne Franche-Comté Region 2017-2020 APEX project (conventions 2017Y-06426, 2017Y-06413, 2017Y-07534; see http://perso.utinam.cnrs.fr/~lages/apex/). The research of DLS is
supported in part by the Programme Investissements d'Avenir ANR-11-IDEX-0002-02, reference ANR-10-LABX-0037-NEXT France (project THETRACOM).
**References**
1. *Financial crisis of 2007–2008*, https://en.wikipedia.org/w/index.php?title=Financial_crisis_of_2007%E2%80%932008&oldid=882711856 (accessed April 2019).
2. *Three weeks that changed the world*, The Guardian, Dec 27 (2008), https://www.theguardian.com/business/2008/dec/28/markets-credit-crunch-banking-2008 (accessed April 2019).
3. Gai P. and Kapadia S.: *Contagion in financial networks*, Proc. R. Soc. A **466**, 2401 (2010). https://doi.org/10.1098/rspa.2009.0410
4. Elliott M., Golub B. and Jackson M.: *Financial networks and contagion*, Am. Econ. Rev. **104(10)**, 3115 (2014). https://doi.org/10.1257/aer.104.10.3115
5. Anand K., Craig B. and von Peter G.: *Filling in the blanks: network structure and interbank contagion*, Quantitative Finance **15(4)**, 625 (2015). https://doi.org/10.1080/14697688.2014.968195
6. Fink K., Kruger U., Meller B. and Wong L.-H.: *The credit quality channel: modeling contagion in the interbank market*, J. Fin. Stab. **25**, 83 (2016). https://doi.org/10.1016/j.jfs.2016.06.002
7. Soramaki K., Bech M.L., Arnold J., Glass R.J. and Beyeler W.E.: *The topology of interbank payment flows*, Physica A **379**, 317 (2007). https://doi.org/10.1016/j.physa.2006.11.093
8. Craig B. and von Peter G.: *Interbank tiering and money center banks*, J. Finan. Intermediation **23(3)**, 322 (2014). https://doi.org/10.1016/j.jfi.2014.02.003
9. Nakamoto S.: *Bitcoin: a peer-to-peer electronic cash system*, https://bitcoin.org/bitcoin.pdf (2008) (accessed April 2019).
10. Ron D. and Shamir A.: *Quantitative analysis of the full bitcoin transaction graph*, in Sadeghi A.R. (ed.), *Financial Cryptography and Data Security*, FC 2013, Lecture Notes in Computer Science **7859**, 6 (2013), Springer, Berlin. https://doi.org/10.1007/978-3-642-39884-1_2
11. Biryukov A., Khovratovich D. and Pustogarov I.: *Deanonymisation of clients in Bitcoin P2P network*, Proc. 2014 ACM SIGSAC Conf. Comp. Comm. Security (CCS'14), ACM, N.Y., p. 15 (2014). https://arxiv.org/abs/1405.7418
12. Bohannon J.: *The Bitcoin busts*, Science **351**, 1144 (2016). https://doi.org/10.1126/science.351.6278.1144
13. Ermann L., Frahm K.M. and Shepelyansky D.L.: *Google matrix of Bitcoin network*, Eur. Phys. J. B **91**, 127 (2018). https://doi.org/10.1140/epjb/e2018-80674-y
14. Brin S. and Page L.: *The anatomy of a large-scale hypertextual Web search engine*, Computer Networks and ISDN Systems **30**, 107 (1998). https://doi.org/10.1016/S0169-7552(98)00110-X
15. Langville A.M. and Meyer C.D.: *Google's PageRank and beyond: the science of search engine rankings*, Princeton University Press, Princeton (2006).
16. Ermann L., Frahm K.M. and Shepelyansky D.L.: *Google matrix analysis of directed networks*, Rev. Mod. Phys. **87**, 1261 (2015). https://doi.org/10.1103/RevModPhys.87.1261
17. Ermann L. and Shepelyansky D.L.: *Google matrix of the world trade network*, Acta Physica Polonica A **120**, A158 (2011). https://doi.org/10.12693/APhysPolA.120.A-158
18. Ermann L. and Shepelyansky D.L.: *Google matrix analysis of the multiproduct world trade network*, Eur. Phys. J. B **88**, 84 (2015). https://doi.org/10.1140/epjb/e2015-60047-0
19. Frahm K.M. and Shepelyansky D.L.: *Reduced Google matrix*, arXiv:1602.02394 [physics.soc-ph] (2016). https://arxiv.org/abs/1602.02394
20. Frahm K.M., Jaffres-Runser K. and Shepelyansky D.L.: *Wikipedia mining of hidden links between political leaders*, Eur. Phys. J. B **89**, 269 (2016). https://doi.org/10.1140/epjb/e2016-70526-3
21. Coquidé C., Ermann L., Lages J. and Shepelyansky D.L.: *Influence of petroleum and gas trade on EU economies from the reduced Google matrix analysis of UN COMTRADE data*, arXiv:1903.01820 [q-fin.ST] (2019). https://arxiv.org/abs/1903.01820
22. Coquidé C., Lages J. and Shepelyansky D.L.: *Interdependence of sectors of economic activities for world countries from the reduced Google matrix analysis of WTO data*, arXiv:1905.06489 [q-fin.TR] (2019). https://arxiv.org/abs/1905.06489
23. Chepelianskii A.D.: *Towards physical laws for software architecture*, arXiv:1003.5455 [cs.SE] (2010). https://arxiv.org/abs/1003.5455
**COASE-SANDOR INSTITUTE FOR LAW AND ECONOMICS WORKING PAPER NO.** **701**
**(2D SERIES)**
#### The Costs and Benefits of Mandatory Securities Regulation: Evidence from Market Reactions to the
JOBS Act of 2012
###### Dhammika Dharmapala and Vikramaditya S. Khanna
**THE LAW SCHOOL**
**THE UNIVERSITY OF CHICAGO**
August 2014
This paper can be downloaded without charge at:
The University of Chicago, Institute for Law and Economics Working Paper Series Index:
http://www.law.uchicago.edu/Lawecon/index.html
and at the Social Science Research Network Electronic Paper Collection.
##### The Costs and Benefits of Mandatory Securities Regulation:
Evidence from Market Reactions to the JOBS Act of 2012
Dhammika Dharmapala
University of Chicago Law School
[email protected]
Vikramaditya Khanna
University of Michigan Law School
[email protected]
August 2014
**Abstract**
The effect of mandatory securities regulation on firm value has been a longstanding concern
across law, economics and finance. In 2012, Congress enacted the Jumpstart Our Business
Startups (“JOBS”) Act, relaxing disclosure and compliance obligations for a new category of
firms known as “emerging growth companies” (EGCs) that satisfied certain criteria (such as
having less than $1 billion of annual revenue). The JOBS Act’s definition of an EGC involved a
limited degree of retroactivity, extending its application to firms that conducted initial public
offerings (IPOs) between December 8, 2011 and April 5, 2012 (the day the bill became law). The
December 8 cutoff date was publicly known prior to the JOBS bill’s key legislative events,
notably those of March 15, 2012, when Senate consideration began and the Senate Majority
Leader expressed strong support for the bill. We analyze market reactions for EGCs that
conducted IPOs after the cutoff date, relative to a control group of otherwise similar firms that
conducted IPOs in the months preceding the cutoff date. We find positive and statistically
significant abnormal returns for EGCs around March 15, relative to the control firms. This
suggests that the value to investors of the disclosure and compliance obligations relaxed under
the JOBS Act is outweighed by the associated compliance costs. The baseline results imply a
positive abnormal return of between 3% and 4%, and the implied increase in firm value is at least
$20 million for an EGC with the median market value in our sample.
**Acknowledgments: We thank Jennifer Arlen, John Armour, Ken Ayotte, Bobby Bartlett, Bernie Black, Mike**
Guttentag, Todd Henderson, Allan Horwich, Bob Lawless, Yoon-ho Alex Lee, Yair Listokin, Kate Litvak, Anup
Malani, Peter Molk, Ed Morrison, Adam Pritchard, Holger Spamann, Jim Speta, Tom Stratmann, Susan Yeh,
workshop participants at Northwestern University, George Mason University and the University of Chicago, and
conference participants at the American Law and Economics Association and the Midwest Law and Economics
Association meetings for helpful comments and discussions. We also thank Michael Gough, Ye Tu and Brandon
Une for outstanding research assistance. Any remaining errors or omissions are, of course, our own.
**1) Introduction**
Securities law in the United States is governed by a regime of mandatory disclosure
established by the Securities Act of 1933 and the Securities Exchange Act of 1934. Mandatory
disclosure potentially benefits both issuers and investors to the extent that the information
disclosed by the former is valuable to the latter, and the disclosures cannot be fully replicated
using voluntary mechanisms. On the other hand, these mandatory disclosures entail compliance
costs, and issuers and investors cannot contract to waive these requirements in situations where
the costs exceed the benefits. Thus, there is a long-standing debate across law, economics and
finance regarding the justification for a mandatory disclosure regime (e.g. Easterbrook and
Fischel, 1984; Coffee, 1984; Mahoney, 1995) and whether, on balance, mandatory disclosure
increases the value of firms. The latter question has been analyzed using a variety of different
empirical approaches (e.g. Stigler, 1964; La Porta, Lopez de Silanes and Shleifer, 2006;
Greenstone, Oyer and Vissing-Jorgensen, 2006).
The Jumpstart Our Business Startups (“JOBS”) Act was passed by Congress in March
2012 and signed by the President on April 5, 2012. It relaxed disclosure and compliance
obligations for a new category of firms defined by the Act, known as “emerging growth
companies” (EGCs), that satisfied certain criteria (including, most prominently, generating less
than $1 billion of revenue in its most recently completed fiscal year). The JOBS Act contained
an element of partial retroactivity (as described below) that provides an unusual quasi-
experimental setting in which to measure market expectations of the consequences of relaxing
regulatory obligations for a subset of firms. It also appears to be unique, in relation to episodes
studied in the prior literature, in relaxing rather than strengthening regulation.
The JOBS Act relaxed existing requirements for EGCs conducting initial public offerings
(IPOs) on US equity markets, and also relaxed EGCs’ post-IPO disclosure obligations for a 5-year
period. The latter provisions reduced the number of years of financial data that had to be
disclosed, provided a longer timeframe for complying with new accounting standards, and
exempted EGCs from certain executive compensation disclosure requirements. Perhaps most
importantly, EGCs were permitted an exemption from auditor attestation of internal controls
under Section 404(b) of the Sarbanes-Oxley (SOX) Act of 2002, as well as exemption from
certain future changes to accounting rules.[1]
While the JOBS Act’s provisions were primarily prospective (applying largely to firms
conducting IPOs after April 5, 2012), the Act’s definition of an EGC involved a limited degree
of retroactivity. In particular, the Act’s definition of an EGC excludes firms whose first sale of
common equity securities on public markets occurred on or before December 8, 2011.
Conversely, firms that conducted IPOs after December 8, 2011 but prior to the enactment of the
Act are eligible for EGC status and the associated reduced disclosure and compliance obligations
(if they satisfy the other EGC criteria, such as the $1 billion revenue threshold). Moreover, it was
known from at least the beginning of March 2012 that the legislation (if passed) would include a
December 8 cutoff (as this was part of draft legislation produced by the House Committee on
Financial Services on March 1, 2012). Thus, there is a group of firms that conducted IPOs after
December 8, 2011 for which we can observe price data during the sequence of legislative events
in March 2012 that propelled the JOBS bill into law. Firms within this group that satisfied the
EGC criteria (notably, the $1 billion revenue threshold) were expected to become subject to the
reduced disclosure and compliance obligations if the bill passed, while all other firms then
trading on US markets would remain subject to the existing regime.
This paper uses an event study approach to measure abnormal returns for these affected
(“treatment”) firms around major legislative events in March 2012 that increased the probability
of the JOBS bill’s enactment. This provides a test of investors’ expectations about whether or not
the value of the mandatory disclosure and compliance obligations that the JOBS bill relaxed
exceeds the associated compliance costs. As firms subject to the “treatment” (i.e. EGC status) are
all newly traded on public markets, the rest of the market may not necessarily provide an ideal
baseline. For the primary control group, we use firms that conducted IPOs from July 2011 to
December 8, 2011 and that satisfied the EGC criteria (apart from their IPO date). This yields a
control group that is of comparable size to the treatment group, and that has very similar
observable characteristics.
1 The JOBS Act included a variety of other provisions, as described in Section 3 below. However, it is only the
changed obligations for EGCs in Title I of the JOBS Act that are analyzed in this paper. It should also be noted that
EGC status is elective, in the sense that eligible firms can choose whether to opt in to each of the relevant provisions
of the JOBS Act or to comply with the obligations that apply to non-EGCs. As discussed in Section 5.6 below,
election into EGC status was common with respect to the SOX-related provisions of the JOBS Act - about 75% of
the EGCs in our sample eventually chose to opt in to these reduced compliance obligations.
Our empirical tests compare abnormal returns for the treatment firms with abnormal
returns for the control firms over various relevant event windows. The basic identifying
assumption is that, conditional on a firm conducting an IPO over the July, 2011 to April, 2012
period, whether it did so before or after the December 8 cutoff can be considered to be quasi
random with respect to the factors that generate abnormal returns on the key event dates for the
JOBS Act. This assumption appears reasonable, given the significant lead time involved in
preparing and implementing an IPO.
We collect data on IPOs conducted on the US market over the period from July 2011 to
April 5, 2012 from various sources, including the Securities and Exchange Commission’s
(SEC’s) Electronic Data Gathering and Retrieval (EDGAR) system. We find a total of 87 firms
that conducted IPOs over this period. For these firms, we also collect Compustat financial
statement information and Center for Research in Security Prices (CRSP) data on firms’ daily
returns and on daily market returns. We use the data on IPO date, revenue in the most recently
completed fiscal year, and other relevant variables to determine which of these firms satisfy the
JOBS Act’s criteria for EGC status. Taking account of missing data, our control group consists
of 33 firms (with less than $1 billion in revenues) that conducted IPOs prior to December 8,
2011. The treatment group of EGCs varies in size from 25 to 41, depending on the date; we
have 27 treatment firms for our most important tests. While the sample size is relatively small,
this serves primarily to create a bias against finding any significant results.
The bill that eventually became Title I of the JOBS Act (defining EGCs and relaxing
their disclosure obligations) was introduced in the US House of Representatives on December 8,
2011. This initial bill did not backdate EGC status to December 8, 2011, although the cutoff date
was later chosen to coincide with the date of its introduction. The bill was referred to the House
Financial Services Committee, which produced an amended version on March 1, 2012 that
included the December 8, 2011 cutoff date for EGC status. The House passed the bill on March
8, 2012 by an overwhelming margin. However, widespread opposition to the bill emerged
immediately following the House vote, exemplified by an editorial in the influential _New York_
_Times_ describing the bill as “a terrible package . . . that would undo essential investor protections
[and] undermine market transparency . . .”[2] This opposition created substantial uncertainty about
whether the bill would be considered by the Senate. The uncertainty was largely resolved on
2 See: http://www.nytimes.com/2012/03/11/opinion/sunday/washington-has-a-very-short-memory.html?_r=0
March 15, when the Senate Majority Leader signaled the importance of the bill by scheduling a
vote and describing it as “a measure the Senate should consider expeditiously and pass in short
order.”[3] The Senate passed the bill (with some amendments that did not pertain to the EGC
provisions) on March 22. The House then passed the amended Senate version on March 27, and
it was signed by the President on April 5, 2012.
We use both the market model and the Fama-French model (augmented by Carhart’s
momentum factor) to compute abnormal returns for the firms in our sample. Abnormal returns
are calculated over a (-1, +1) event window that spans the period from the release of the House
Financial Services Committee report on March 1, 2012 to the Presidential signature (this “full”
event window spans February 29 to April 9, 2012). As many of the firms in our sample have
only a limited pre-event returns history, our estimation window uses both the pre-event period
and post-event returns data through December 31, 2012. We compute cumulative abnormal
returns (CARs) for the full event window and for various shorter windows, in particular for the
March 15 Senate event on which we focus.[4] We then use a regression framework to test whether
the CARs for the treatment firms are significantly different from those for the control group of
firms (controlling for various firm-level variables).
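For reference, the abnormal-return measures used here can be written schematically as follows (a standard statement of the market model and the Carhart four-factor specification; the notation and the risk-free-rate treatment are ours, and the paper's exact estimation choices may differ):

```latex
\begin{align*}
AR_{it} &= r_{it} - \bigl(\hat{\alpha}_i + \hat{\beta}_i\, r_{mt}\bigr)
  && \text{(market model)}\\
AR_{it} &= (r_{it} - r_{ft}) - \bigl(\hat{\alpha}_i + \hat{\beta}_i (r_{mt} - r_{ft})
  + \hat{s}_i\, \mathit{SMB}_t + \hat{h}_i\, \mathit{HML}_t + \hat{m}_i\, \mathit{UMD}_t\bigr)
  && \text{(four-factor)}\\
\mathit{CAR}_i(\tau) &= \sum_{t=\tau-1}^{\tau+1} AR_{it}
  && \text{(three-day window around event date } \tau\text{)}
\end{align*}
```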
Our central result is that the March 15 Senate event was associated with positive and
statistically significant abnormal returns for treatment firms (i.e. EGCs), relative to the control
firms. A critical empirical challenge is that this sample consists of firms that are close to their
IPO date, which may raise concerns related to the large literature in finance on IPO underpricing
(e.g. Ljungqvist, 2008). However, this is a phenomenon that primarily affects the first trading
day, which is excluded from all of our tests. Moreover, we find robust results when we control
for the number of trading days since a firm’s IPO and exclude firms that are one month or less
from their IPO date. The result is also robust to controlling for revenue in the most recent fiscal
year and a number of financial statement variables (such as assets, debt, earnings, and R&D
expenditures). It is also robust to the inclusion of industry fixed effects (although the effective
sample size becomes quite small) and to using as an alternative control group those larger firms
(non-EGCs above the $1 billion threshold) that conducted IPOs after December 8, 2011.
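A minimal sketch of this comparison in Python follows (pandas and statsmodels). The column names, the restriction to a single event date, and the heteroskedasticity-robust covariance are our own simplifying assumptions; the paper's implementation, with firm-level controls, multiple windows, and an estimation window spanning pre- and post-event data through December 2012, is richer.

```python
# Sketch: market-model abnormal returns around March 15, 2012, then an OLS
# comparison of CARs for treatment (EGC) versus control firms.
import pandas as pd
import statsmodels.api as sm

EVENT = pd.Timestamp("2012-03-15")

def firm_car(g: pd.DataFrame) -> float:
    """g: one firm's daily data with columns ['date', 'ret', 'mkt_ret'];
    assumes the firm trades on the event date. Fits the market model outside
    the (-1, +1) window and sums abnormal returns inside it."""
    g = g.sort_values("date").reset_index(drop=True)
    pos = g.index[g["date"] == EVENT][0]
    in_window = g.index.isin(range(pos - 1, pos + 2))
    est = g[~in_window]                                   # estimation sample
    fit = sm.OLS(est["ret"], sm.add_constant(est["mkt_ret"])).fit()
    ar = g["ret"] - fit.predict(sm.add_constant(g["mkt_ret"]))
    return ar[in_window].sum()                            # CAR over (-1, +1)

def egc_effect(panel: pd.DataFrame, firms: pd.DataFrame):
    """panel: stacked daily data keyed by 'permno'; firms: one row per firm
    with ['permno', 'egc'], egc = 1 for post-cutoff EGCs, 0 for controls."""
    cars = panel.groupby("permno").apply(firm_car).rename("car").reset_index()
    df = cars.merge(firms, on="permno")
    # The coefficient on 'egc' estimates the EGC-minus-control CAR difference.
    return sm.OLS(df["car"], sm.add_constant(df["egc"])).fit(cov_type="HC1")
```

Aggregating the treatment firms into a single equal-weighted portfolio before computing the CAR, as in footnote 4, is a standard alternative that absorbs cross-sectional correlation in event-date returns.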
3 See the Congressional Record, available at: http://thomas.loc.gov/cgi-bin/query/R?r112:FLD001:S51694
4 In an alternative test, we aggregate the EGCs into a single portfolio and compute the portfolio CARs around March
15. This approach addresses concerns about the potential cross-correlation of returns among EGCs, and leads to
similar results (as described in Section 5).
Reassuringly, two tests using firms that conducted IPOs after December 8, 2011 but were not
subject to the JOBS Act - firms above the $1 billion threshold, and registered investment
companies - as placebo “treatment” groups find no effects.
The baseline results imply a positive abnormal return of between 3% and 4%. The
implied increase in firm value is at least $20 million for an EGC with the median market value in
our sample. This is comparable in magnitude to, albeit larger than, estimates in the literature of
the compliance costs associated with Section 404(b) of SOX (a provision relaxed for EGCs
under the JOBS Act). Some evidence suggests that part of the effect is attributable to the
relaxation of SOX requirements. Firms that are classified by the SEC as “nonaccelerated filers”
(with a public float of less than $75 million) were exempt from compliance with SOX 404(b)
prior to the JOBS Act. The effect for EGCs in our sample that are nonaccelerated filers is
essentially zero, although any conclusions are tentative due to the small number of
nonaccelerated filers.
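As a rough consistency check on these magnitudes (our own back-of-the-envelope inference from the figures quoted above, not a number reported in this excerpt), an abnormal return of 3–4% producing a gain of at least $20 million implies a median EGC market value on the order of

```latex
MV_{\text{median}} \;\gtrsim\; \frac{\$20\ \text{million}}{0.03\text{--}0.04} \;\approx\; \$500\text{--}670\ \text{million}.
```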
We also address a number of potential alternative explanations and interpretations. If the
partial retroactivity of the JOBS Act were attributable to lobbying by EGCs, this may potentially
confound our results. Thus, we collect data on lobbying activity by EGCs and on campaign
contributions by associated political action committees (PACs), and find that the results are
unaffected by omitting the “politically active” EGCs. We also search for other news events
(unrelated to the JOBS Act) about EGCs in the relevant window. Omitting EGCs that were the
subject of unrelated news stories also does not affect the results. They are also unaffected by
Winsorizing or omitting two firms that experienced particularly large positive abnormal returns.
A possible alternative interpretation of the result is that the relaxation of regulation may
create greater opportunities for sophisticated incumbent shareholders to sell in the future to
uninformed “noise traders” at inflated prices. To test this alternative interpretation, we collect
data on analyst coverage of EGCs from the International Brokers Estimate System (I/B/E/S)
dataset. Potential mispricing would presumably be more relevant for firms without analyst
coverage, but we find that the EGC effect is virtually identical for firms with and without analyst
coverage. This casts doubt on the alternative interpretation based on mispricing.
This paper addresses a central question in the analysis of securities regulation, and so it is
related to a number of different strands of literature across law, economics and finance. The
pioneering empirical literature on the effects of securities regulation used time-series
comparisons of various outcomes before and after the Securities Acts were enacted (e.g. Stigler,
1964; Friend and Herman, 1964).[5] More recently, a literature using cross-country empirical
analysis has studied the impact of securities regulation and its (public and private) enforcement
on the extent of stock market development (e.g. La Porta et al., 2006).
Our paper is most directly related to a literature using single-country quasi-experiments
to analyze the effects of changes in securities law. For example, Greenstone et al. (2006) use as a
quasi-experiment the 1964 amendments that extended the mandatory disclosure requirements of
US securities law to certain firms trading over-the-counter (OTC).[6] They hand-collect price data
for OTC firms, and compare abnormal returns for the firms that were subject to the amendments
to those for a control group of otherwise similar exchange-traded firms that were already subject
to these disclosure requirements and therefore unaffected by the amendments. This approach
implies large positive abnormal returns for the affected firms of between 11.5% and 22.1% over
the full event window, relative to the control group.
In contrast to Greenstone _et al. (2006), our paper finds a negative effect of securities_
regulation on firm value in the US. However, this should not be viewed as in any way
contradicting their findings, as we examine a much later time period and a very different
regulatory environment. In particular, the 1964 amendments involved a much more extensive
change in regulation for the affected firms than did the JOBS Act. In addition, the baseline level
of regulation for OTC firms prior to the 1964 Amendments was very limited, whereas public
firms were subject to very extensive regulation at the time of the JOBS Act. Rather, both our
results and theirs can be encompassed within a simple conceptual framework outlined in Section
2 below, in which securities regulation initially increases firm value, but beyond a certain point
may decrease value as compliance costs exceed the benefits of regulation to investors. Our
results also point towards a less ambiguous interpretation in terms of social welfare than do
theirs, a point that is developed in Section 2 below.
As the relaxation of the SOX internal control requirements is a significant component of
the JOBS Act, our paper is also related to the empirical literature evaluating the effects of SOX
5 Benston (1973) uses an event study approach to analyze the effects of the Securities Exchange Act of 1934, using
firms that were already disclosing the required information as a control group.
6 Ferrell (2007) also analyzes the consequences of the 1964 amendments, finding positive abnormal returns and a
reduction in volatility for OTC firms. Bushee and Leuz (2005) analyze the further extension of disclosure
requirements in 1999 to the small firms that trade on the OTC Bulletin Board. They find significant benefits from
this extension for certain firms, but also find that the increased compliance costs led some firms to exit the Bulletin
Board.
(e.g. Chhaochharia and Grinstein, 2007; Litvak, 2007; Bartlett, 2009; Kamar, Talley and Karaca-
Mandic, 2009; for a comprehensive recent review of this literature, see Coates and Srinivasan
(2013)). Our paper is also related to single-country quasi-experimental studies of broader
corporate governance reforms outside the US, which typically include some provisions relating
to disclosure (e.g. Black, Jang and Kim, 2006; Dharmapala and Khanna, 2013). Finally, our
paper is related to the large and growing legal literature on the JOBS Act (e.g. Langevoort and
Thompson, 2013; Guttentag, 2013). However, this literature does not empirically analyze the
consequences of the Act.[7]
This paper proceeds as follows. Section 2 develops a simple conceptual framework that is
helpful in interpreting the results. Section 3 provides a brief overview and history of the JOBS
Act. Section 4 describes the data and elaborates on the empirical strategy. Section 5 discusses the
results, and Section 6 concludes.
**2) A Simple Conceptual Framework**
This section develops a simple conceptual framework that encapsulates many of the
insights of the theoretical literature on securities disclosure, insider diversion and firm value (see
e.g. Shleifer and Wolfenzon, 2002) and provides a simple framework within which to interpret
the paper’s results. Consider a firm that has (exogenously fixed) fundamental value V. Let r be a
measure of the strength of securities regulation. Higher values of _r_ entail higher compliance
costs, but also reduce the expected diversion of private benefits by insiders. Suppose that insiders
own a fraction _α_ ≥ 0 of the firm, that _B(r)_ is a decreasing, convex function representing the
private benefits diverted by insiders, and that C(r) is an increasing, convex function representing
the costs of compliance with securities regulation (which are borne pro rata by all shareholders).
The diversion of private benefits is assumed to generate a deadweight loss, in the sense that $1 of
private benefits costs outside shareholders $(1 + γ), where γ > 0.
Under these assumptions, the value placed on the firm by outside investors (VM) and the
value placed on it by insiders (VI) can be expressed as:
$V_M = (1 - \alpha)V - (1 + \gamma)B(r) - (1 - \alpha)C(r)$ (1)
and:
7 A partial exception is Berdejo (2014), but its focus is on firms that went public after the enactment of the JOBS
Act, rather than on the EGC sample analyzed here.
$V_I = \alpha V + B(r) - \alpha C(r)$ (2)
It is immediately obvious from this simple framework that a decrease in r can either increase or
decrease _VM, depending on the balance between private benefits and compliance costs (as_
illustrated in Figure 1). Moreover, the fact that the JOBS Act was widely supported by the
business community does not render it a foregone conclusion that market reactions would be
positive. It is entirely possible that a decrease in r could both decrease VM and increase VI, if the
increase in B(r) is sufficiently large.
Summing VM and VI, the aggregate value of the firm is:
$V_M + V_I = V - \gamma B(r) - C(r)$ (3)
In the absence of externalities, this aggregate value can be interpreted as a measure of social
welfare. Suppose that an exogenous legal reform (such as the JOBS Act) reduces _r. In the_
absence of a sale of control, the observed market response reflects outside investors’ value (VM).
Thus, if we observe a decline in _VM, it follows that the magnitude of the increase in private_
benefits borne by outside shareholders exceeds the magnitude of the decrease in outside
shareholders’ share of compliance costs. It does not necessarily follow, however, that the
magnitude of the increase in the deadweight cost of private benefits exceeds the magnitude of the
decrease in compliance costs.[8] Thus, it is unclear whether or not social welfare is decreased by
the legal reform. While outsiders’ value falls, the gains to insiders may be sufficient to offset this
loss. This is essentially the situation implied by the findings of Greenstone et al. (2006), albeit in
reverse. They find that an increase in r led to an increase in VM; as they point out, however, this
is not sufficient to establish that social welfare increases.
On the other hand, suppose that an exogenous legal reform (such as the JOBS Act)
reduces r, and we then observe an increase in VM. This entails that the magnitude of the increase
in private benefits borne by outside shareholders is smaller than the magnitude of the decrease in
outside shareholders’ share of compliance costs. From this, it necessarily follows that the
magnitude of the decrease in compliance costs exceeds the magnitude of the increase in the
deadweight loss from private benefits.[9] Therefore, social welfare necessarily increases in this
scenario as a result of the decrease in r.[10]
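To make these comparative statics concrete, the following minimal Python sketch evaluates equations (1)-(3) for a cut in r. The functional forms B(r) = b0·exp(-r) and C(r) = c0·r² and all parameter values are our own illustrative assumptions, not taken from the paper.

```python
# Illustrative comparative statics for the Section 2 framework.
# Functional forms and parameter values are assumptions chosen for illustration.
import math

V, alpha, gamma = 100.0, 0.2, 0.5   # fundamental value, insider stake, deadweight factor
b0, c0 = 8.0, 1.5                   # scales of private benefits and compliance costs

def B(r):   # private benefits diverted by insiders: decreasing and convex in r
    return b0 * math.exp(-r)

def C(r):   # compliance costs: increasing and convex in r
    return c0 * r ** 2

def V_M(r): # outside investors' value, equation (1)
    return (1 - alpha) * V - (1 + gamma) * B(r) - (1 - alpha) * C(r)

def V_I(r): # insiders' value, equation (2)
    return alpha * V + B(r) - alpha * C(r)

for r_old, r_new in [(2.0, 1.5), (0.5, 0.1)]:
    d_vm = V_M(r_new) - V_M(r_old)
    d_vi = V_I(r_new) - V_I(r_old)
    print(f"r: {r_old} -> {r_new}:  dV_M = {d_vm:+.2f}  dV_I = {d_vi:+.2f}  "
          f"d(V_M + V_I) = {d_vm + d_vi:+.2f}")
```

The first cut raises both outside investors' value and aggregate value; the second lowers both while insiders gain, which is why observing a decline in VM alone does not settle the welfare question.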
8 More precisely, the decrease in $V_M$ entails that $(1 - \alpha)\,dC/dr < -(1 + \gamma)\,dB/dr$. However, this does
not necessarily imply that $dC/dr < -\gamma\,dB/dr$, which is required for $(V_M + V_I)$ to decrease.
9 More precisely, the increase in $V_M$ entails that $(1 - \alpha)\,dC/dr > -(1 + \gamma)\,dB/dr$. Since $(1 + \gamma)/(1 - \alpha) > \gamma$,
this necessarily implies that $dC/dr > -\gamma\,dB/dr$, which in turn implies that $(V_M + V_I)$ increases.
**3) The JOBS Act and its Legislative History**
**3.1) US Securities Law and the Context of the JOBS Act**
The JOBS Act is the most recent in a series of statutes regulating the US securities
markets. The key statutes in this area are the Securities Act 1933 (SA), Securities & Exchange
Act 1934 (SEA), Sarbanes-Oxley Act 2002 (SOX), and now the JOBS Act.[11] The SA, and rules
promulgated thereunder, are the primary means of regulating the capital raising process in the
US. Thus, a substantial part of the regulations surrounding an IPO, a private placement of
securities to large investors, or a debt issuance emanate from the SA. The SEA and associated
rules cover a range of activities in the securities markets, ranging from the continuing disclosure
obligations of firms to insider trading and a host of other items; the SEA also established the
Securities and Exchange Commission (SEC). Together, the SA and SEA represent the bulk of
Federal Securities Laws in the US.
Although there have been other significant enactments in this area (e.g., the Investment
Advisors Act of 1940 and the 1964 Amendments), the next set of major reforms that were
applicable across the securities markets came with the enactment of SOX in 2002. SOX was
enacted as a response to the accounting scandals in the early 2000s, such as those involving
Enron and Worldcom. It put in place a panoply of measures, including enhanced internal controls
to provide more accurate financial disclosure. This was supplemented by requirements for top
executives to certify financial statements (and the process for generating them) as well as
requiring external auditors to certify/assess these internal controls. In addition to this, SOX
required more disclosure of off-balance-sheet items and prohibited improper
influence over the conduct of audits.
These enactments all increased disclosure, required additional procedural steps by firms and
executives, and enhanced penalties. The ratchet, so to speak, moved upward in
10 Guttentag (2013, p. 186) argues that models emphasizing private benefits from suboptimal disclosure are not
particularly relevant to the US context, where there exist robust private contracting mechanisms that can implement
optimal solutions. If one adopts this view, then in the limit B(r) = 0 for all r, and the deadweight costs of private
benefits are not a concern. The ambiguity in the social welfare implications of Greenstone _et al. (2006) would_
disappear, but the interpretation of this paper’s findings would not be substantially altered.
11 For a comprehensive account and discussion, see e.g. Choi and Pritchard (2012).
each case.[12] However, the JOBS Act was arguably unique in the sense that the ratchet moved
downwards – it took steps that were generally perceived to loosen some regulations, to allow for
some firms to have fewer obligations, and to permit new ways to fund certain ventures. The key
motivation for the JOBS Act appears to have been the decline in the number of IPOs since the
technology boom of the 1990s and early-to-mid 2000s (attributed by some to onerous regulation,
including SOX)[13] combined with enthusiasm in Congress for legislation that could be presented
as fostering employment creation after one of the greatest economic downturns in US history.
**3.2) Provisions of the JOBS Act**
The JOBS Act puts in place a number of provisions reflecting a variety of different
amendments to the securities laws, ostensibly designed to enhance the ability of some firms –
especially smaller firms – to raise capital. In particular, the Act begins by creating a new
category of firm – the “emerging growth company” (EGC) for both the SA and SEA (and hence
for SOX as well).[14] These are firms that in their most recent fiscal year had annual gross revenue
of less than $1 Billion.[15] Firms remain EGCs until the earliest of the following events occurs:
(i) Five (5) years have elapsed since the firm’s IPO.[16]
(ii) The Firm’s annual gross revenue is $1 Billion or more.[17]
(iii) The Firm issues more than $1 Billion in non-convertible debt over three (3)
years.[18]
12 The ratchet moved upwards with the Dodd-Frank Act (DFA) of 2010 as well. The DFA is important for a number
of reasons – for instance, it introduced the “say-on-pay” votes on executive compensation that was one of the
measures relaxed for certain firms by the JOBS Act. However, the DFA’s changes to the regulatory structure and
requirements of the SA and SEA are limited and hence we do not discuss it in detail.
13 See e.g. the IPO Task Force report on “Rebuilding the IPO On-Ramp: Putting Emerging Companies and the Job
Market Back on the Road to Growth” available at: http://www.sec.gov/info/smallbus/acsec/rebuilding_the_ipo_onramp.pdf
14 In addition to the creation of this new category, the Act operates in at least four other large arenas. First, the Act
relaxes some regulations and enacts new ones that are designed to facilitate the use of “crowdfunding” for certain
businesses. This does not form the primary focus of our paper and hence we do not discuss it in any depth. Second,
the Act eases restrictions for firms considering a private placement under Regulation D (and Rule 144A), which, in
part, facilitates easier communication with some sets of potential investors. Third, the Act increases the amount that
can be raised by firms using Regulation A (which is targeted to smaller issuers) from $5 Million to $50 Million.
Fourth, the Act amends the registration requirements under the SEA such that now a firm is subject to parts of the
SEA only when it has more than 2000 shareholders (as compared to the 500 shareholder threshold of the past) and
more than $10 Million in assets (as compared to the $1 Million asset threshold of the past). All these measures
appear designed to reduce or ease regulations on smaller or newer firms, especially those that might be designated as
EGCs. We focus our discussion in the text on the regulation of EGCs and what the JOBS Act has done that makes
their regulatory burdens lighter.
15 See §§ 101(a) & (b), JOBS Act 2012.
16 See id.
17 See id.
(iv) The Firm meets the definition of a “large accelerated filer”.[19]
To be considered an EGC, the firm’s first sales of shares in its IPO must have occurred after
December 8, 2011.[20] If a firm is an EGC then it is entitled to receive less onerous regulatory
treatment in a number of spheres, as described below. It is noteworthy that an EGC can choose
not to be treated as an EGC (and hence be treated as a “regular” issuer).[21]
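Consolidating the tests above, EGC eligibility can be sketched as a simple predicate (the thresholds follow §101 as described in the text; the dataclass fields and the reduction of large-accelerated-filer status to its $700 million public-float prong are our own simplifications):

```python
# Sketch of the EGC eligibility tests described above (JOBS Act §101).
# Field names are our own; the large-accelerated-filer test is reduced to the
# $700 million public-float condition, omitting its reporting-history prongs.
from dataclasses import dataclass
from datetime import date

CUTOFF = date(2011, 12, 8)

@dataclass
class Firm:
    ipo_date: date                  # first sale of common equity on public markets
    annual_gross_revenue: float     # most recently completed fiscal year, USD
    nonconvertible_debt_3yr: float  # non-convertible debt issued over three years, USD
    public_float: float             # USD
    years_since_ipo: float

def is_egc(f: Firm) -> bool:
    if f.ipo_date <= CUTOFF:        # IPO on or before December 8, 2011: excluded
        return False
    return (f.annual_gross_revenue < 1e9          # under the $1 billion revenue threshold
            and f.years_since_ipo < 5             # within five years of the IPO
            and f.nonconvertible_debt_3yr <= 1e9  # at most $1 billion in non-convertible debt
            and f.public_float < 700e6)           # not a large accelerated filer (simplified)
```

Eligibility is necessary but not sufficient: as noted above, EGC treatment is elective, so an eligible firm may still choose to comply as a “regular” issuer.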
If a firm is an EGC, and wishes to be treated as one, then it will receive more lenient
compliance and disclosure obligations:
(i) The EGC will not be required to comply with the auditor attestation
requirements of section 404(b) under SOX.[22]
(ii) The EGC will not be subject to audit firm rotation or auditor discussion and
analysis requirements.[23]
(iii) The EGC is not subject to any future rules of the Public Company Accounting
Oversight Board (PCAOB) unless the SEC explicitly decides that EGCs
should be subject to the new rule.[24]
(iv) The EGC will receive a longer transition period to comply with new audit
standards.[25]
(v) The EGC is not required to include more than two (2) years of financial
statements in the filings that make up part of an IPO.[26]
(vi) The EGC is not required to comply with the “say on pay” and “pay versus
performance” requirements.[27]
18 See id.
19 See id. A large accelerated filer is a firm that:
“(i) [has] an aggregate worldwide market value of the voting and non-voting common equity held by its nonaffiliates of $700 million or more;
(ii) [has] been subject to the requirements of section 13(a) or 15(d) of the Act for a period of at least twelve calendar
months;
(iii) [has] filed at least one annual report pursuant to section 13(a) or 15(d) of the Act; and
(iv) … is not eligible to use the requirements for smaller reporting companies …for its annual and quarterly reports.”
(See 17 Code of Federal Regulations (CFR) § 240.12b-2).
20 See § 101(d), JOBS Act 2012. The registration statement for the IPO must be “effective”.
21 See §107, JOBS Act 2012. At the time that we analyze market reactions, it would not have been known whether a
particular EGC would elect to be treated as such. As discussed in Section 5.6 below, election into EGC status was
common with respect to the SOX-related provisions of the JOBS Act - about 75% of the EGCs in our sample
eventually chose to opt in to these reduced compliance obligations.
22 See §103, JOBS Act 2012.
23 See §104, JOBS Act 2012.
24 See §104, JOBS Act 2012.
25 See §102, JOBS Act 2012.
26 See §102(b)(1), JOBS Act 2012.
(vii) The EGC is not required to include certain financial data that relates to a time
before the earliest audited statements included in its IPO filings.[28]
(viii) The EGC can start the IPO process by confidentially submitting its draft
registration to the SEC for non-public review (although if the firm decides to
go forward with an IPO the registration statement must be publicly available at
least 21 days prior to the start of the “roadshow” for the IPO).[29]
(ix) The EGC can “test the waters” with large and sophisticated investors (e.g.,
Qualified Institutional Buyers, Accredited Investors) before and during the
registration process.[30] This usually means the EGC can now have
communications with these investors, whereas prior to the JOBS Act such
communications may have triggered a host of disclosure requirements and
penalties.
(x) Investment Banks will now be allowed to both provide analyst research reports
on the EGC as well as work as an underwriter for the EGC’s public offering
(in the past there were restrictions on communications made by such parties).[31]
The JOBS Act thus lessens the regulatory requirements for EGCs in a number of spheres.
In particular, it allows the EGC to avoid being subject to some accounting, auditing and internal
control requirements enacted under SOX as well as providing EGCs with a longer transition
period to comply with some of these requirements. In addition, EGCs will have lesser disclosure
burdens in their IPO filings and executive compensation disclosures as well as the ability to
submit their filings confidentially (at least for some period of time). Finally, EGCs (and those
associated with their offerings) will have fewer restrictions on their ability to communicate with
potential investors compared to non-EGCs.
**3.3) The Legislative History of the JOBS Act**
The legislative history of the JOBS Act and the key event dates in its progress through
Congress are summarized in Table 1. The bill that eventually became Title I of the JOBS Act
27 See §102(a)(1) – (3), JOBS Act 2012.
28 See §102(b)(2), JOBS Act 2012.
29 See §106(a), JOBS Act 2012. A “roadshow” (defined in 17 CFR §230.433(h)(4)) is a particular method of
communicating the upcoming IPO to potential investors.
30 See §105(c), JOBS Act 2012.
31 See §105, JOBS Act 2012.
(H.R. 3606, defining EGCs and relaxing their disclosure and compliance obligations) was
introduced in the US House of Representatives on December 8, 2011. This initial version did not
backdate the effective date for EGC status to December 8, 2011, although the effective date was
later chosen to coincide with the date of the bill’s introduction. The bill was referred to the
House Financial Services Committee, which produced an amended version on March 1, 2012
that included the December 8, 2011 cutoff date for EGC status.[32]
The House passed the bill on March 8, 2012 with overwhelming (and bipartisan) support.
Moreover, President Obama had endorsed legislation of this type in his 2012 State of the Union
address. Thus, one might ordinarily expect that there would subsequently be little uncertainty
about eventual Senate passage and enactment (even though in an era of divided partisan control
of the two chambers of Congress, it is common for the House to vote for a bill that is
subsequently ignored by the Senate). However, widespread opposition to the JOBS bill began to
emerge upon its passage in the House. Perhaps most notable is an editorial in the influential _New_
_York Times_ that described the various elements of the proposed reforms as: “A terrible package
of bills that would undo essential investor protections, reduce market transparency, and distort
the efficient allocation of capital.”[33] There were also expressions of opposition from advocacy
groups, former SEC officials, and some Democratic Senators. The JOBS bill also became
embroiled in ongoing political disputes over the confirmation of Federal judicial nominees, with
the perception that the Senate would not take up the JOBS bill until (or unless) these disputes
were resolved.[34]
The emergence of widespread opposition after March 8 arguably created substantial
uncertainty regarding whether the Senate would consider the bill (and hence about whether it
would ever be enacted). The Senate Majority Leader Harry Reid (D-NV) had previously spoken
in favor of the bill, but was perceived as being only lukewarm in his support; in particular, he
was thought to favor alternative measures believed to promote “job creation” such as a
transportation bill. Despite these uncertain expectations of a prompt Senate vote, the JOBS bill
was taken up in the Senate on March 15, when Senator Reid signaled the importance of the bill
by scheduling a vote. Perhaps most importantly, he described the legislation as follows:
32 This account is based on information in the Congressional Record, available at: http://thomas.loc.gov
33 See “They Have Very Short Memories” New York Times, March 10, 2012, available at:
http://www.nytimes.com/2012/03/11/opinion/sunday/washington-has-a-very-short-memory.html?_r=0
34 See e.g.
http://talkingpointsmemo.com/dc/reid-dares-gop-block-judicial-nominees-and-you-will-also-stall-the-jobs-act
“[L]et me take a moment to review what has transpired this morning. Last week the
House passed the pending small business capital formation bill by a vote of 390 to 23
[This refers to the House vote on March 8 in favor of H.R. 3606]. President Obama has
endorsed the bill very publicly; thus, this is a measure the Senate should consider
expeditiously and pass in short order.”[35]
A limited number of amendments were scheduled. The Senate passed the bill (with some
amendments that pertained to the crowdfunding provisions but not to the provisions regarding
EGCs) on March 22. The House then passed the amended Senate version on March 27, and the
JOBS bill was signed into law by the President on April 5, 2012.
The March 15 developments and the speech by Senator Reid are likely to have resolved
much of the uncertainty described above. In particular, given the overwhelming support in the
House, the support of the President, and widespread support within the business community, any
uncertainty surrounding the bill would have been likely to be about whether the bill would be
sufficiently prioritized to reach a vote, rather than on whether it would pass, conditional on
reaching the floor. In view of these circumstances, the March 15 consideration by the Senate and
the strong endorsement by the Senate Majority Leader are likely to be of particular importance.[36]
Consequently, our empirical tests (while examining a number of different event windows) focus
in particular on the March 15 event date. In contrast, many of the other events (especially the
Presidential signature on April 5, 2012, but perhaps also the initial passage in the House) may be
expected to have conveyed little new information.
It is quite reasonable to ask why the effective date for EGC status was partially
retroactive, especially as this is the cornerstone of our empirical strategy. This practice is not
common in securities legislation, and the legislative record does not provide an explicit rationale.
One possible explanation is that it was intended to prevent firms that were contemplating IPOs
during the legislative process from delaying them to wait and see whether the bill would be
enacted. Delaying IPOs would be a perverse consequence of legislation ostensibly intended to
35 See the Congressional Record, available at: http://thomas.loc.gov/cgi-bin/query/R?r112:FLD001:S51694
36 It is important to note that we are not claiming that the Reid speech was necessarily the most important element in
the enactment of the JOBS Act; for instance, the President’s State of the Union speech in January 2012 may well
have been more important. However, our empirical strategy (described more fully in Section 4 below) requires
events that occurred after the retroactive application of the bill became known on March 1, 2012. Among these
events, the March 15 consideration by the Senate and the strong endorsement by the Senate Majority Leader are
likely to be the most important in affecting the perceived likelihood of eventual enactment.
promote them.[37] If the retroactivity provision was the result of lobbying by specific firms that
had already conducted their IPOs after December 8 (or were about to do so), then it is possible
that EGC status is correlated with firms’ valuation of the JOBS Act. As this may confound our
results, we undertake a robustness check that omits EGCs that lobbied for the Act or were
otherwise politically active (see Section 5 below).
Another key question in terms of research design is whether the market anticipated the
retroactive application of certain provisions of the JOBS Act and whether this may confound our
interpretation of the findings. As noted earlier, we do not find the retroactivity provision in the
public record prior to March 1, 2012 and it is not very common to see retroactivity in the
securities law context. However, it may still be possible that the market anticipated the
retroactivity provision, perhaps even from the beginning of the legislative process on December
8, 2011. If so, then the anticipated costs and benefits of the JOBS Act provisions would
subsequently have been capitalized into the value of new IPO firms on their IPO date. It is thus
important to our analysis that there was a subsequent (post-IPO) event that affected the
likelihood of the bill’s enactment. As argued above, the March 15 events in the Senate can be
viewed as resolving much of the remaining uncertainty (as to the likely date, and likelihood, of
enactment). Thus, even if there was some anticipation of the retroactivity provision, we would
still expect a market reaction around March 15.[38]
**4) Data and Empirical Strategy**
**4.1) Data**
The dataset for this analysis is based on hand-collected data on firms that conducted IPOs
in the months immediately before and after the December 8, 2011 cutoff for EGC status. In
37 Note, however, that firms that conducted IPOs after December 8 and before April 5 only obtained the post-IPO
benefits (e.g. not being subject to certain SOX provisions), and not the reduced costs of conducting an IPO. Thus,
firms that viewed the costs of conducting the IPO as being substantial may still have delayed their IPO beyond April
5 to take advantage of the cost reductions included in the JOBS Act. This may entail potential selection bias, as
firms that delay would presumably be those that place the most value on the new IPO process. If firms’ valuation of
the post-IPO reductions in disclosure obligations is positively correlated with their valuation of the new IPO process
(which seems to be a reasonable assumption), then this response by firms would merely create a bias against our
findings. Essentially, the sample of firms that conduct IPOs prior to the enactment of the JOBS Act would consist of
firms that place a lower value on the easing of regulatory burdens.
38 Even if firms contemplating IPOs anticipated the retroactivity provision, it is unlikely that they would accelerate
their IPOs as a result. As discussed in Section 4.2 below, the IPO process typically takes somewhere between 6
months and a year, leaving little scope for such a response. Moreover, firms that accelerated their IPOs would have
had to conduct their IPOs under the (costlier) pre-JOBS Act regime.
particular, we collect data on IPOs conducted on the US market over the period from July 2011
to April 5, 2012, using the Securities Data Company (SDC) new issues database, the Securities
and Exchange Commission’s Electronic Data Gathering and Retrieval (EDGAR) system, and the
IPO database maintained by Jay Ritter at the University of Florida.[39] Using these sources, we
find a total of 87 firms that conducted IPOs over this period. For these firms, we also hand-
collect data on revenue in the most recently completed fiscal year, the public float (the aggregate
worldwide market value of the voting and non-voting common equity held by non-affiliated
shareholders), accelerated filer status and other variables from the SEC’s Electronic Data
Gathering and Retrieval (EDGAR) system. A few of these IPOs are by publicly-traded
investment companies (typically, closed-end funds). We identify these funds through their SEC
filings (for instance, whether they report being subject to the Investment Company Act of 1940)
and exclude them from the main analysis as they are largely unaffected by the JOBS Act (they
are, however, used in a placebo test, as described in Section 5).
We merge this data with Compustat financial statement information (on assets, revenue,
earnings, debt, R&D expenditures, market value, IPO date and other variables) and Center for
Research in Security Prices (CRSP) data on firms’ daily returns and market returns. We use the
data on IPO date, revenue in the most recently completed fiscal year, and other relevant variables
to determine which of these firms satisfy the JOBS Act’s criteria for EGC status. To compute the
number of trading days since a firm’s IPO, we use as the IPO date the first date on which CRSP
data is available for the firm. However, the results are similar when using instead a combination
of SEC and Compustat data to define the number of trading days since a firm’s IPO.[40]
The central variable determining whether a firm with a post-December 8 IPO is an EGC
is its revenue in the most recently completed fiscal year. The revenue variable used in the
analysis combines the Compustat variable REVT (Total Revenue) with hand-collected data on
revenue from SEC filings for those firms with missing Compustat data. At the time that the key
event dates occurred (March, 2012), the most recently completed fiscal year for a typical firm
with a December fiscal year-end would have been fiscal year 2011. We use the Compustat
39 This dataset is available at: http://bear.warrington.ufl.edu/ritter/ipodata.htm, and is an updated version of the
dataset described in Loughran and Ritter (2004).
40 There are three distinct sources of data on IPO dates – the hand-collected data from the SEC filings that includes
the date of the IPO, the Compustat variable IPODATE (defined as “Company Initial Public Offering Date”), and the
first date on which CRSP data is available for the firm. There are some missing values of the Compustat variable
IPODATE, and some minor discrepancies among the three data sources. These discrepancies do not, however, affect
the classification of any firms as conducting IPOs before or after December 8, 2011.
variable “Fiscal Year-End” to determine the month in which each firm’s fiscal year ends. For
virtually all firms in the sample, the most recently completed fiscal year is fiscal year 2011. A
few firms, however, have different fiscal year-ends, and this is taken into account in defining the
appropriate fiscal year for measuring revenue.[41]
Certain other factors are also included in the JOBS Act as criteria for determining EGC
status, but are of limited relevance for most firms in our sample. Firms classified by the SEC as
large accelerated filers (with a public float exceeding $700 million) are not eligible for EGC
status. We hand-collect data on each firm’s public float from SEC filings, but only one firm that
would otherwise be an EGC is sufficiently large in terms of public float to be above the $700
million threshold (and omitting this firm from our analysis does not affect the results). Similarly,
very few firms in our sample report sufficient outstanding debt to potentially be above the debt
issuance threshold (omitting these firms also does not affect the analysis).
Taking account of missing data, our control group consists of 33 firms (with less than $1 billion in revenues) that conducted IPOs prior to December 8, 2011. The treatment group of
EGCs varies in size from 25 to 41, depending on the date. We have 25 EGCs that conducted
IPOs prior to the first major legislative event (on March 1). We have 27 treatment firms for our
most important tests, which relate to the events in the Senate on March 15. There are 41 EGCs
that conducted IPOs prior to the final event (the Presidential signature on April 5). Very few
firms that went public in this period exceeded the $1 billion revenue threshold, with 5 such firms
conducting IPOs after December 8, of which only 2 conducted IPOs prior to the events in the
Senate on March 15.
**4.2) Empirical Strategy**
This paper’s empirical strategy is based on using an event study approach to measure
abnormal returns for EGCs around major legislative events in March 2012 that increased the
probability of the JOBS bill’s enactment. This provides a direct test of investors’ expectations
about whether or not the value of the mandatory disclosure obligations that the JOBS bill relaxed
exceed the associated compliance costs. The partial retroactivity of the JOBS Act’s definition of
an EGC is thus crucial to this strategy. As described in Section 3 and depicted in Figure 2, the
JOBS Act provides potential quasi-experimental variation along both a firm size dimension (the
41 For instance, a firm with a March fiscal year-end would have completed its most recent fiscal year (prior to the
first major legislative event on March 1, 2012) on March 31, 2011, and its revenue in the most recently completed
fiscal year would be revenue in fiscal year 2010.
$1 billion revenue threshold) and a temporal dimension (the December 8 cutoff). However, a
regression discontinuity approach around the $1 billion revenue threshold, while attractive in
principle, is precluded by the small number of firms that lie above the threshold, with 5 such
firms conducting IPOs after December 8, of which only 2 conducted IPOs prior to the events in
the Senate on March 15.
The firms subject to the “treatment” (i.e. EGC status) are all newly traded on public
markets and within a few months at most of their IPO. Identifying a control group for these firms
is a challenge, as the rest of the market may not necessarily provide an ideal baseline.[42]
Moreover, the number of firms that conducted IPOs over the same period (after December 8,
2011 and before the key event dates in March 2012) and that did not satisfy EGC criteria
(notably by having revenues greater than $1 billion) is very small, with only two firms having
usable data. This effectively precludes using the “large” firms as the control group (though a
supplementary analysis that uses them as the control group leads to similar results). Thus, for the
primary control group, we use firms that conducted IPOs from July 2011 to December 8, 2011
and that satisfied the EGC criteria (apart from the IPO date). This yields a control group that is of comparable size to the treatment group, and that has very similar observable characteristics.
Our empirical tests compare abnormal returns for the treatment firms with abnormal
returns for the control firms over various relevant event windows. The basic identifying
assumption is that, conditional on a firm conducting an IPO over the July, 2011 to April, 2012
period, whether it did so before or after December 8 can be considered to be quasi-random with
respect to the factors that generate abnormal returns on the key event dates for the JOBS Act.
This assumption appears reasonable, given the significant lead time involved in preparing and
implementing an IPO (which is often considered to be at least 6 months).[43]
A critical empirical challenge is that this sample, especially the treatment firms, consists
of firms that are close to their IPO date. This may raise concerns, given the large literature in
finance on IPO underpricing (e.g. Loughran and Ritter, 2004; Ljungqvist, 2008). We address
these concerns in a number of ways. In the regression analysis, we find robust results when we
42 A propensity score matching approach that matches the treatment firms with otherwise similar existing firms is
possible in principle, but it would fail to address the critical issue of the treatment firms’ youth as publicly-traded
entities.
43 For instance, one guide prepared by a financial consulting firm specifies the timeframe as 6-9 months - see
http://www.publicfinancial.com/articles/timeframe-to-go-public.html. PwC's guide for 2011 "Roadmap for an IPO: A Guide to Going Public" (available at: http://www.pwc.com/us/en/transaction-services/publications/roadmap-for-an-ipo-a-guide-to-going-public.jhtml) envisages a timeframe of 6-12 months (p. 35).
control for the number of trading days since a firm’s IPO and exclude firms that are one month
or less from their IPO date. It should also be borne in mind that IPO underpricing in the US
market appears to be primarily a phenomenon that affects the first trading day. Indeed, a standard
practice in the IPO underpricing literature is to measure underpricing using first-day returns;
using first-week returns leads to very similar underpricing measures (e.g. Ljungqvist, 2008). We
exclude firms’ first trading day from all of our tests. Firms may also experience greater volatility
during the earlier phases of public trading, but this would tend to create a bias against any
significant findings.
**4.3) The Market Model and the Computation of Abnormal Returns**
Event studies in the scholarly literature use a variety of approaches to estimate firms’
normal or predicted returns. We use the market model and the Fama-French model (described in
Section 4.4 below), both of which are widely used in the literature. The market model does not
rely on a specific set of economic assumptions, and is thus in some respects less restrictive. We
use a market model to compute abnormal returns for the firms in our sample over a (-1, +1) event
window that spans the period from the release of the House Financial Services Committee report
on March 1, 2012 to the Presidential signature. This period from February 29 to April 9, 2012 is
referred to as the “full event window” in the discussion below. A (-1, +1) window, which starts
one trading day before the event and ends one trading day afterwards, is frequently used in the
event study literature, as it accommodates some degree of anticipation or leakage of information
immediately prior to the event, and allows some scope for delayed reaction. However, it does not
unduly dilute the impact of the event by extending the window beyond a day on either side of the
event.
The market model for firm i uses daily returns for firm i and for the market, and can be
represented as follows (see e.g. Bhagat and Romano, 2002, p. 146):
!!" = !! + !!!! + !!" (4)
where Rit is firm i’s return on day t, M is the market return on day t, and e is the error term. We
run this regression separately for each firm over an estimation window that begins on the first
day that returns data is reported for that firm in CRSP (if that date is prior to February 29) and
ends on December 31, 2012, excluding the full event window defined above (February 29 to
April 9, 2012). For example, for a firm that first appears in CRSP on August 15, 2011, we use as
the estimation window the period from August 15, 2011 to February 28, 2012 and the period
from April 10, 2012 to December 31, 2012. For a firm that first appears in CRSP in March 2012,
we use the period from April 10, 2012 to December 31, 2012 as the estimation window. Using a
post-event period as part of the estimation window is fairly common in event studies, although
the more standard practice is to use the pre-event period. In our situation, many of the firms in
our sample have only a limited pre-event returns history (and some have no pre-event return
history), so the use of an estimation window that includes the post-event period through
December 31, 2012 is indispensable to our analysis.
We use the results of running Equation (4) separately for each firm to compute (for each
firm i) a predicted return on each day of the full event window (February 29 to April 9, 2012).
We then subtract this predicted return from the actual return on each day of the full event
window to obtain the abnormal return ($AR_{it}$) for each firm $i$ on each of these days:
$AR_{it} = R_{it} - P_{it}$  (5)
where $P_{it}$ is the predicted return for firm $i$ (i.e. $P_{it} = \hat{\alpha}_i + \hat{\beta}_i M_t$, where $\hat{\alpha}_i$ and $\hat{\beta}_i$ are the estimated coefficients from the regression in Equation (4) for firm $i$). These abnormal returns are then used
to compute cumulative abnormal returns (CARs) for each firm for the full event window and for
various relevant shorter windows. For firm i:
!"#! = ! !"!" (6)
where the abnormal returns (ARit) for firm i are summed over each of the relevant intervals.
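To make the mechanics of Equations (4)-(6) concrete, the following is a minimal sketch in Python (using pandas and statsmodels); the function and variable names are illustrative and not part of the original analysis:

```python
# A minimal sketch of the market-model abnormal-return pipeline
# (Equations 4-6). All names here are illustrative: 'firm' and 'market'
# are daily return Series indexed by date, and the two index arguments
# select the estimation and event windows described in the text.
import pandas as pd
import statsmodels.api as sm

def market_model_car(firm: pd.Series, market: pd.Series,
                     est_idx, event_idx) -> float:
    # Eq. (4): R_it = alpha_i + beta_i * M_t + e_it, estimated by OLS
    X = sm.add_constant(market.loc[est_idx].rename("M"))
    fit = sm.OLS(firm.loc[est_idx], X, missing="drop").fit()
    alpha, beta = fit.params["const"], fit.params["M"]
    # Eq. (5): AR_it = R_it - P_it, with P_it = alpha_hat + beta_hat * M_t
    ar = firm.loc[event_idx] - (alpha + beta * market.loc[event_idx])
    # Eq. (6): CAR_i = sum of the abnormal returns over the event window
    return ar.sum()
```

Per the text, `est_idx` would cover the firm's first CRSP date through February 28, 2012 plus April 10 through December 31, 2012, and `event_idx` would cover the full event window or one of the shorter sub-windows.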
**4.4) The Fama-French and Carhart Four-Factor Model**
A widely used set of alternatives to the market model is based on the Capital Asset
Pricing Model (CAPM), which posits that $R_{it}$ depends on the difference between the market return ($M_t$) and the risk-free rate of return (denoted $F_t$) on day $t$. To improve the ability of the model to predict returns, Fama and French (1993) added two factors to the CAPM – a "small minus big" factor ($SMB_t$) that represents the difference between returns on day $t$ of stocks with a small market capitalization and those of stocks with a large market capitalization, and a "high minus low" ($HML_t$) factor that represents the difference between returns on day $t$ of stocks with a high book-to-market ratio and those of stocks with a low book-to-market ratio. Carhart (1997) further augmented the model by introducing an "up minus down" momentum factor ($UMD_t$) that represents the difference between returns on day $t$ of stocks that have increased in value over the past year and those of stocks that have decreased in value over the past year.
This four-factor model, which is now widely used in the literature, can be represented as follows (see e.g. Kothari and Warner, 2007, p. 25), using the notation introduced above:
$R_{it} = \alpha_i + \beta_i (M_t - F_t) + \gamma_i SMB_t + \delta_i HML_t + \eta_i UMD_t + \varepsilon_{it}$  (7)
We use the results of running Equation (7) separately for each firm to compute (for each firm i) a
predicted return on each day of the full event window. We then subtract this predicted return
from the actual return to obtain Fama-French abnormal returns and CARs, in a manner
analogous to that shown in Equations (5) and (6) above.
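A corresponding sketch for the four-factor computation follows, under the assumption that daily factor series are available (e.g. from Kenneth French's data library); the column layout is hypothetical:

```python
# A hedged sketch of the four-factor computation (Eq. 7). 'factors' is
# assumed to hold daily columns for the market excess return (M_t - F_t),
# SMB_t, HML_t and UMD_t; none of these names come from the paper itself.
import pandas as pd
import statsmodels.api as sm

def four_factor_ar(firm: pd.Series, factors: pd.DataFrame,
                   est_idx, event_idx) -> pd.Series:
    # Fit Eq. (7) by OLS over the estimation window
    X = sm.add_constant(factors.loc[est_idx])
    fit = sm.OLS(firm.loc[est_idx], X, missing="drop").fit()
    # Predicted return over the event window, then the abnormal return
    pred = fit.params["const"] + factors.loc[event_idx].dot(fit.params.drop("const"))
    return firm.loc[event_idx] - pred  # daily Fama-French abnormal returns
```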
**4.5) Regression Analysis**
The central empirical hypothesis of this paper concerns whether the CARs for the
treatment firms differ from those for the control firms during the windows defined by crucial
legislative events in the history of the JOBS Act. To formally test this hypothesis, we use a
regression framework to test whether the CARs for the treatment firms are significantly different
from those for the control group of firms. The basic regression model is:
!"#! = ! + !!"#! + !! (8)
where EGCi is an indicator variable that is equal to 1 if firm i conducted its IPO after December
8, 2011, and had less than $1 billion of revenue in its most recently completed fiscal year (the
primary criteria for EGC status), and is equal to zero otherwise.
Augmented with various control variables, the regression model is:
!"#! = ! + !!"#! + !"#$! + !"#$%! + !!! + !! (9)
where:
$REV_i$ is firm $i$'s revenue in its most recently completed fiscal year (typically fiscal year 2011, but defined taking into account firm $i$'s own fiscal year end-date, as described above)
$DAYS_i$ is the number of trading days since firm $i$'s IPO, calculated at the beginning of the event window to which $CAR_i$ pertains.[44]
$\mathbf{X}_i$ is a vector of additional control variables from Compustat. These include total assets (Compustat variable AT), long-term debt (Compustat variable DLTT), earnings before interest, taxes, depreciation and amortization (Compustat variable EBITDA), and
44 For example, for the full event window, this would be the number of trading days from firm $i$'s IPO date to February 29; for the March 14-16 event window, this would be the number of trading days from firm $i$'s IPO date to March 14. The IPO date is based on the date the firm first appears in the CRSP data, but the results are robust to using the IPODATE variable from Compustat and hand-collected IPO dates from the SEC website.
research and development (R&D) expenditures (Compustat variable XRD) for fiscal year
2011. R&D expenditures are defined such that missing values are set to zero.
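As a hedged illustration, the cross-sectional regressions in Equations (8) and (9) could be run with the statsmodels formula API along the following lines; `df` and its column names are hypothetical:

```python
# A sketch of the cross-sectional regressions (Eqs. 8 and 9) via the
# statsmodels formula API. 'df' is assumed to hold one row per firm,
# with its CAR over the relevant window, the EGC indicator, and the
# Compustat controls named in the text.
import pandas as pd
import statsmodels.formula.api as smf

def egc_regressions(df: pd.DataFrame):
    eq8 = smf.ols("CAR ~ EGC", data=df).fit()                      # Eq. (8)
    eq9 = smf.ols("CAR ~ EGC + REV + DAYS + AT + DLTT + EBITDA + XRD",
                  data=df).fit()                                   # Eq. (9)
    return eq8, eq9
```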
We also use a number of other variables for additional robustness checks. These include
the Compustat variables listed above for fiscal year 2012 (although there is a significant number
of missing values for these), and the Compustat variable reporting market value (MKVALT) for
fiscal year 2012. We also use a similar set of Compustat quarterly variables for the first quarter
of 2012. Firms’ public float (which is important in defining accelerated filers) is hand-collected
from SEC filings for fiscal year 2012.[45]
Before proceeding with the analysis, it is important to check whether the treatment and
control groups appear to be comparable in terms of the various firm characteristics represented
by the control variables. Table 2 reports descriptive statistics for the control variables used in the
regression analysis and in robustness checks, separately for the treatment firms and the control
firms. The set of treatment firms here consists of those that had completed IPOs before March
14, 2012, to correspond to the sample used in the regression analysis. On the whole, the two
groups look very similar along these dimensions. In particular, the crucial variable for
determining EGC status (revenues in the most recently completed fiscal year) is very similar
across the two groups. Many of the variables, such as earnings, are remarkably similar across
treatment and control firms. While there are some differences, there is nothing to indicate that
the treatment and control firms are of substantially different size, or have other substantially
divergent characteristics.[46] The exception, of course, is the number of trading days from a firm’s
IPO to March 14: this is approximately 31 days on average for the treatment firms and
approximately 122 days on average for the control firms. This difference, however, is
unavoidable given the construction of these groups, and the limitations of the quasi-experiment
that Congress has provided.
**5) Results**
**5.1) Comparing Abnormal Returns for Treatment and Control Firms**
45 Market value and public float are not meaningful for many of the treatment firms in 2011, as they were not
publicly traded for most or all of that year.
46 Formal t-tests show that the differences in the means of these variables across the treatment and control groups are
statistically insignificant, except for the difference in the number of trading days since a firm’s IPO.
Having obtained the daily abnormal returns for each firm, a first step in the analysis is to
compare the CARs over this period for the treatment and control firms. Table 3 reports the
average CARs for the treatment and control firms for the full event window and for six
potentially relevant shorter windows. The first of these shorter windows is around the House
Committee report of March 1 and spans February 29 to March 2. The second window extends
the first one to encompass the entire period of House deliberation and the March 8 vote
(February 29 to March 9). The third is around the March 15 event that signaled prioritization of
the bill in the Senate (March 14-16). The fourth window extends this to the March 22 Senate
vote (March 14-23). The fifth window is around the March 27 House vote on the amended
Senate bill (March 26-28). The final window is around the President’s signature (April 4-9).
The third column of Table 3 reports the mean CAR among treatment firms, the standard
error, and the number of firms in the group for each of these windows.[47] The CARs reported in
Table 3 are obtained using the market model, but the patterns are very similar for the Fama-French CARs (with the partial exception of the March 1 event, as discussed below). The fourth
column of Table 3 reports corresponding values for the control firms. The final column reports
whether the differences between the CARs for the treatment and control firms are statistically
significant. This is determined using a regression similar to that in Equation (8), in which the
CARs for both groups of firms are regressed on an indicator variable for EGC status. However, a
series of t-tests with unequal variances gives qualitatively similar results.
If we were to take the event study results over the full event window at face value, it
would appear that there was a large positive and statistically significant CAR for the treatment
firms. However, the control firms also experienced a large CAR over this period (albeit one that
is not statistically significant). The difference between the CARs for the treatment and control
groups is not statistically significant. This may be due to the length of the window (especially
given the relatively small number of affected firms), and because the full event window
potentially dilutes the effect by including many events that may not have conveyed any
information to market participants. Thus, we focus on the shorter windows defined above.
The central result that emerges from Table 3 is the importance of the March 15 event,
when the Senate Majority Leader signaled the importance of the bill and its high priority. As
47 Mechanically, the mean CAR and standard error are obtained by regressing the CARs for the treatment firms on a
constant.
may be expected a priori, there is a substantial abnormal return for the treatment firms (of about
3.5%). This is statistically significant, and is also significantly higher than the abnormal return
experienced by control firms. This is the only event to give rise to a statistically significant
difference in abnormal returns between the treatment and control groups (and, as discussed
below, March 15 is the only date anywhere within the full event window on which there is a
statistically significant difference between the treatment and control firms). The March 1 event
represents a partial exception, in that the treatment firms experienced an abnormal return that is
of borderline statistical significance. The difference between the treatment and control firm
market model CARs is statistically significant. However, this difference is insignificant using
Fama-French CARs (and is not robust to the inclusion of even a minimal set of controls in a
regression framework). Thus, we treat the March 1 outcome as being statistically insignificant
(see Section 5.6 below for further discussion).
When the March 15 window is extended to encompass the Senate deliberations and vote
(March 14-23), the CAR for the treatment firms remains significant. However, it is no longer
significantly different from the CAR for the control firms. This suggests that the impact of the
Senate deliberations was concentrated immediately around the March 15 event. The period of
House deliberation (February 29 to March 9) gave rise to a higher CAR for the treatment firms,
but this CAR is not statistically significantly different from zero, and is not statistically
significantly different from the CAR experienced by the control firms over that period. The
House vote on the amended Senate bill (March 26-28) gave rise to a higher CAR for the
treatment firms. However, this CAR is not statistically significantly different from zero, and is
not statistically significantly different from the CAR experienced by the control firms over that
period. Finally, the President was widely viewed as being favorable to the bill, and so it is not
surprising that the abnormal returns for the treatment firms around the Presidential signature are
essentially zero, and statistically insignificant.
While this is not shown in Table 3, we also conduct the same analysis for all other dates
within the full window (February 29 to April 9). For the “nonevent” dates (on which no new
information about the JOBS bill appeared), this serves as a placebo test to determine whether
there were significant differences between the treatment and control firms for reasons unrelated
to the JOBS Act. This analysis reinforces the basic conclusion that the only statistically
significant difference between these two groups of firms occurs around March 15. The two
groups of firms both experience essentially zero abnormal returns on most nonevent days (as
well as on many “event” days), and the difference between their abnormal returns is not
statistically significant on any nonevent day. In particular, there is no preexisting trend or pattern
indicating higher abnormal returns for EGCs in the days immediately prior to the March 14-16
window. Around March 12, there is a quantitatively large negative CAR for EGCs. While there
was widespread expression of opposition to the JOBS bill around this time, there were no
legislative events. Thus, we are cautious about interpreting this negative CAR as being related to
the JOBS bill; in any case, the difference between the CARs for the treatment and control firms
is not statistically significant.
As all EGCs experience a given legislative event on the same day, a potential problem for
inference is the possible cross-correlation of returns across EGCs on the event dates. A common
approach to addressing this potential problem is to aggregate the sample firms into a single
portfolio and to estimate the portfolio CARs around the event dates (see e.g. Kothari and Warner,
2007). This procedure renders moot any cross-correlation among the returns of different firms.
We thus aggregate all of the EGCs in our sample into an “EGC portfolio” and compute its CAR
around March 15. This portfolio experiences a 4.2% CAR over March 14-16, and this CAR is
statistically significant (the test statistic is 2.22). Another approach to addressing cross-correlation and other potential problems with conventional standard errors is to use bootstrapping
(Kothari and Warner, 2007). Inferences using bootstrapped standard errors are very similar to
those using the conventional standard errors reported in Table 3.
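A rough sketch of one such bootstrap follows (resampling firms with replacement and re-estimating the EGC coefficient on each draw); this is a generic scheme for illustration, not necessarily the exact procedure behind the reported inferences:

```python
# Bootstrapped standard error for the EGC coefficient. Firms (rows) are
# resampled with replacement, which keeps each firm's CAR intact and is
# one simple way to sidestep cross-correlation concerns. 'df' is the
# hypothetical per-firm frame from the earlier sketch.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def bootstrap_egc_se(df: pd.DataFrame, n_boot: int = 1000, seed: int = 0) -> float:
    rng = np.random.default_rng(seed)
    coefs = []
    for _ in range(n_boot):
        resample = df.sample(n=len(df), replace=True, random_state=rng)
        coefs.append(smf.ols("CAR ~ EGC", data=resample).fit().params["EGC"])
    return float(np.std(coefs))  # bootstrapped standard error
```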
Overall, the results in Table 3 confirm the a priori expectation of the importance of the
March 15 event, and reflect the comparative lack of importance of the various other events (and
of the nonevent days within the full window). Thus, the regression analysis focuses on the CARs
over the March 14-16 window, as described in the next subsection.
**5.2) Basic Regression Results**
The results from the regression in Equation (8), for the market model CAR over the
March 14-16 window, are reported in Column 1 of Table 4. The indicator for EGC status is
positive and significant, confirming that the treatment firms experienced a significantly higher
CAR (of close to 4%) over this window than did the control firms. The results are very similar
when using the Fama-French CARs, as reported in Column 2 of Table 4 (where the use of CARs
based on Equation (7) implicitly controls for differential returns over the event window by size,
book-to-market ratio and momentum).
It is possible that the shorter period since the IPO date for the treatment firms may bias
the results, as might differences in firm size. Column 3 of Table 4 reports the results when two
control variables – revenue in the most recently completed fiscal year and trading days since the
firm’s IPO – are added to the regression model. To further mitigate any bias that may be due to
differential post-IPO returns behavior, Column 3 of Table 4 excludes firms with an IPO date one
month or less prior to the event window (i.e. all firms with IPOs on February 15 or later are
excluded). This entails omitting 6 firms, but the results shown in Column 3 are very similar to
the baseline results.
Column 4 of Table 4 reports the results of a regression corresponding to Equation (9).
This includes a wider set of controls, including the Compustat variables total assets, long-term
debt, earnings (EBITDA) and R&D expenditures for fiscal year 2011 (as well as revenues and
trading days since IPO). Once again, the results are very similar to the baseline results. They are
also very similar when similar variables from the Compustat quarterly data for the first quarter of
2012 are used instead (these results are not reported for reasons of space). Another specification
involves adding the Compustat variables total assets, long-term debt, earnings (EBITDA) and
R&D expenditures for fiscal year 2012, in addition to the same variables for fiscal year 2011
(and revenues and trading days since IPO). The fiscal year 2012 variables would not have been
known to market participants at the time of the legislative events we examine. However,
including both the 2011 and 2012 variables provides a flexible specification of changes in these
variables that may have been anticipated by market participants and thus could potentially affect
the abnormal returns. Missing values in Compustat for the 2012 variables leads to a substantial
reduction in sample size, but the EGC variable remains significant (these results are also not
reported for reasons of space).
As previously discussed, all EGCs experience a given legislative event (such as the
March 15 developments in the Senate) on the same day. Thus, a potential problem with inference
using regression specifications such as Equations (8) and (9) is that the standard errors may be
contemporaneously correlated across firms (e.g. Salinger, 1992). Assuming that such correlation
is stronger within industries, one possible approach to addressing this issue is to cluster the
standard errors at the industry level. We use 2-digit Standard Industrial Classification (SIC)
industries, obtained from Compustat and augmented with hand-collected SIC codes from the
SEC’s EDGAR website. The results in Table 4 are robust to clustering standard errors at the 2
digit level (these results are also not reported for reasons of space). Unfortunately, due to the
small sample size, it is not possible to use a finer degree of disaggregation of industries than the
2-digit level.[48]
It is also possible that abnormal returns over the event window differ across industries for
reasons unrelated to the JOBS Act. Thus, we use these 2-digit SIC codes to create industry fixed
effects to take account of this possibility. Column 1 of Table 5 reports the results of a
regression corresponding to Equation (8), augmented with industry effects at the 2-digit level. As
this specification restricts the estimation to within-industry variation, the effective sample size is
substantially reduced (there are 23 industry clusters among the 60 firms). Nonetheless, the basic
result is robust to the inclusion of industry effects. When industry effects are combined with an
extensive set of control variables, however, the EGC coefficient’s significance drops away. We
attribute this not to the absence of an effect, but to the very limited effective sample size in
specifications of this type.
**5.3) An Alternative Test**
The main analysis uses firms with pre-December 8 IPOs as the control group. An
alternative control group consists of the large firms that conducted IPOs after December 8. Using
this control group potentially controls better for immediate post-IPO effects, since the control
firms have very similar IPO dates to the treatment firms. However, it may control less well for
size and associated characteristics, if the returns experienced by firms depend on size. As
foreshadowed earlier, the problem with this control group is the small number of non-EGCs that
conducted IPOs over the relevant period. Five such firms conducted IPOs after December 8, only
2 of which conducted IPOs prior to the events in the Senate on March 15.
Nonetheless, if we use these 2 large firms as the control group, the basic result is robust.
Column 2 of Table 5 reports the results of a regression analogous to that in Equation (8), but
with the sample consisting of treatment firms and the 2 large firms in the alternative control
48 The small sample size also limits the scope for implementing other cross-sectional tests. For instance, if regulatory
burdens are more severe for smaller firms, we might expect that the EGC effect would be larger for smaller firms.
However, interactions between the EGC dummy and various size variables are statistically insignificant. Whether
the EGC effect is larger for firms with stronger governance may help shed light on whether disclosure and
governance are substitutes or complements. However, interactions between the EGC dummy and proxies for
governance (such as institutional ownership) are statistically insignificant.
group (with the pre-December 8 control group omitted). The coefficient on the EGC variable is
significant and very similar in magnitude to that in the baseline results. Of course, this result
should be treated with great caution, given the small size of the control group. Nonetheless, it
provides some evidence that the higher CARs for EGCs over March 14-16 are not due to
confounding post-IPO returns behavior.
**5.4) Placebo Tests**
A potential concern with the baseline results is that differences in abnormal returns across
the treatment and control firms are driven by their (slightly) different IPO dates, rather than by
investors’ reactions to the JOBS Act. A general approach to addressing these types of concerns is
to use placebo tests - in particular, false experiments in which the ostensible treatment group
conducted IPOs over the same (post-December 8) period as the EGCs, but were not subject to
the JOBS Act provisions. If these firms also experience higher abnormal returns over March 14-16 than do the control firms, then the baseline results cannot be attributed to the JOBS Act.
There are two potential placebo groups in our data, but unfortunately both are quite small
in size. The first is the set of large firms (with revenues exceeding $1 billion) that conducted
IPOs after December 8. As discussed above, there are only two of these firms with usable data.
Column 3 of Table 5 reports the results from a regression similar to Equation (8) in which the
“treatment” group consists of the 2 large post-December 8 IPO firms and the control group is the
standard one used in the baseline results (i.e. firms with pre-December 8 IPOs and less than $1
billion in revenue). The coefficient on the indicator variable for the “treatment” firms is not only
statistically insignificant (which may simply reflect the small sample size) and negative in sign,
but also small in magnitude. The 95% confidence interval is [-0.0240, 0.0158], implying that
we can rule out a positive CAR of more than about 1.6%. This is substantially smaller than the
effect found in the baseline results.[49]
A second potential placebo group consists of investment companies (typically, closed-end funds) that conducted IPOs over the post-December 8 period. These funds are subject to the
Investment Company Act of 1940, and this different regulatory regime implies that they were
49 It is possible that the small firms in our control group form a poor control for these large post-December 8 firms,
for instance, if abnormal returns are driven by firm size or associated characteristics. An alternative placebo test is
thus to use as the control group the large firms (with revenue above $1 billion) that conducted IPOs prior to
December 8. There are only 2 such firms in our dataset, however, so regression analysis would not be meaningful.
Instead, we examine the mean CARs for these two groups of firms. The large post-December 8 firms (the placebo
“treatment” group) experienced negative and statistically insignificant abnormal returns around March 15. There is
no indication that this placebo treatment group experienced CARs comparable to those of the true treatment group.
largely unaffected by the JOBS Act. However, they may be subject to some of the same effects
associated with “newness” (such as investor sentiment) as the EGCs. Unfortunately, there are
only 2 such funds that conducted IPOs over the relevant period. Column 4 of Table 5 reports the
results of a regression similar to Equation (8) in which the “treatment” group consists of the 2
post-December 8 IPO funds and the control group is the standard one used in the baseline results
(i.e. firms with pre-December 8 IPOs and less than $1 billion in revenue). Again, the coefficient
on the indicator variable for the “treatment” firms is not only statistically insignificant (which
may simply reflect the small sample size) and negative in sign, but also small in magnitude. The
95% confidence interval is [-0.0259, 0.0152], implying that we can rule out a positive CAR of
more than about 1.5%. This is substantially smaller than the effect found in the baseline results.
Taken together, these placebo tests suggest that the baseline results are not driven simply by
differences in IPO dates.
**5.5) Interpreting the Magnitude of the Effect**
In combination with the CAR for treatment firms reported in Table 3, the coefficients on
the EGC indicators in Columns 1 and 2 of Table 4 entail that the treatment firms experienced a
positive abnormal return of between 3% and 4% as a result of the March 15 event that increased
the likelihood of the enactment of the JOBS Act. The mean market value for EGCs in our sample
is $760 million (as reported in Table 2), while the median market value is about $600 million.
Thus, for the median firm, this result implies an increase in market value of over $20 million
around March 15.
To quantify the total change in value associated with the relaxed disclosure and
compliance obligations of the JOBS Act, we need to know the change around March 15 in
investors’ perception of the probability of the enactment of the JOBS bill. While this is obviously
impossible to observe directly, the nature of the events surrounding the JOBS bill provides a
means of inferring this change in probability, under certain additional assumptions. Suppose that
investors’ estimate of the total treatment effect associated with the JOBS Act remained fixed
over the full event window (February 29 to April 9). As a first step, note that events subsequent
to March 15 did not give rise to any statistically significant abnormal returns for EGCs relative to
control firms (see Table 3 and the discussion in Section 5.1 above). Thus, the perceived
probability of enactment after March 15 can be presumed to be 1, as otherwise there would have
been some further subsequent updating of beliefs.
The probability of enactment combines two conceptually separate notions – the
probability of the bill’s passage, and the probability that its provisions would be retroactively
applied to our treatment firms. The latter probability can reasonably be assumed to have been
zero prior to March 1 (as there was no public announcement of the December 8 cutoff before
March 1) and to have increased to 1 on March 1 (as all subsequent versions of the bill contained
the partial retroactivity provision). Prior to March 1, investors held some belief about the
probability of enactment, but this would not have been reflected in their valuation of our
treatment firms, as there was no indication at that time that these firms would become subject to
the new legal regime. The market reaction around March 1, however, would have capitalized this
preexisting probability of enactment (along with any increase in that probability due to the House
Committee report) into the value of our treatment firms. Thus, this market reaction allows us to
infer investors’ perceived probability of enactment.
There is a 2% abnormal return for treatment firms around March 1 (see Table 3).
However, as discussed in Section 5.1, this is only of borderline statistical significance, and is not
robustly significantly different from the returns for control firms. If we thus view the March 1
CAR for EGCs as indistinguishable from zero, then the aggregate increase in EGCs’ value over
the full period is simply the March 15 effect (about 3.5% in Table 3). Moreover, a zero March 1
CAR implies that the perceived probability of the JOBS bill’s enactment was zero at that time.[50]
Therefore, this probability can be inferred to have increased from zero to 1 on March 15, with
the concomitant implication that the total change in value associated with the relaxed disclosure
and compliance obligations of the JOBS Act is equal to the March 15 effect (i.e. around $20
million for the median EGC). Although there may be reason to view the March 1 CARs as being
effectively zero,[51] if we were to adopt the somewhat less conservative position that the March 1
50 Let $p_E$ be the probability of enactment, $p_R$ be the probability of retroactivity, and $X$ be the aggregate treatment effect of the JOBS Act. On March 1, $p_E \cdot \Delta p_R \cdot X = 0$ (note that this is $p_E$, rather than the change in $p_E$, because the entire prior probability of enactment is reflected in treatment firms' value upon the announcement that they will become subject to the JOBS bill provisions). Then, assuming that $\Delta p_R = 1$, and for any nonzero $X$, it follows that $p_E = 0$.
51 Given the President’s support for legislation of this type, and the overwhelming popularity of the JOBS bill in the
House, it may seem surprising that investors would have perceived a very low or zero likelihood of enactment prior
to March 1. This may not be unreasonable, however, given the prospect of opposition in the Senate, as well as general (and perhaps, at least in _ex ante_ terms, well-founded) skepticism about the possibility of any legislative action, however popular the cause, in an era of divided partisan control of Congress.
effect was nonzero, then the total impact of the JOBS Act would be about $33 million for the
median EGC.[52]
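To make the arithmetic of this less conservative case explicit (it is stated in footnote 52), the two market reactions pin down both the total effect and the prior probability under the stated assumptions:

```latex
\underbrace{p_E X}_{\text{March 1 CAR}\,\approx\,2\%}
+ \underbrace{(1-p_E)X}_{\text{March 15 CAR}\,\approx\,3.5\%}
= X \approx 5.5\%
\;\Rightarrow\;
p_E \approx \tfrac{2}{5.5} \approx 0.36,
\qquad
5.5\% \times \$600\text{M} \approx \$33\text{M}.
```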
Another important issue that bears on the magnitude of the total change in value
associated with the relaxed disclosure and compliance obligations of the JOBS Act is the elective
nature of EGC status. At the time that we measure market reactions, there was no information
about which EGC-eligible firms would choose to opt in to some or all of the JOBS Act
provisions. However, it can be presumed that investors held some belief about the average
probability of a firm choosing to take advantage of the new regime. To address this issue, we
hand-collect data from firms’ SEC filings about their SOX compliance status (as the SOX
provisions were arguably the most important among the JOBS Act provisions). Of the 27
treatment firms in our primary empirical tests, we are able to classify 26 using the firms’
disclosures about their SOX compliance status. Of these, 19 are not fully SOX-compliant,
implying that they have elected to make use of the relevant JOBS Act exemptions, and 7 are
fully SOX-compliant (indicating that they have opted out of EGC status for the SOX provisions).
Thus, about three quarters of the treatment firms in our sample opt in to EGC status for the SOX
provisions. If this is representative of a wider pattern of firm choices over other JOBS Act
provisions, and if investors correctly anticipated this fraction, the baseline magnitude derived
above would increase from about $20 million to about $27 million for the median EGC,
discounting for the probability of opting out.[53]
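Restating that adjustment as a calculation (using the observed opt-in rate of 19/26, and assuming investors anticipated it):

```latex
\frac{\$20\text{M}}{19/26} \;\approx\; \$20\text{M} \times 1.37 \;\approx\; \$27\text{M per median EGC}.
```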
**5.6) The Role of SOX Compliance Costs**
One of the potentially most important provisions of the JOBS Act involves the relaxation
of SOX 404(b) compliance obligations. There is a large literature in accounting that analyzes the
compliance costs associated with SOX 404. This literature has found the compliance costs to be
substantial, especially (in relative terms) for smaller firms. Alexander _et al._ (2013) use survey
responses of firms to estimate compliance costs (including additional audit fees and the cost of
52 Using the 2% abnormal return for EGCs around March 1 in Table 3, the total treatment effect would be about
5.5% (the sum of the March 1 and March 15 effects). Investors’ prior perception of the probability of enactment
would be inferred to be about 0.36, with that probability rising to 1 on March 15.
53 If investors could predict which firms would opt in, then we might expect the market reaction to be concentrated
among those firms. It does not appear, however, that the firms that ultimately chose to opt in enjoyed higher CARs
than those that did not. It is possible that this may be because the firms that opted out of EGC status were
substantially smaller than average – if it is the case that compliance costs are more burdensome for smaller firms,
then this is the opposite of the pattern that investors may have anticipated. Thus, investors may not have been able to
predict that these firms would opt out, and the observed market reaction would be averaged across all EGC-eligible
firms.
employees’ time). They find that on average the cost of compliance is $2.3 million per year. This
would amount to about $12 million over the 5-year horizon of the JOBS Act exemption.
However, SOX compliance is likely to involve both fixed costs (for instance, of initially
establishing internal control mechanisms) and variable costs (that are incurred each year that the
firm is in compliance, such as audit fees). The EGCs in our sample went public prior to the
enactment of the JOBS Act, and so would have expected to have to comply with SOX
immediately. Thus, they are likely to have incurred the initial fixed costs of SOX at the time they
went public. Once the JOBS Act was enacted, they could potentially save the variable costs for a
five-year period. Thus, it is the variable rather than fixed costs of SOX compliance that are of
greatest relevance to the effect we find. Grundfest and Bocher (2007) report evidence that the
first-year cost of implementing SOX 404 was approximately $1.5 million for firms with market
capitalization in the same range as that of the median EGC in our sample. This seems to be a
reasonable proxy for the initial setup costs. Subtracting this fixed cost from the approximately
$12 million cost over 5 years implies a variable cost of over $10 million over the 5-year horizon
of the JOBS Act exemption.[54]
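As a back-of-the-envelope restatement of the figures above:

```latex
\underbrace{\$2.3\text{M/yr} \times 5\text{ yrs}}_{\approx\,\$12\text{M total}}
\;-\;
\underbrace{\$1.5\text{M}}_{\text{fixed setup cost}}
\;\approx\; \$10\text{M of avoidable variable cost}.
```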
Hence, it appears that there is a substantial potential cost saving from the JOBS Act
exemption with respect to SOX (of course, the JOBS Act does not exempt firms from all SOX
Section 404 requirements, but the internal control requirements and auditor attestation are often
thought to be particularly burdensome). The size of the effect we find on March 15 is thus of the
same order of magnitude as (albeit larger than) the compliance cost savings from SOX 404(b)
exemption.
To test empirically whether SOX compliance costs play a role in the effect we find, we
use the fact that firms that are classified by the SEC as “nonaccelerated filers” (with a public
float of less than $75 million) were exempt from compliance with the Sarbanes-Oxley internal
control disclosures prior to the JOBS Act. These firms would thus be expected to derive smaller
benefits from EGC status. We use the public float variable (hand-collected from SEC filings) to
54 Our conversations with senior practitioners in corporate and securities laws suggest that the costs of SOX
compliance in the early years after its enactment (to which the Grundfest and Bocher (2007) estimate refers) would
largely have been centered on the setup cost for the first year. This fixed cost component in those early years would
have included a large “learning curve” element. Over time, however, firms and their attorneys became more familiar
with SOX compliance. As a result, the fraction of compliance costs that were incurred at the beginning (e.g. at the
IPO stage) declined. Thus, by the time of the JOBS Act, initial fixed costs are likely to have represented a smaller
fraction of total SOX compliance costs than in earlier years; variable costs would have represented a corresponding
larger fraction of the total cost of SOX.
classify firms as nonaccelerated filers; 4 of the EGCs in our sample have a public float of less
than $75 million. Column 5 of Table 5 reports the results of a regression of the form:
!"#! = ! + !!"#! + ! !"#! ∗!"#! + !!"#! + !! (10)
where NAFi is an indicator variable that is equal to 1 if firm i has a public float of less than $75
million.
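For illustration, Equation (10) maps directly onto an interaction term in the formula interface used in the earlier sketches (again with hypothetical column names):

```python
# A hypothetical sketch of the interaction specification in Eq. (10);
# 'NAF' flags nonaccelerated filers (public float below $75 million),
# and 'df' is the illustrative per-firm frame used above.
import pandas as pd
import statsmodels.formula.api as smf

def naf_interaction(df: pd.DataFrame):
    fit = smf.ols("CAR ~ EGC + EGC:NAF + NAF", data=df).fit()
    # The EGC coefficient is the effect for accelerated filers; the sum
    # EGC + EGC:NAF gives the (smaller) effect for nonaccelerated filers.
    return fit
```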
The effect for EGCs in our sample that are nonaccelerated filers is indeed smaller than
that for other EGCs. The magnitude of the coefficient indicates that the positive effect of the
JOBS Act largely does not apply to nonaccelerated filers. However, the interaction term is not
statistically significant, perhaps because of the small number of nonaccelerated filers in the
sample. Running the basic specification (Equation 8) on a sample that consists only of the
control firms and EGCs that are nonaccelerated filers yields a coefficient on the EGC variable
that is very close to zero (a point estimate of 0.0049) and statistically insignificant (this is not
reported for reasons of space). This suggests that the JOBS Act effect exists only for those EGCs
that were subject to SOX internal control disclosures, although conclusions are necessarily
tentative given the small sample.[55]
**5.7) Tests for Potential Alternative Explanations**
**_5.7.1) Lobbying for the JOBS Act_**
If the partial retroactivity provision of the JOBS Act was the result of lobbying by
specific firms that had already conducted their IPOs after December 8 (or were about to do so),
then it is possible that EGC status is correlated with firms’ benefits from the JOBS Act. In
particular, under the lobbying assumption, firms in the control group (those that conducted IPOs
from July 2011 to December 8, 2011) failed to obtain retroactivity to July 2011, and so might be
presumed to value the JOBS Act less than do the treatment firms (which were successful in
obtaining retroactivity back to December 8). Thus, it is important to test for the possibility that
the retroactivity provision was the result of lobbying. To do so, we collect data on lobbying
activity by EGCs and on political contributions by political action committees (PACs) associated
55 Iliev (2010) exploits the discontinuity in the application of SOX Section 404 at the threshold of a $75 million
public float to analyze the impact of this SOX provision on market value when implementation began in 2004.
Using a regression discontinuity design that compares firms around the $75 million threshold, Iliev (2010) finds that
SOX Section 404 reduced firm value. This suggests that the compliance costs exceed the benefits of this provision,
at least for small firms. This result is quite consistent with our findings regarding the broader set of disclosure and
compliance provisions in the JOBS Act (including the relaxation of SOX Section 404(b)).
with EGCs.[56] Only one EGC reported lobbying for the JOBS Act. A broader group of 6 EGCs
were “politically active” at any time for which data exists – i.e. they either lobbied Congress on
any issue (not necessarily the JOBS Act specifically), or campaign contributions were reported
from associated PACs. Column 1 of Table 6 reports the results of a regression that excludes
these 6 EGCs from the sample. This specification is similar to that in Equation (9), and includes
the set of controls from Column 4 of Table 4. The basic result is robust, suggesting that the
findings are not confounded by lobbying or other political activity by EGCs.
**_5.7.2) Other Confounding Events Involving EGCs, and the Role of Outliers_**
While the EGCs in our sample are chosen based on the partially retroactive application of
the JOBS Act, it is possible that the firms within this treatment group experienced other events
during the window around March 15. To ensure that the results are not due to other potentially
confounding events, we search for news stories mentioning any of the EGCs in our sample over
the March 14-16 period that could potentially affect their share price. These include, for instance,
stories about earnings announcements, press releases about firms’ plans or operations, and the
release of analysts’ forecasts. In all, we find 12 EGCs that were mentioned in news stories in the
relevant period. Column 2 of Table 6 reports the results of a regression that excludes these 12
EGCs. The basic result is robust, suggesting that the findings are not confounded by news stories
reporting information about the EGCs unrelated to the JOBS Act.
The subset of firms mentioned in news stories includes two that are potential outliers,
with particularly large positive abnormal returns. Of course, the robustness check reported above
automatically excludes these firms. In addition, we exclude these two firms alone, and Winsorize
the CARs to address potential outliers. The results are very similar in these additional robustness
checks.
**_5.7.3) An Alternative Interpretation Involving Future Mispricing_**
The basic framework we use to interpret our results, developed in Section 2, emphasizes
the tradeoff between the compliance costs associated with securities regulation and the value to
outside investors of compliance. While this is a very standard conceptual framework, an
alternative approach from the behavioral finance tradition emphasizes instead the possibility of
56 This information is from the Federal Election Commission website and the website opensecrets.org. Note that it is
also possible that firms may exert political influence through their membership of trade associations or industry
lobby groups. However, we focus on independent lobbying by EGCs, as it is unlikely that an industry-wide group
would differentially advance the interests of the EGCs relative to the control firms.
mispricing. In particular, in a framework such as that of Bolton, Scheinkman and Xiong (2006),
incumbent (sophisticated) shareholders value the opportunity to sell in the future to uninformed
noise traders who overvalue the stock. In theory, it is possible that a legal reform that relaxes
mandatory disclosure obligations may increase the likelihood of future mispricing (including
overvaluation) – essentially, it would become easier to generate positive investor sentiment
through selective or misleading disclosures. This would increase incumbent shareholders’ option
value of selling to noise traders in the future.
Observationally, the mispricing theory sketched above is substantially equivalent to our
basic result, in that it would predict an increase in value for EGCs relative to control firms
(which did not experience any change in disclosure obligations). To test whether the evidence is
more consistent with our interpretation or with the mispricing interpretation, we collect data on
analyst coverage from the International Brokers Estimate System (I/B/E/S) database. This
database provides extensive information about analyst estimates. We focus in particular on the
number of analysts following a given firm, and assume that there is no analyst coverage of firms
that do not appear in the I/B/E/S data. The basic idea underlying this test is that mispricing is
more likely to occur among firms with more limited analyst coverage (or none). Thus, the
mispricing story should imply that the EGC effect would be concentrated among firms with less
analyst coverage. This approach is consistent with a substantial literature in finance premised on
the notion that greater analyst coverage is associated with less information asymmetry and
mispricing (e.g. Chang, Dasgupta and Hilary, 2006).
Of the 27 EGCs, we classify 11 as having analyst coverage and 16 as having no analyst
coverage. Column 3 of Table 6 reports the results of a regression where the treatment group
consists only of EGCs without analyst coverage, while Column 4 of Table 6 reports the results of
a regression where the treatment group consists only of EGCs with analyst coverage. The EGC
coefficient is positive and statistically significant for both treatment groups, and moreover is
virtually identical in magnitude. Using the full sample of EGCs and including an interaction
between EGC status and analyst coverage results in the interaction term being statistically
insignificant (this is not reported for reasons of space). Thus, this evidence does not suggest that
the JOBS Act effect is concentrated among EGCs for which mispricing is more likely. Instead, it
appears more consistent with the interpretation we have adopted (based on the framework in
Section 2) rather than with the alternative mispricing interpretation.
**_5.7.4) Other Robustness Checks_**
As part of the IPO process, firm insiders generally agree not to sell more than a specified
number of their shares for a specified period of time (typically, 180 days) following the IPO.
These agreements are known as “lockups.” The empirical literature has found that the end of the
lockup period is associated with an increase in the supply of shares and with a significant
decrease in the share price (e.g. Field and Hanka, 2001). It is possible that our results may be
confounded by the expiration of lockups for the control firms (which may depress their price and
make it appear that the treatment firms’ relative value increases). We thus identify those control
firms with IPO dates approximately 180 days prior to the March 15 window (i.e. an IPO date in
September, 2011). Only one control firm has a September 2011 IPO date; excluding this firm
from the analysis does not affect the results. Thus, it does not appear that our results are
confounded by the expiration of lockups.
The definition of EGCs in the JOBS Act excludes firms that are classified by the SEC as
“large accelerated filers” (with a public float of over $700 million), and also excludes firms that
issue more than $1 billion of nonconvertible debt over a three-year period. One of the EGCs in
our sample has a public float that exceeds $700 million (though it should be borne in mind that
such a firm may still derive benefits from EGC status for a year or so, as large accelerated filer
status is not attained until the firm files reports with the SEC for a year). Omitting the small
number of firms in our sample that are large accelerated filers, or that have high debt levels, does
not affect the results. EGCs may be subject to alternative forms of monitoring (e.g. by creditors)
that make disclosure and SOX compliance less relevant; the exclusion of firms with high debt
levels (and the use of a debt control in Table 4, Column 4) helps to address this possibility.
Foreign private issuers are eligible for EGC status, but may benefit less from it than other firms.
However, excluding the small number of foreign private issuers in our sample does not affect the
basic results.
**6) Discussion and Conclusion**
In this paper, we use an unusual quasi-experimental setting created by the JOBS Act of
2012 to find what is, to the best of our knowledge, the first empirical evidence that “ratcheting”
down securities regulation is associated with a positive market response. However, great care
must be exercised in interpreting these results. First, although market responses may be treated as
indicative of the value that investors place on the reforms, it is not clear that the reforms only
have value to investors of the particular firms subject to the regulatory changes. Reforms could
have effects on other parties who are not accounted for in our tests.[57] A related point is that our
empirical strategy requires measuring these market responses for firms that went public prior to
the enactment of the JOBS Act (and which presumably originally expected to be subject to the
old legal regime). It is possible that the relaxation of disclosure and compliance obligations may
encourage fraudulent issuers to issue securities in the period after enactment. Such an effect, if it
exists, would not be captured in our empirical analysis.
Second, even if we use the market response as the best first approximation of the value of
the reforms, we caution that this should not be interpreted as evidence that mandatory disclosure
is value reducing for investors as a general matter. Moreover, our findings, properly construed,
should not be viewed as being in tension with prior studies finding large, significant and positive
market responses to increases in regulation. These prior studies examine different types of
reforms and have very different baselines. For example, Greenstone _et al. (2006) find large_
positive effects when looking at the extensive reforms enacted in the OTC market in 1964. The
OTC market was fairly lightly regulated prior to the reforms. The 1964 Amendments involved
almost the entire corpus of the SEA being applied to many (but not all) OTC firms. Thus, their
study addressed a situation where a lightly regulated market became much more heavily
regulated. Our study, in contrast, looks at a situation where a particularly heavily regulated
market becomes somewhat less heavily regulated for a subset of firms. For similar reasons, our
results do not call into question the extensive body of cross-country evidence (e.g. La Porta et
al., 2006) finding that stronger securities laws foster stock market development, nor the single-
country studies (e.g. Dharmapala and Khanna, 2013) finding positive effects of corporate
governance reforms on firm value.
Assuming that regulation (like most other things) is subject to diminishing and ultimately
negative returns, it is entirely consistent to find that large increases in regulation (relative to a
low baseline) generate large increases in market value, while small reductions in regulation
(relative to a high baseline) also generate an increase. This simple idea is depicted in Figure 1
(which represents the simple conceptual framework developed in Section 2). Note also that,
57 For instance, Langevoort and Thompson (2013) argue that a persistent theme in the history of securities regulation
is a desire to hold large business enterprises accountable to the general public, in a way that is only tenuously related
to standard notions of investor protection.
while Figure 1 assumes a single dimension of the “strength of regulation,” in reality regulation is
multidimensional. It is entirely possible that different dimensions of regulation (for instance,
financial statement disclosure versus internal control requirements) may have differing impacts
on shareholder value, and this may also help reconcile our findings with those of the previous
literature. Within this context, we interpret our findings as providing quasi-experimental
empirical evidence of the impact of regulation being relaxed when it may have gone beyond the
optimal point for a specific set of firms (EGCs). Against the backdrop of the existing literature,
this is an important and novel result regarding securities regulation in general, as well as being
an important finding about the specific effects of the JOBS Act.
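To state this reconciliation compactly (in our own notation, consistent with Figure 1): if net shareholder value $W(r)$ is single-peaked in the strength of regulation $r$, with an interior optimum $r^{*}$, then

$$
W(r_{\mathrm{low}}+\Delta)-W(r_{\mathrm{low}})>0
\qquad\text{and}\qquad
W(r_{\mathrm{high}}-\delta)-W(r_{\mathrm{high}})>0
$$

whenever $r_{\mathrm{low}}+\Delta \le r^{*} \le r_{\mathrm{high}}-\delta$: a large move up from a low baseline and a small move down from a high baseline both raise value.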
However, there are a number of important limitations to this analysis that should be
emphasized. In general, these stem from the nature of the (presumably unintended) quasi-
experiment that Congress has provided. First, the number of firms affected by the JOBS Act’s
partial retroactivity is small. In itself, this primarily creates a bias against finding any significant
results. While we find a quite robust positive effect notwithstanding this limitation, the small
sample makes it difficult to analyze how the effect varies across subsets of firms. The events that
transpired during the legislative process, while providing some variation in the apparent
probability of enactment, are also less than ideal. For instance, there are no clearly negative
events that reduce the probability of enactment (such as votes against the bill in committee or on
the floor).
As a result of these limitations, we do not have conclusive evidence on which aspect of
the reforms applicable to EGCs might have the greatest impact in generating the positive market
response. The treatment firms in our study do not benefit from the provisions reducing IPO costs
(because their IPOs occurred prior to April 5, 2012), but do benefit from the post-IPO provisions,
including the SOX and accounting-related changes and a few changes in disclosure on executive
compensation. Given that EGC firms that have just completed an IPO often have managers and
owners whose interests are closely aligned, we would not expect that the disclosure costs of
executive compensation would be very great (especially as they would have borne some of them
in the IPO process). This suggests that, on an a priori basis, most of the post-IPO benefits are
likely to center on the SOX and accounting-related changes.
One piece of evidence regarding the importance of the SOX-related provisions comes
from the response of nonaccelerated filers (small firms that were not subject to the relevant SOX
provisions even prior to the JOBS Act). As discussed in Section 5.6, the magnitude of the market
response for nonaccelerated filers is essentially zero, suggesting that they derived little benefit
from the JOBS Act. However, caution must be exercised in interpreting this result, as there are
few nonaccelerated filers in the EGC sample, and the difference between nonaccelerated filers
and other EGCs is not statistically significant.
The magnitude of the positive reaction that we find for EGCs around the March 15 event
is of the same order of magnitude as, albeit larger than, the estimated savings in Section 404 SOX
compliance costs (attributable to the internal control requirements). It is not necessarily
surprising that the magnitude would be larger than can be directly attributed to SOX 404, as
EGCs also benefited from other accounting-related changes, such as not being subject to audit
firm rotation or auditor discussion and analysis requirements,[58] not being subject to any future
rules of the PCAOB (unless the SEC explicitly subjects EGCs to them),[59] and receiving a longer
transition period to comply with new audit standards.[60] There are also many aspects of the
internal control requirements, such as their effects on risk-taking, employee time and effort, and
litigation risk, that are difficult to quantify and may not be fully captured in existing estimates of
compliance costs.
This paper represents a first attempt at the empirical analysis of the JOBS Act. There are
many potential avenues for further research that may clarify some of these unresolved issues. For
example, EGC status is elective for firms meeting the revenue and other criteria. It may be
possible to analyze the market reactions to firms electing to be treated as EGCs to shed more
light on the impact of the relaxation of disclosure and compliance obligations, as more data
becomes available over time.
The effect of mandatory securities regulation on firm value has been a longstanding
concern across law, economics and finance. However, it has proved challenging to find quasi-
58 See §104, JOBS Act 2012. The JOBS Act also relaxed compensation disclosure and analysis (CD&A)
requirements by permitting an EGC to be considered a “smaller reporting company” for purposes of satisfying the
executive compensation disclosure requirements of Item 402 of Regulation S-K (see §102(c), JOBS Act 2012). This
in essence means EGCs will (i) not have to file a CD&A, (ii) disclose compensation only for the CEO and two other
named officers, (iii) disclose compensation information for the current fiscal year only, and (iv) not have to include
certain tables. This may arguably have disproportionately benefited technology firms. See COMPENSIA,
_Executive Pay Disclosure Trends of Emerging Growth Companies_, THOUGHTFUL PAY ALERT, May 3, 2013. Available at:
http://www.compensia.com/tpa_050313_emerging_growth.html. However, while the interaction between the EGC
dummy and an indicator for technology firms is positive (suggesting a larger benefit for technology firms), it is not
statistically significant.
59 See §102, JOBS Act 2012.
60 See §104, JOBS Act 2012.
experimental variation in the application of securities regulation, for example because securities
law typically applies to all firms listed in a given jurisdiction. The JOBS Act of 2012 involved a
limited degree of retroactivity that provides a rare quasi-experimental setting in which to address
this question. Although this limited retroactivity applies to a relatively small number of firms, it
provides an important source of evidence on the impact of the JOBS Act not just for these firms,
but for all those firms that will be subject to the new regime in the future. Our results also shed
light on the costs and benefits of mandatory securities regulation more generally.
**References**
Alexander, C. R., S.W. Bauguess, G. Bernile, Y.H.A. Lee, and J. Marietta-Westberg (2013) “The
Economic Effects of SOX Section 404 Compliance: A Corporate Insider Perspective”
_Journal of Accounting and Economics, 56, 267-290._
Bartlett III, R. P. (2009) “Going Private but Staying Public: Reexamining the Effect of Sarbanes-
Oxley on Firms’ Going-Private Decisions” University of Chicago Law Review, 76, 7-44.
Berdejo, C. (2014) “Going Public after the JOBS Act” Ohio State Law Journal, forthcoming.
Benston, G. J. (1973) “Required Disclosure and the Stock Market: An Evaluation of the
Securities Exchange Act of 1934” American Economic Review, 63, 132–155.
Bhagat, S. and R. Romano (2002) “Event Studies and the Law: Part I: Technique and
Corporate Litigation” American Law and Economics Review, 4, 141-167.
Black, B. S., H. Jang and W. Kim (2006) “Does Corporate Governance Affect Firms’
Market Values? Evidence from Korea” Journal of Law, Economics, & Organization, 22,
366-413.
Bolton, P., Scheinkman, J., and W. Xiong (2006) “Executive Compensation and Short-termist
Behaviour in Speculative Markets” Review of Economic Studies, 73, 577-610.
Bushee, B. J., and C. Leuz (2005) “Economic Consequences of SEC Disclosure Regulation:
Evidence from the OTC Bulletin Board,” Journal of Accounting _and Economics, 39,_
233–264.
Carhart, M. M. (1997) “On Persistence in Mutual Fund Performance” Journal of Finance, 52,
57-82.
Chang, X., S. Dasgupta and G. Hilary (2006) “Analyst Coverage and Financing Decisions”
_Journal of Finance, 61, 3009-3048._
Chhaochharia, V. and Y. Grinstein (2007) “Corporate Governance and Firm Value: The
Impact of the 2002 Governance Rules” Journal of Finance, 62, 1789-1825.
Choi, S. J. and A. C. Pritchard (2012) Securities Regulation: Cases and Analysis 3rd ed.,
Foundation Press.
Coates, J. C., IV and S. Srinivasan (2013) “SOX After Ten Years: A Multidisciplinary Review”
Working Paper.
Coffee, J. C. (1984) “Market Failure and the Economic Case for a Mandatory Disclosure
System,” Virginia Law Review, 70, 717–753.
Dharmapala, D. and V. S. Khanna (2013) “Corporate Governance, Enforcement and Firm Value:
Evidence from India” Journal of Law, Economics, & Organization, 29, 1056-1084.
Easterbrook, F. H. and D. R. Fischel (1984) “Mandatory Disclosure and the Protection of
Investors,” Virginia Law Review, 70, 669–715.
Fama, E. F., and K. R. French (1993) “Common Risk Factors in the Returns on Stocks and
Bonds” Journal of Financial Economics, 33, 3-56.
Ferrell, A. (2007) “Mandatory Disclosure and Stock Returns: Evidence from the Over-the-
Counter Market” Journal of Legal Studies, 36, 215-251.
Field, L. C. and G. Hanka (2001) “The Expiration of IPO Share Lockups” Journal of Finance,
56, 471-500.
Friend, I. and E. S. Herman (1964) “The SEC through a Glass Darkly,” Journal of Business, 37,
382–405.
Greenstone, M., P. Oyer and A. Vissing-Jorgensen (2006) “Mandated Disclosure, Stock Returns,
and the 1964 Securities Acts Amendments” Quarterly Journal of Economics, 121, 399-460.
Grundfest, J. A. and S. E. Bocher (2007) “Fixing 404” Michigan Law Review, 105, 1643-1676.
Guttentag, M. D. (2013) “Patching a Hole in the JOBS Act: How and Why to Rewrite the Rules
that Require Firms to Make Periodic Disclosures” Indiana Law Journal, 88, 151-212.
Iliev, P. (2010) “The Effect of SOX Section 404: Costs, Earnings Quality, and Stock Prices”
_Journal of Finance, 65, 1163-1196._
Kamar, E., E. Talley and P. Karaca-Mandic (2009) “Going-Private Decisions and the Sarbanes-
Oxley Act of 2002: A Cross-Country Analysis” _Journal of Law, Economics, &_
_Organization, 25, 107-133._
Kothari, S. P. and J. B. Warner (2007) “Econometrics of Event Studies” in B. Espen Eckbo (ed.)
_Handbook of Corporate Finance, Vol. 1, Elsevier, 3-36._
Langevoort, D. C. and R. B. Thompson (2013) “‘Publicness’ in Contemporary Securities
Regulation After the JOBS Act” Georgetown Law Journal, 101, 337-386.
La Porta, R., F. Lopez de Silanes and A. Shleifer (2006) “What Works in Securities Laws?”
_Journal of Finance, 61, 1-32._
Litvak, K. (2007) “The Effect of the Sarbanes-Oxley Act on non-US Companies Cross-Listed
in the US” Journal of Corporate Finance, 13, 195-228.
Loughran, T. and J. Ritter (2004) “Why has IPO Underpricing Changed Over Time?” Financial
_Management, 33, 5-37._
Ljungqvist, A. (2008) “IPO Underpricing” in B. Espen Eckbo (ed.) Handbook of Empirical
_Corporate Finance, Vol. 1, Elsevier, 375-422._
Mahoney, P. G. (1995) “Mandatory Disclosure as a Solution to Agency Problems,” University
_of Chicago Law Review, 62, 1047–1112._
Salinger, M. D. (1992) “Standard Errors in Event Studies” Journal of Financial and Quantitative
_Analysis, 27, 39-53._
Shleifer, A. and D. Wolfenzon (2002) “Investor Protection and Equity Markets,” Journal of
_Financial Economics, 62, 3–27._
Stigler, G. (1964) “Public Regulation of the Securities Markets,” Journal of Business, 37, 117–
142.
**Figure 1: Conceptual Framework**
[Figure: plots dollar value against the strength of regulation (r); recoverable labels include V, V – B(0), “Outsiders’ value = (1 – α)V – …”, and the optimal level r*.]

**Figure 2: Empirical Strategy**
[Figure: timeline of key event dates (July 2011; December 8, 2011; March 2012; April 5, 2012), showing the control group (33 firms, with IPOs from July 2011 to the December 8, 2011 cutoff) and the treatment group (25 to 41 firms, with IPOs after the cutoff); labels include “IPO Date” and “Revenue = $1 billion.”]
**Table 1: Important Event Dates for the JOBS Act**
|Date|Event|
|---|---|
|December 8, 2011|The bill (H.R. 3606) is introduced in the House, and referred to the House Financial Services Committee.|
|February 16, 2012|The bill is ordered to be reported by the House Financial Services Committee (by a vote of 54-1).|
|March 1, 2012|The bill is reported (amended) by the House Committee on Financial Services (H. Rept. 112-406). This report includes the December 8, 2011 cutoff date for eligibility for EGC status (this appears to be the first public appearance of this cutoff date).|
|March 8, 2012|The bill is passed by the House by a vote of 390-23.|
|March 15, 2012|The measure is laid before the Senate by unanimous consent, and committed to the Senate Committee on Banking, Housing and Urban Affairs. Speech by Senate Majority Leader describing the bill as “a measure the Senate should consider expeditiously and pass in short order.”|
|March 21, 2012|Cloture on the bill is invoked in the Senate (by a 76-22 vote).|
|March 22, 2012|The (amended) bill is passed by the Senate (by a 73-26 vote). The Senate amendment relates to the “crowdfunding” provisions of the bill, not to the EGC provisions.|
|March 27, 2012|The amended Senate bill is passed by the House (by a 380-41 vote).|
|April 5, 2012|Presidential signature; the JOBS Act becomes law.|
Note: These legislative events are based on information reported on the Library of Congress
THOMAS system, available at http://thomas.loc.gov, supplemented by various media reports.
**Table 2: Descriptive Statistics for Control Variables**
|Variable|Treatment Firms Mean (Standard deviation) (Number of firms)|Control Firms Mean (Standard deviation) (Number of firms)|
|---|---|---|
|Revenue in the most recently completed fiscal year (typically 2011)|194.13 (231.72) (27)|182.96 (217.19) (33)|
|Revenue (fiscal year 2012)|278.72 (296.55) (22)|299.89 (326.31) (21)|
|Trading days since IPO|30.59 (17.10) (27)|121.52 (36.32) (33)|
|Total assets (fiscal year 2011)|413.33 (575.67) (27)|364.80 (605.00) (30)|
|Total assets (fiscal year 2012)|946.90 (1630.24) (23)|512.65 (723.58) (21)|
|Long-term debt (fiscal year 2011)|107.88 (275.55) (27)|98.75 (278.80) (30)|
|Long-term debt (fiscal year 2012)|364.48 (1180.00) (23)|179.74 (405.80) (21)|
|Earnings (fiscal year 2011)|45.51 (93.94) (27)|41.91 (86.86) (30)|
|Earnings (fiscal year 2012)|74.52 (134.86) (22)|72.42 (102.70) (20)|
|R&D (fiscal year 2011)|9.53 (12.19) (27)|5.90 (10.31) (33)|
|R&D (fiscal year 2012)|10.83 (16.26) (27)|4.49 (8.20) (33)|
|Market value (fiscal year 2012)|760.14 (701.91) (23)|832.09 (776.09) (21)|
|Public float (fiscal year 2012)|541.03 (1526.79) (27)|381.59 (534.09) (31)|
Note: This table reports descriptive statistics for the control variables used in the regression
analysis and in various robustness checks. Revenue in the most recently completed fiscal year is
hand-collected from the SEC’s EDGAR database, taking account of each firm’s fiscal year. The
number of trading days from each firm’s IPO date to March 14, 2012 is calculated using CRSP
data. “Public float” is the aggregate worldwide market value of the voting and non-voting
common equity held by its non-affiliates, which is hand-collected from 10-K filings in the
SEC’s EDGAR database. Note that this is shown only for 2012, as the public float is not defined
for 2011 for firms that went public in 2012. All other variables are from Compustat. Earnings
represents EBITDA; R&D is defined such that missing values are set to zero. All variables (apart
from the number of trading days) are reported in millions of dollars.
**Table 3: Cumulative Abnormal Returns (CARs) for Key Event Windows**
|Event|Window (-1, +1)|Treatment Firms Mean CAR (Standard error) (Number of firms)|Control Firms Mean CAR (Standard error) (Number of firms)|Statistically significant difference?|
|---|---|---|---|---|
|Entire window|February 29-April 9, 2012|0.1211*** (0.0354) (25)|0.0646 (0.0495) (33)|No|
|House Committee report|February 29-March 2, 2012|0.0200* (0.0104) (25)|-0.0114 (0.0077) (33)|No (not robust)|
|House deliberation and vote|February 29-March 9, 2012|0.0181 (0.0138) (25)|-0.0027 (0.0162) (33)|No|
|Beginning of Senate consideration|March 14-March 16, 2012|0.0358** (0.0167) (27)|-0.0035 (0.0084) (33)|Yes|
|Senate deliberation and vote|March 14-March 23, 2012|0.0629*** (0.0223) (27)|0.0215 (0.0178) (33)|No|
|House vote on amended Senate bill|March 26-March 28, 2012|0.0216 (0.0154) (33)|-0.0092 (0.0170) (33)|No|
|Presidential signature|April 4-April 9, 2012|0.0043 (0.0059) (41)|-0.0056 (0.0087) (33)|No|
Note: This table reports mean cumulative abnormal returns (CARs) for the various windows
specified, separately for the treatment firms (which conducted IPOs after December 8, 2011, and
meet the basic criterion for eligibility for emerging growth company (EGC) status of having less
than $1 billion of revenues in the most recently completed fiscal year) and the control firms
(which conducted IPOs from July, 2011 to December 8, 2011, and had less than $1 billion of
revenues in the most recently completed fiscal year). Conventional standard errors are reported
in the table, but the results are essentially identical using bootstrapped standard errors. The test of
statistical significance in Column 4 uses a regression of the CAR on an indicator variable for the
treatment firms.
*: significant at 10%; ** significant at 5%; *** significant at 1%.
**Table 4: Basic Regression Results**
| |Dependent variable: Cumulative Abnormal Return (CAR), March 14-16, 2012||||
|---|---|---|---|---|
| |Full sample|Full sample (using Fama-French CARs)|Excluding Recent IPOs|Full sample|
|EGC|0.03929 (0.01865)**|0.03813 (0.01841)**|0.04946 (0.02289)**|0.06057 (0.02497)**|
|Revenue in most recent fiscal year| | |-0.00001 (0.00003)|0.00003 (0.00003)|
|Number of trading days since IPO| | |0.00017 (0.00024)|0.00029 (0.00025)|
|Total assets| | | |-0.00003 (0.00002)|
|Long-term debt| | | |0.00006 (0.00003)*|
|Earnings| | | |-0.00010 (0.00014)|
|R&D expenditure| | | |0.00127 (0.00104)|
|Constant|-0.00351 (0.00841)|0.00538 (0.00846)|-0.02230 (0.02416)|-0.04262 (0.03023)|
|Number of Observations|60|60|54|57|
|R2|0.08|0.08|0.08|0.14|
Note: This table reports the results of a series of regressions for the CAR for the March 14-16
interval (during which Senate consideration of the bill commenced). The primary variable of
interest (EGC) is an indicator = 1 for firms satisfying the JOBS Act’s criteria for an “emerging
growth company” (notably, having revenue of less than $1 billion in the most recently completed
fiscal year). Revenue in the most recently completed fiscal year is hand-collected from the SEC’s
EDGAR database, taking account of each firm’s fiscal year. The number of trading days from
each firm’s IPO date to March 14, 2012 is calculated using CRSP data. All other variables are
from Compustat (for 2011). Earnings represents EBITDA; R&D is defined such that missing
values are set to zero. Robust standard errors are reported in parentheses.
*: significant at 10%; ** significant at 5%; *** significant at 1%.
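As a rough illustration of the Column 4 specification described in the note above (not the authors' actual estimation code), the cross-sectional regression could be reproduced along these lines; the column names are hypothetical placeholders for the hand-collected and Compustat variables:

```python
import pandas as pd
import statsmodels.formula.api as smf

def car_regression(df: pd.DataFrame):
    """Cross-sectional regression of the March 14-16 CAR on the EGC
    indicator and firm-level controls, with robust standard errors
    (Table 4, Column 4 style)."""
    formula = ("car ~ egc + revenue + trading_days_since_ipo "
               "+ total_assets + long_term_debt + earnings + rnd")
    return smf.ols(formula, data=df).fit(cov_type="HC1")
```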
**Table 5: Additional Regression Results**
| |Dependent variable: Cumulative Abnormal Return (CAR), March 14-16, 2012|||||
|---|---|---|---|---|---|
| |Including Industry Effects (using Fama-French CARs)|Alternative Test (using “large” non-EGCs with post-Dec 8 IPOs as the control group)|Placebo Test (using “large” non-EGCs as the “treatment” group)|Placebo Test (using investment companies as the “treatment” group)|Test of differential effect for firms not subject to SOX 404|
|EGC|0.05412 (0.02540)**|0.04340 (0.01764)**| | |0.04503 (0.02162)**|
|“Large” firm with post-Dec 8 IPO| | |-0.00412 (0.00978)| | |
|Investment co. with post-Dec 8 IPO| | | |-0.00533 (0.01009)| |
|EGC*NAF| | | | |-0.03779 (0.02736)|
|NAF| | | | |-0.00257 (0.01520)|
|Industry effects?|Yes|No|No|No|No|
|Constant|0.00033 (0.00936)|-0.00763 (0.00484)|-0.00351 (0.00851)|-0.00351 (0.00851)|-0.00328 (0.00934)|
|Number of Observations|59|29|35|35|60|
|R2|0.45|0.02|0.0004|0.0007|0.10|
Note: This table reports the results of a series of regressions for the CAR for the March 14-16
interval (during which Senate consideration of the bill commenced). In Columns 1, 2, and 5, the primary
variable of interest (EGC) is an indicator = 1 for firms satisfying the JOBS Act’s criteria for an
“emerging growth company” (notably, having revenue of less than $1 billion in the most recently
completed fiscal year). “Large firm with post-December 8 IPO” is an indicator variable = 1 for
firms with revenue exceeding the $1 billion threshold that conducted IPOs after December 8,
2011. “Investment company with post-December 8 IPO” is an indicator variable = 1 for
registered investment companies (typically closed-end funds) that conducted IPOs after
December 8, 2011. NAF is an indicator variable =1 for nonaccelerated filers. Robust standard
errors are reported in parentheses.
*: significant at 10%; ** significant at 5%; *** significant at 1%.
**Table 6: Tests for Potential Alternative Explanations**
| |Dependent variable: Cumulative Abnormal Return (CAR), March 14-16, 2012||||
|---|---|---|---|---|
| |Excluding “Politically Active” EGCs|Excluding EGCs with Other Events|Including only EGCs without Analyst Coverage|Including only EGCs with Analyst Coverage|
|EGC|0.06864 (0.03002)**|0.05809 (0.02824)**|0.07151 (0.03522)**|0.06967 (0.02926)**|
|Revenue in most recent fiscal year|0.00002 (0.00005)|0.00001 (0.00003)|0.00003 (0.00004)|0.00001 (0.00003)|
|Number of trading days since IPO|0.00039 (0.00026)|0.00033 (0.00025)|0.00036 (0.00026)|0.00038 (0.00025)|
|Total assets|-0.00002 (0.00002)|-0.00003 (0.00002)|-0.00002 (0.00001)|-0.00010 (0.00006)|
|Long-term debt|0.00005 (0.00003)*|0.00006 (0.00003)*|0.00004 (0.00002)*|0.00020 (0.00011)*|
|Earnings|-0.00010 (0.00015)|-0.00007 (0.00014)|-0.00011 (0.00020)|0.00002 (0.00013)|
|R&D expenditure|0.00153 (0.00117)|0.00064 (0.00091)|0.00055 (0.00088)|0.00194 (0.00115)|
|Constant|-0.05515 (0.03442)|-0.04136 (0.02803)|-0.04465 (0.02875)|-0.04771 (0.02992)|
|Number of Observations|51|45|46|41|
|R2|0.16|0.13|0.11|0.31|
Note: This table reports the results of a series of regressions for the CAR for the March 14-16
interval, testing various potential alternative explanations. The primary variable of interest
(EGC) is an indicator = 1 for firms satisfying the JOBS Act’s criteria for an “emerging growth
company” (notably, having revenue of less than $1 billion in the most recently completed fiscal
year). Control variables are identical to those in Table 4. Robust standard errors are reported in
parentheses.
*: significant at 10%; ** significant at 5%; *** significant at 1%.
Readers with comments should address them to:
Professor Dhammika Dharmapala
[email protected]
# JEL: unified resource tracking for parallel and distributed applications
## Niels Drost
To cite this version:
### Niels Drost. JEL: unified resource tracking for parallel and distributed applications. Concurrency and Computation: Practice and Experience, 2010, 23 (1), pp.17. 10.1002/cpe.1592. hal-00686074
## HAL Id: hal-00686074
https://hal.science/hal-00686074
### Submitted on 7 Apr 2012
# JEL: Unified Resource Tracking for Parallel and Distributed Applications
### Niels Drost[∗][,][†], Rob V. van Nieuwpoort, Jason Maassen, Frank Seinstra and Henri E. Bal
Dept of Computer Science, VU University, Amsterdam, The Netherlands
SUMMARY
When parallel applications are run in large-scale distributed environments such as
grids, peer-to-peer systems, and clouds, the set of resources used can change dynamically
as machines crash, reservations end, and new resources become available. It is vital for
applications to respond to these changes. Therefore, it is necessary to keep track of the
available resources — a problem which is known to be notoriously difficult.
In this paper we argue that resource tracking must be provided as standard
functionality in lower parts of the software stack. We propose a general solution to
resource tracking: the Join-Elect-Leave (JEL) model. JEL provides unified resource
tracking for parallel and distributed applications across environments. JEL is a simple yet
powerful model based on notifying when resources have Joined or Left the computation.
We demonstrate that JEL is suitable for resource tracking in a wide variety of
programming models, ranging from the fixed resource sets traditionally used in MPI-1 to
flexible grid-oriented programming models. We compare several JEL implementations,
and show these to perform and scale well in several real-world scenarios involving grids,
clouds and peer-to-peer systems applied concurrently, and wide-area systems with failing
resources. Using JEL, we have won first prize in a number of international distributed
computing competitions.
key words: Resource Tracking, Programming Models, Parallel Applications
∗Correspondence to: Niels Drost, Dept. of Computer Science, VU University, De Boelelaan 1081A, 1081 HV
Amsterdam, The Netherlands.
†E-mail: [email protected]
Contract/grant sponsor: Netherlands Organization for Scientific Research (NWO); contract/grant number:
612.060.214
1. Introduction
Traditionally, supercomputers and clusters are the main computing environments[†] for running
high performance parallel applications. When a job is scheduled and started, it is assigned a
number of machines, which it uses until the computation is finished. Thus, the set of resources
used for an application in these environments is generally fixed.
In recent years, parallel applications are also run on large-scale grid systems [11], where
a single parallel application may use resources across multiple grid sites simultaneously.
Recently, peer-to-peer (P2P) systems [7], desktop grids [27], and clouds [8] are also used
for running parallel and distributed applications. In all such environments, resources may
become unavailable at any time, for instance when machines fail or reservations end. Also,
new resources may become available after the application has started. As a result, it is no
longer possible to assume that resource allocation is static.
To run successfully in these increasingly dynamic environments, applications must be
able to handle the inherent problems of these environments. Specifically, applications must
incorporate both malleability [23], the capability to handle changes in the resources used
during a computation, and fault tolerance, the capability to continue a computation despite
failures. Without mechanisms for malleability and fault-tolerance, the reliable execution of
applications on dynamic systems is hard, if not impossible.
A first step in creating a malleable and fault-tolerant system is to obtain an accurate and
up-to-date view of the resources participating in a computation, and what roles they have.
We therefore require some form of signaling whenever changes to the resource set occur. This
information can then be used by the application itself, or by the runtime system (RTS) of the
application’s programming model, to react to these changes. In this paper we refer to such
functionality as resource tracking.
An important question is at what level in the software hierarchy resource tracking should
be implemented. One option is to implement it in the application itself. However, this requires
each application to implement resource tracking separately. Another option is to implement
resource tracking in the RTS of the programming model of the application. Unfortunately, this
still requires implementing resource tracking for each programming model separately. Also, an
implementation of resource tracking designed for use on a grid will be very different from
one designed for a P2P environment. Therefore, the resource tracking functionality of each
programming model will have to be implemented for each target environment as well. This
situation is clearly not ideal.
Based on the observations above, we argue that resource tracking must be an integral part
of a system designed for dynamic environments, in addition to the low level communication
primitives already present in such systems [21, 22, 24]. Figure 1 shows the position of resource
tracking in a software hierarchy. There, a programming models’ RTS uses low-level resource
tracking functionality to implement the higher level fault-tolerance and malleability required.
†We will use the term environment for collections of compute resources such as supercomputers, clusters, grids,
desktop grids, clouds, peer-to-peer systems, etcetera, throughout this paper.
Figure 1. Abstract system hierarchy with resource tracking and communication primitives being
the central low-level primitives for developing fault-tolerant and malleable programming models and
applications.
This way, resource tracking (indirectly) allows applications to run reliably and efficiently on
dynamic systems such as grids and clouds.
In this paper we propose a general solution for resource tracking: the Join-Elect-Leave
(JEL) model. JEL acts as an intermediate layer between programming models and the
environment they run on. Since different environments have different characteristics, using
a single implementation is impractical, if not impossible. Instead, several implementations of
the JEL API are required, each optimized for a particular environment.
We have implemented JEL efficiently on clusters, grids, P2P systems, and clouds. These
different JEL implementations can be used transparently by a range of programming models,
in effect providing unified resource tracking for parallel and distributed applications across
environments.
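The JEL API itself is defined in Section 3. Purely to illustrate the idea of a single tracking interface with environment-specific implementations selected transparently, a hypothetical Python-flavored sketch might look as follows (names and structure are ours, not the Ibis/JEL API):

```python
from abc import ABC, abstractmethod

class ResourceTracker(ABC):
    """Hypothetical unified tracking interface (not the actual JEL API)."""

    @abstractmethod
    def join(self, member_id: str) -> None: ...

    @abstractmethod
    def leave(self, member_id: str) -> None: ...

    @abstractmethod
    def members(self) -> list[str]: ...

class CentralizedTracker(ResourceTracker):
    """Single-server bookkeeping: adequate for relatively stable
    environments such as clusters and grids."""
    def __init__(self) -> None:
        self._members: list[str] = []

    def join(self, member_id: str) -> None:
        self._members.append(member_id)

    def leave(self, member_id: str) -> None:
        self._members.remove(member_id)

    def members(self) -> list[str]:
        return list(self._members)

def make_tracker(environment: str) -> ResourceTracker:
    """Select an implementation per environment; a gossip-based P2P
    variant (not sketched here) would be returned for highly dynamic
    systems, without any change to the code using the interface."""
    if environment in ("cluster", "grid"):
        return CentralizedTracker()
    raise NotImplementedError(environment)
```

The point of the sketch is the factory: the programming model's RTS codes against one interface, while the binding to a centralized, gossip-based, or other implementation happens per environment.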
The contributions of this paper are as follows.
- We show the need for unified resource tracking models in dynamic environments such as
grids, P2P systems, and clouds, and explore the requirements of these models.
- We define JEL: a unified model for tracking resources in dynamic environments. JEL
is explicitly designed to be simple yet powerful, scalable, and flexible. The flexibility of
JEL allows it to support parallel as well as distributed programming models.
- We show how JEL suits the resource tracking requirements of several programming
models. We have implemented 7 different programming models using JEL, ranging from
traditional models such as MPI-1 (in the form of MPJ [4]), to Satin [23], a high level
divide-and-conquer grid programming model that transparently supports malleability
and fault-tolerance.
- We show that JEL is able to function on a range of environments by discussing
multiple implementations of JEL. These include a centralized solution for relatively
stable environments such as clusters and grids, and a fault-tolerant P2P implementation.
In part, these implementations are based on well-known techniques of information
dissemination in distributed systems. Notably, JEL can be implemented efficiently in
different environments, due to the presence of multiple consistency models.
Our research is performed in the context of the Ibis [22] Java-based grid computing project.
In previous work we presented the Ibis Portability Layer (IPL) [22], a communication library
specifically targeted at dynamic systems such as grids. We augmented the IPL with our JEL
resource tracking model, leading to a software system which can efficiently run applications
on clusters, grids, P2P systems, and clouds. Using the software[‡] developed in this project,
including our implementations of JEL, we have won first prize in a number of
international competitions [2]. Notably, our winning submission to the Fault-Tolerant Category
of the DACH 2008 Challenge[§] at Cluster/Grid 2008 in Tsukuba, Japan made extensive use of
the JEL model for detecting and reporting node failures.
This paper is structured as follows. Section 2 discusses the requirements of a general resource
tracking model. Section 3 shows one possible model fulfilling these requirements: our Join-Elect-Leave (JEL) model. Section 4 explains how JEL is used in several programming models.
In Section 5 we discuss a (partially) centralized and a fully distributed implementation of JEL.
Section 6 compares the performance of our implementations, and shows the applicability of
JEL in real-world scenarios. As a worst case, we show that JEL is able to support even short-lived applications on large numbers of machines. Section 7 discusses related work. Finally,
Section 8 describes future work and concludes.
2. Requirements of Resource Tracking models
In this section we explore the requirements of resource tracking in a dynamic system. As stated
above, resource tracking functionality can best be provided at a level between programming models
and the computational environment (see Figure 1). A programming model’s RTS uses this
functionality to implement fault-tolerance and malleability. This naturally leads to two sets of
requirements for resource tracking: requirements imposed by the programming model above,
and requirements resulting from the environment below. We will discuss each in turn.
2.1. Programming Model Requirements
For any resource tracking model to be generally applicable, it needs to support multiple
programming models, including both parallel and distributed models. Below is a list of
requirements covering the needs of most, if not all, parallel and distributed programming
models.
List of participants: The most obvious requirement of a resource tracking model is the
capability to build up a list of all computational resources participating in a computation.
When communicating and cooperating with other participants of a computation, one
must know who these other participants are.

‡Implementations of programming models and other software referred to in this paper can be freely downloaded from http://www.cs.vu.nl/ibis
§http://www.cluster2008.org/challenge/
Reporting of changes: Simply building a list of participants at start-up is not sufficient.
Since resources may be added or removed during the runtime of a computation, a method
for updating the current list of participants is also required. This can be done for instance
by signaling the programming model's RTS whenever a change occurs.
Fault detection: Not all resources are removed gracefully. Machines may crash, and processes
may be terminated unannounced by a scheduling system. For this reason, the resource
tracking model also needs to include a failure detection and reporting mechanism.
Role Selection: It is often necessary to select a leader from a set of resources for a specific
task. For instance, a primary object may have to be selected in primary-copy replication,
or a master may have to be selected in a master-worker application. Therefore, next to
keeping track of which resources are present in a computation, a method for determining
the roles of these resources is also required.
2.2. Environment Requirements
Next to supporting multiple programming models, a generally applicable resource tracking
model must also support multiple environments, including clusters, grids, clouds, and P2P
systems. We now determine the requirements resulting from the environment in which a
resource tracking model is used.
Small, Simple Interface: Different environments may have wildly different characteristics.
On cluster systems, the set of resources is usually constant. On grids and clouds resource
changes occur, albeit at a low rate. P2P systems, however, are known for their high rate
of change. Therefore, different (implementations of) algorithms are needed for efficient
resource tracking on different environments. To facilitate the efficient re-targeting of a
resource tracking model, its interface must be as small and simple as possible.
Flexible Quality of Service: Even with a small and simple interface, it may not be possible
to implement all features of a resource tracking model efficiently on all environments
with the same quality of service. For instance, reliably tracking each and every change
to the set of resources in a small-scale cluster system is almost trivial, while in a large-scale P2P environment this is hard to implement efficiently, if possible at all. However,
not all programming models require the full functionality of a resource tracking model.
Therefore, a resource tracking model should include quality of service features. If the
resource tracking model allows for a programming model to specify the required features
and their quality of service, a suitable implementation could be selected at runtime. This
flexibility would greatly increase the applicability of a resource tracking model.
interface JEL {
    // Select the consistency models for elections and joins/leaves.
    void init(Consistency electionConsistency,
              Consistency joinLeaveConsistency);

    // Add this node to the named pool, or leave the pool again.
    void join(String poolName, Identifier identifier);
    void leave();

    // Report a suspected node failure to JEL.
    void maybeDead(Identifier identifier);

    // Run as a candidate in the named election; returns the winner.
    Identifier elect(String electionName);

    // Retrieve an election result without being a candidate.
    Identifier getElectionResult(String electionName);
}

// Interface for notifications, called by JEL.
interface JELNotifications {
    void joined(Identifier identifier); // a node joined the pool
    void left(Identifier identifier);   // a node left gracefully
    void died(Identifier identifier);   // a node crashed or was killed
}

Figure 2. JEL API (pseudocode, simplified)
3. The Join-Elect-Leave Model
We will now describe our resource tracking model: Join-Elect-Leave (JEL). JEL fulfills all
stated requirements of a resource tracking model. As shown in Figure 1, JEL is located at
the same layer of the software hierarchy as low-level communication primitives. Applications
use a programming model, ideally with support for fault-tolerance and malleability. The
programming model’s RTS uses JEL for resource tracking, as well as a communication library.
In this section we refer to programming models as users of JEL.
Figure 2 shows the JEL API. Next to an initialization function, the API consists of two
parts, Joins and Leaves, and Elections. Together, these fulfill the requirements of parallel and
distributed programming models as stated in the previous section.
In general, each machine used in a computation initializes JEL once, and is tracked as a
single entity. However, modern machines usually contain multiple processors and/or multiple
compute cores per processor. In some cases, it is therefore useful to start multiple processes
per machine for a single computation, which then need to be individually tracked. In this
paper, we therefore use the abstract term node to refer to a computational resource. Each
node represents a single instance in a computation, be it an entire machine, or one processor
of that machine.
JEL has been designed to work together with any communication library. The
communication library is expected to create a unique identifier containing a contact address
for each node in the system. JEL uses this address to identify nodes in the system, allowing a
user to contact a node whenever JEL refers to it.
3.1. Joins and Leaves
In JEL, the concept of a pool is used to denote the collection of resources used in a
computation. To keep track of exactly which nodes are participating in a pool, JEL supports
join notifications. Users are notified whenever a new node joins a pool. When a node joins
a pool, it is also notified of all nodes already present in the pool via the same notifications,
delivered through the JELNotifications interface. This is typically done using callbacks, although
a polling mechanism can be used instead if callbacks are not supported by a programming
language.
JEL also supports nodes leaving a computation, both gracefully and due to failures. If a
node notifies JEL that it is leaving the computation, users of the remaining nodes in the
pool receive a leave notification for this node. If a node does not leave gracefully, but crashes
or is killed, the notification will consist of a died message instead. Implementations of JEL
try to detect failing nodes, but the user can also report suspected failures to JEL using the
maybeDead function.
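As an illustration, the following sketch (in Java, against the interfaces of Figure 2) shows how a user might maintain a membership list from these notifications. The MemberList class and its methods are our own illustrative names, not part of the JEL API.

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Illustrative sketch: a user-side membership list driven by JEL
// join/leave/died notifications. Not part of the JEL API itself.
class MemberList implements JELNotifications {
    private final List<Identifier> members =
        Collections.synchronizedList(new ArrayList<Identifier>());

    public void joined(Identifier identifier) {
        members.add(identifier);    // a new node entered the pool
    }

    public void left(Identifier identifier) {
        members.remove(identifier); // node left gracefully
    }

    public void died(Identifier identifier) {
        members.remove(identifier); // node crashed or was killed
    }

    public List<Identifier> current() {
        return new ArrayList<Identifier>(members); // snapshot copy
    }
}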
3.2. Elections
It is often necessary to select a leader node from a set of resources for a specific task. To
select a single resource from a pool, JEL supports Elections. Each election has a unique name.
Nodes can nominate themselves by calling the elect function with the name of the election as a
parameter. The identifier of the winner will be returned. Using the getElectionResult function,
nodes can retrieve the result without being a candidate.
Elections are not democratic. It is up to the JEL implementation to select a winner from
the candidates. For instance, an implementation may simply select the first candidate as the
winner. At the user level, all that is known is that some candidate will be chosen. When
a winner of an election leaves or dies, JEL will automatically select a new winner from the
remaining living candidates. This ensures that the election mechanism will function correctly
in a malleable pool.
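For example, a node might obtain the master's identity as follows (a minimal Java sketch against the Figure 2 API; the jel and myIdentifier variables and the election name are illustrative):

// Candidates call elect(); with uniform elections, all nodes
// receive the same winner.
Identifier master = jel.elect("master");
boolean iAmMaster = master.equals(myIdentifier);

// A node that is not a candidate (e.g., a late joiner) can look up
// the current result instead:
Identifier currentMaster = jel.getElectionResult("master");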
3.3. Consistency models
Together, join/leaves and elections fulfill all resource tracking requirements of fault-tolerant
and malleable programming models as stated in Section 2.1. However, we also require our
model to be applicable to a wide range of environments, from clusters to P2P systems. To
this end, JEL supports several consistency models for the join/leave notifications and the
elections. These can be selected independently when JEL is initialized using the init function.
Joins/leaves or elections can also be turned off completely, if either part is not used. For
examples of situations in which some parts of JEL remain unused, see Section 4.
Relaxing the consistency model allows JEL to be used on more dynamic systems such as
P2P environments, where implementing strict consistency models cannot be done efficiently,
if at all. For example, Section 5.2 describes a fully distributed implementation that is robust
against failures, under a relaxed consistency model.
Figure 3. Position of JEL in the Ibis grid programming software stack
JEL offers two consistency models for joins and leaves. The reliable consistency model
ensures that all notifications arrive in the same order on all nodes. Using reliable joins and
leaves, a user can build up a list of all nodes in the pool. As an alternative, JEL also supports
unreliable joins and leaves, where notifications are delivered on a best effort basis, and may
arrive out of order, or not at all.
Similarly, JEL supports multiple consistency models for elections. If uniform elections are
used, a single winner is guaranteed for each election, known at all nodes. Using the non-uniform model, an election is only guaranteed to converge to a single winner in unbounded
time. The implementation of JEL will try to reach consensus on the winner of an election as
soon as possible, but in a large system this may be time-consuming. Before a consensus is
reached, different nodes may perceive different winners for a single election. Intuitively, this
non-uniform election has a very weak consistency. However, it is still useful in a number of
situations (Section 4.2 shows an example).
4. Applicability of JEL
JEL has been specifically designed to cover the required functionality of a range of
programming models found in distributed systems. We have implemented JEL in the Ibis
Portability Layer (IPL) [22], the communication library of the Ibis project. Figure 3 shows the
position of JEL in the software stack of the Ibis project. All programming models implemented
in the Ibis project use JEL to track resources, notably:
- Satin [23], a divide-and-conquer model
- Java RMI, an object oriented RPC model [28]
- GMI [19], a group method invocation model
- MPJ [4], a Java binding for MPI-1
- RepMI [19], a replicated object model
- Maestro [2], a fault-tolerant and self optimizing dataflow model
- Jorus [2], a user-transparent parallel model for multimedia computing
| Model                                | Joins and Leaves | Elections   |
|--------------------------------------|------------------|-------------|
| Master-Worker                        | -                | Uniform     |
| Divide-and-Conquer (elected master)  | Unreliable       | Uniform     |
| Divide-and-Conquer (selected master) | Unreliable       | Non-Uniform |
| Message Passing                      | Reliable         | -           |

Table I. Parts and consistency models of JEL used in the example programming models
As JEL is a generic model, it also supports other programming models. In addition to the
models listed, we have implemented a number of prototype programming models, including
data parallel, master-worker and Bulk Synchronous Parallel (BSP) models. Although our
current JEL implementations are implemented using Java, the JEL model itself is not limited
to this language. The foremost problem when porting JEL to other programming languages is
the possible absence of a callback mechanism. This problem can be solved by using downcalls
instead. In addition, parts of current JEL implementations could be reused, for instance
by combining the server of the centralized implementation with a client written in another
language.
We will now illustrate the expressiveness of JEL by discussing several models in more detail.
These programming models use different parts and consistency models of JEL, see Table I for
an overview.
4.1. Master-Worker
The first programming model we discuss is the master-worker [12] model, which requires a
single node to be assigned as the master. Since the master controls the application, its identity
must be made available to all other (worker ) nodes. Depending on the application, the number
of suitable candidates for the role of master may range from a single node to all participating
nodes. For this selection, the master-worker model uses uniform elections.
Since workers do not communicate, the only information a worker needs in a master-worker
model is the identity of the master node. So, in this model, joins and leaves are not needed,
and can simply be switched off.
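A possible configuration, sketched in Java against the Figure 2 API (the Consistency constants, pool name, and runMaster/runWorker methods are illustrative assumptions, not part of JEL):

// Master-worker setup: joins/leaves disabled, uniform elections on.
jel.init(Consistency.UNIFORM, /* elections */
         Consistency.NONE     /* joins and leaves switched off */);
jel.join("master-worker-pool", myIdentifier);

Identifier master = jel.elect("master");
if (master.equals(myIdentifier)) {
    runMaster();       // this node coordinates the computation
} else {
    runWorker(master); // workers only need the master's identity
}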
4.2. Divide-and-Conquer
The second programming model we discuss is divide-and-conquer. As an example of such a
system we use Satin [23]. Satin is malleable, can handle failures, and hides many intricacies of
the grid from the application programmer. It also completely hides which resources are used.
Distribution and load balancing are performed automatically by using random work stealing
between nodes. Satin is cluster-aware: it exploits the hierarchical nature of grids to optimize
load balancing and data transfer. For instance, nodes prefer to steal work from nodes inside
their local cluster, as opposed to from remote sites. The Satin programming model requires
support from the resource tracking model for adding new nodes, as well as removing running
nodes (either gracefully or due to a crash). Satin applies this information to re-execute subtasks
if a processor crashes. Also, it dynamically schedules subtasks on new machines that become
available during the computation, and it migrates subtasks if machines leave the computation.
Although Satin requires notifications whenever nodes join or leave the computation, these
notifications do not need to be completely reliable, nor do they need to be ordered in any
way. Satin uses the joins and leaves to build up a list of nodes in the pool. This list is then
used to randomly select nodes to steal work from. As long as each node has a reasonably
up-to-date view of who is participating in the application, Satin will continue to work. When
the information is out of date or incomplete, the random sampling will be skewed slightly, but
in practice the negative impact on performance is small (see Section 6.4). Satin therefore uses
the unreliable consistency of the join and leave notifications.
An election is used to select a special coordinator per cluster. These coordinators are used to
optimize the distribution of fault tolerance related data in wide area systems. When multiple
coordinators are present, more data will be transferred, which may lead to lower performance.
Satin will still function correctly, however. Therefore, the election mechanism used to select
the cluster coordinators does not necessarily have to return a unique result, meaning that the
non-uniform elections of JEL can be used.
When an application is starting, Satin needs to select a master node that starts the main
function of the application. This node can be explicitly specified by the user or application, or
it can be automatically selected by Satin. The latter requires the uniform election mechanism
of JEL. If the master node is specified in advance by the user, no election is needed for this
functionality.
From the discussion above, we can conclude that the requirements of Satin differ depending
on the circumstances. If the user has specified a master node, Satin requires unreliable join
and leave notifications for the list of nodes, as well as non-uniform elections for electing cluster
coordinators. If, on the other hand, a master node must be selected by Satin itself, uniform
elections are an additional requirement.
4.3. Message Passing (MPI-1)
The last programming model we discuss is the Message Passing model, in this case represented
by the commonly used MPI [21] system. MPI is widely used on clusters and even for multi-site
runs on grid systems. We implemented a Java version of MPI-1, MPJ [4]. The MPI model
assigns ranks to all nodes. Ranks are integers uniquely identifying a node, assigned from 0 up
to the number of nodes in the pool. In addition, users can retrieve the total number of nodes
in the system.
Joins and leaves with reliable consistency are guaranteed to arrive in the same order on all
nodes. This allows MPI to build up a totally ordered list of nodes, by assigning rank 0 to the
first node that joins the pool, rank 1 to the second, etcetera. Like the master-worker model,
MPI does not require all functionality of JEL, as elections are not used.
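The rank assignment can be sketched as follows (Java; the RankTable class is our own illustrative name, relying only on the ordering guarantee of reliable joins):

import java.util.ArrayList;
import java.util.List;

// Because reliable join notifications arrive in the same order on all
// nodes, every node derives an identical rank table locally.
class RankTable implements JELNotifications {
    private final List<Identifier> byRank = new ArrayList<Identifier>();

    public synchronized void joined(Identifier id) {
        byRank.add(id); // rank = position in the ordered join stream
    }

    public synchronized int rankOf(Identifier id) {
        return byRank.indexOf(id);
    }

    public synchronized int poolSize() {
        return byRank.size();
    }

    // MPI-1 cannot handle resource changes; see the discussion below.
    public void left(Identifier id) { /* unsupported in MPI-1 */ }
    public void died(Identifier id) { /* unsupported in MPI-1 */ }
}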
MPI-1 has very limited support for changes of resources and failures. Applications using
this model cannot handle changes to the resources such as nodes leaving or crashing. Using an
MPI implemented on top of JEL will not fix this problem. However, some extensions to MPI
are possible. For instance, MPI-2 supports new nodes joining the computation, Phoenix [26]
adds support for nodes leaving gracefully, and FT-MPI [10] allows the user to handle faults,
by specifying the action to be taken when a node dies. All these extensions to MPI can be
implemented using JEL for the required resource tracking capabilities.
5. JEL Implementations
It is impractical, if not impossible, to use the same implementation of JEL on clusters, grids,
clouds, as well as P2P systems. As these different environments have different characteristics,
there are different trade-offs in implementation design. We have explored several alternative
designs, and discuss these in this section.
On cluster systems, resources used in a computation are mostly fixed, and do not change
much over time. Therefore, our JEL implementation targeted at single cluster environments
uses a relatively simple algorithm for tracking resources, based on a central coordinator. This
ensures high performance and scalability, and the simple design leads to a more robust, less
error prone implementation. This central implementation provides reliable joins and leaves and
uniform elections. As this implementation uses a central coordinator for tracking resources,
these stronger consistency models can be implemented without much effort.
On more dynamic systems such as grids, clouds and desktop grids, the simple implementation
design used on clusters is not sufficient. As the number of machines in the system increases,
so does the number of failures. Moreover, any change to the set of resources needs to be
disseminated to a larger set of machines, possibly with high network latencies. Thus, these
environments require a more scalable implementation of JEL. We used a number of techniques
to decrease the effort required and amount of data transferred by the central coordinator, at
the cost of an increased complexity of the implementation. As the resource tracking still uses
a central coordinator, the stronger consistency models for joins, leaves and elections of JEL
are still available.
Lastly, we implemented JEL on P2P environments. By definition, it is not possible to use
centralized components in P2P systems. Therefore, our P2P implementation of JEL is fully
distributed. Using Lamport clocks [17] and a distributed election algorithm [13] it is possible to
implement strong consistency models in a fully distributed manner. However, these algorithms
are prohibitively difficult to implement. Therefore, our P2P implementation only provides
unreliable joins and leaves and non-uniform elections, making it extremely simple, robust and
scalable. We leave implementing a P2P version of JEL with strong consistency models as future
work.
As said, we have augmented our Ibis Portability Layer (IPL) [22] with JEL. The IPL is a low
level message-based communication library implemented in Java, with support for streaming
and efficient serialization of objects. All functionality of JEL is exported in the IPL’s Registry.
JEL is implemented in the IPL as a separate thread of the Java process. Notifications are
passed to the programming models’ RTS or application using a callback mechanism.
Figure 4. Example of an event stream
5.1. Centralized JEL Implementation
Our centralized JEL implementation uses a single server to keep track of the state of the
pool. Using a centralized server makes it possible to implement stronger consistency models.
However, it also introduces a single point of failure, and a potential performance bottleneck.
The server has three functions. First, it handles requests of nodes participating in the
computation. For example, a node may signal that it has joined the computation, is leaving,
or is running for an election. By design, these requests require very little communication or
computation.
Second, the server tracks the current resources in the pool. It keeps a list of all nodes
and elections, and detects failed nodes. Our current implementation is based on a leasing
mechanism, where nodes are required to periodically contact the server. If a node has had no
contact with the server for a certain number of seconds, it sends a so-called heartbeat to the
server. If it fails to do so, the server will try to connect to the node, to see if the node is still
functional. If the server cannot reach the node, this node is declared dead, and removed from
the pool.
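The node side of this lease mechanism can be sketched as follows (Java; the ServerConnection type and its methods are illustrative assumptions, not the actual implementation):

// Periodically ensure the server has heard from us within the lease
// period; any regular request also counts as contact.
void leaseLoop(ServerConnection server, long leaseMillis)
        throws InterruptedException {
    while (true) {
        Thread.sleep(leaseMillis / 2);
        if (server.millisSinceLastContact() >= leaseMillis) {
            server.sendHeartbeat();
        }
    }
}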
Third, the server disseminates all changes of the state of the pool to the nodes. The nodes
use these updates to generate join, leave, died, and election notifications for the application. If
there are many nodes, the dissemination may require a significant amount of communication
and lead to performance problems. To alleviate these problems we use a simple yet effective
technique. Any changes to the state of the pool are mapped to events. These events have a
unique sequence number, and are totally ordered. An event represents a node joining, a node
leaving, a node dying, or an election result.
A series of state changes, mapped to a sequence of events, can now be perceived as a stream of events.
Dissemination of this stream can be optimized using well-known techniques such as broadcast
trees or gossiping. Figure 4 shows an example of a stream of events. In this case, two nodes
join, one leaves, one is elected master, and then dies. This stream of events thus results in an
empty pool.
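An event in this stream could be represented as follows (an illustrative Java sketch; the field and enum names are ours, not taken from the implementation):

// One entry in the totally ordered event stream maintained by the
// central server. The sequence number defines the total order.
class Event {
    enum Type { JOINED, LEFT, DIED, ELECTED }

    final long sequenceNumber;  // position in the total order
    final Type type;
    final Identifier subject;   // the node this event concerns
    final String electionName;  // only meaningful for ELECTED events

    Event(long sequenceNumber, Type type,
          Identifier subject, String electionName) {
        this.sequenceNumber = sequenceNumber;
        this.type = type;
        this.subject = subject;
        this.electionName = electionName;
    }
}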
We have experimented with four different methods of disseminating the event stream: a
simple serial send, serial send with peer bootstrap, a broadcast tree, and gossiping. The
different mechanisms and their implementations are described below.
5.1.1. Serial Send
In our first dissemination technique, the central server forwards all events occurring in the
pool to each node individually. Such a serial send approach is straightforward to implement,
and is very robust. It may lead to performance problems though, as a large amount of data
may have to be sent by the server. To optimize network usage, the server sends to multiple
nodes concurrently.
In this implementation, a large part of the communication performed by the server consists
of sending a list of all nodes to a new, joining node (the so-called bootstrap data). If many
nodes join a computation at the same time, this may cause the server to become overloaded.
5.1.2. Peer Bootstrap
As an optimization of the serial send technique, we implemented peer bootstrapping, where
joining nodes use other nodes (their peers) to obtain the necessary bootstrap data. When a
node joins, the server sends it a small list of randomly chosen nodes in the pool. The joining
node then tries to obtain the bootstrap data from the nodes in this list. If, for some reason,
none of the nodes in the list can be reached, the joining node uses the server as a backup
source of bootstrap data. This approach guarantees that the bootstrap process will succeed
eventually.
5.1.3. Broadcast tree
A more efficient way of disseminating the stream of events from the server to all nodes is a
broadcast tree. Broadcast trees limit the load on the server by using the nodes themselves to
forward data. Broadcast trees also have disadvantages, as the tree itself is a distributed data
structure that needs to be managed. This requires significant effort, and makes broadcast trees
less robust than serial send.
Our broadcast implementation uses a binomial tree structure with the server as the root of
the tree, which is also commonly used in MPI implementations [16]. To minimize the overhead
of managing the tree, we use the data stream being broadcast to manage the tree. Since this
stream includes totally ordered notifications of all joining and leaving nodes, we can use it to
construct the broadcast tree at each node.
To increase the robustness of our broadcast implementation, we implemented fallback
information dissemination. Periodically, the server directly connects to each node in the
pool, and sends it any events it did not receive yet. This fallback mechanism guarantees
the functioning of the system, regardless of the number, and type, of failures occurring. Also,
it causes very little overhead if there are no failures.
5.1.4. Gossiping
A fourth alternative for disseminating the events of a pool to all its nodes is the use of gossiping
techniques. Gossiping works on the basis of periodic information exchanges (gossips) between
peers (nodes). Gossiping is robust, easy to implement and has low resource requirements.
In the gossiping dissemination, all nodes record the event stream. Periodically, a node
contacts one of its peers. The event streams of those two nodes are then merged by sending any
missing events from one peer to the other. To reduce memory usage, old events are eventually
purged from the system.
Although the nodes exchange events amongst themselves, the pool is still managed by the
central server. The server still acts as a contact point for nodes that want to join, leave, or run
for an election. Also, the server creates all events, determines the ordering of events, detects
failing nodes, etc.
To seed the pool of nodes with data, the server periodically contacts a random node, and
sends it any new events. The nodes will then distribute these new events amongst themselves
using gossiping. When the nodes gossip at a fixed interval, the events travel through the system
at an exponential rate. The dissemination process thus requires a time proportional to the
logarithm of the pool size.
To speed up the dissemination of the events to all nodes, we implemented an adaptive
gossiping interval at the server. Instead of waiting a fixed time between sending events to
nodes, we calculate the interval based on the size of the pool, by dividing the standard interval
by the base 2 logarithm of the pool size. Thus, events are seeded at a rate proportional to the
logarithm of the pool size. The dissemination speed of events becomes approximately constant,
at the expense of an increased communication load on the server.
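The interval calculation amounts to the following (a Java sketch; the method and parameter names are illustrative):

// Divide the standard seeding interval by log2 of the pool size, so
// larger pools are seeded proportionally more often.
long adaptiveIntervalMillis(long standardIntervalMillis, int poolSize) {
    double log2 = Math.log(Math.max(poolSize, 2)) / Math.log(2.0);
    return (long) (standardIntervalMillis / log2);
}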
Since gossip targets are selected randomly, there is no guarantee that all nodes will receive
all events. To ensure reliability, we use the same fallback dissemination technique we used in
the broadcast tree implementation. Periodically, the server contacts all nodes and sends them
any events they do not have.
5.2. Distributed JEL Implementation
Although the performance problems of the centralized implementation are largely solved by
using broadcast trees and gossiping techniques, the server component is still a central point
of failure, and not suitable for usage in P2P systems. As an alternative, we created a fully
distributed implementation of JEL using P2P techniques. It has no central components, so
failures of individual nodes do not lead to a failure of the entire system.
Our implementation is based on our ARRG [6] gossiping algorithm. ARRG is resilient against
failures, and can handle network connectivity problems such as firewalls and NATs. Each node
in the system has a unique identifier in the form of a UUID [18], which is generated locally
at startup. ARRG needs the address of an existing node at startup to bootstrap, so this must
be provided. This address is used as an initial contact point in the pool. ARRG provides a
so-called peer sampling service [15], guaranteeing a random sampling of the entire pool even
if failures and network problems occur.
On top of ARRG, we use another gossiping algorithm to exchange data on nodes and
elections. Periodically, a node connects to a random node (provided by ARRG) and exchanges
information on other nodes and elections. It sends a random subset of the nodes and elections
it knows and includes information on itself. It then receives a number of members and elections
from the peer node, and merges these with its own state. Over time, nodes build up a list of
nodes and elections in the pool.
If a node wants to leave the computation, it sends out this information to a number of nodes
in the system. Eventually, this information will reach all nodes. Since a crashed node cannot
send a notification to the other nodes indicating it has died, a distributed failure detection
mechanism is needed.
The failure detection mechanism uses a witness system. Each entry in a node's local list carries
a timeout, recording the last time the listed node was successfully contacted. Whenever the
timeout expires, a node is suspected of having died. Nodes with expired entries in their node
list try to contact these suspects. If this fails, they add themselves as a witness to this node's
demise. The witness list is part of the gossiped information. If a sufficient number of nodes
declare that a node has died, it is pronounced dead.
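The per-entry bookkeeping can be sketched as follows (Java; the class, field, and threshold names are illustrative assumptions):

import java.util.HashSet;
import java.util.Set;

// Illustrative entry in a node's local view of the pool, used by the
// witness-based failure detector.
class NodeEntry {
    long lastContactMillis;                 // refreshed on each contact
    final Set<Identifier> witnesses = new HashSet<Identifier>();

    boolean suspected(long nowMillis, long timeoutMillis) {
        return nowMillis - lastContactMillis > timeoutMillis;
    }

    // Called by a node that tried and failed to reach the suspect;
    // the witness set travels along with the gossiped information.
    void addWitness(Identifier witness) {
        witnesses.add(witness);
    }

    boolean pronouncedDead(int requiredWitnesses) {
        return witnesses.size() >= requiredWitnesses;
    }
}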
Besides joins and leaves, the distributed implementation also supports elections. Because of
the difficulties of implementing distributed election algorithms [13], and the lack of guarantees
even when using the more advanced algorithms, we only support the non-uniform election
consistency model. In this model, an election converges to a single winner. Before that time,
nodes may not agree on the winner of that election.
Election results are gossiped. When a node needs the result of an unknown election, it simply
declares itself as the winner. If a conflict arises when merging two different election results, one
of the two winners is selected deterministically (the node with the numerically lowest UUID
wins). Over time, only a single winner remains in the system.
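The merge rule is a one-line deterministic comparison (a Java sketch; the method name is illustrative):

// When two gossiped election results conflict, the numerically lowest
// UUID wins; repeated pairwise merges converge on a single winner.
java.util.UUID mergeWinners(java.util.UUID a, java.util.UUID b) {
    return (a.compareTo(b) <= 0) ? a : b;
}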
As a consequence of the aforementioned design, the distributed implementation of JEL is
fault tolerant in many aspects. First, the extensive use of gossiping techniques inherently leads
to fault tolerance. The ARRG protocol adds further tolerance against failures, for example
by using a fallback cache containing previously successful contacts [6]. Most importantly,
the distributed implementation lacks any centralized components, providing fully distributed
implementations of all required functionality instead.
6. Evaluation
To evaluate the performance and scalability of our JEL implementations, we performed several
experiments. These include low-level and application-level tests on multiple environments.
In particular, we want to assess how much performance is sacrificed to gain the robustness
of a fully distributed implementation, as we expect this implementation to have the
lowest performance. Exact quantification of performance differences between implementations,
however, is hard, if not impossible. As shown below, performance results are highly
dependent on the characteristics of the underlying hardware. Furthermore, the impact on
application performance, in turn, is dependent on the programming model used. For example,
MPI cannot proceed until all nodes have joined, while Satin starts as soon as a resource is
available. All experiments were performed multiple times. Numbers shown are taken from a
single representative experiment.
6.1. Low level benchmark: Join test
The first experiment is a low-level stress test using a large number of nodes. We ran
the experiment on two different clusters. The purpose of the experiment is to determine
the performance of our JEL implementations under different network conditions. In the
experiment, all nodes join a single pool and, after a predetermined time, leave again. As a
performance metric, we use the average perceived pool size. To determine this metric, we keep
track of the pool size at all nodes. Ideally, this number is equal to the actual pool size. However,
if a node has not received all notifications, the perceived pool size will be smaller. We then
calculate the average perceived pool size over all nodes in the system. The average is expected
to increase over time, eventually becoming equal to the actual pool size. This indicates that
all nodes have received all notifications. The shorter the stabilization time, the better.

Figure 5. 1000 nodes Join test (DAS-2). [Plot: average perceived pool size versus time in seconds for Central Serial Send, Central Peer Bootstrap, Central Broadcast Tree, Central Gossip, Central Adaptive Gossip, and Distributed.]
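For reference, this metric amounts to the following computation (a Java sketch; names are illustrative):

// Average, over all nodes, of the pool size each node currently
// perceives. Equal to the actual pool size once all notifications
// have been delivered everywhere.
double averagePerceivedPoolSize(int[] perceivedSizes) {
    long sum = 0;
    for (int size : perceivedSizes) {
        sum += size;
    }
    return (double) sum / perceivedSizes.length;
}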
This experiment was done on our DAS-2 and DAS-3 clusters. The DAS-2 cluster consists
of 72 dual processor Pentium III machines, with 2Gb Myrinet interconnect. The DAS-3 cluster consists of 85 dual-CPU dual-core Opteron machines, with 10Gb Myrinet. See
http://www.cs.vu.nl/das2 and http://www.cs.vu.nl/das3 for more information.
Since neither the DAS-2 nor DAS-3 have a sufficiently large number of machines to
stress test our implementation, we started multiple nodes per machine. As neither our JEL
implementations nor the benchmark are CPU bound, the sharing of CPU resources does not
influence our measurements. The nodes do share the network bandwidth though. However, all
implementations of JEL are affected equally, so the relative results of all tested implementations
remain valid. The server of the centralized implementation of JEL is started on the front-end
machine of the cluster.
6.1.1. DAS-2
Figure 5 shows the performance of JEL on the DAS-2 system. We started 10 nodes per
processor core on 50 dual processor machines, for a total of 1000 nodes. Due to the sharing
of network resources, all nodes, as well as the frontend running the server, have an effective
bandwidth of about 100Mbit/s.
For convenience, we only show the first 100 seconds of the experiment, when all nodes are
joining. The graph shows that the serial send dissemination suffers from a lack of network
bandwidth, and is the lowest performing implementation.
Figure 6. 2000 nodes Join test (DAS-3). [Plot: average perceived pool size versus time in seconds for Central Serial Send, Central Peer Bootstrap, Central Broadcast Tree, Central Gossip, Central Adaptive Gossip, and Distributed.]
The peer bootstrap and broadcast tree techniques perform equally well on this system.
This is not surprising, as the broadcast tree and peer bootstrap techniques utilize all nodes
to increase throughput. As the graph shows, adaptive gossip dissemination is faster than the
normal central gossip version, as it adapts its speed to the pool size.
While not shown in the graph, the fully distributed implementation is also converging to the
size of the pool, albeit slower than most versions of the centralized implementation. The slow
speed is caused by an overload of the bootstrap service, which receives 1000 gossip requests
within a few milliseconds when all the nodes start. This is an artifact of this artificial test that
causes all the nodes to start simultaneously. In a P2P environment this is unlikely to occur.
Multiple instances of the bootstrap service would solve this problem. Still, the performance
of the distributed implementation is acceptable, especially considering the high robustness of
this implementation.
6.1.2. DAS-3
Next, we examine the performance of the same benchmark on the newer DAS-3 system (see
Figure 6). As a faster network is available on this machine, congestion of the network is less
likely. Since the DAS-3 cluster has more processor cores, we increased the number of nodes
to 2000, resulting in 250Mbit/s of bandwidth per node. The frontend of our DAS-3 cluster
has 10Gbit/s of bandwidth. Performance on the DAS-3 increases significantly compared to the
DAS-2, mostly because of the faster network. The serial send and gossip techniques no longer
suffer from network congestion at the server or bootstrap service. As a result, performance
increases dramatically for both. Also, the graph shows that the performance of the broadcast
tree is now significantly better than any other dissemination technique.
| Implementation | Dissemination   | Server (MB) | Node Average (MB) |
|----------------|-----------------|-------------|-------------------|
| Central        | Serial Send     | 1521.47     | 0.76              |
| Central        | Peer Bootstrap  | 677.23      | 0.45              |
| Central        | Broadcast Tree  | 5.57        | 1.32              |
| Central        | Gossip          | 9.83        | 0.49              |
| Central        | Adaptive Gossip | 40.36       | 0.57              |
| Distributed    | Gossip          | n.a.        | 25.37             |

Table II. Total data transferred in Join test with 2000 nodes on the DAS-3
Performance of the central implementation with gossiping is influenced by the larger size of
the pool. It takes considerably longer to disseminate the information to all nodes. As before,
the adaptive gossiping manages to adapt, and reaches the total pool size significantly faster.
From our low level benchmark on both the DAS-2 and DAS-3 we conclude that it is possible
to implement JEL such that it is able to scale to a large number of nodes. Also, a number of
different implementation designs are possible for JEL, all leading to reasonable performance.
6.2. Network bandwidth usage
To investigate the cost of using JEL, we recorded the total data transferred by both the server
and the clients in the previous experiment. Table II shows the total traffic generated by the
experiment on DAS-3, after all the nodes have joined and left the pool.
Using the serial send version, the server transferred over 1500 MB in the 10 minute
experiment. Using peer bootstrap already halves the traffic needed at the server. However,
the broadcast tree dissemination uses less than 5 MB of server traffic to accomplish the same
result. It does this by using the nodes of the system, leading to slightly higher traffic at the
nodes (1.32 MB instead of 0.76 MB).
From this experiment we conclude that the dissemination techniques significantly increase
the scalability of our implementation. Also, the broadcast tree implementation is very suited
for low bandwidth environments. For the distributed implementation, the average traffic per
node is 25 MB, an acceptable cost for having a fully distributed implementation.
6.3. Low level benchmark in a dynamic environment
We now test the performance of JEL in a dynamic environment, namely the DAS-3 grid.
Besides the cluster at the VU used in the previous tests, the DAS-3 system consists of 4 more
clusters across the Netherlands. For this test we started our Join benchmark on two clusters
(800 nodes), and added two clusters later, for a total of 1600 nodes. Finally, two clusters also
left, either gracefully or by crashing.
Results of the test when the nodes leave gracefully are shown in Figure 7. We tested
both the central implementation of JEL and the distributed implementation. For the central
implementation we have selected the serial send dissemination technique, which shows average
performance on DAS-3 (see Figure 6). On the scale of the graph of Figure 7, results obtained
for the other techniques are indistinguishable.

Figure 7. Join/Leave test run on 4 clusters across the DAS-3 grid. Half of the nodes only start after 200 seconds, and leave after 400 seconds. [Plot: average perceived pool size versus time in seconds for Central Serial Send and Distributed, annotated "800 nodes join" and "800 nodes leave".]
Figure 7 shows that both implementations are able to track the entire pool. As said, the
pool size starts at 800 nodes, and increases to 1600 nodes 200 seconds into the experiment.
The dip in the graph at 200 seconds is an artifact of the metric used: At the moment 800
extra nodes are started, these nodes have a perceived pool size of 0. Thus, the average over
all nodes in the pool halves. As in the previous test, the central implementation is faster than
the distributed implementation. After 400 seconds, two of the four clusters (800 of the 1600
nodes) leave the pool. The graph shows that JEL correctly handles nodes leaving, with both
implementations processing the leaves promptly.
As said, we also tested with the nodes crashing, by forcibly terminating the nodes' processes.
The results can be seen in Figure 8. When nodes crash instead of leaving, it takes longer for
JEL to detect these nodes have died. This delay is due to the timeout mechanism in both
implementations. A node is only declared dead if it cannot be reached for a certain time (a
configuration property of the implementations, in this instance set to 120 seconds). Thus, nodes
are declared dead with a delay after crashing. The central implementation of JEL has a slightly
longer delay, as it tries to contact the faulty nodes one more time after the timeout expires.
From this benchmark we conclude that JEL is able to function well in dynamic systems, with
both leaving and failing nodes.
6.4. Satin Gene Sequencing Application
To test the performance of our JEL implementations in a real world setting, we used 256 cores
of our DAS-3 cluster to run a gene sequencing application implemented in Satin [23]. Pairwise
sequence alignment is a bioinformatics application where DNA sequences are compared with
each other to identify similarities and differences. We run a large number of instances of
the well-known Smith-Waterman [25] algorithm in parallel using Satin's divide-and-conquer
programming style. The resulting application achieves excellent performance (93% efficiency
on 256 processors).

Figure 8. Join/Fail test run on 4 clusters across the DAS-3 grid. Half of the nodes only start after 200 seconds, and crash after 400 seconds. [Plot: average perceived pool size versus time in seconds for Central Serial Send and Distributed, annotated "800 nodes join" and "800 nodes fail".]

| Implementation | Dissemination   | Run time (s), Small | Run time (s), Large | Join time (s) |
|----------------|-----------------|---------------------|---------------------|---------------|
| Central        | Serial Send     | 71.7                | 408.0               | 18.2          |
| Central        | Peer Bootstrap  | 70.5                | 406.1               | 17.2          |
| Central        | Broadcast Tree  | 66.4                | 402.9               | 10.6          |
| Central        | Gossip          | 67.7                | 426.6               | 14.6          |
| Central        | Adaptive Gossip | 67.5                | 426.4               | 11.1          |
| Distributed    | Gossip          | 82.3                | 462.4               | 14.1          |

Table III. Gene sequencing application on 256 cores of the DAS-3. Listed are the total runtime (in seconds) of the application for two problem sizes, and the time (in seconds) until all nodes have joined fully (average perceived pool size is equal to the actual pool size). Runtime includes the join time.
Table III lists the performance of the application for various JEL implementations, and two
different problem sizes. We specifically chose to include a small problem on a large number of
cores to show that our JEL implementations are also suitable for short-running applications
where the overhead of resource tracking is relatively large. In this very small problem, the
application only ran for a little over a minute. The table shows similar performance for all
versions of JEL. Moreover, the relative difference is even smaller for the large problem size. An
exception is the set of implementations based on gossiping techniques. The periodic gossiping causes
a small but constant amount of network traffic. Unfortunately, the load balancing mechanism of
Satin is very sensitive to this increase in network load. Though the distributed implementation
lacks the guaranteed delivery of notifications present in the central implementation, Satin is
able to perform the gene sequencing calculations with only minor delay. This is an important
result, given Satin’s transparent support for malleability and fault-tolerance, as explained in
Section 4.2.
To give an impression of the overhead caused by JEL, we also list the join time: the amount
of time it takes from the start of the application for the average perceived pool size to reach
the actual pool size, i.e. the time JEL needs to notify all nodes of all joins. The join time of an
application is independent of the runtime of the application, and is mainly influenced by the number
of nodes, the JEL implementation, and the resources used. Therefore, we only list the join time once,
for both problem sizes. The performance of the various JEL implementations is in line with the
low-level benchmark results, with the broadcast tree implementation being the fastest. Our
gene sequencing experiment shows that our model and implementations are able to handle
even these short running applications.
6.5. World Wide Experiment
To show that JEL is suitable for a large number of different environments, we performed a
world wide experiment using the central implementation of JEL with serial send dissemination.
We used a prototype of the pending re-implementation of Satin, especially designed for limited
connectivity environments. In our world-wide experiment, connectivity between sites is often
limited because of firewalls, and the network includes a number of low bandwidth and high
latency links.
As an application we used an implementation of First Capture Go, a variant of the Go
board game where a win is achieved by capturing a single stone. Our application determines
the optimal move for a given player, given any board. It uses a simple brute-force algorithm
for determining the solution, trying all possible moves recursively using a divide-and-conquer
algorithm. Since the entire space needs to be searched to calculate the optimal answer, our
application does not suffer from search overhead.
Table IV shows an overview of the sites used. These consist of two grids (the DAS-3 in the
Netherlands, and the InTrigger [14] system in Japan), a desktop grid consisting of student PCs
at the VU University Amsterdam, and a number of machines in the Amazon EC2 [8] compute
cloud in the USA. We used a total of 176 machines, with a total of 401 cores. As we started
a single process per machine, and used threads to distribute work among cores, this amounts
to 176 JEL nodes.
Figure 9 shows the communication structure of the experiment. The graph shown is produced
by the visualization of the SmartSockets [20] library, which is used to connect all the nodes
despite the firewalls present. In the graph, each site is represented by a different color. Next
to the compute nodes themselves (called Instances in the graph), and the central server, a
number of support processes is used. All part of the SmartSockets [20] library, these support
processes allow communication to pass through firewalls, monitor the communication, and
produce the visualization shown. The support processes run on the frontend machines of the
sites used.
Figure 9. Communication structure of the world wide divide-and-conquer experiment. Nodes in this graph represent processes, edges represent connections. The experiment contains both nodes performing the computation, as well as a number of support processes which allow communication to pass through firewalls, monitor the communication, and produce this image. Each color represents a different location.

| Location                         | Country         | Type             | Nodes | Cores | Efficiency |
|----------------------------------|-----------------|------------------|-------|-------|------------|
| VU University, Amsterdam         | The Netherlands | Grid (DAS-3)     | 32    | 128   | 97.3%      |
| University of Amsterdam          | The Netherlands | Grid (DAS-3)     | 16    | 64    | 96.5%      |
| Delft University                 | The Netherlands | Grid (DAS-3)     | 32    | 64    | 94.0%      |
| Leiden University                | The Netherlands | Grid (DAS-3)     | 16    | 32    | 96.7%      |
| Nat. Inst. of Informatics, Chiba | Japan           | Grid (InTrigger) | 8     | 16    | 84.0%      |
| University of Tsukuba            | Japan           | Grid (InTrigger) | 8     | 64    | 81.1%      |
| VU University, Amsterdam         | The Netherlands | Desktop Grid     | 16    | 17    | 98.0%      |
| Amazon EC2                       | USA             | Cloud            | 16    | 16    | 93.2%      |
| Total                            |                 |                  | 176   | 401   | 94.4%      |

Table IV. Sites used in the world wide divide-and-conquer experiment. Efficiency is calculated as the difference between total runtime of the application process and time spent computing. Overhead includes joining and leaving, as well as application communication for load balancing, returning results, etc.
Our world wide system finishes the First Capture Go application in 35 minutes. We measured the
efficiency of the machines, comparing the total time spent computing to the total runtime of
the processes. Overhead includes joining and leaving, as well as time spent communicating
with other nodes to load balance the application, return results, etc. Efficiency of the nodes
ranges from 79.8% to 99.1%. The low efficiency on some nodes is due to the severely limited
connectivity of these nodes: the nodes of the InTrigger grid in Japan can only communicate
with the outside world through an ssh tunnel, with a bandwidth of only 1Mbit/s and a latency
of over 250ms to the DAS-3. Even with some nodes having a somewhat diminished efficiency,
the average efficiency over all nodes in the world-wide experiment is excellent, at 94.4%.
Although JEL adds to the overhead of the application, running the experiment without
JEL would be difficult, if not impossible. Without JEL, all nodes would have to be known
before starting the application, and this list would have to be spread manually to all nodes.
Also, the connectivity problems of the InTrigger grid in Japan lead to these nodes starting
the computation with a significant delay. With JEL, these nodes simply join the running
computation later, when the rest of the nodes have already done a significant amount of work.
Our experiment shows that JEL is suitable for running applications on a large scale and a
wide range of systems, including desktop grids and clouds.
6.6. Competitions
Recently, the software produced by the Ibis project (which includes JEL as one of its core
components) has been put to the test in two international competitions [2] organized by the
IEEE Technical Committee on Scalable Computing, as part of the CCGrid 2008 (Lyon, France)
and Cluster/Grid 2008 (Tsukuba, Japan) international conferences.
The first competition we participated in was SCALE 2008, or the First IEEE International
Scalable Computing Challenge. Our submission consisted of a multimedia application, which is
able to recognize objects from webcam images. These images are sent to a grid for processing,
and the resulting image descriptions are used to search for objects in a database. In our
application, JEL is used to keep track of precisely which grid resources are available for
processing images.
The second competition was DACH 2008, or the First International Data Analysis Challenge
for Finding Supernovae. Here, the goal was to find ’supernova candidates’ in a large distributed
database of telescope images. Again, we used JEL in our submission to keep track of all the
available resources.
The DACH challenge consisted of two categories: a Basic Category where the objective was
to search the entire database as fast as possible, and a Fault-Tolerant category, where next
to speed, fault tolerance was also measured by purposely killing over 30% of the nodes in
the computation. Especially in the Fault-Tolerant category, JEL was vital for the successful
completion of the application.
Using our software (including JEL), we have won first prize in both SCALE 2008 and DACH
2008. Moreover, we won both the Basic and the Fault-Tolerant categories at DACH 2008. These
prizes show that JEL is very effective in many real-world scenarios, including dynamic systems
with failing nodes.
7. Related Work
Other projects have investigated supporting malleability and fault tolerance in various
environments, and resource tracking in these systems. However, most of these projects focus
on a single programming model, and a single target environment.
One area of active research for supporting applications on more dynamic environments is the
MPI standard. As said, the MPI-1 standard does not have support for nodes joining or leaving
the computation. To alleviate this problem, the follow-up MPI-2 [21] standard also supports
changes to the nodes in a system. A process may spawn new instances of itself, or connect to
a different running set of MPI-2 processes. A very basic naming service is also available.
Although it is possible to add new processes to an MPI application, the resource tracking
capabilities of MPI-2 are very limited by design, and an MPI implementation is not required
to handle node failures. Also, notifications of changes such as machines joining, leaving or
crashing are not available. Thus, resource tracking of MPI-2 is very limited, unlike our generic
JEL model.
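As a concrete illustration of the MPI-2 dynamic process interface discussed above, the following minimal sketch uses the mpi4py Python bindings; the worker script name and process count are illustrative assumptions, not taken from this paper.

```python
import sys
from mpi4py import MPI

# MPI-2 dynamic process management: the parent spawns new worker
# processes at runtime, something MPI-1 cannot do. Note there is no
# upcall when a spawned worker later crashes or leaves: exactly the
# resource-tracking gap discussed above.
workers = MPI.COMM_SELF.Spawn(sys.executable,
                              args=["worker.py"],  # hypothetical worker script
                              maxprocs=4)
workers.bcast({"task": "start"}, root=MPI.ROOT)
workers.Disconnect()
```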
One MPI derivative that does offer explicit support for fault tolerance is FT-MPI [10]. FT-MPI
extends the MPI standard with functionality to recover the MPI library and run-time
environment after a node fails. In FT-MPI, an application can specify whether failed nodes must be
simply removed (leaving gaps in the ranks used) or replaced with new nodes, or whether the groups and
communicators of MPI must be shrunk so that no gap remains. Recovering the application
must still be done by the application itself.
FT-MPI relies on the underlying system to detect failures and notify it of these failures.
The reference implementation of FT-MPI uses HARNESS [3], a distributed virtual machine
with explicit support for adding and removing hosts from the virtual machine, as well as
failure detection. HARNESS shares many of the same goals as JEL, and is able to overcome
many of the same problems JEL tries to solve. However, HARNESS focuses on a smaller
set of applications and environments than JEL. HARNESS does not explicitly support
distributed applications, as JEL does. Also, HARNESS does not offer the flexibility to select
the concurrency model required by the application, hindering more loosely
coupled implementations of the model, such as the P2P implementation of JEL.
Other projects have investigated supporting dynamic systems. One example is Phoenix [26],
where an MPI-like message passing model is used. This model is extended with support for
virtual nodes, which are dynamically mapped to physical nodes, the actual machines in the
system. GridSolve [29] is a system for using resources in a grid based on a client-agent-server
architecture. The “View Synchrony” [1] shared data model also supports nodes joining, leaving
and failing. Again, all these programming models focus on resource tracking for a single model,
not the generic resource tracking functionality offered by JEL. All models mentioned can be
implemented using the functionality of JEL.
Although all our current JEL implementations use gossiping and broadcast trees as a means
of information dissemination, other techniques exist. One example is the publish-subscribe
model [9]. Although information dissemination is an important part of JEL, our model offers
much more functionality, providing a full solution to the resource tracking problem. Most
importantly, this further functionality includes the active creation and gathering of information
regarding (local) changes in the resource set.
All current implementations of JEL are built from the ground up, with few external
dependencies. However, JEL implementations could in principle interface with external
systems, for instance Grid Information Services (GIS [5]). These systems can be used both
for acquiring (monitoring) data and for disseminating the resulting information. One key
difference between JEL and current monitoring systems is that JEL tracks the resources of
applications, not systems. An application crashing usually does not cause the entire system to
cease functioning. Relying solely on system monitoring data will therefore not detect application-level errors.
8. Conclusions and Future Work
With the transition from static cluster systems to dynamic environments such as grids,
clusters, clouds, and P2P systems, fault-tolerance and malleability are now essential features
for applications running in these environments. A first step in creating a fault-tolerant and
malleable system is resource tracking: the capability to track exactly which resources are part
of a computation, and what roles they have. Resource tracking is an essential feature in any
dynamic environment, and should be implemented at the same level of the software hierarchy
as communication primitives.
In this paper we presented JEL: a unified model for tracking resources. JEL is explicitly
designed to be scalable and flexible. Although the JEL model is simple, it supports both
traditional programming models such as MPI, and flexible grid-oriented models like Satin. JEL
allows programming models such as Satin to implement both malleability and fault-tolerance.
With JEL as a common layer for resource tracking, the development of programming models
is simplified considerably. In the Ibis project, we developed a number of programming models
using JEL, and we continue to add models regularly.
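To make the model concrete, the following is a minimal sketch of what a JEL-style resource-tracking interface could look like. All names here are hypothetical illustrations of the join/elect/leave events described in this paper, not the actual Ibis/JEL API.

```python
# Hypothetical sketch of a JEL-style callback interface. A programming
# model (e.g. an MPI-like or divide-and-conquer runtime) registers a
# listener and is notified of changes to the resource set.
class ResourceTracker:
    def __init__(self):
        self.listeners = []

    def register(self, listener):
        self.listeners.append(listener)

    # Called by the dissemination layer (centralized, broadcast tree,
    # or gossip based) when membership changes are detected.
    def notify_joined(self, node_id):
        for listener in self.listeners:
            listener.joined(node_id)

    def notify_left(self, node_id, crashed=False):
        for listener in self.listeners:
            (listener.died if crashed else listener.left)(node_id)

    def notify_elected(self, role, node_id):
        for listener in self.listeners:
            listener.elected(role, node_id)
```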
JEL can be used on a number of environments, ranging from clusters to highly dynamic
P2P environments. We described several implementations of JEL, including a centralized
implementation that can be combined with decentralized dissemination techniques, resulting in
high performance, yet with low resource usage at the central server. Furthermore, we described
several dissemination techniques that can be used with JEL. These include a broadcast tree
and gossiping based techniques. In addition, we showed that JEL can be implemented in a
fully distributed manner, efficiently supporting flexible programming models such as Satin,
and increasing fault-tolerance.
There is no single resource tracking model implementation that serves all purposes perfectly.
Depending on the circumstances and requirements of the programming model and application,
a different implementation is appropriate. In a reliable cluster environment, a centralized
implementation performs best. If applications run on low-bandwidth networks, the
broadcast tree dissemination technique has the benefit of using very little bandwidth. In a
hostile environment, such as desktop grids or P2P systems, a fully distributed implementation
is robust against failures. JEL explicitly supports different algorithms and implementations,
making it applicable in a large number of environments.
We evaluated JEL in a number of real-world scenarios. These include starting 2000
instances of an application, wide-area tests with new machines joining and resources failing,
and running an application on a world-wide system comprising grids, P2P systems and cloud
computing resources. In addition to these experiments, we have won a number of international
competitions, showing the suitability of JEL for real-world applications.
Future work consists of implementing additional programming models using JEL, such as
a distributed hash table (DHT), and redesigning our implementation of the Satin divide-and-conquer
model to explicitly support low-connectivity environments. In addition, we plan to
implement a fully distributed version of JEL that supports reliable joins and leaves and uniform
elections. One way of implementing this would be to use Lamport clocks [17] and a distributed
election algorithm [13].
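For reference, the Lamport clock mechanism [17] alluded to above can be sketched in a few lines; this is a generic illustration of the algorithm, not a design taken from the JEL implementation.

```python
class LamportClock:
    """Logical clock for ordering events in a distributed system [17]."""

    def __init__(self):
        self.time = 0

    def local_event(self):
        # Each local event (e.g. a join or leave notification) ticks the clock.
        self.time += 1
        return self.time

    def send(self):
        # Timestamp attached to an outgoing message.
        return self.local_event()

    def receive(self, message_time):
        # On receipt, advance past the sender's timestamp so that
        # causally related events are consistently ordered.
        self.time = max(self.time, message_time) + 1
        return self.time
```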
ACKNOWLEDGEMENT
This work was carried out in the context of the Virtual Laboratory for e-Science project (www.vle.nl). This project is supported by a BSIK grant from the Dutch Ministry of Education, Culture and
Science (OC&W) and is part of the ICT innovation program of the Ministry of Economic Affairs (EZ).
This work has been supported by the Netherlands Organization for Scientific Research (NWO) grant
612.060.214 (Ibis: a Java-based grid programming environment).
We kindly thank Ceriel Jacobs, Kees Verstoep, Roelof Kemp, Nick Palmer and Kees van Reeuwijk
for all their help. We would also like to thank the people of the InTrigger grid (Japan) for access
to their system, and the anonymous reviewers for their insightful and constructive
comments.
REFERENCES
1. O. Babaoğlu, A. Bartoli, and G. Dini. Enriched view synchrony: A programming paradigm for partitionable
asynchronous distributed systems. IEEE Trans. Comput., 46(6):642–658, 1997.
2. H. E. Bal, N. Drost, R. Kemp, J. Maassen, R. V. van Nieuwpoort, C. van Reeuwijk, and F. J. Seinstra.
Ibis: Real-world problem solving using real-world grids. In IPDPS ’09: Proceedings of the 2009 IEEE
International Symposium on Parallel&Distributed Processing, pages 1–8, Washington, DC, USA, 2009.
IEEE Computer Society.
3. M. Beck, J. J. Dongarra, G. E. Fagg, G. A. Geist, P. Gray, J. Kohl, M. Migliardi, K. Moore, T. Moore,
P. Papadopoulous, S. L. Scott, and V. Sunderam. Harness: a next generation distributed virtual machine.
Future Generation Computer Systems, 15(5-6):571–582, 1999.
4. M. Bornemann, R. V. van Nieuwpoort, and T. Kielmann. MPJ/Ibis: a flexible and efficient message
passing platform for Java. In Proceedings of PVM/MPI’05, Sorrento, Italy, September 2005.
5. K. Czajkowski, C. Kesselman, S. Fitzgerald, and I. Foster. Grid information services for distributed
resource sharing. High-Performance Distributed Computing, International Symposium on, 0:0181, 2001.
6. N. Drost, E. Ogston, R. V. van Nieuwpoort, and H. E. Bal. Arrg: real-world gossiping. In HPDC ’07:
Proceedings of the 16th international symposium on High performance distributed computing, pages 147–
158, New York, NY, USA, 2007. ACM.
7. N. Drost, R. V. van Nieuwpoort, and H. Bal. Simple locality-aware co-allocation in peer-to-peer
supercomputing. In CCGRID ’06: Proceedings of the Sixth IEEE International Symposium on Cluster
Computing and the Grid, page 14, Washington, DC, USA, 2006. IEEE Computer Society.
8. Amazon EC2 website. http://aws.amazon.com/ec2.
9. P. T. Eugster, P. A. Felber, R. Guerraoui, and A.-M. Kermarrec. The many faces of publish/subscribe.
ACM Comput. Surv., 35(2):114–131, 2003.
10. G. E. Fagg, E. Gabriel, G. Bosilca, T. Angskun, Z. Chen, J. Pjesivac-Grbovic, K. London, and J. J.
Dongarra. Extending the MPI specification for process fault tolerance on high performance computing
systems. In Proceedings of ICS’04, June 2004.
11. I. Foster, C. Kesselman, and S. Tuecke. The anatomy of the grid: Enabling scalable virtual organizations.
Int. J. High Perform. Comput. Appl., 15(3):200–222, 2001.
12. J.-P. Goux, S. Kulkarni, M. Yoder, and J. Linderoth. An enabling framework for master-worker
applications on the computational grid. In HPDC ’00: Proceedings of the 9th IEEE International
Symposium on High Performance Distributed Computing, page 43, Washington, DC, USA, 2000. IEEE
Computer Society.
13. I. Gupta, R. v. Renesse, and K. P. Birman. A probabilistically correct leader election protocol for large
groups. In DISC ’00: Proceedings of the 14th International Conference on Distributed Computing, pages
89–103, London, UK, 2000. Springer-Verlag.
14. InTrigger website. http://www.intrigger.jp.
15. M. Jelasity, R. Guerraoui, A.-M. Kermarrec, and M. van Steen. The peer sampling service: experimental
evaluation of unstructured gossip-based implementations. In Middleware ’04: Proceedings of the 5th
ACM/IFIP/USENIX international conference on Middleware, pages 79–98, New York, NY, USA, 2004.
Springer-Verlag New York, Inc.
16. T. Kielmann, R. F. H. Hofman, H. E. Bal, A. Plaat, and R. A. F. Bhoedjang. MagPIe: MPI's collective
communication operations for clustered wide area systems. In PPoPP ’99: Proceedings of the seventh
ACM SIGPLAN symposium on Principles and practice of parallel programming, pages 131–140, New
York, NY, USA, 1999. ACM.
17. L. Lamport. Time, clocks, and the ordering of events in a distributed system. Commun. ACM, 21(7):558–
565, 1978.
18. P. Leach, M. Mealling, and R. Salz. A Universally Unique IDentifier (UUID) URN Namespace. RFC
4122 (Proposed Standard), July 2005.
19. J. Maassen. Method Invocation Based Communication Models for Parallel Programming in Java. PhD
thesis, Vrije Universiteit, Amsterdam, The Netherlands, June 2003.
20. J. Maassen and H. E. Bal. Smartsockets: solving the connectivity problems in grid computing. In HPDC
’07: Proceedings of the 16th international symposium on High performance distributed computing, pages
1–10, New York, NY, USA, 2007. ACM.
21. MPI forum website. http://www.mpi-forum.org/.
22. R. V. van Nieuwpoort, J. Maassen, G. Wrzesińska, R. F. H. Hofman, C. J. H. Jacobs, T. Kielmann, and H. E.
Bal. Ibis: a flexible and efficient Java-based grid programming environment: Research articles. Concurr.
Comput.: Pract. Exper., 17(7-8):1079–1107, 2005.
23. R. V. van Nieuwpoort, G. Wrzesińska, C. J. H. Jacobs, and H. E. Bal. Satin: a high-level and efficient grid
programming model. ACM Transactions on Programming Languages and Systems (TOPLAS), 32(3),
2010.
24. J. Postel. Transmission Control Protocol. RFC 793 (Standard), Sept. 1981. Updated by RFCs 1122,
3168.
25. T. Smith and M. Waterman. Identification of common molecular subsequences. Journal of Molecular
Biology, 147, 1981.
26. K. Taura, K. Kaneda, T. Endo, and A. Yonezawa. Phoenix: a parallel programming model for
accommodating dynamically joining/leaving resources. In PPoPP ’03: Proceedings of the ninth ACM
SIGPLAN symposium on Principles and practice of parallel programming, pages 216–229, New York, NY,
USA, 2003. ACM.
27. D. Thain, T. Tannenbaum, and M. Livny. Distributed computing in practice: the condor experience:
Research articles. Concurr. Comput. : Pract. Exper., 17(2-4):323–356, 2005.
28. J. Waldo. Remote procedure calls and java remote method invocation. IEEE Concurrency, 6(3):5–7,
1998.
29. A. YarKhan, J. Dongarra, and K. Seymour. GridSolve: the evolution of the network enabled solver. In
Proceedings of IFIP WoCo9, Prescott, AZ, USA, July 2006.
| 19,474
|
{
"disclaimer": "Notice: The following paper fields have been elided by the publisher: {'references', 'abstract'}. Paper or abstract available at https://api.unpaywall.org/v2/10.1002/cpe.1592?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1002/cpe.1592, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "other-oa",
"status": "GREEN",
"url": "https://hal.archives-ouvertes.fr/hal-00686074/file/PEER_stage2_10.1002%252Fcpe.1592.pdf"
}
| 2,011
|
[
"JournalArticle"
] | true
| 2011-01-01T00:00:00
|
[] | 19,474
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Medicine",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/0086726ba2e54cbdd6545f7af61703c9816728ca
|
[
"Computer Science",
"Medicine"
] | 0.856825
|
Smart contracts software metrics: A first study
|
0086726ba2e54cbdd6545f7af61703c9816728ca
|
PLoS ONE
|
[
{
"authorId": "2192634",
"name": "R. Tonelli"
},
{
"authorId": "2048851",
"name": "Giuseppe Destefanis"
},
{
"authorId": "144083401",
"name": "M. Marchesi"
},
{
"authorId": "3348154",
"name": "Marco Ortu"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"Plo ONE",
"PLOS ONE",
"PLO ONE"
],
"alternate_urls": [
"http://www.plosone.org/"
],
"id": "0aed7a40-85f3-4c66-9e1b-c1556c57001b",
"issn": "1932-6203",
"name": "PLoS ONE",
"type": "journal",
"url": "https://journals.plos.org/plosone/"
}
|
Smart contracts (SC) are software programs that reside and run over a blockchain. The code can be written in different languages with the common purpose of implementing various kinds of transactions onto the hosting blockchain. They are ruled by the blockchain infrastructure with the intent to automatically implement the typical conditions of traditional contracts. Programs must satisfy context-dependent constraints which are quite different from traditional software code. In particular, since the bytecode is uploaded in the hosting blockchain, the size, computational resources, interaction between different parts of the program are all limited. This is true even if the specific programming languages implement more or less the same constructs as that of traditional languages: there is not the same freedom as in normal software development. The working hypothesis used in this article is that Smart Contract specific constraints should be captured by specific software metrics (that may differ from traditional software metrics). We tested this hypothesis on 85K Smart Contracts written in Solidity and uploaded on the Ethereum blockchain. We analyzed Smart Contracts from two repositories “Etherscan” and “Smart Corpus” and we computed the statistics of a set of software metrics related to Smart Contracts and compared them to the metrics extracted from more traditional software projects. Our results show that generally, Smart Contract metrics have more restricted ranges than the corresponding metrics in traditional software systems. Some of the stylized facts, like power law in the tail of the distribution of some metrics, are only approximate but the lines of code follow a log-normal distribution which reminds us of the same behaviour already found in traditional software systems.
|
**Citation:** Tonelli R, Pierro GA, Ortu M, Destefanis G (2023) Smart contracts software metrics: A first study. PLoS ONE 18(4): e0281043. https://doi.org/10.1371/journal.pone.0281043
**Editor:** Sathishkumar V E, Hanyang University, KOREA, REPUBLIC OF

**Received:** November 3, 2022

**Accepted:** January 16, 2023

**Published:** April 12, 2023
**Copyright:** © 2023 Tonelli et al. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
**Data Availability Statement:** All data files are publicly available from the GitHub database (https://github.com/aphd/smart-corpus-api).
**Funding:** This study was financially supported by the Italian Ministry of University and Research (MUR) in the form of a grant (MUR 4 - Public research - PRIN 2020 cup F73C22000430001) to R.T. and A.P, and in the form of a grant to M.O (CUP: PE00000018). This work was also financially supported by Fondazione Di Sardegna in the form of a grant (2020/22, F72F20000190007) to R.T. and A.P. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

**Competing interests:** The authors have declared that no competing interests exist.
RESEARCH ARTICLE
## Smart contracts software metrics: A first study
**Roberto Tonelli[1]*, Giuseppe Antonio Pierro[1], Marco Ortu[2], Giuseppe Destefanis[3]**
1 Dept. of Computer Science and Mathematics, University of Cagliari, Cagliari, Italy, 2 Dept. of Economics
and Business Sciences, University of Cagliari, Cagliari, Italy, 3 Dept. of Computer Science, Brunel University,
Uxbridge, London, United Kingdom
- [email protected]
### Abstract
Smart contracts (SC) are software programs that reside and run over a blockchain. The
code can be written in different languages with the common purpose of implementing various kinds of transactions onto the hosting blockchain. They are ruled by the blockchain
infrastructure with the intent to automatically implement the typical conditions of traditional
contracts. Programs must satisfy context-dependent constraints which are quite different
from traditional software code. In particular, since the bytecode is uploaded in the hosting
blockchain, the size, computational resources, interaction between different parts of the
program are all limited. This is true even if the specific programming languages implement
more or less the same constructs as that of traditional languages: there is not the same freedom as in normal software development. The working hypothesis used in this article is that
Smart Contract specific constraints should be captured by specific software metrics (that
may differ from traditional software metrics). We tested this hypothesis on 85K Smart Contracts written in Solidity and uploaded on the Ethereum blockchain. We analyzed Smart
Contracts from two repositories “Etherscan” and “Smart Corpus” and we computed the
statistics of a set of software metrics related to Smart Contracts and compared them to the
metrics extracted from more traditional software projects. Our results show that generally,
Smart Contract metrics have more restricted ranges than the corresponding metrics in
traditional software systems. Some of the stylized facts, like power law in the tail of the
distribution of some metrics, are only approximate but the lines of code follow a log-normal
distribution which reminds us of the same behaviour already found in traditional software
systems.
#### 1 Introduction
Smart Contracts have gained tremendous popularity in the past few years, to the point that billions of US Dollars are currently exchanged every day using this technology. However, since the release of the Ethereum platform in 2015, there have been many cases in which the execution of Smart Contracts managing Ether coins led to problems or conflicts. Smart Contracts rely on a non-standard software life-cycle: for instance, delivered applications can hardly be updated, nor can bugs be resolved by releasing a new version of the software. Furthermore, their code must satisfy constraints typical of the domain, such as the following:
- they must be light: Smart Contract definitions are limited in size because of structural constraints imposed by the blockchain infrastructure and the mining cost;
- Smart Contract execution has a per-operation cost, so execution must be limited;
- once published, Smart Contracts are immutable: a blockchain is based on an append-only mechanism, so code, in the form of bytecode, is inserted into a blockchain block once and forever [1];
- floating-point values cannot be used: all nodes must agree on the blockchain status, which is incompatible with floating-point numbers being rounded differently on machines with different precision;
- random number generators cannot be used for the same reason, and hashing functions are commonly used in their place.
The idea of Smart Contracts was originally described by cryptographer Nick Szabo in 1997,
as a kind of digital vending machine [2].
Smart contracts are self-applying agreements, or contracts, implemented through a computer program whose execution enforces the terms of the contract. The idea is to remove the central supervisory authority, entity or organization that both parties must trust, and to delegate that role to the correct execution of a computer program. Such a scheme can therefore count on a decentralized system managed automatically by computers, and blockchain technology is the tool to deliver the trust model envisaged by smart contracts.
Since smart contracts are stored on a blockchain, they are public and transparent, immutable and decentralised; and since blockchain resources are costly, their code size must respect domain-specific constraints. Immutability means that once a smart contract is created, it can never be changed.
Smart contracts can be applied to many different scenarios: banks could use them to issue
loans or to offer automatic payments; insurance companies could use them to automatically
process claims according to agreed terms; postal companies could use them for payments
on delivery. In the following, we mainly refer to the Ethereum technology, without loss of generality.
A Smart Contract (SC) is a full-fledged program stored in a blockchain by a contract-creation transaction. A SC is identified by a contract address generated upon a successful creation transaction. A blockchain state is therefore a mapping from addresses to accounts. Each SC account holds an amount of virtual coins (Ether in our case), and has its own private state and storage.
Fig 1 illustrates how smart contracts work by comparing smart contracts to traditional contracts. “Smart contracts” differ from traditional contracts in that they are computer programs
that automate certain aspects of an agreement between two parties through the use of blockchain technology. Indeed, blockchains provide security, permanence, and immutability
through the replication of the smart contract code across multiple nodes.
The most used SC programming language is Solidity, which runs on the Ethereum Virtual Machine (EVM) on the Ethereum blockchain. Since this is currently the most popular paradigm, we focus our attention on Solidity. An Ethereum SC account hence typically holds its executable code and a state consisting of (a toy model is sketched after the list):
- a private storage
- the amount of virtual coins (Ether) it holds, i.e. the contract balance.
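The following minimal sketch is an assumption-laden toy model, not Ethereum's actual data structures; it simply summarizes the account state just listed, together with the blockchain state as a mapping from addresses to accounts.

```python
from dataclasses import dataclass, field

@dataclass
class ContractAccount:
    """Toy model of an Ethereum SC account as described above."""
    address: bytes
    code: bytes                                   # immutable EVM bytecode
    balance_wei: int = 0                          # Ether balance, in wei
    storage: dict = field(default_factory=dict)   # private key/value storage

# A blockchain state maps addresses to accounts.
BlockchainState = dict  # address (bytes) -> ContractAccount
```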
Users can transfer Ether coins using transactions, like in Bitcoin, and can additionally invoke contracts using contract-invoking transactions. Conceptually, Ethereum can be viewed
**Fig 1. Smart contract vs. traditional contract.**
[https://doi.org/10.1371/journal.pone.0281043.g001](https://doi.org/10.1371/journal.pone.0281043.g001)
as a huge transaction-based state machine, where its state is updated after every transaction
and stored in the blockchain.
Smart Contract source code manipulates variables in the same way as traditional imperative programs. At the lowest level, the code of an Ethereum SC is a stack-based bytecode language run by the Ethereum Virtual Machine (EVM) on each node. SC developers define contracts using high-level programming languages. One such language for Ethereum is Solidity [3] (a JavaScript-like language), which is compiled into EVM bytecode. Once a SC is created at an address X, it is possible to invoke it by sending a contract-invoking transaction to the address X. A contract-invoking transaction typically includes (see the sketch below):
- payment (to the contract) for the execution (in Ether);
- input data for the invocation.
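As a minimal sketch, and using illustrative field names rather than Ethereum's exact wire format, a contract-invoking transaction can be modelled as a record carrying the payment and the input data, plus the gas parameters discussed in Section 1.2.

```python
def make_invoking_transaction(sender, contract_address, value_wei,
                              input_data, gas_limit, gas_price):
    """Illustrative model of a contract-invoking transaction."""
    return {
        "from": sender,           # invoking account
        "to": contract_address,   # address X of the deployed contract
        "value": value_wei,       # payment to the contract, in wei
        "data": input_data,       # encoded call payload (msg.data)
        "gasLimit": gas_limit,    # max gas the sender is willing to burn
        "gasPrice": gas_price,    # price per gas unit, in wei
    }
```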
#### 1.1 Working example
Fig 2 shows a simple example of a SC, reported in [4], which rewards anyone who solves a problem and submits the solution to the SC. This contract has been selected as an example of old-style Solidity smart contracts; in fact, many of the constructs it uses are now deprecated, but it is instructive since it also illustrates how the Solidity language, and the metrics used on it, have changed over time.
A contract-creation transaction containing the EVM bytecode for the contract in Fig 2 is
sent to miners. Eventually, the transaction will be accepted in a block, and all miners will
**Fig 2. Smart contracts example.**
[https://doi.org/10.1371/journal.pone.0281043.g002](https://doi.org/10.1371/journal.pone.0281043.g002)
update their local copy of the blockchain: first a unique address for the contract is generated in the block, then each miner locally executes the constructor of the Puzzle contract, and a local storage is allocated in the blockchain. Finally, the EVM bytecode of the anonymous function of **Puzzle** (Lines 16+) is added to the storage.
When a contract-invoking transaction is sent to the address of Puzzle, the function defined
at Line 16 is executed by default. All information about the sender, the amount of Ether sent to
the contract, and the input data of the invoking transaction are stored in a default input variable called msg. In this example, the owner (namely the user that created the contract) can
update the reward (Line 21) by sending Ether coins stored in msg.value (if statement at
Line 17), after sending back the current reward to the owner (Line 20).
In the same way, any other user can submit a solution to Puzzle by a contract-invoking
transaction with a payload (i.e., msg.data) to claim the reward (Lines 22-29). When a correct solution is submitted, the contract sends the reward to the sender (Line 26).
#### 1.2 Gas system
It is worth remarking that a Smart Contract is run on the blockchain by each miner deterministically replicating the execution of the Smart Contract's bytecode on the local copy of the blockchain. To guarantee coherence across the copies of the blockchain, code must therefore be executed in a strictly deterministic way (which is why, for instance, the generation of random numbers may be problematic).
Solidity and, in general, the high-level Smart Contract languages of Ethereum are Turing complete. Note that in a decentralised blockchain architecture Turing completeness may be problematic: e.g., the replicated execution of infinite loops could potentially freeze the whole network.
To ensure fair compensation for expended computation effort and to limit the use of resources, Ethereum pays miners fees proportional to the required computation. Specifically, each instruction in the Ethereum bytecode requires a pre-specified amount of gas (paid in Ether coins). When users send a contract-invoking transaction, they must specify the amount of gas provided for the execution, called gasLimit, as well as the price for each gas unit, called gasPrice. A miner who includes the transaction in his proposed block receives the transaction fee corresponding to the amount of gas that the execution has actually burned, multiplied by gasPrice. If an execution requires more gas than gasLimit, it terminates with an exception, and the state is rolled back to the initial state of the execution. In this case the user still pays for the whole gasLimit, as a counter-measure against resource-exhausting attacks [5].
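The fee arithmetic just described can be summarized in a short sketch (a simplification that ignores refunds and block-level gas limits):

```python
def settle_invocation(gas_limit, gas_price, gas_used):
    """Return (fee_in_wei, succeeded) for a contract-invoking transaction.

    Simplified model of the gas accounting described above.
    """
    if gas_used > gas_limit:
        # Out of gas: execution aborts, state is rolled back, but the
        # sender still pays for the full gasLimit (anti-DoS measure).
        return gas_limit * gas_price, False
    # Normal case: the miner earns gasUsed * gasPrice.
    return gas_used * gas_price, True

# Example: a call that burns 21,000 gas at 2e9 wei per unit
fee, ok = settle_invocation(gas_limit=100_000, gas_price=2_000_000_000,
                            gas_used=21_000)
assert ok and fee == 42_000_000_000_000  # 4.2e13 wei
```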
The code in Fig 2 displays typical features of Solidity Smart Contract code: the Contract declaration, address declarations and mappings, owner-data management, and the functions with the specific code implementing the contract and the transactions between blockchain addresses. Most of the control structures from JavaScript are available in Solidity, except for switch and goto. So there are: if, else, while, do, for, break, continue, return [6], with the usual semantics known from C or JavaScript.
Functions of the current contract can be called directly (Internal Function Calls), also recursively. These function calls are translated into simple jumps inside the EVM. This has the effect that the current memory is not cleared, i.e., passing memory references to internally-called functions is very efficient. Only functions of the same contract can be called internally.
The expressions this.g(); and c.g(); (where c is a contract instance) are also valid function calls, but this time the function is called as an External Function Call, via a message call and not directly via jumps. Functions of other contracts have to be called externally. For an external call, all function arguments have to be copied to memory. When calling functions of other contracts, the amount of cryptocurrency (Wei) sent with the call and the gas can be specified with the special options .value() and .gas(), respectively. Inheritance between contracts is also supported.
Since Smart Contracts are closely related to the classes of object-oriented programming languages, it is straightforward to define and compute some of the software metrics typically encountered in object-oriented software systems, such as the number of lines of code, comments, the number of methods or functions, cyclomatic complexity and so on. It is somewhat more difficult to identify software metrics related to communication between smart contracts, since such communication is ruled by blockchain transactions among contracts, which can act somewhat like code libraries.
On the other hand, smart contracts are deployed and operate on the blockchain infrastructure, and it is thus likely that the typical values of these metrics differ from those found in traditional software systems.
It is thus interesting, even from a software engineering point of view, to perform a statistical analysis of Smart Contract software metrics and to compare the data with those displayed by traditional software systems. It would also be of primary interest to examine, in the specific domain of smart contracts, the connection between software metrics and software quality, a field of research well established for traditional software, given that Smart Contract code vulnerabilities are well known to have been exploited to steal cryptocurrency value from smart contracts [3, 5, 7, 8].
In this paper, we perform the analysis on a data set of 85K smart contracts downloaded from 1) etherscan.io, a platform allowing enhanced browsing of the Ethereum blockchain and smart contracts, and 2) Smart Corpus [9], an organized smart contract repository.
The motivation for this study arises from the need to measure software artifacts in the specific case of Smart Contract code. In fact, there are no studies involving a full statistical analysis of the metrics' properties for such software artifacts in the new paradigm of blockchain systems. Knowledge of the statistical properties of software metrics is fundamental for controlling the software production process and software quality, as well as for performing fault prediction and identifying code smells.
We collected the blockchain addresses, the Solidity source code, the ABI and the bytecode of each contract, and extracted a set of standard and SC-specific software metrics, such as the number of lines of smart contract code (LOCs), lines of comments, blank lines, number of functions, cyclomatic complexity, number of event calls, number of mappings to addresses, number of payable functions, number of modifiers, and so on. We analyzed the statistical distributions underlying such metrics to discover whether they exhibit the statistical properties typical of standard software systems [10–12], or whether the SC constraints produce a noticeable variation in these distributions. Furthermore, we devise a path towards analyzing which SC metrics influence, and to what extent, Smart Contract performance, usage in the blockchain, vulnerabilities, and possibly other factors related to the specific contracts. Such factors may be reflected in the application domain for which the smart contract has been deployed: for example, implementing and ruling an initial coin offering (ICO), or controlling a chain of certification as in medical applications.
#### 2 Related work
Blockchain technology and Smart Contracts have attracted exponentially increasing interest in recent years in different fields of research. Organizations such as banking and financial institutions, and public and regulatory bodies, have started to explicitly discuss the importance of these new technologies. Software engineering specific to blockchain applications and Smart Contracts is still in its infancy [13], and in particular the investigation of the relationships between Smart Contract Software Metrics (SCSM) and code quality, SC performance, vulnerability, maintainability and other software features is completely lacking. Smart Contracts and blockchain have been discussed in many textbooks [14] and documents over the internet, where white papers usually cover the specific topic of interest [15–19].
Ethereum defines a smart contract as a transaction protocol that executes the terms of a contract or group of contracts on a cryptographic blockchain [20]. Smart Contracts operate autonomously, with no entity controlling the majority of their tokens, and their data and records of operation must be cryptographically stored in a public, decentralized blockchain [14].
Smart Contract vulnerabilities have been analyzed in [21–23]. A taxonomy of Smart Contracts is given in [22], where Smart Contracts are classified according to their purpose into wallets, financial, notary, game, and library contracts.
The authors in [4] investigate the security of running Ethereum-based smart contracts in an open distributed network like those of cryptocurrencies, and introduce several new security problems in which an adversary can manipulate smart contract execution to gain profit.
Obviously, the scientific literature on Smart Contracts is limited, owing to their recent introduction. On the other hand, there is a plethora of results produced in recent decades concerning the relationships between software metrics and software quality, maintainability, reliability, performance, defectiveness and so on.
Measuring software to get information about its properties and quality is one of the main
issues in modern software engineering.
Limiting ourselves to object-oriented (OO) software, one of the first works dealing with this problem is that of Chidamber and Kemerer (CK), who introduced the popular CK metrics suite for OO software systems [24]. Different empirical studies have shown significant correlations between some of the CK metrics and bug-proneness [24–28]. Metrics have also been defined on software graphs, and were found to correlate strongly with software quality [29–32]. Tosun et al. applied Social Network Analysis to OO source code metrics to assess their defect prediction performance [33].
The CK suite [34] is historically the most widely adopted and validated for analyzing the bug-proneness of software systems [24, 27].
The CK suite was adopted by practitioners [24] and is also incorporated into several industrial software development tools. Based on the study of eight medium-sized systems developed by students, Basili et al. [25] were among the first to find that object-oriented metrics are correlated with defect density. Considering industry data from software developed in C++ and Java, Subramanyam and Krishnan [26] showed that CK metrics are significantly associated with defects. Among others, Gyimóthy et al. [27], studying an open source system, validated the usefulness of these metrics for fault-proneness prediction.
CK metrics are intended to measure the degree of coupling and cohesion of classes in object-oriented software contexts. Statistical analysis has also been used in the literature to detect typical features of complex software and to relate statistical properties to software quality.
Recently, some researchers have started to study the field of software to find and study associated power-law distributions. In fact, many software systems have reached such a huge dimension that it makes sense to treat them using the stochastic random graph approach [35].
Examples of these properties are the lines of code of a class, a function or a method; the number of times a function or a method is called in the system; the number of times a given name is given to a method or a variable; and so on.
Some authors already found significant power-laws in software systems. Cai and Yin [11]
found that the degree distribution of software execution processes may follow a power-law
or display small-world effects. Potanin et al. [36] showed that the graphs formed by runtime objects, and by the references between them in object-oriented applications, are characterized by a power-law tail in the distribution of node degrees. Valverde et al. [37, 38]
found similar properties studying the graph formed by the classes and their relationships in
large object-oriented projects. They found that software systems are highly heterogeneous
small world networks with scale-free distributions of the connection degree. Wheeldon and
Counsell [12] identified twelve power laws in object-oriented class relationships of Java programs. In particular, they analyzed the distribution of class references, methods, constructors, fields and interfaces in classes, and the distribution of method parameters and return
types. Myers [39] found similar results on large C and C++ open source systems, considering the collaboration diagrams of the modules within procedural projects and of the classes
within object-oriented projects. He also computed the correlation between some metrics concerning software size and graph topological measures, revealing that nodes with
large output degree tend to evolve more rapidly than nodes with large input degree. Other
authors found power-laws studying C/C++ source code files, where the graph nodes are the
files, while the "include" relationships between them are the links [40, 41]. Tamai and Nakatani [42] proposed a statistical model to analyze and explain the distributions found for the
number of methods per class, and for the lines of code per method, in a large object-oriented
system.
While most of these studies are based on static languages, such as C++ and Java, Marchesi
et al. [43] provide evidence that a similar behavior is also displayed by dynamic languages such
as Smalltalk. Concas et al. found power-law and log-normal distributions in some properties
of Smalltalk and Java software systems: the number of times a name is given to a variable or a
method, the number of calls to methods with the same name, and the number of immediate subclasses of a given class, in five large object-oriented software systems [10, 44]. The Pareto principle has been used to describe how faults in large software systems are distributed over modules [45–
49]. Baxter et al. [50] found power-law and log-normal distributions in the class relationships
in Java programs. They proposed a simple generative model that reproduces the features
observed in real software graph degree distributions. Ichii et al. [51] investigated software component graphs composed of Java classes, finding that the in-degree distribution follows a power
law while the out-degree distribution does not. Louridas et al.
[52], in a recent work, show that the distributions of incoming and outgoing links have in common
long, fat tails at different levels of abstraction, in diverse systems and languages (C, Java, Perl
and Ruby). They report the impact of their findings on several aspects of software engineering:
reuse, quality assurance and optimization.
Given the vast literature investigating power-law distributions in software systems, we chose to investigate these properties in SC software as well, not only to look for power-law behaviour, but also because some features are related to design and coding guidelines, to software quality, and to the Chidamber and Kemerer (CK) NOC metric [24].
Wheeldon and Counsell [12], as well as other researchers, found power-laws in the distributions of many software properties, such as the number of fields, methods and constructors of classes, the number of interfaces implemented by classes, the number of subclasses of each class, as well as the number of classes referenced as field variables and the number of classes which contain references to classes as field variables. Thus, there is much evidence that power-laws are a general feature of software systems. Concas et al. [44] explained the underlying mechanism through a model based on a single Yule process in place during software creation and evolution.
More recently, affect metrics have been investigated, revealing how productivity and software quality during software development can be highly influenced by developers' moods [53–58].
In [59], the authors review papers relating to smart contract metrics and five other specific topics: smart contract testing, smart contract code analysis, smart contract security, Dapp performance, and blockchain applications.
A few studies have investigated SC metrics and collected curated repositories of SCs [9, 59–62].
In [63], the authors examined SCs extracted from various Ethereum blockchain-oriented software projects hosted on GitHub.com, also extracting a suite of object-oriented metrics to evaluate their structural characteristics.
More recently, deep learning has been applied [64, 65] to develop a framework for detecting fraudulent smart contracts on blockchain systems, and hybrid deep learning models combining different word embedding methods for smart contract vulnerability detection.
#### 3 Experimental set-up
Etherscan [66] is a web-based platform which allows exploration of all Ethereum blockchain addresses. It allows one to recover Smart Contract bytecode and ABIs, and it also collects Smart Contract source code written in Solidity. Part of the data used in this paper (15% of the total) was retrieved by analyzing the blockchain addresses related to the source code available on Etherscan. These addresses were used to systematically download the code of the Solidity contracts, as well as the bytecode and the information associated with the ABI.
The smart contracts analyzed in this study can be found online through a tool named Smart Corpus [9]. Smart Corpus is a collection of over 100K smart contracts categorized by software metrics (number of lines of code, cyclomatic complexity, etc.) and use cases (banks, finance, betting, hectares, etc.). A detailed description of the Smart Corpus tool and its related publication can be found online (https://aphd.github.io/smart-corpus/). After collecting and locally storing the Solidity code, bytecode, and ABI information, we built a code parser to extract the software metrics of interest for each smart contract. We also manually explored the code to gain insight into the most relevant information to extract from the data, and to get a flavour of the main features of the overall dataset. This exploratory analysis allowed us to note that the same contract code is often replicated and deployed to different blockchain addresses, or deployed with very small changes. This pattern reveals that many contracts are simply experiments, or are deployed to the blockchain for testing and then modified according to the test results; they usually appear in a series of neighbouring blockchain blocks. The dataset thus has a small bias, but the overall effect is negligible in our analysis, since there are very few cases of replicated Solidity code.
The dataset source code was then parsed to compute the total lines of code associated with a specific blockchain address, the number of smart contracts inside a single address's code (the analogue of classes in Java files, i.e., compilation units), blank lines, comment lines, the number of static calls to events, the number of modifiers, the number of functions, the number of payable functions, the cyclomatic complexity in its simplest McCabe definition [67] (sketched below), and the number of mappings to addresses.
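As an illustration of the simplest McCabe definition used here, the following sketch counts decision points in Solidity source. It is a naive stand-in for the actual parser (it does not strip comments or string literals), and the keyword set is our own assumption.

```python
import re

# Decision points in the simplest McCabe definition: branching keywords
# plus short-circuit operators and the ternary "?".
BRANCH_RE = re.compile(r"\b(?:if|for|while|do)\b|&&|\|\||\?")

def cyclomatic_complexity(solidity_source: str) -> int:
    """1 + number of decision points (naive keyword count)."""
    return 1 + len(BRANCH_RE.findall(solidity_source))

example = "function f(uint x) public { if (x > 0 && x < 10) { x += 1; } }"
print(cyclomatic_complexity(example))  # 3: one path + if + &&
```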
We also computed the size of the associated bytecode and of the vector of the contract's ABIs. The Application Binary Interfaces define the interface of any smart contract; they are known at compilation time and are static. All contracts have the interface definitions of any contracts they call available at compile time [68]. This specification does not address contracts whose interface is dynamic or otherwise known only at run time.
The data set is structured to keep track of the specific Smart Contract address, so that any blockchain-address-related Smart Contract metrics (SCEM: smart contract external metrics) can be fully analyzed in relation to the software metrics self-contained in the Smart Contract's Solidity code (SCIM: smart contract internal metrics). For example, it is possible to investigate interactions with other Smart Contracts, gas consumption and cryptocurrency exchanges.
ABI metrics in particular describe the Smart Contract interface, and reflect the external exposure of the Smart Contract towards blockchain calls from other addresses, which may be interactions with other Smart Contracts as well.
It is worth noting that not all the measures related to addresses stay constant: many of them depend on the time of analysis and cannot be counted among the Smart Contract metrics, and others are simply contract variables, like the amount of Ether stored in the contract, the number of owners in a multi-owned contract, the contract's performance, or its popularity in terms of calls to the contract. In such cases, much care is needed when evaluating the relationship between Smart Contract software metrics and other blockchain-related measures, not only because they may be time-varying, but also because other external factors can be at play. For example, the success of a contract could be defined in terms of calls to that contract, but if the contract implements an Initial Coin Offering, then most likely the contract in itself, measured as software code, has little to do with that success.
For each software metric we computed standard statistics such as the mean, median, maximum and minimum values, and the standard deviation. Furthermore, we verified what kind of statistical distribution these metrics belong to. This is particularly important when comparing Smart Contract source code metrics with other source code metrics, e.g., Java source code, in standard software projects. In fact, the literature on software metrics demonstrates that there exist statistical distributions which are typical of specific metrics, regardless of the programming language used for software development [69].
In particular, LOC, coupling metrics like fan-in and fan-out, and other software metrics are known to display a fat tail in their statistical distribution [52], regardless of the programming language, the platform or the software paradigm adopted for a software project.
Due to the domain-specific constraints Smart Contract software must satisfy, in particular limited size and resources, it is not guaranteed that such software metrics follow the canonical statistical distributions found in general-purpose software projects. One of the aims of this research is to verify and discuss this conjecture.
#### 4 Results
The smart contracts' source code was analysed with a tool named PASO, which represents a smart contract's source code as an abstract syntax tree (AST). Based on the AST, software metrics and patterns in smart contract code have been evaluated and computed. Detailed information about this tool and its publication can be found online (https://aphd.github.io/paso/).
We started by analyzing centrality and dispersion measures for all the computed metrics: mean, median, standard deviation, interquartile range, and total variation range. These statistics provide a summary of the overall behavior of the metrics' values. In particular, for asymmetric distributions the centrality measures differ from one another, and in the case of power-law distributions the largest values of the metrics can be orders of magnitude larger than central and low values.
Many minimum values are zero, since there are a few contracts with almost no code. The central tendency measures in Table 1 show that the mean is consistently larger
**Table 1. Centrality and dispersion statistics computed for all the Smart Contract software metrics.**
| variable | Mean | Median | Std | Min | Max | IQR | 10th | 90th |
|---|---|---|---|---|---|---|---|---|
| total_lines | 586.96 | 317.00 | 937.23 | 1 | 25,920 | 525.00 | 93.00 | 1,373.00 |
| blanks | 91.69 | 54.00 | 160.31 | 0 | 4,045 | 77.00 | 13.00 | 201.00 |
| functions | 44.96 | 28.00 | 66.27 | 0 | 1,256 | 36.00 | 9.00 | 95.00 |
| payable | 2.00 | 1.00 | 6.40 | 0 | 205 | 2.00 | 0.00 | 5.00 |
| events | 5.08 | 3.00 | 6.08 | 0 | 137 | 4.00 | 1.00 | 11.00 |
| mapping | 4.11 | 3.00 | 4.67 | 0 | 155 | 2.00 | 0.00 | 8.00 |
| modifiers | 1.86 | 1.00 | 2.48 | 0 | 40 | 3.00 | 0.00 | 5.00 |
| contracts | 7.29 | 5.00 | 9.52 | 1 | 227 | 6.00 | 2.00 | 14.00 |
| interfaces | 1.28 | 0.00 | 2.55 | 0 | 52 | 1.00 | 0.00 | 5.00 |
| libraries | 1.22 | 1.00 | 1.87 | 0 | 36 | 2.00 | 0.00 | 3.00 |
| addresses | 55.27 | 36.00 | 91.31 | 0 | 2,500 | 40.00 | 9.00 | 108.00 |
| cyclomatic | 66.50 | 36.00 | 105.66 | 0 | 2,318 | 55.00 | 13.00 | 146.00 |
| comments | 72.77 | 38.00 | 198.16 | 0 | 25,536 | 68.00 | 1.00 | 154.00 |
| abiLength | 221.60 | 144.00 | 586.81 | 0 | 34,728 | 113.00 | 66.00 | 310.00 |
| abiStringLength | 4,644 | 3,886 | 3,282 | 2 | 48,274 | 3,030 | 1,671 | 8,375 |
| bytecode | 12,483 | 9,606 | 9,953 | 2 | 49,152 | 10,714 | 3,336 | 26,921 |
| LOC | 306.63 | 167.00 | 529.08 | 1 | 14,151 | 240.75 | 64.00 | 663.00 |
| block | 47.83 | 28.00 | 72.34 | 0 | 1,534 | 39.00 | 10.00 | 102.00 |
| isFallback | 0.38 | 0.00 | 0.55 | 0 | 8 | 1.00 | 0.00 | 1.00 |
| isVirtual | 4.70 | 0.00 | 17.98 | 0 | 462 | 0.00 | 0.00 | 18.00 |
| pure | 5.58 | 4.00 | 9.67 | 0 | 209 | 7.00 | 0.00 | 13.00 |
| view | 12.22 | 6.00 | 28.86 | 0 | 650 | 14.00 | 0.00 | 33.00 |
[https://doi.org/10.1371/journal.pone.0281043.t001](https://doi.org/10.1371/journal.pone.0281043.t001)
than the median (the median is almost always about two thirds of the mean), a feature typical of right-skewed distributions. One simple reason for this is the lower bound on all the metrics: they are by definition zero or positive, while in principle large values are not bounded. A small exception is the bytecode metric, whose mean and median are very close to each other, suggesting a distribution shape which may not really be skewed. The standard deviations are all comparable with the mean, indicating a large dispersion of values around it, but in no case is the standard deviation much larger than the mean or the median. Standard deviations much larger than the mean would instead be expected for power-law distributions, and such behavior has already been observed in software metrics for typical software systems [12, 44].
The maxima are all much larger than the corresponding means and medians, often by one or two orders of magnitude, and in a few cases by three.
Finally, the 90th percentiles are within a few standard deviations of the mean. All these results suggest that the selected Smart Contract metrics might not display the fat-tail or power-law distributions that are instead found in the literature for the corresponding metrics of standard software systems.
Nevertheless, outlier values appear for all the metrics, and the values in Table 1 do not completely explain their statistical properties.
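For reproducibility, summary statistics of the kind reported in Table 1 can be computed with a few lines of NumPy; this is our own illustrative sketch, not the authors' analysis code.

```python
import numpy as np

def metric_summary(values):
    """Centrality and dispersion statistics as reported in Table 1."""
    v = np.asarray(values, dtype=float)
    p10, q1, med, q3, p90 = np.percentile(v, [10, 25, 50, 75, 90])
    return {
        "Mean": v.mean(), "Median": med, "Std": v.std(ddof=1),
        "Min": v.min(), "Max": v.max(),
        "IQR": q3 - q1, "10th": p10, "90th": p90,
    }

# A right-skewed metric has Mean > Median, as observed above:
print(metric_summary([1, 2, 2, 3, 4, 5, 8, 13, 40]))
```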
Table 2 shows the statistics of Solidity programming statements computed for all the 85K Smart Contracts composing our dataset. Based on these statistics, a typical Smart Contract contains almost 10 if statements, 5 emit statements and 1.5 iteration statements. The same overall distribution of statement types was obtained in different periods of time with varying versions of Solidity, so the statistics tend to be relatively stable. Notably, the number of iteration
**Table 2. Statements statistics computed for all the Smart Contracts.**
| variable | Mean | Median | Std | Min | Max | IQR | 10th | 90th |
|---|---|---|---|---|---|---|---|---|
| ifStatement | 9.97 | 3.00 | 23.04 | 0 | 621 | 10.00 | 0.00 | 22.00 |
| doWhileStatement | 0.00 | 0.00 | 0.09 | 0 | 7 | 0.00 | 0.00 | 0.00 |
| emitStatement | 4.93 | 4.00 | 6.96 | 0 | 130 | 7.00 | 0.00 | 11.00 |
| whileStatement | 0.33 | 0.00 | 1.11 | 0 | 24 | 0.00 | 0.00 | 1.00 |
| forStatement | 0.95 | 0.00 | 2.26 | 0 | 13 | 1.00 | 0.00 | 3.00 |
| inlineAssemblyStatement | 0.90 | 0.00 | 2.98 | 0 | 81 | 1.00 | 0.00 | 2.00 |
| returnStatement | 21.80 | 14.00 | 30.05 | 0 | 712 | 19.00 | 3.00 | 45.00 |
| revertStatement | 0.01 | 0.00 | 0.30 | 0 | 37 | 0.00 | 0.00 | 0.00 |
| throwStatement | 0.53 | 0.00 | 2.96 | 0 | 75 | 0.00 | 0.00 | 0.00 |
| tryStatement | 0.06 | 0.00 | 0.41 | 0 | 25 | 0.00 | 0.00 | 0.00 |
[https://doi.org/10.1371/journal.pone.0281043.t002](https://doi.org/10.1371/journal.pone.0281043.t002)
Notably, the number of iteration statements per line of code (0.005) is two orders of magnitude smaller than in other programming languages such as Java (0.121), C and Python. The number of conditional statements per line of code (0.033) is one order of magnitude smaller than in other programming languages such as Java (0.142), C and Python. The third most used statement in Smart Contracts, after the return statement and the IF statement, is the EMIT statement, which is used to release an event in a Smart Contract; such events can be read by the client in a decentralized application (dApp).
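As a sanity check, these per-LOC ratios can be reproduced directly from the mean values reported in Tables 1 and 2. The following Python snippet is a minimal sketch of that arithmetic (the variable names are ours):

```python
# Mean values taken from Tables 1 and 2.
mean_loc = 306.63               # mean LOC per contract (Table 1)
mean_if = 9.97                  # mean IF statements per contract (Table 2)
mean_iter = 0.33 + 0.95 + 0.00  # while + for + do-while statements (Table 2)

print(f"iteration statements per LOC:   {mean_iter / mean_loc:.3f}")  # ~0.004
print(f"conditional statements per LOC: {mean_if / mean_loc:.3f}")    # ~0.033
```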
To perform a complete analysis, we proceed in two steps. We first perform a qualitative investigation analyzing the histograms for all the metrics; then we use more complex statistical models to best fit the Empirical Complementary Cumulative Distribution Function and extract quantitative information on Smart Contract software metrics. Histogram patterns are well known to depend on the bin size and number, as well as on the local density of points in the various ranges. Nevertheless, they can be a helpful instrument to gain insight into the general features of a distribution's shape, namely whether there may be fat tails, where the bulk of the distribution lies, and so on. By contrast, best-fitting statistical models provide precise values of core parameters and can be compared with those reported in the literature for standard software metrics.
In Figs 3–5 we report the histograms for all the Smart Contract software metrics, in the same order as in Table 1. To make the histograms more readable, the range of the last bin is highlighted with a different fill colour: the orange-colored bin aggregates the outliers. The general shapes fall into two categories. On one side there are metrics whose ranges of variation are quite limited, with maximum values below 250, like Payable, Events, Mapping and Modifier. For such metrics the histograms contain too few distinct values to display a power-law behavior. In particular, Payable and Modifier also appear bell shaped, which allows us to exclude a general power-law distribution. For Events and Mapping the shape may suggest a power-law behavior limited by the upper bounds reached by the maximum metric values; this deserves further investigation using statistical distribution modeling.
On the other side, the metrics that reach large enough values (maxima over 250) contain enough points to populate the histograms well. Also in this case many metrics have bell-shaped distributions with limited asymmetry and skewness, a feature that can be ascribed to the limited range of values these metrics can reach. In fact, in cases where a metric can assume virtually arbitrarily large values, many orders of magnitude larger than its mean, the bell shape disappears and the distribution presents a strong asymmetry with high skewness; this is the behavior observed in the literature for metrics in common software systems. The only cases where a full power-law distribution may approximately hold are those related to lines of code: total lines, blank lines, comments and LOC.
**Fig 3. Histogram distributions of the metrics Total lines, Blanks, Function and Payable.**
[https://doi.org/10.1371/journal.pone.0281043.g003](https://doi.org/10.1371/journal.pone.0281043.g003)
However, even in these cases the upper bound on the metric values does not fully allow for the power law. This seems to be a structural difference with respect to standard software systems, where the number of lines of code of a class, for example in Java systems, may easily reach tens of thousands. Such systems rely on service classes containing many methods and code lines, whilst Smart Contract code is basically self contained.
It is interesting to note the bell-shaped behavior of the ABI metrics and of the Bytecode metric, which strongly differs from the shapes associated with lines of code or, in general, with the other metrics. In the case of ABI this means that the amount of exposure of Smart Contracts to external interactions has a typical scale, with clear central values, even if the variance may be quite large. In other words, Smart Contract exposure to the blockchain is very similar for most of the contracts, with no significant outliers, regardless of the contract size in terms of LOC or other metrics. The bytecode displays a rather similar but less symmetric bell shape. In this case the behavior is clearly governed by the size constraints imposed by the cost of uploading very large Smart Contracts to the blockchain.
#### 4.1 Analysing distributions of the metrics grouped by the pragma version
This section analyzes the distribution of some software metrics, such as the number of lines of code (LOC), the number of empty lines (Blanks), the number of functions (Functions) and the number of payable functions (Payable), grouped by the pragma version.

**Fig 4. Histogram distributions of the metrics Events, Mapping, Modifier and Contract.**

[https://doi.org/10.1371/journal.pone.0281043.g004](https://doi.org/10.1371/journal.pone.0281043.g004)

The pragma version is a directive that specifies how a compiler should process its input; it is not part of the grammar of the Solidity programming language. The pragma version changes over time, as it identifies the state of the Solidity language as it is developed and released. Smart Contracts should be annotated following this directive, to avoid being compiled by future compiler versions that might introduce incompatible changes. Despite this recommendation, not all Smart Contracts follow the pragma directive: in the dataset of 85K Smart Contracts considered in this paper, 19% do not. However, only the Smart Contracts that follow the pragma directive are analysed here, to show possible changes or trends in how Smart Contracts are developed over time.
For the software metrics Functions, LOC and ABI, the peak of the distribution for Smart Contracts with pragma version 0.5.* directives is shifted to the right compared to Smart Contracts with pragma version 0.4.* directives. As for the shape of the curves, the distribution is broader for Smart Contracts with pragma version 0.5.* directives and progressively sharper for Smart Contracts with pragma version 0.4.* directives.
**Fig 5. Histogram distributions of the metrics Address, Cyclomatic, Comments, ABI, Bytecode and LOCS.**
[https://doi.org/10.1371/journal.pone.0281043.g005](https://doi.org/10.1371/journal.pone.0281043.g005)
#### 4.2 Analysis of the number of contracts, libraries and interfaces
This section analyzes the number of Contracts, Libraries and Interfaces used in Smart Contracts written in the Solidity language in the time frame from 2016 to 2021. Smart Contracts written in Solidity consist of a number of contract declarations. Contracts in Solidity are similar to classes in object-oriented programming (OOP) languages and, as in OOP languages, there are four types of contracts: Abstract Contracts, Interface Contracts, Concrete Contracts and Library Contracts. In the following sections, the definition of each contract type is provided, and the use of these different contract types over the years is analyzed.
**4.2.1 Abstract contract.** Contracts are marked as Abstract Contracts when at least one of their functions lacks an implementation, as in the following example (Listing 1).
**Listing 1. Abstract Contract Example**

    // Abstract Contract
    contract Notify
    {
        event Notified(address indexed _from, uint indexed _amount);
        // function signature
        function notify(address _from, uint _amount) public returns (bool);
    }
Functions that lack an implementation are named Abstract Functions. If a contract extends an Abstract Contract, it has to implement or define all the Abstract Functions of the extended Abstract Contract; otherwise, it will be an Abstract Contract itself. Abstract Contracts allow the use of patterns, such as the Template Method design pattern, and they help remove code duplication.
**4.2.2 Interfaces and libraries.** Interface Contracts were introduced in Solidity v0.4.11 on 3rd May 2017 [7]. An Interface Contract is similar to an Abstract Contract, but it cannot have any functions implemented, and there are further restrictions: for example, it cannot inherit from other Contracts or Interfaces. Interface Contracts allow decoupling the definition of a contract from its implementation, providing better extensibility: once a Contract Interface is defined, new implementations of its functions can be provided without modifying their declarations. Interface Contracts are denoted by the interface keyword, as in the following example (Listing 2).
**Listing 2. Interface Contract Example**

    // Interface Contract
    interface Notify
    {
        event Notified(address indexed _from, uint indexed _amount);
        // function signature
        function notify(address _from, uint _amount) public returns (bool);
    }
A Concrete Contract provides the implementation of all functions declared in the body of the contract. When a Concrete Contract implements an Interface Contract, it must provide the implementation of all the functions defined within the implemented Interface. If a contract extends an Abstract Contract, it needs to provide implementations for all functions not implemented in the extended Abstract Contract.
Library Contracts are similar to Concrete Contracts, but their purpose is different. A library is a type of contract that does not allow functions, such as payable and fallback functions, which provide a mechanism to collect or receive funds in Ether. These limitations are enforced at compile time, therefore making it impossible for a library to hold funds. A library is defined with the keyword library (library C {}) in the same way a contract is defined (contract A {}). Library Contracts are used to extract code away from the other Contracts for maintainability and reuse purposes.
Figs 6 and 7 show a growing trend in many software metrics, such as the average number of LOC, Bytecode, number of interfaces, number of libraries and programming statements, until Solidity version 0.7. Starting from Solidity version 0.8 the trend is reversed. A plausible explanation for this trend can be found in the changes to the features of the Solidity programming language described in Section 6.

**Fig 6. The average number of interfaces and libraries in Smart Contract.**

[https://doi.org/10.1371/journal.pone.0281043.g006](https://doi.org/10.1371/journal.pone.0281043.g006)
**Fig 7. The average number of LOC and Bytecodes per Smart Contract.**

[https://doi.org/10.1371/journal.pone.0281043.g007](https://doi.org/10.1371/journal.pone.0281043.g007)

Fig 8 shows the frequency distribution of Lines of Code (LOC) for Smart Contracts written with Solidity version 0.4 (from 2016 to 2018) and with Solidity version 0.8 (from 2020 onwards). Many Smart Contracts written before 2017 are in the LOC range from 0 to 500, while most of the Smart Contracts written after 2020 are in a larger LOC range, between 0 and 1000. Moreover, the number of Smart Contracts with a LOC between 4K and 14K is one order of magnitude greater for Smart Contracts written after 2020.

**Fig 8. Smart Contracts’ LOC distribution vs. pragma version.**

[https://doi.org/10.1371/journal.pone.0281043.g008](https://doi.org/10.1371/journal.pone.0281043.g008)

**4.2.3 Replicated smart contracts.** In this section we explain when and why we consider two Smart Contracts to be different. This is important for the aims of the paper, because the results depend on the definition of replicated Smart Contracts. The following features of Smart Contracts motivate this section:
- Distinguishability. Each Smart Contract in the Ethereum Blockchain is distinguishable from any other, as it is identified by a unique address, i.e. a 160-bit hash, and its code is stored on the blockchain. Smart Contracts can be deployed on the network by a user, by another Smart Contract or by a cryptocurrency wallet. Each time a Smart Contract is deployed on the network, either the main network or a test network, a unique address is associated with it, even when the source code of two or more Smart Contracts is the same.
- Immutability. A user has no permission to change any Smart Contract deployed on the Blockchain. For example, if the user wants to correct a bug, s/he is forced to redeploy the Smart Contract with a new unique address. As a result, on the blockchain there might be two or more almost identical Smart Contracts with different addresses. The fact that different addresses can refer to the same Smart Contract leads us to suppose that many Smart Contracts might simply be “experiments”, i.e. contracts deployed on the blockchain for testing and then modified according to the test results.
- Inheritance. The languages used to write Smart Contracts, such as Solidity, support multiple inheritance. When a Smart Contract inherits from multiple Smart Contracts, only a single Smart Contract is created on the blockchain, and the code of all the inherited Smart Contracts is copied into the new Smart Contract.
Based on these features, three ways to define the uniqueness of a Smart Contract can be outlined (a minimal code sketch of these criteria follows the list):

- Smart Contract A is different from Smart Contract B because A and B have distinguishable addresses.

- Smart Contract A is different from Smart Contract B if there is at least one different metric value.

- Smart Contract A is different from Smart Contract B, both inheriting from the same Smart Contract C, if the shared part of C does not exceed a given threshold, for example 80% of the code lines (LOC).
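The following Python sketch illustrates the three criteria; the function and field names are ours, purely for illustration:

```python
# Hypothetical per-contract records: deployment address plus a few metrics.
METRICS = ("LOC", "functions", "events", "mapping")

def different_by_address(a: dict, b: dict) -> bool:
    # Criterion 1: every deployment gets a unique 160-bit address.
    return a["address"] != b["address"]

def different_by_metrics(a: dict, b: dict) -> bool:
    # Criterion 2: at least one metric value differs.
    return any(a[m] != b[m] for m in METRICS)

def different_by_shared_code(shared_loc: int, total_loc: int,
                             threshold: float = 0.8) -> bool:
    # Criterion 3: two contracts inheriting from the same contract C are
    # considered different if the part shared through C does not exceed
    # the threshold (e.g. 80% of the code lines).
    return shared_loc / total_loc <= threshold
```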
#### 5 Statistical modeling
In order to gain insight into the behavior of the statistical distributions underlying Smart Contract software metrics, we perform a best-fitting analysis using a power-law statistical distribution for the tails of the empirical distributions. Furthermore, we performed a second analysis making use of the Log-normal statistical model: even when the power-law model represents the data well in the tail, it is usually unable to fit the complete range of values of the statistical distributions. To show the results of this analysis we no longer use histograms, which are only a rough approximation of the Probability Density Function (PDF).
Our methodology does not neglect any data: the use of complementary cumulative distributions allows us to fully represent the statistical properties of the analyzed system (the blockchain software metrics in this specific case). This makes it possible to model the system with analytical statistical distributions, which provide more detailed and reliable information since all data points are included in the model.
The histogram representation in fact carries many drawbacks, in particular when data are power-law distributed in the tail. The problems with representing the empirical PDF are that it is sensitive to the binning of the histogram used to calculate the frequencies of occurrence, and that bins with very few elements are very sensitive to statistical noise. This causes a noisy spread of the points in the tail of the distribution, where the most interesting data lie. Furthermore, because of the binning, the information relative to each single data point is lost. All these aspects make it difficult to verify the power-law behavior in the tail. To overcome these problems, from now on we systematically report the empirical CCDF (Complementary Cumulative Distribution Function) in log-log scale, as well as the best-fitting curves in many cases. This is convenient because, if the PDF has a power law in the tail, the log-log plot displays a straight line for the raw data; this is a necessary but by no means sufficient condition for power-law behavior. We use log-log plots only for convenience of graphical representation, but all our calculations (CDF, CCDF, best-fit procedures and the analytical distribution functions themselves) are always in normal scale. With this representation there is no dependence on the binning, nor artificial statistical noise added to the tail of the data. If the PDF exhibits a power law with exponent α, the CCDF also follows a power law, with exponent α − 1. Fitting the tail of the CCDF, or even the entire distribution, results in a major improvement in the quality of the fit. An exhaustive discussion of these issues may be found in [70]. This approach has already been used in the literature to explain the power law in the tail of various software properties [44, 52].
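To make the procedure concrete, the empirical CCDF can be computed directly from the raw metric values with no binning involved. The following is a minimal Python sketch; the synthetic sample merely stands in for a real metric column:

```python
import numpy as np
import matplotlib.pyplot as plt

def empirical_ccdf(data):
    # Sort the values and compute G(x) = P(X >= x) at each observed x.
    x = np.sort(np.asarray(data, dtype=float))
    g = np.arange(len(x), 0, -1) / len(x)  # never zero, safe on log axes
    return x, g

# Synthetic stand-in for a metric such as LOC.
rng = np.random.default_rng(42)
sample = rng.lognormal(mean=5.0, sigma=1.1, size=10_000)

x, g = empirical_ccdf(sample)
plt.loglog(x, g, ".", markersize=2)  # a power-law tail appears as a straight line
plt.xlabel("metric value")
plt.ylabel("CCDF G(x)")
plt.show()
```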
The CCDF is defined as 1 − CDF, where the CDF (Cumulative Distribution Function) is the integral of the PDF. Denoting by p(x) the probability density function, by P(x) the CDF, and by G(x) the CCDF, we have:

$$G(x) = 1 - P(x) \qquad (1)$$

$$P(x) = p(X \le x) = \int_{-\infty}^{x} p(x')\,dx' \qquad (2)$$

$$G(x) = p(X \ge x) = \int_{x}^{\infty} p(x')\,dx' \qquad (3)$$
The first distribution that we describe is the well-known Log-normal distribution. If we model a stochastic process in which new elements are introduced into the system's units in amounts proportional to the number of elements they already contain, then the resulting element distribution is log-normal. All the units should have the same constant chance of being selected for the introduction of new elements [70]. This general scheme has been demonstrated to suit large software systems where, during software development, new classes are introduced into the system and new dependencies (links) among them are created [52, 71]. The Log-normal has also been used to analyze the distribution of Lines of Code [72], and it has been proposed in the literature to explain several other software properties [52, 69, 73].
Mathematically it is expressed by:

$$p(x) = \frac{1}{\sqrt{2\pi}\,\sigma x}\, e^{-\frac{(\ln(x)-\mu)^2}{2\sigma^2}} \qquad (4)$$
It exhibits a quasi-power-law behavior over a range of values, and provides high-quality fits for data with a power-law distribution with a final cut-off. Since in real data the largest values are always limited and cannot actually tend to infinity, the log-normal is a very good candidate for fitting power-law distributed data with a finite-size effect. Furthermore, it does not diverge for small values of the variable, and thus may also fit the bulk of the distribution well in the small-value range.
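A Log-normal fit of Eq (4) can be obtained, for instance, with scipy's maximum-likelihood estimator. This is only a sketch, assuming `sample` holds the values of one metric (as in the previous snippet); fixing `loc=0` matches the two-parameter form of Eq (4):

```python
import numpy as np
from scipy import stats

# MLE fit of the two-parameter Log-normal: the shape s corresponds to
# sigma in Eq (4), and the scale corresponds to exp(mu).
s, loc, scale = stats.lognorm.fit(sample, floc=0)
mu, sigma = np.log(scale), s
print(f"mu = {mu:.3f}, sigma = {sigma:.3f}")
```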
The power law is mathematically formulated as:

$$p(x) \simeq x^{-\alpha} \qquad (5)$$

where α is the power-law exponent, the only parameter characterizing the distribution besides a normalization factor. Since for α ≥ 1 the function diverges at the origin, it cannot represent real data over its entire range of values. A lower cut-off, generally denoted x0, has to be introduced, so that the power law holds above x0. Thus, when fitting real data, this cut-off acts as a second parameter to be adjusted for best-fitting purposes. Consequently, the data distribution is said to have a power law in the tail, namely above x0.
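In practice, joint estimation of the cut-off x0 and the exponent α can be carried out, for example, with the `powerlaw` Python package, which implements the Clauset-Shalizi-Newman fitting approach. The paper does not state its exact tooling, so the following is only an illustrative sketch:

```python
import powerlaw  # pip install powerlaw

# Estimate xmin (the lower cut-off x0) and alpha jointly on the tail.
fit = powerlaw.Fit(sample)
print("xmin  =", fit.power_law.xmin)
print("alpha =", fit.power_law.alpha)
print("KS distance D =", fit.power_law.D)

# Likelihood-ratio comparison against the Log-normal alternative:
R, p = fit.distribution_compare("power_law", "lognormal")
print(f"R = {R:.3f}, p = {p:.3f}")  # R < 0 favors the Log-normal
```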
In Fig 9 we show the power-law best-fit plots for the metrics Total lines, Blanks, Function and Payable. The power law in the tail clearly fails for all these metrics. In Fig 10, Mapping and Modifier seem to follow a power law, confirmed also by the low values (D ≤ 0.05) of the Kolmogorov-Smirnov significance test, but the range where the metrics behave according to a power-law regime is too small. Fig 11 finally shows that a good candidate for a power law in the tail is the LOC metric, supported by a KS significance coefficient of about 0.039. This suggests that, also for Smart Contract code, the main size metric in software, the lines of code, shows properties similar to those of standard software systems. The Address metric also displays a reasonable power-law regime over a range of its values, showing a behaviour similar to that found for the “Name of Variables” metric in Java software [44]. Thus the keyword “address” occurs in Smart Contracts in quantities reminiscent of variable-name usage in Java.
We then analyzed all the statistical distributions using a Log-normal best-fitting model. In Fig 9 we show the Log-normal best-fitting curves together with the empirical cumulative distribution functions for the Smart Contract metrics Total lines, Blanks, Function and Payable. The first three metrics are nicely fitted by the Log-normal statistical distribution in the bulk, at low values of the metrics, but not in the tail, even if the R² is quite close to one in each case (R² ≥ 0.95). This result confirms the one obtained with the power-law model. The fit fails mainly in the tail of the distribution, as expected: the empirical distribution drops more rapidly than the best-fitting curve because of the cut-off at large values of the metrics. This may be explained by the hypothesis that Smart Contract size metrics, like Total lines, Functions and Blanks, are upper bounded by the size constraints associated with deploying Smart Contracts on the blockchain. The Payable metric has too poor a statistic to be well fitted by a Log-normal distribution.
Fig 10 shows the metrics Events, Mapping, Modifier and Contract. Mapping cannot be well fitted by a Log-normal, as it was well explained by a power law in the range corresponding to the bulk of the distribution rather than in the tail. Events and Modifier also do not suit a Log-normal distribution, and their R² values are lower than 0.95. Finally, Contract is quite well approximated in the bulk, but not in the tail, confirming once again the power-law best-fitting results.
Finally, Fig 11 shows that the initial parts of the Bytecode and ABI metrics overlap well with the Log-normal but, as soon as the values cross the central ones observed in the corresponding histograms, the Log-normal curves tend to miss the empirical ones, which drop quickly and do not display a power law in the tail.

**Fig 9. Power law and Log normal best fitting of the metrics Total lines, Blanks, Function and Payable.**

[https://doi.org/10.1371/journal.pone.0281043.g009](https://doi.org/10.1371/journal.pone.0281043.g009)
Address, Cyclomatic and Comments drop rapidly with respect to the Log-normal model, even if their initial parts show some overlap with it. Again, this may be ascribed to the upper bounds which limit the range of values reachable by these metrics. In particular, Comments are fewer, on average, than in traditional software development. This may be due to the fact that Smart Contract code is written with specific purposes and constraints, so that the same patterns recur and do not need comment lines.
**Fig 10. Power law and Log normal best fitting of the metrics Events, Mapping, Modifier and Contract.**
[https://doi.org/10.1371/journal.pone.0281043.g010](https://doi.org/10.1371/journal.pone.0281043.g010)
Finally, the LOC metric is quite well represented by the Log-normal distribution both in the bulk and in the tail, with an R² value larger than 0.98. This is in good agreement with the results found in the literature for the LOC metric in traditional software systems [44]. In some sense this result stands apart from the others obtained in this study, since the LOC metric does not seem to be influenced by the peculiarities of Smart Contract software and tends to preserve the same statistical features found in traditional software systems.
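The R² values quoted above can be computed by comparing the empirical CCDF with the survival function (CCDF) of the fitted Log-normal, as in this minimal sketch (the function name is ours; `s` and `scale` come from the scipy fit shown earlier):

```python
import numpy as np
from scipy import stats

def ccdf_r2(data, s, scale):
    # Coefficient of determination between the empirical CCDF and the
    # fitted Log-normal CCDF, evaluated at every observed data point.
    x = np.sort(np.asarray(data, dtype=float))
    g_emp = np.arange(len(x), 0, -1) / len(x)           # empirical P(X >= x)
    g_fit = stats.lognorm.sf(x, s, loc=0, scale=scale)  # model CCDF
    ss_res = np.sum((g_emp - g_fit) ** 2)
    ss_tot = np.sum((g_emp - g_emp.mean()) ** 2)
    return 1.0 - ss_res / ss_tot
```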
Table 3 shows the final fitting parameters for the Power-Law and Log-Normal distributions: the estimated xmin and α for the Power Law, and the estimated xmin, log(μ) and log(σ) for the Log-Normal.
**Fig 11. Power law and Log normal best fitting of the metrics Address, Cyclomatic, Comments, ABI, Bytecode and LOCS.**

[https://doi.org/10.1371/journal.pone.0281043.g011](https://doi.org/10.1371/journal.pone.0281043.g011)
We validated our results using the bootstrap methodology in order to provide 95% confidence intervals for the estimated parameters. By default, the bootstrap function uses the Maximum Likelihood Estimator (MLE) to infer the parameter values and checks all values of xmin. The bootstrap procedure resamples the dataset with replacement for a large number of iterations (1000 in our case); for each iteration all the parameters are estimated and, at the end, a confidence interval is calculated, providing more robust results.
**Table 3. Fitting parameters for the power law and log-normal distributions.** The xmin and α estimated parameters are reported for the Power Law. For the Log-Normal the xmin, log(μ) and log(σ) estimated parameters are reported.

| |Power Law| | |Log Normal| | | | |
|---|---|---|---|---|---|---|---|---|
|**Metric**|**xmin**|**α**|**95% CI**|**xmin**|**log(μ)**|**95% CI**|**log(σ)**|**95% CI**|
|total lines|1323|3.33|3.327;3.341|150|5.75|5.748;5.758|1.105|1.104;1.108|
|blanks|308|2.94|2.925;2.949|23|3.97|3.972;3.984|1.032|1.029;1.033|
|functions|108|3.29|3.286;3.299|25|2.81|2.811;2.837|1.14|1.138;1.145|
|payable|5|3.01|2.994;3.021|1|0.29|0.296;0.312|1.16|1.155;1.160|
|events|11|3.29|3.282;3.295|3|1.08|1.071;1.084|0.965|0.963;0.967|
|mapping|3|2.92|2.915;2.935|4|0.28|0.26;0.31|1.06|1.064;1.076|
|modifiers|5|3.42|3.412;3.434|3|0.68|0.66;0.95|0.806|0.803;0.816|
|contracts|10|3.61|3.601;3.623|3|0.42|0.41;0.439|1.02|1.025;1.037|
|addresses|108|3.08|3.072;3.088|32|2.62|2.59;2.64|1.2|1.212;1.224|
|cyclomatic|161|3.15|3.145;3.159|36|3.68|3.675;3.698|1.04|1.041;1.049|
|comments|149|2.75|2.746;2.755|50|3.33|3.31;3.347|1.28|1.274;1.284|
|abi|174|3.1|3.095;3.155|3370|8.59|8.478;8.623|0.53|0.493;0.567|
|bytecode|11052|3.46|3.409;3.499|1830|9.02|8.993;9.032|0.65|0.642;0.661|
|LOC|148|2.62|2.574;2.642|161|0.38|-0.31;1.68|1.9|1.684;1.992|

[https://doi.org/10.1371/journal.pone.0281043.t003](https://doi.org/10.1371/journal.pone.0281043.t003)
In Table 3 we also report the results of the bootstrap procedure: a 95% confidence interval for the α parameter of the Power Law and for the log(μ) and log(σ) parameters of the Log-Normal is provided in the column next to each parameter.
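The bootstrap step can be sketched as follows; this is an illustrative Python version (the paper does not specify its implementation), using the 1000 iterations stated above:

```python
import numpy as np
import powerlaw

def bootstrap_alpha_ci(data, n_iter=1000, seed=0):
    # Resample with replacement, re-estimate xmin and alpha by MLE on each
    # resample, and report the 95% percentile confidence interval for alpha.
    rng = np.random.default_rng(seed)
    data = np.asarray(data, dtype=float)
    alphas = []
    for _ in range(n_iter):
        resample = rng.choice(data, size=len(data), replace=True)
        alphas.append(powerlaw.Fit(resample).power_law.alpha)
    return np.percentile(alphas, [2.5, 97.5])
```

Note that 1000 re-fits over 85K contracts is computationally heavy, which is why such validation is typically run offline.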
#### 6 Discussion
This section discusses the implications of the findings of our study. Some of the findings are the following:

- The Solidity programming language exhibits a different programming style compared to other high-level programming languages, because of computational cost constraints and the need to be easily understandable for non-expert users.

- In the last two years the way Smart Contracts are written has been changing, due to the introduction of new language features in the latest compiler versions and because Solidity developers have started to implement more complex business logic over time.
As to what concerns the Solidity programming style, based on our findings (see Table 2), the number of iteration statements and conditional statements per line of code is, respectively, two orders and one order of magnitude smaller than in other high-level programming languages such as Java, C and Python (see Section 4). Some relevant studies on this subject are [60, 73]. Furthermore, the authors in [74] show how cyclomatic complexity in Java code can reach very high values. We assume that Smart Contract developers might have a tendency to minimize the use of branch statements (IF) and iterative statements (FOR, WHILE), because these instructions have a high computational cost compared to other program statements such as bitwise operations. Moreover, we assume that, in order to increase public trust, Solidity developers tend to write Smart Contracts that are easy to understand. Indeed, a program that is easy to understand should have low cyclomatic complexity, although the literature shows that readability, as perceived by humans, correlates only weakly with low cyclomatic metrics [75].
As far as the change in programming style is concerned, we observed at least two different distributions of software metrics data. First, many Smart Contracts written before 2017 are in the LOC range from 0 to 500, while most of the Smart Contracts written after 2020 are in a larger LOC range, between 0 and 1000. Moreover, the number of Smart Contracts written after 2020 with LOC in the outlier range (between 4K and 14K) is one order of magnitude greater than the number of Smart Contracts written before 2017 with LOC in the same interval. The larger LOC range for Smart Contracts written after 2020 can be explained by the fact that the business logic of some Smart Contracts is deployed both 1) in longer source code and 2) across different Smart Contract addresses, via specific programming patterns used to bypass the source code size limit. Indeed, a Smart Contract has a code size limit of 24576 bytes, introduced to prevent denial-of-service (DoS) attacks. Originally this limit was not a problem, because the business logic of Smart Contracts was very simple, as highlighted by our findings (LOC range from 0 to 500). However, in the last few years Solidity developers have added more and more functionality to their Smart Contracts, until at some point they reached the code size limit. If developers exceed the 24576-byte limit, they are not allowed to deploy the Smart Contract on the blockchain network. According to the grey literature, in the last few years the size limit has been overcome by using the “diamond pattern”: a “diamond” Smart Contract gets its external functions from other contracts (called “facets”). By contrast, in traditional software, power laws are commonly identified (e.g. in Java programs) for general “size” metrics, defined for example in terms of the number of methods, constructors and other class features, where very large values of such metrics are commonly found [12].
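As a concrete illustration of the constraint discussed above, the code size limit (known as EIP-170) can be checked before deployment with a trivial computation (a minimal sketch; the function name is ours):

```python
MAX_CODE_SIZE = 24576  # contract code size limit, in bytes

def exceeds_size_limit(runtime_bytecode_hex: str) -> bool:
    # Two hex characters encode one byte of deployed code.
    code = runtime_bytecode_hex.removeprefix("0x")
    return len(code) // 2 > MAX_CODE_SIZE
```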
Second, we observed a growing trend in many software metrics, such as the average number of LOC, Bytecode, number of interfaces, number of libraries and programming statements, until Solidity version 0.7. Starting from Solidity version 0.8 the trend is reversed. A plausible explanation for this trend can be found in the changes to the features of the Solidity programming language, which have been influencing the way developers implement Smart Contracts since version 0.8 (released on 16 December 2020). Indeed, up to Solidity version 0.7 (released on 28 July 2020), some characteristics of the language could lead developers to introduce bugs into Smart Contracts, although it was possible to mitigate this by using external libraries such as OpenZeppelin. For example, up to version 0.7 arithmetic operations in Solidity did not throw exceptions when an overflow occurred. This characteristic can easily result in bugs, because programmers usually assume that a calculation exceeding the available memory space throws an error, as in other high-level programming languages. Starting from version 0.8, the Solidity compiler throws an exception when an overflow occurs in arithmetic operations. This means that developers can update a Smart Contract or write a new one with the newest compiler version without using external libraries, resulting in a Smart Contract smaller in size.
#### 7 Conclusions
In this paper we studied Smart Contract software metrics extracted from a dataset of more than 85K Smart Contracts deployed on the Ethereum blockchain. We were interested in determining whether, given the peculiarities of Smart Contract software development, the corresponding software metrics present differences in their statistical properties with respect to metrics extracted from traditional software systems, which have already been largely studied in the literature. The underlying assumption is that resources on the blockchain are limited, and such limitations may influence the way Smart Contracts are written. Our analysis dealt with source code metrics as well as with the ABI and bytecode of Smart Contracts. Our main results show that, overall, the exposure of Smart Contracts to interaction with the blockchain, as qualitatively measured in terms of ABI size, is quite similar across contracts, and there are no outlier contracts: the distribution is compatible with a bell-shaped statistical distribution where most values lie around a central value with some dispersion.
In general, Smart Contract metrics tend to suffer from the blockchain's limited-resource constraints, since they tend to assume bounded values. The ubiquitous fat-tailed distributions of traditional software, with values very far from the mean, even orders of magnitude larger, are absent: in Smart Contract software metrics, large deviations from the mean are substantially unknown, and all the values generally lie within a few standard deviations of the mean.
Finally, the Smart Contract lines of code is the metric that most closely follows the statistical distribution of the corresponding metric in traditional software systems: it shows a truncated power law in the tail, and its overall distribution is well explained by a Log-normal distribution.
#### Acknowledgments
The work was partially funded through the PRIN-project “WE_BEST” financed by the Italian
Ministry of University and Research (MUR): MUR 4—Public research—PRIN 2020, TITOLO
PROGETTO/FONTE DI FINANZIAMENTO: RICMIUR_CTC_2022_MARCHESI_TONELLI—PRIN annualità 2020—MUR, CODICE CUP: F73C22000430001, CODICE CO.AN:
A.15.01.02.01.01.01—Progetti ministeriali(PRIN FIRB FAR ecc.) by the project: “Analysis of
innovative Blockchain technologies: Libra, Bitcoin and Ethereum and technological, economical and social comparison among these different blockchain technologies” funded by Fondazione Di Sardegna, oct-2020 to oct 2022, F72F20000190007, and by the project Partenariato
Esteso “GRINS—Growing Resilient, INclusive and Sustainable”, tematica “9. Economic and
financial sustainability of systems and territories”, CUP: PE00000018.
#### Author Contributions
**Conceptualization: Roberto Tonelli, Marco Ortu.**
**Data curation: Giuseppe Antonio Pierro.**
**Formal analysis: Roberto Tonelli, Marco Ortu.**
**Methodology: Roberto Tonelli, Marco Ortu.**
**Supervision: Roberto Tonelli.**
**Validation: Giuseppe Antonio Pierro, Marco Ortu, Giuseppe Destefanis.**
**Writing – original draft: Roberto Tonelli, Giuseppe Antonio Pierro, Marco Ortu.**
**Writing – review & editing: Roberto Tonelli, Giuseppe Antonio Pierro, Marco Ortu, Giuseppe Destefanis.**
#### References
**1.** Bragagnolo S., Rocha H., Denker M., and Ducasse S., “Smartinspect: solidity smart contract inspector,”
in 2018 International Workshop on Blockchain Oriented Software Engineering (IWBOSE), mar
[2018, pp. 9–18, electronic ISBN: 978-1-5386-5986-1. [Online]. Available: http://rmod.inria.fr/archives/](http://rmod.inria.fr/archives/papers/Braga18a-IWBOSE-SmartInspect.pdf)
[papers/Braga18a-IWBOSE-SmartInspect.pdf https://doi.org/10.1109/IWBOSE.2018.8327566](http://rmod.inria.fr/archives/papers/Braga18a-IWBOSE-SmartInspect.pdf)
**2.** [Szabo N., “Formalizing and securing relationships on public networks,” First monday, 1997. https://doi.](https://doi.org/10.5210/fm.v2i9.548)
[org/10.5210/fm.v2i9.548](https://doi.org/10.5210/fm.v2i9.548)
**3.** Destefanis G., Marchesi M., Ortu M., Tonelli R., Bracciali A., and Hierons R., “Smart contracts vulnerabilities: a call for blockchain software engineering?” in 2018 International Workshop on Blockchain Ori_[ented Software Engineering (IWBOSE). IEEE, 2018, pp. 19–25. https://doi.org/10.1109/IWBOSE.](https://doi.org/10.1109/IWBOSE.2018.8327567)_
[2018.8327567](https://doi.org/10.1109/IWBOSE.2018.8327567)
**4.** Luu L., Chu D.-H., Olickel H., Saxena P., and Hobor A., “Making smart contracts smarter,” in Proceedings
of the 2016 ACM SIGSAC conference on computer and communications security, 2016, pp. 254–269.
**5.** Luu L., Teutsch J., Kulkarni R., and Saxena P., “Demystifying incentives in the consensus computer,” in
Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security,
2015, pp. 706–719.
**6.** Grishchenko I., Maffei M., and Schneidewind C., “A semantic framework for the security analysis of
ethereum smart contracts,” in International Conference on Principles of Security and Trust. Springer,
2018, pp. 243–269.
**7.** Hegedűs P., “Towards analyzing the complexity landscape of solidity based ethereum smart contracts,”
_[Technologies, vol. 7, no. 1, p. 6, 2019. https://doi.org/10.3390/technologies7010006](https://doi.org/10.3390/technologies7010006)_
**8.** Sai A. R., Holmes C., Buckley J., and Gear A. L., “Inheritance software metrics on smart contracts,” in
Proceedings of the 28th International Conference on Program Comprehension, 2020, pp. 381–385.
**9.** Pierro G. A., Tonelli R., and Marchesi M., “An organized repository of ethereum smart contracts’ source
[codes and metrics,” Future internet, vol. 12, no. 11, p. 197, 2020. https://doi.org/10.3390/fi12110197](https://doi.org/10.3390/fi12110197)
**10.** Concas G., Marchesi M., Pinna S., and Serra N., “Power-laws in a large object-oriented software sys[tem,” IEEE Transactions on Software Engineering, vol. 33, no. 10, pp. 687–708, 2007. https://doi.org/](https://doi.org/10.1109/TSE.2007.1019)
[10.1109/TSE.2007.1019](https://doi.org/10.1109/TSE.2007.1019)
**11.** Cai K.-Y. and Yin B.-B., “Software execution processes as an evolving complex network,” Information
_[Sciences, vol. 179, no. 12, pp. 1903–1928, 2009. https://doi.org/10.1016/j.ins.2009.01.011](https://doi.org/10.1016/j.ins.2009.01.011)_
**12.** Wheeldon R. and Counsell S., “Power law distributions in class relationships,” in Proceedings Third
_IEEE International Workshop on Source Code Analysis and Manipulation. IEEE, 2003, pp. 45–54._
**13.** Porru S., Pinna A., Marchesi M., and Tonelli R., “Blockchain-oriented software engineering: challenges
and new directions,” in 2017 IEEE/ACM 39th International Conference on Software Engineering Com_panion (ICSE-C). IEEE, 2017, pp. 169–171._
**14.** Swan M., Blockchain: Blueprint for a new economy. “ O’Reilly Media, Inc.”, 2015.
**15.** Johnston D., Yilmaz S. O., Kandah J., Bentenitis N., Hashemi F., Gross R., et al., “The general theory
of decentralized applications,” DApps, URL-https://cryptochainuni.com/wp-content/uploads/The-Gen_eral-Theory-of-Decentralized-Applications-DApps.pdf, 2014._
**16.** Nakamoto S. and Bitcoin A., “A peer-to-peer electronic cash system,” Bitcoin.–URL: https://bitcoin. org/
_bitcoin. pdf, vol. 4, 2008._
**17.** Buterin V.et al., “A next-generation smart contract and decentralized application platform,” white paper,
vol. 3, no. 37, pp. 2–1, 2014.
**18.** Pierro G. A., Rocha H., Ducasse S., Marchesi M., and Tonelli R., “A user-oriented model for oracles’
[gas price prediction,” Future Generation Computer Systems, vol. 128, pp. 142–157, 2022. https://doi.](https://doi.org/10.1016/j.future.2021.09.021)
[org/10.1016/j.future.2021.09.021](https://doi.org/10.1016/j.future.2021.09.021)
**19.** Di Sorbo A., Laudanna S., Vacca A., Visaggio C. A., and Canfora G., “Profiling gas consumption in
[solidity smart contracts,” Journal of Systems and Software, vol. 186, p. 111193, 2022. https://doi.org/](https://doi.org/10.1016/j.jss.2021.111193)
[10.1016/j.jss.2021.111193](https://doi.org/10.1016/j.jss.2021.111193)
**20.** Gencer A. E., Basu S., Eyal I., Renesse R. v., and Sirer E. G., “Decentralization in bitcoin and ethereum
networks,” in International Conference on Financial Cryptography and Data Security. Springer,
2018, pp. 439–457.
**21.** Buterin V., “Thinking about smart contract security,” Np, nd Web. https://blog.ethereum.org/2016/06/19/
_thinking-smartcontract-security, 2016._
**22.** Bartoletti M. and Pompianu L., “An empirical analysis of smart contracts: platforms, applications, and
design patterns,” in International conference on financial cryptography and data security. Springer,
2017, pp. 494–509.
**23.** Atzei N., Bartoletti M., and Cimoli T., “A survey of attacks on ethereum smart contracts (sok),” in Interna_tional conference on principles of security and trust. Springer, 2017, pp. 164–186._
**24.** Churcher N. I., Shepperd M. J., Chidamber S., and Kemerer C., “Comments on a metrics suite for object
oriented design,” IEEE Transactions on software Engineering, vol. 21, no. 3, pp. 263–265, 1995.
[https://doi.org/10.1109/32.372153](https://doi.org/10.1109/32.372153)
**25.** Basili V. R., Briand L. C., and Melo W. L., “A validation of object-oriented design metrics as quality indi[cators,” IEEE Transactions on software engineering, vol. 22, no. 10, pp. 751–761, 1996. https://doi.org/](https://doi.org/10.1109/32.544352)
[10.1109/32.544352](https://doi.org/10.1109/32.544352)
**26.** Subramanyam R. and Krishnan M. S., “Empirical analysis of ck metrics for object-oriented design complexity: Implications for software defects,” IEEE Transactions on software engineering, vol. 29, no.
[4, pp. 297–310, 2003. https://doi.org/10.1109/TSE.2003.1191795](https://doi.org/10.1109/TSE.2003.1191795)
**27.** Gyimóthy T., Ferenc R., and Siket I., “Empirical validation of object-oriented metrics on open source
software for fault prediction,” IEEE Transactions on Software engineering, vol. 31, no. 10, pp. 897–910,
[2005. https://doi.org/10.1109/TSE.2005.112](https://doi.org/10.1109/TSE.2005.112)
**28.** Murgia A., Concas G., Tonelli R., Ortu M., Demeyer S., and Marchesi M., “On the influence of maintenance activity types on the iss resolution time,” in Proceedings of the 10th international conference on
predictive models in software engineering, 2014, pp. 12–21.
**29.** Zimmermann T. and Nagappan N., “Predicting defects using network analysis on dependency graphs,”
in Proceedings of the 30th international conference on Software engineering, 2008, pp. 531–540.
**30.** Concas G., Marchesi M., Murgia A., Pinna S., and Tonelli R., “Assessing traditional and new metrics for
object-oriented systems,” in Proceedings of the 2010 ICSE Workshop on Emerging Trends in Software
Metrics, 2010, pp. 24–31.
**31.** Concas G., Marchesi M., Destefanis G., and Tonelli R., “An empirical study of software metrics for
assessing the phases of an agile project,” International Journal of Software Engineering and Knowledge
_[Engineering, vol. 22, no. 04, pp. 525–548, 2012. https://doi.org/10.1142/S0218194012500131](https://doi.org/10.1142/S0218194012500131)_
**32.** Destefanis G., Tonelli R., Tempero E., Concas G., and Marchesi M., “Micro pattern fault-proneness,” in
_2012 38th Euromicro Conference on Software Engineering and Advanced Applications. IEEE,_
2012, pp. 302–306.
**33.** Tosun A., Turhan B., and Bener A., “Validation of network measures as indicators of defective modules
in software systems,” in Proceedings of the 5th international conference on predictor models in software engineering, 2009, pp. 1–9.
**34.** Chidamber S. R. and Kemerer C. F., “Towards a metrics suite for object oriented design,” in Conference proceedings on Object-oriented programming systems, languages, and applications, 1991, pp.
197–211.
**35.** Focardi S., Marchesi M., and Succi G., “A stochastic model of software maintenance and its implications
on extreme programming processes,” in Extreme programming examined, 2001, pp. 191–206.
**36.** Potanin A., Noble J., Frean M., and Biddle R., “Scale-free geometry in object oriented programs, victoria
university of wellington,” New Zeland, Technical Report CS-TR-02/30, Tech. Rep., 2002.
**37.** Valverde S., Cancho R. F., and Sole R. V., “Scale-free networks from optimal design,” EPL (Europhy_[sics Letters), vol. 60, no. 4, p. 512, 2002. https://doi.org/10.1209/epl/i2002-00248-2](https://doi.org/10.1209/epl/i2002-00248-2)_
**38.** Valverde S. and Solé R. V., “Hierarchical small worlds in software architecture,” arXiv preprint cond-mat/0307278, 2003.
[collaboration graphs,” Physical review E, vol. 68, no. 4, p. 046116, 2003. https://doi.org/10.1103/](https://doi.org/10.1103/PhysRevE.68.046116)
[PhysRevE.68.046116 PMID: 14683011](https://doi.org/10.1103/PhysRevE.68.046116)
**40.** Gorshenev A. and Pis’mak Y. M., “Punctuated equilibrium in software evolution,” Physical Review E,
[vol. 70, no. 6, p. 067103, 2004. https://doi.org/10.1103/PhysRevE.70.067103 PMID: 15697556](https://doi.org/10.1103/PhysRevE.70.067103)
**41.** De Moura A. P., Lai Y.-C., and Motter A. E., “Signatures of small-world and scale-free properties in
[large computer programs,” Physical review E, vol. 68, no. 1, p. 017102, 2003. https://doi.org/10.1103/](https://doi.org/10.1103/PhysRevE.68.017102)
[PhysRevE.68.017102 PMID: 12935286](https://doi.org/10.1103/PhysRevE.68.017102)
**42.** Tamai T. and Nakatani T., “Analysis of software evolution processes using statistical distribution models,” in Proceedings of the International Workshop on Principles of Software Evolution, 2002, pp. 120–
123.
**43.** Marchesi M., Pinna S., Serra N., and Tuveri S., “Power laws in smalltalk,” ESUG 2004 Research Track,
p. 27, 2004.
**44.** Concas G., Marchesi M., Pinna S., and Serra N., “On the suitability of yule process to stochastically
model some properties of object-oriented systems,” Physica A: Statistical Mechanics and its Applica_[tions, vol. 370, no. 2, pp. 817–831, 2006. https://doi.org/10.1016/j.physa.2006.02.024](https://doi.org/10.1016/j.physa.2006.02.024)_
**45.** Fenton N. E. and Ohlsson N., “Quantitative analysis of faults and failures in a complex software sys[tem,” IEEE Transactions on Software engineering, vol. 26, no. 8, pp. 797–814, 2000. https://doi.org/10.](https://doi.org/10.1109/32.879815)
[1109/32.879815](https://doi.org/10.1109/32.879815)
**46.** Ostrand T. J. and Weyuker E. J., “The distribution of faults in a large industrial software system,” in Proceedings of the 2002 ACM SIGSOFT international symposium on Software testing and analysis,
2002, pp. 55–64.
**47.** Ostrand T. J., Weyuker E. J., and Bell R. M., “Predicting the location and number of faults in large soft[ware systems,” IEEE Transactions on Software Engineering, vol. 31, no. 4, pp. 340–355, 2005. https://](https://doi.org/10.1109/TSE.2005.49)
[doi.org/10.1109/TSE.2005.49](https://doi.org/10.1109/TSE.2005.49)
**48.** Andersson C. and Runeson P., “A replicated quantitative analysis of fault distributions in complex soft[ware systems,” IEEE transactions on software engineering, vol. 33, no. 5, pp. 273–286, 2007. https://](https://doi.org/10.1109/TSE.2007.1005)
[doi.org/10.1109/TSE.2007.1005](https://doi.org/10.1109/TSE.2007.1005)
**49.** Zhang H., “On the distribution of software faults,” IEEE Transactions on Software Engineering, vol. 34,
[no. 2, pp. 301–302, 2008. https://doi.org/10.1109/TSE.2007.70771](https://doi.org/10.1109/TSE.2007.70771)
**50.** Baxter G. and Frean M. R., “Software graphs and programmer awareness,” arXiv preprint arXiv:0802.2306, 2008.
**51.** Ichii M., Matsushita M., and Inoue K., “An exploration of power-law in use-relation of java software systems,” in 19th Australian Conference on Software Engineering (aswec 2008). IEEE, 2008, pp. 422–
431.
**52.** Louridas P., Spinellis D., and Vlachos V., “Power laws in software,” ACM Transactions on Software
_[Engineering and Methodology (TOSEM), vol. 18, no. 1, pp. 1–26, 2008. https://doi.org/10.1145/](https://doi.org/10.1145/1391984.1391986)_
[1391984.1391986](https://doi.org/10.1145/1391984.1391986)
**53.** Murgia A., Tourani P., Adams B., and Ortu M., “Do developers feel emotions? an exploratory analysis of
emotions in software artifacts,” in Proceedings of the 11th working conference on mining software
repositories, 2014, pp. 262–271.
**54.** Mäntylä M., Adams B., Destefanis G., Graziotin D., and Ortu M., “Mining valence, arousal, and dominance: possibilities for detecting burnout and productivity?” in Proceedings of the 13th international
conference on mining software repositories, 2016, pp. 247–258.
**55.** Ortu M., Destefanis G., Kassab M., and Marchesi M., “Measuring and understanding the effectiveness
of jira developers communities,” in 2015 IEEE/ACM 6th International Workshop on Emerging Trends in
_Software Metrics. IEEE, 2015, pp. 3–10._
**56.** Bartolucci S., Destefanis G., Ortu M., Uras N., Marchesi M., and Tonelli R., “The butterfly “affect”:
Impact of development practices on cryptocurrency prices,” EPJ Data Science, vol. 9, no. 1, p. 21,
[2020. https://doi.org/10.1140/epjds/s13688-020-00239-6](https://doi.org/10.1140/epjds/s13688-020-00239-6)
**57.** Destefanis G., Ortu M., Porru S., Swift S., and Marchesi M., “A statistical comparison of java and python
software metric properties,” in Proceedings of the 7th International Workshop on Emerging Trends in
Software Metrics, 2016, pp. 22–28.
**58.** Ortu M., Destefanis G., Counsell S., Swift S., Tonelli R., and Marchesi M., “Arsonists or firefighters?
affectiveness in agile software development,” in International Conference on Agile Software Develop_ment. Springer, Cham, 2016, pp. 144–155._
**59.** Vacca A., Di Sorbo A., Visaggio C. A., and Canfora G., “A systematic literature review of blockchain and
smart contract development: Techniques, tools, and open challenges,” Journal of Systems and Soft_[ware, vol. 174, p. 110891, 2021. https://doi.org/10.1016/j.jss.2020.110891](https://doi.org/10.1016/j.jss.2020.110891)_
**60.** Ortu M., Orrù M., and Destefanis G., “On comparing software quality metrics of traditional vs blockchain-oriented software: An empirical study,” in 2019 IEEE International Workshop on Blockchain Oriented Software Engineering (IWBOSE). IEEE, 2019, pp. 32–37.
**61.** Pinna A., Ibba S., Baralla G., Tonelli R., and Marchesi M., “A massive analysis of ethereum smart con[tracts empirical study and code metrics,” IEEE Access, vol. 7, pp. 78 194–78 213, 2019. https://doi.org/](https://doi.org/10.1109/ACCESS.2019.2921936)
[10.1109/ACCESS.2019.2921936](https://doi.org/10.1109/ACCESS.2019.2921936)
**62.** Pierro G. A., “Smart-graph: Graphical representations for smart contract on the ethereum blockchain,”
in 2021 IEEE International Conference on Software Analysis, Evolution and Reengineering (SANER).
IEEE, 2021, pp. 708–714.
**63.** Ajienka N., Vangorp P., and Capiluppi A., “An empirical analysis of source code metrics and smart contract resource consumption,” Journal of Software: Evolution and Process, vol. 32, no. 10, p. e2267,
2020.
**64.** Hu H., Bai Q., and Xu Y., “Scsguard: Deep scam detection for ethereum smart contracts,” in IEEE
_INFOCOM 2022-IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS)._
IEEE, 2022, pp. 1–6.
**65.** Zhang L., Chen W., Wang W., Jin Z., Zhao C., Cai Z., et al., “Cbgru: A detection method of smart con[tract vulnerability based on a hybrid model,” Sensors, vol. 22, no. 9, p. 3577, 2022. https://doi.org/10.](https://doi.org/10.3390/s22093577)
[3390/s22093577 PMID: 35591263](https://doi.org/10.3390/s22093577)
**66.** Baek H., Oh J., Kim C. Y., and Lee K., “A model for detecting cryptocurrency transactions with discernible purpose,” in 2019 Eleventh International Conference on Ubiquitous and Future Networks (ICUFN).
IEEE, 2019, pp. 713–717.
**67.** Li W. and Henry S., “Object-oriented metrics that predict maintainability,” Journal of systems and soft_[ware, vol. 23, no. 2, pp. 111–122, 1993. https://doi.org/10.1016/0164-1212(93)90077-B](https://doi.org/10.1016/0164-1212(93)90077-B)_
**68.** Tikhomirov S., “Ethereum: state of knowledge and research perspectives,” in International Symposium
_on Foundations and Practice of Security. Springer, 2017, pp. 206–221._
**69.** Destefanis G., Counsell S., Concas G., and Tonelli R., “Software metrics in agile software: An empirical
study,” in International Conference on Agile Software Development. Springer, 2014, pp. 157–170.
**70.** Newman M. E., “Power laws, pareto distributions and zipf’s law,” Contemporary physics, vol. 46, no.
[5, pp. 323–351, 2005. https://doi.org/10.1080/00107510500052444](https://doi.org/10.1080/00107510500052444)
**71.** Concas G., Marchesi M., Murgia A., Tonelli R., and Turnu I., “On the distribution of bugs in the eclipse
[system,” IEEE Transactions on Software Engineering, vol. 37, no. 6, pp. 872–877, 2011. https://doi.](https://doi.org/10.1109/TSE.2011.54)
[org/10.1109/TSE.2011.54](https://doi.org/10.1109/TSE.2011.54)
**72.** Zhang H. and Tan H. B. K., “An empirical study of class sizes for large java systems,” in 14th Asia_Pacific Software Engineering Conference (APSEC’07). IEEE, 2007, pp. 230–237._
**73.** Baxter G., Frean M., Noble J., Rickerby M., Smith H., Visser M., et al., “Understanding the shape of java
software,” in Proceedings of the 21st annual ACM SIGPLAN conference on Object-oriented programming systems, languages, and applications, 2006, pp. 397–412.
**74.** Lopez M. and Habra N., “Relevance of the cyclomatic complexity threshold for the java programming
language,” SMEF 2005, p. 195, 2005.
**75.** Buse R. P. and Weimer W. R., “Learning a metric for code readability,” IEEE Transactions on software
_[engineering, vol. 36, no. 4, pp. 546–558, 2009. https://doi.org/10.1109/TSE.2009.70](https://doi.org/10.1109/TSE.2009.70)_
},
{
"paperId": "ef5556dd20cfde8fc488a1bb117e94a5cc3e3ef0",
"title": "Software development: do good manners matter?"
},
{
"paperId": "c74d7be8500a541702dbf792f506ffed49a337d3",
"title": "Arsonists or Firefighters? Affectiveness in Agile Software Development"
},
{
"paperId": "c4fe9fcf609310e38f3208c1f0235c55ce950096",
"title": "A Statistical Comparison of Java and Python Software Metric Properties"
},
{
"paperId": "dedc7d776d94da0270b38732f3da579fdb938ee3",
"title": "The Emotional Side of Software Developers in JIRA"
},
{
"paperId": "4d9f17f653e5b45be7b590e0085ee07e0731c565",
"title": "Mining Valence, Arousal, and Dominance - Possibilities for Detecting Burnout and Productivity?"
},
{
"paperId": "8bb5fb97a6998d40b227dc6941466ee4b7a8294c",
"title": "The JIRA Repository Dataset: Understanding Social Aspects of Software Development"
},
{
"paperId": "54a41191d9a7ac8e9e004c19560128b66fcbdd79",
"title": "Demystifying Incentives in the Consensus Computer"
},
{
"paperId": "b1f3f05449b0f922484c4909dd37bb131098eab5",
"title": "A Metrics Suite for Object Oriented Design"
},
{
"paperId": "fd27a9a5b3eca60824dd1c2dfa9d7b9681ca5219",
"title": "Measuring and Understanding the Effectiveness of JIRA Developers Communities"
},
{
"paperId": "806d8df8d9289a4b532b690d004097ea1085b5e6",
"title": "Mining software repositories: measuring effectiveness and affectiveness in software systems."
},
{
"paperId": "b649535d2687939291ac0003e1532d3109c001bd",
"title": "Could micro patterns be used as software stability indicator?"
},
{
"paperId": "97fddbbfd681bce9eeb8e0a013353b4d5b2ba0db",
"title": "Blockchain: Blueprint for a New Economy"
},
{
"paperId": "7d82a3ee7323a005f8d0c3f3ab6894cd49d6ad0e",
"title": "On the influence of maintenance activity types on the issue resolution time"
},
{
"paperId": "178ba706f41ca8e9e5a159dc4375838f1b86087f",
"title": "Do developers feel emotions? an exploratory analysis of emotions in software artifacts"
},
{
"paperId": "2caa431212216c9449eb5ab0ee8241da54f89613",
"title": "Software Metrics in Agile Software: An Empirical Study"
},
{
"paperId": "a46a867ffd07cb83a78d561745f2dcd9e4e707b3",
"title": "Micro Patterns in Agile Software"
},
{
"paperId": "2aac1a786dbec91333a13cbf1d94f8f486bfec0b",
"title": "Micro Pattern Fault-Proneness"
},
{
"paperId": "5e06a7f3f681b96bc01540d0328658b1998d31ed",
"title": "An Empirical Study of Software Metrics for Assessing the Phases of an Agile Project"
},
{
"paperId": "8b0dfd03a9d7253050f7f2beff1d12b4f3367f37",
"title": "On the Distribution of Bugs in the Eclipse System"
},
{
"paperId": "27ae15eef9de3576b3e24ddd6958b30736837094",
"title": "A modified Yule process to model the evolution of some object-oriented system properties"
},
{
"paperId": "1a2b8aa0ed7f24ca001508654f506ea010b18a5e",
"title": "Learning a Metric for Code Readability"
},
{
"paperId": "f5fdaef995e8c77713708ae4b1f05f9f07b6f2af",
"title": "Assessing traditional and new metrics for object-oriented systems"
},
{
"paperId": "60e9709f1731da2608245b67b8b326f2ec3d4804",
"title": "Power-Law Distributions of Component Size in General Software Systems"
},
{
"paperId": "13e6e90f039608324ce4c30de389089455add43e",
"title": "Validation of network measures as indicators of defective modules in software systems"
},
{
"paperId": "830a3b8ac456862669344d6074ed206187721fdd",
"title": "Software execution processes as an evolving complex network"
},
{
"paperId": "1b69cc958c5b6925ba9b2d2f45325eaf40a987d0",
"title": "Power laws in software"
},
{
"paperId": "8cd6e0c4f65ccc7f48b4f63748ca9fd134322eec",
"title": "Predicting defects using network analysis on dependency graphs"
},
{
"paperId": "e7d79609166923877e05748ad36612d84f6b9bec",
"title": "An Exploration of Power-Law in Use-Relation of Java Software Systems"
},
{
"paperId": "5e06d51b3c42b0896761f242a25a5070d6cadb64",
"title": "On the Distribution of Software Faults"
},
{
"paperId": "1ce644a02f30cb855fe941cfa1da30a1fc6a62b4",
"title": "Software graphs and programmer awareness"
},
{
"paperId": "95c9bf91937cd0b3eff59a55762e9ae98ffc0414",
"title": "An Empirical Study of Class Sizes for Large Java Systems"
},
{
"paperId": "4f9774d815bd44c9829783a1a7bc81bc91ccc15e",
"title": "Power-Laws in a Large Object-Oriented Software System"
},
{
"paperId": "7dcc27e011874c43463b80257d8ff3d797411844",
"title": "Power-Law Distributions in Empirical Data"
},
{
"paperId": "ff89fbcde6bf3faf0b01c0e5c2a65a5c21db1153",
"title": "A Second Replicated Quantitative Analysis of Fault Distributions in Complex Software Systems"
},
{
"paperId": "9abfd5555ae5fca02d1d63c3edb63d9d3efeec0d",
"title": "Parameter estimation for power-law distributions \n by maximum likelihood methods"
},
{
"paperId": "f147bb108d1fd8791c0d38d71a518a1ae8ef3f22",
"title": "Understanding the shape of Java software"
},
{
"paperId": "28a54a44c3efc8ae3b0df5852fb791fcf99347c1",
"title": "On the suitability of Yule process to stochastically model some properties of object-oriented systems"
},
{
"paperId": "1051280d2b825c04f27d231aba0f8284bb297880",
"title": "Empirical validation of object-oriented metrics on open source software for fault prediction"
},
{
"paperId": "eeabd28948cd27434cce67e2fdb9d0aa43812f00",
"title": "Power laws, Pareto distributions and Zipf's law"
},
{
"paperId": "21305504e7ccde5505a4281419a5799bccc42378",
"title": "Predicting the location and number of faults in large software systems"
},
{
"paperId": "1e41ed1ac234cba0138329047e16a8a424389e77",
"title": "A comparison of modified reconstructability analysis and Ashenhurst‐Curtis decomposition of Boolean functions"
},
{
"paperId": "f729603c65298439f5d0cde2823a387aa5032177",
"title": "Hierarchical Small Worlds in Software Architecture"
},
{
"paperId": "5808c5bf9be1a156fab043db322b61091e7f4626",
"title": "Punctuated Equilibrium in Software Evolution"
},
{
"paperId": "0184569e127beb6c6399d647db4995081ae2977b",
"title": "Signatures of small-world and scale-free properties in large computer programs"
},
{
"paperId": "2e023f8c6e423073ac1755bda723c20254a9b3a6",
"title": "Software systems as complex networks: structure, function, and evolvability of software collaboration graphs"
},
{
"paperId": "886d883c82e31266453f2d5a3476c8ded37647d9",
"title": "Power law distributions in class relationships"
},
{
"paperId": "3acc32ea0ded0a9e2ef5759c2f4970bf5ea18508",
"title": "Empirical Analysis of CK Metrics for Object-Oriented Design Complexity: Implications for Software Defects"
},
{
"paperId": "a8b22274d97a967034eb98e9e9fa1e4e6de71a14",
"title": "The distribution of faults in a large industrial software system"
},
{
"paperId": "d2a9d2ec3512e7a4c1c06ed6527ebb4a4d7a334b",
"title": "Analysis of software evolution processes using statistical distribution Models"
},
{
"paperId": "e4fff8352cdf8ba9f552e24250c8989f9c5bebef",
"title": "Scale-free networks from optimal design"
},
{
"paperId": "c704d7b835c9e37949ef0d5196d84780477c5d34",
"title": "A stochastic model of software maintenance and its implications on extreme programming processes"
},
{
"paperId": "11d038add375cb0c06baf9797ad13b8201b80cb4",
"title": "Quantitative Analysis of Faults and Failures in a Complex Software System"
},
{
"paperId": "5b4cf1e37954ccd1ca6b315986d45904f9d2f636",
"title": "Formalizing and Securing Relationships on Public Networks"
},
{
"paperId": "77ddb5c10e69b4e4104deb20e9d6888b31187c55",
"title": "A Validation of Object-Oriented Design Metrics as Quality Indicators"
},
{
"paperId": "eba85f9f55d1b6200a3a9b8ef0905afd886af279",
"title": "Comments on \"A Metrics Suite for Object Oriented Design\""
},
{
"paperId": "234ae505e647ce9c16ee08ce2fbf0312cae94ac4",
"title": "Object-oriented metrics that predict maintainability"
},
{
"paperId": "8d7fe4fee2b6105f540567fc49e54b1863082082",
"title": "A Mathematical Theory of Evolution Based on the Conclusions of Dr. J. C. Willis, F.R.S."
},
{
"paperId": "433561f47f9416a6500c8350414fdd504acd2e5e",
"title": "Bitcoin Proof of Stake: A Peer-to-Peer Electronic Cash System"
},
{
"paperId": null,
"title": "Pompianu L arXiv:1703.06322v1 [cs.CR"
},
{
"paperId": null,
"title": "you can win with an unlimited high bid without paying for it"
},
{
"paperId": "0dbb8a54ca5066b82fa086bbf5db4c54b947719a",
"title": "A NEXT GENERATION SMART CONTRACT & DECENTRALIZED APPLICATION PLATFORM"
},
{
"paperId": "038d05ab72b7919e2444e2fa0fa9c3e7af54f3e7",
"title": "A Replicated Quantitative Analysis of Fault Distributions in Complex Software Systems"
},
{
"paperId": null,
"title": "Relevance of the cyclomatic complexity threshold for the java programming language"
},
{
"paperId": "d4be79551567b334489fb107f17ce9aee137331b",
"title": "Power Laws in Smalltalk"
},
{
"paperId": "98a78d67cace708e330ae4669e1fff90503ed65a",
"title": "Scale-free Geometry in Object-Oriented Programs"
},
{
"paperId": null,
"title": "The MOODMetrics Set"
}
] | 25,024
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Medicine",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Medicine",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/00868fb7ff83df812f94bb390ab81de5663a5d57
|
[
"Computer Science",
"Medicine"
] | 0.869164
|
Agent-based Modeling for Ontology-driven Analysis of Patient Trajectories
|
00868fb7ff83df812f94bb390ab81de5663a5d57
|
Journal of medical systems
|
[
{
"authorId": "2405073",
"name": "Davide Calvaresi"
},
{
"authorId": "46779638",
"name": "M. Schumacher"
},
{
"authorId": "1795889",
"name": "Jean-Paul Calbimonte"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"J Med Syst",
"Journal of Medical Systems",
"J med syst"
],
"alternate_urls": null,
"id": "79c59592-820f-4ed1-87df-db795b4326be",
"issn": "0148-5598",
"name": "Journal of medical systems",
"type": "journal",
"url": "https://link.springer.com/journal/10916"
}
|
Patients are often required to follow a medical treatment after discharge, e.g., for a chronic condition, rehabilitation after surgery, or for cancer survivor therapies. The need to adapt to new lifestyles, medication, and treatment routines can place an individual burden on the patient, who is often at home without the full support of healthcare professionals. Although technological solutions –in the form of mobile apps and wearables– have been proposed to mitigate these issues, it is essential to consider individual characteristics, preferences, and the context of a patient in order to offer personalized and effective support. The specific events and circumstances linked to an individual profile can be abstracted as a patient trajectory, which can contribute to a better understanding of the patient, her needs, and the most appropriate personalized support. Although patient trajectories have been studied for different illnesses and conditions, it remains challenging to effectively use them as the basis for data analytics methodologies in decentralized eHealth systems. In this work, we present a novel approach based on the multi-agent paradigm, considering patient trajectories as the cornerstone of a methodology for modelling eHealth support systems. In this design, semantic representations of individual treatment pathways are used in order to exchange patient-relevant information, potentially fed to AI systems for prediction and classification tasks. This paper describes the major challenges in this scope, as well as the design principles of the proposed agent-based architecture, including an example of its use through a case scenario for cancer survivor support.
|
https://doi.org/10.1007/s10916-020-01620-8

**SYSTEMS-LEVEL QUALITY IMPROVEMENT**

# Agent-based Modeling for Ontology-driven Analysis of Patient Trajectories

**Davide Calvaresi[1]** **· Michael Schumacher[1]** **· Jean-Paul Calbimonte[1]**

Received: 6 May 2020 / Accepted: 16 July 2020 / Published online: 2 August 2020
© The Author(s) 2020

This article is part of the Topical Collection on Healthcare Intelligent Multi-Agent Systems (HIMAS2020). Guest Editors: Neil Vaughan, Sara Montagna, Stefano Mariani, Eloisa Vargiu and Michael I. Schumacher.

Corresponding author: Jean-Paul Calbimonte ([email protected])

1 University of Applied Sciences and Arts Western Switzerland, HES-SO Valais-Wallis, TechnoPole 3, CH-3960, Sierre, Switzerland

**Abstract**
Patients are often required to follow a medical treatment after discharge, e.g., for a chronic condition, rehabilitation after surgery, or for cancer survivor therapies. The need to adapt to new lifestyles, medication, and treatment routines can place an individual burden on the patient, who is often at home without the full support of healthcare professionals. Although technological solutions –in the form of mobile apps and wearables– have been proposed to mitigate these issues, it is essential to consider individual characteristics, preferences, and the context of a patient in order to offer personalized and effective support. The specific events and circumstances linked to an individual profile can be abstracted as a patient trajectory, which can contribute to a better understanding of the patient, her needs, and the most appropriate personalized support. Although patient trajectories have been studied for different illnesses and conditions, it remains challenging to effectively use them as the basis for data analytics methodologies in decentralized eHealth systems. In this work, we present a novel approach based on the multi-agent paradigm, considering patient trajectories as the cornerstone of a methodology for modelling eHealth support systems. In this design, semantic representations of individual treatment pathways are used in order to exchange patient-relevant information, potentially fed to AI systems for prediction and classification tasks. This paper describes the major challenges in this scope, as well as the design principles of the proposed agent-based architecture, including an example of its use through a case scenario for cancer survivor support.

**Keywords Patient trajectories · Semantic modeling · Agent-based modeling**

## Introduction

Sustained support over extended periods of time is particularly important for patients, especially during rehabilitation, for chronic diseases, or for other conditions such as those affecting cancer survivors. In these situations, patients are often left at home, expected to continue their lives and activities while dealing with potential complications and issues inherent to their health conditions [27]. To support them effectively in this delicate phase, healthcare providers need to have a sufficient understanding of the individual pathways of each patient, as well as the potential risks and courses of action [19]. Each patient may respond differently to treatments, depending on a series of factors, including demographics, health conditions, psychological aspects, social and emotional characteristics, etc.
Although it is undoubtedly complicated and even expensive to obtain such a detailed picture of each patient's situation using traditional approaches, nowadays the use of digital solutions for personal data monitoring and coaching opens the way for personalized healthcare. Such solutions include the usage of artificial intelligence (AI) techniques —including machine learning (ML) based data analytics— through the exploitation of large volumes of personal health data acquired from patients going through different health pathways.

The concept of illness trajectories [31], describing the different events and situations a patient experiences through a given illness, can be broadened to what is called a patient trajectory [3]. Beyond the scope of an illness, a patient trajectory encompasses contextual data from the patient, even before diagnosis, and may include multiple co-morbidities, as well as emotional and social indicators, self-reported outcomes, and wellness monitoring observations during and after treatment [14, 41]. The usage of data analytics based on ML techniques applied to this vast body of data can provide a number of features, including: patient stratification, identification of unusual behavior patterns, prediction of wellness and distress parameters, assessment of home exercise performance, improvement of adherence to treatment, and identification and prevention of risk situations. On the one hand, the information contained in these trajectories requires managing and integrating (potentially) very diverse types of data, ranging from electronic health records [8, 18] to self-reported observations [20] or sensor measurements recorded by a wearable device [10]. The data _variety_ and _distribution_ aspects are, therefore, fundamental problems to be addressed. On the other hand, as a consequence, the management of this information requires taking into account specific concerns regarding data distribution, reuse conditions, sharing among different care structures, and confidentiality & privacy. In particular, the agent-oriented approach characterizes the majority of assistive systems operating with distributed and heterogeneous data [12]. Agent-based systems can ensure a high degree of personalization [4], autonomy, distributed collaborative/competitive intelligence, and security.

Therefore, in the context of patient trajectory analytics, the main high-level requirements are: to handle broad-scope information, heterogeneous data sources, and distributed data producers and consumers. These requirements entail scientific challenges related to (i) the modeling of patient trajectories under heterogeneity constraints; and (ii) the design of decentralized digital infrastructures for analyzing and sharing these trajectories. In this paper, we propose addressing these two challenges by introducing an agent-based modeling approach that relies on the use of semantic modeling of patient trajectories. The rationale behind this design is that ontology models can effectively help to describe events and circumstances of a patient with respect to her health condition, while autonomous agents can represent her interests when facing other agents, which may act on behalf of other patients, healthcare providers, and data analytics processes.
The agent paradigm, in this case, guarantees that patients (through their agents) can establish and negotiate how and what data is collected from them, which data sources can be considered, which data is shared and with whom, or what kind of processing is allowed. In the same way, healthcare professionals may specify, through their agents, what kinds of data are requested from a patient trajectory, which kinds of data analytics are necessary, and what other collaboration or cooperation mechanisms are needed with other physicians, nurses, or other personnel.

The main contributions of this work can be summarized as follows: we (i) identify the main challenges for decentralized analytics of patient trajectories ("Challenges in patient trajectories: Modeling and analytics"); (ii) establish a set of design principles of agent interaction models for patient trajectories represented through ontologies ("Patient trajectory agents: Design principles"); (iii) propose a multi-agent architecture that complies with those principles ("Agent-based architecture for patient trajectory management"); and (iv) provide an example of how this approach can be applied in the context of cancer survivor trajectories ("Case study scenario: Trajectories of cancer survivors" and "Cancer survivors support with τ Agents").

## Case study scenario: Trajectories of cancer survivors

Cancer is one of the main causes of death worldwide, and diagnosed cases are expected to increase significantly in the next decades [9]. Although the different forms of cancer affect a large portion of the population, including millions of patients of working age, recent advances in early detection and treatment are already showing promising results [34]. In Europe, more than 50% of cancer patients survive five years or more after diagnosis, and a number of them are able to return to work and daily life activities, although experiencing side-effects and other conditions due to their treatment [29]. These patients endure different physical and psychological issues after cancer treatment has ceased, potentially over long-term periods. These issues are known to affect quality of life (QoL) significantly and include reduced physical activity, increased fatigue, fear of cancer recurrence, emotional distress, etc. [24, 38]. Although there is evidence that specific changes in behavior can lead to better outcomes for survivors [21] –e.g., changes in diet, moderate exercise, cognitive therapies– in practice it is difficult to adapt these recommendations to individual needs, preferences, expectations, and motivation factors.

Understanding the trajectory of cancer survivors can constitute a fundamental starting point for providing useful and personalized suggestions or support [26]. Trajectory information can be acquired from several sources, including the EHR of each patient, self-reported information, behavior questionnaires, or wearable data. Events in the trajectory can be used to identify associations between symptoms and events such as therapies, interventions, admissions, re-admissions, etc. (Fig. 1). Trajectories can be used to assess risks as well as to establish predictive models associating symptoms, diseases, and outcomes. As we can see in Fig. 1, the trajectory of a patient has a direct impact not only on her physical well-being but also on the social and psychological aspects of her life.
Therefore, the trajectory information can help patients cope with disease sequelae and issues affecting physiological and physical characteristics, while also supporting a broader scope of quality of life aspects.

**Fig. 1** Schematic view of a patient trajectory over time, with respect to general well-being and distress. Notice that the trajectory can be analyzed for different aspects, e.g. physical, psychological, social

An additional difficulty for managing cancer survivor trajectories is the need to share data among different institutions and entities, entailing an inherently distributed scenario, while guaranteeing privacy requirements. Survivors are generally at home, and a lot of the information produced at this point is acquired through apps, self-reported outcomes, and other instruments. Moreover, EHR data may come from different hospitals and clinics where the patient was treated, e.g. for chemotherapy, physiotherapy, radiotherapy, or surgery, even in different geographical locations. Without coordination mechanisms, the patient is left with the burden of managing her own data, and of having to use ad-hoc procedures for sharing it among clinical and medical professionals.

## Challenges in patient trajectories: Modeling and analytics

The modeling of patient trajectories is not straightforward, given the diversity of information sources and the broad scope of data that they may include, from demographics to physiological or psychological observations. We can summarize these challenges according to the following aspects:

**Trajectory information heterogeneity** A fundamental issue for the modeling of trajectories is the vast amount of information that can potentially be integrated. Depending on the objectives of the analytics to be performed, trajectories must be able to include different types of data. For example, in Table 1, we identify items from EHRs and other sources that could be relevant for the trajectory of a cancer survivor [14, 41]. The degree of heterogeneity requires the usage of models that incorporate semantics, potentially spanning very different aspects: diagnostics, treatments, medication, laboratory, imaging, quality of life, etc.

**Patient data sources** Trajectory information may be acquired from different repositories and devices. Models must define interaction mechanisms for the acquisition, negotiation, and exchange of trajectory data from heterogeneous sources (see Table 1). For example, cancer survivor data may include retrospective information extracted from EHR records in one or more hospitals and clinics. It may also comprise continuous measures from a wearable device (e.g., for physical activity), or even chatbot interactions and questionnaire responses (e.g., emotional assessment).

**Trajectory data integration & aggregation** In order to analyze trajectories, it is necessary to combine data not only from different sources but also from large numbers of patients. Using machine learning or other AI techniques, it is then possible to extract relevant insights, derive patterns, and classify trajectory trends. The acquisition of these data requires protocols for establishing the conditions under which data will be used, how it will be processed, and what outcomes might be obtained.

**Life-long dynamic trajectories** Trajectories can span several years, and may also include live data collected daily (or instantaneously) through sensing devices.
Trajectory analysis must be able to cope with this dynamicity and incorporate on-demand analytics that adapt through time and according to the evolution of the patient pathway. For example, trajectory predictions can help to dramatically improve quality-of-life indicators in cancer survivors.

**Data analytics explainability** Although AI-based analytics have shown impressive results for classification, prediction, and pattern identification, they often fall short in terms of understandability and interpretability. Patient trajectory analytics should be able to provide explainable outcomes, potentially combining and reconciling complementary predictors. In particular, for cancer survivors, explanations can lead to stronger motivation and self-efficacy regarding a therapy or treatment.

**Privacy and confidentiality** Given the sensitive nature of trajectory data, privacy has to be guaranteed along the process of acquisition, exchange, processing, and storage. Following current privacy regulations (e.g., the GDPR in the EU), patients' rights must be respected, e.g., granting access to selected data, accepting or rejecting consent conditions, deleting personal data partially/entirely, or obtaining one's personal data collections.

**Table 1** Relevant aspects for patient trajectories of cancer survivors from different sources

| Aspects | Potential parameters | Source |
| --- | --- | --- |
| Demographics | age, gender, marital status, employment, etc. | EHR |
| General indicators | BMI, weight, height, blood pressure, etc. | EHR + Monitoring |
| Diagnosis | cancer type, disease stage, tumor location, time after diagnosis, etc. | EHR |
| Treatment | surgery, ostomy, radiation, chemotherapy, etc. | EHR |
| Co-morbidities | hypertension, diabetes, CVD, chronic lung disease, high cholesterol | EHR |
| Symptom burden | fatigue, sleep disturbances, depression, pain, cognitive dysfunction, insomnia | Self-reported + Monitoring |
| Quality of life | physical, psychological and social functioning | Self-reported |

## Patient trajectory agents: Design principles

To address the challenges described in "Challenges in patient trajectories: Modeling and analytics", we propose the representation of trajectories using semantic models, and the embedding of interactions in a multi-agent environment, according to the following design principles.

**Ontology-based trajectory modeling** Our model proposes using ontologies to represent trajectories, as well as connected aspects, including illnesses, admission/discharge events, periodical observations, diagnoses, etc. As a result, trajectories can be represented as knowledge graphs with precise semantics, upon which reasoning and analytics can be applied [6, 7]. The advantages of using ontologies are numerous, as they provide semantics by design, allow overcoming heterogeneity, facilitate the interconnection of diverse sources, and can be used as the backbone of logic-based reasoning. In particular, this paper focuses on the use of the widely used schema.org [22] vocabulary (see Fig. 2), which contains a set of medical concepts related to trajectory aspects, including symptoms, medical conditions, therapies, diagnoses, etc.
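To make this principle concrete, the following is a minimal Turtle sketch of how a trajectory fragment could be expressed as a knowledge graph. It is only an illustration: the `ex:` namespace and the `ex:PatientTrajectory`, `ex:patient`, and `ex:hasEvent` terms are hypothetical (they are not part of schema.org), and the ICD-10 code is illustrative; only the `schema:` types and properties come from the vocabulary discussed here.

```turtle
@prefix schema: <http://schema.org/> .
@prefix ex:     <http://example.org/trajectory#> .   # hypothetical namespace

# A trajectory modeled as a knowledge graph linking schema.org-typed events.
ex:trajectory-p123 a ex:PatientTrajectory ;          # illustrative term, not schema.org
    ex:patient  ex:patient-123 ;
    ex:hasEvent ex:diagnosis-1, ex:therapy-1 .

ex:diagnosis-1 a schema:MedicalCondition ;
    schema:name "Malignant neoplasm of colon" ;
    schema:code [ a schema:MedicalCode ;
        schema:codeValue    "C18" ;                  # illustrative ICD-10 code
        schema:codingSystem "ICD-10" ] .

ex:therapy-1 a schema:MedicalTherapy ;
    schema:name "Adjuvant chemotherapy" .
```

Because every node is typed against a shared vocabulary, reasoning and analytics can traverse trajectories coming from different sources in a uniform way.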
**Standard semantic vocabularies** Several ontologies have been standardized, especially in the health domain. These include medication standards, laboratory codes, diagnoses, and biomedical concepts, among many others. Moreover, generic health vocabularies, such as the schema.org medical terms, can be used to have a common way of referring to trajectories and their related concepts. Our architecture, as seen later, is based on the use of standard semantic models, i.e., RDF and ontologies in the health domain. As seen in Fig. 2, the popular schema.org vocabulary contains standard terms, which can be complemented with specific medical ontologies like MeSH [32] or ICD-10 [33]. Moreover, as seen in Fig. 3, we can use these terms to represent the different events and stages in the patient trajectory, e.g., symptoms, therapies, surgical procedures, conditions, etc.

**Fig. 2** Excerpt from schema.org [22] of relevant medical concepts for patient trajectories. For simplicity, empty boxes represent unspecified types

**Agent-based entity modeling** The multi-agent paradigm enables decentralized interactions among entities concerned with patient trajectories. These include the patient herself, with her behaviors, goals, and knowledge. Data acquisition processes can also be modeled as agents, coordinating trajectory building with other agents that implement analytics processing, confidentiality negotiation, or aggregation on behalf of a clinical institution (e.g., for a research study). We propose modeling all entities intervening in the generation, processing, and consumption of trajectory information.

**Multi-agent behaviors for trajectory interactions** Interactions among agents managing trajectories can be governed through dynamic behaviors, considering changes that may occur during the period of observation or study. These behaviors may include ML or other AI-based processing of trajectory data or, at a meta-level, the negotiation of trajectory exchange. Regarding data aggregation, the behavior of an agent representing a clinical study may require managing interactions within a cohort of patients or requesting crowd-sourced data. In all of these, the decentralized nature of these behaviors makes it possible to avoid top-down governance schemes, which are unfeasible in multi-party clinical studies and support environments.

**Negotiation in trajectory processing** The multi-agent paradigm includes the possibility of incorporating negotiation mechanisms at different levels of trajectory analysis. For example, a processing agent using ML techniques may require detailed EHR records for training, which could potentially clash with a patient agent's goal regarding data anonymity. A negotiation could be established to comply with both parties' expectations. Other negotiation protocols can be set up, for instance, by coaching agents, which may propose different treatment strategies to a patient agent. A dialogue between the two parties can then be established in order to agree on the most suitable strategy to follow jointly. Our model considers these negotiation patterns a fundamental element in the decentralized management of patient trajectories.

**Personal data privacy interactions** Agents must be designed to comply with existing regulations for data privacy (e.g., the GDPR). In this regard, it is fundamental to consider semantic models representing personal data handling concepts, including consent, purpose, processing, legal basis, controllers, and recipients, among others [36].
Agents can, therefore, exchange patient trajectory data only if consent requirements are met, and according to the legal constraints reflected in these semantic vocabularies.

## Agent-based architecture for patient trajectory management

This section presents a conceptual architecture of an agent-based approach for patient trajectory management, relying on the use of ontology-driven data models. The central element in this architecture is the τ Agent, which is a patient trajectory management agent (Fig. 4). Agents of this type can play different specific roles, such as a patient agent, a processing agent, a coaching agent, an aggregator agent, or an acquisition agent. A τ Agent is characterized by a set of goals, beliefs, and behaviors, and includes a specialized knowledge graph of patient trajectory data (partial, complete, and/or aggregated). Moreover, it employs a set of channels for communication with other τ Agents, a scheduler for establishing task allocation strategies, a set of standard ontologies for trajectory and medical data representation, and (optionally) a set of ML analytics components.

**Fig. 3** Schematic view of a patient trajectory, aligning with schema.org medical concepts: symptoms, conditions, therapies, surgical procedures, etc.

**Fig. 4** Schematic view of τ Agents for managing patient trajectories

τ Agent goals may differ according to the assumed role [39]. For a patient τ Agent, the goals may be related, for instance, to quality of life indicators. For example, a goal of an agent acting on behalf of a cancer survivor could be to maintain moderate physical activity over a certain period, in order to reduce risk factors of recurrence. Conversely, a coaching agent may define goals regarding the adherence of its assigned patients to their individual treatments or therapies. This could be measured using different indicators, e.g., through quantitative instruments.

Similarly, beliefs can be defined differently according to the agent role. In general, beliefs include metadata of other agents (e.g., patient agents subscribed to a coaching agent, or potential trajectory contributors for training an ML agent), health vocabularies, constraints, and privacy policies. These beliefs can be crucial later on, for example, during a negotiation among different agents. For instance, a coaching agent's belief set can be periodically updated in order to follow the evolution of a patient trajectory, so that future support actions are adapted to the current situation. Behaviors may require access to different functionalities. In the case of processing τ Agents, this may include gateways for machine learning methods or reasoning over the trajectory knowledge graphs. All communication channels in τ Agents use RDF [16] as the underlying representation model (Figs. 4 & 5).
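As a sketch of this design choice, the snippet below shows what an RDF-encoded message between two τ Agents might look like in Turtle. The `msg:` and `ag:` namespaces are hypothetical placeholders, since the paper does not fix a concrete message ontology; the request mirrors the FIPA request interaction discussed below.

```turtle
@prefix schema: <http://schema.org/> .
@prefix msg:    <http://example.org/tau-messages#> .   # hypothetical message vocabulary
@prefix ag:     <http://example.org/agents#> .         # hypothetical agent IRIs

# A coaching agent asks a processing agent for a prediction about a patient.
ag:request-42 a msg:Request ;
    msg:sender   ag:coachingAgent-1 ;
    msg:receiver ag:processingAgent-1 ;
    msg:content  [ a msg:PredictionQuery ;
        msg:aboutPatient    ag:patient-123 ;
        msg:targetIndicator schema:MedicalSymptom ] .
```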
In Fig. 5 we provide a detailed example of interactions among τ Agents assuming different roles. A patient agent acting on behalf of a human may solicit data from data acquisition agents, i.e., those gathering data from sensors in the patient environment. Upon negotiation of the data acquisition terms, sensor agents may periodically send data to the patient agent, which can then construct its own trajectory, which will be part of its own beliefs. Then, an aggregator agent may request, through a negotiation protocol, data from several patient agents. To accept or reject this request, the different privacy regulations and preferences, as well as usage and consent information, are fundamental. Patient agents agreeing to aggregate their data will probably expect further processing to produce actionable feedback. Precisely, a processing agent may then use the aggregated trajectories to create (e.g., prediction) models using ML techniques. The outcomes of the processing of patient trajectories can then be used by a coaching agent to provide support and recommendations to the patients that initially contributed their data.

**Fig. 5** Interactions among τ Agents assuming different roles. All interactions rely on the usage of semantic RDF messages

As can be seen, this conceptual architecture emphasizes the decentralized nature of patient trajectory interactions. τ Agents can respond to entirely different goals, even leading to potential conflicts that would require negotiation to be resolved. Moreover, the approach also encourages support for different levels of commitment within the agent environment. This responds to the personalized requirements of patient support systems. For example, cancer survivors may have different levels of adherence to treatment and very different illness pathways.

Interactions among τ Agents can be embedded in standard agent protocols such as FIPA [1]. For example, as seen in Fig. 6, a coaching agent may require prediction results from a processing agent, regarding potential outcomes of a given patient. This request can be encoded as a Request Interaction Protocol, to which the processing agent may agree or refuse. In case of acceptance, the prediction data can be transmitted. All interactions are encoded in RDF in the proposed architecture.

**Fig. 6** τ Agent interaction following the FIPA request interaction protocol

## Cancer survivors support with τ Agents

To illustrate the different interactions among τ Agents, we present excerpts of semantically annotated data representing parts of patient trajectories, for the case scenario of colorectal cancer survivors.

Consider a patient who has survived colon cancer and is now following a life-long support program. His patient agent is in charge of managing his patient trajectory, and for this purpose it collects EHR information available from agents representing the different hospitals and clinics where he was treated. Moreover, assuming that the support program includes the usage of wearable devices that monitor physical activity, stress, and behavior, the patient trajectory can be completed with live data integrated continuously.

In Listing 1, we illustrate how we can represent a set of symptoms from a patient, using the schema.org vocabulary. In the example, the patient symptoms are encoded as MedicalSymptom instances, with codes referring to a specific medical coding system (in this case, the ICD-10 standard). These symptoms, i.e., fatigue, rectal bleeding, and diarrhea, can be integrated as part of the patient trajectory and could be used later for stratification or classification. The symptomatic and diagnosis information is only one small part of the patient trajectory. Additional information can be appended, including the colon cancer diagnosis itself (Listing 2), treatments such as a colonoscopy, epidemiology, risk factors, stage of cancer, etc. Many of these pieces of information can be used in different ways during a support program.
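The listing bodies themselves did not survive the extraction of this document. As a stand-in, here is a minimal Turtle sketch of what Listings 1 and 2 could contain; the `ex:` IRIs are hypothetical and the ICD-10 code values are illustrative.

```turtle
@prefix schema: <http://schema.org/> .
@prefix ex:     <http://example.org/patient#> .   # hypothetical namespace

# Listing 1 (sketch): patient symptoms as schema:MedicalSymptom instances.
ex:symptom-fatigue a schema:MedicalSymptom ;
    schema:name "Fatigue" ;
    schema:code [ a schema:MedicalCode ;
        schema:codeValue "R53" ; schema:codingSystem "ICD-10" ] .

ex:symptom-bleeding a schema:MedicalSymptom ;
    schema:name "Rectal bleeding" ;
    schema:code [ a schema:MedicalCode ;
        schema:codeValue "K62.5" ; schema:codingSystem "ICD-10" ] .

ex:symptom-diarrhea a schema:MedicalSymptom ;
    schema:name "Diarrhea" ;
    schema:code [ a schema:MedicalCode ;
        schema:codeValue "R19.7" ; schema:codingSystem "ICD-10" ] .

# Listing 2 (sketch): the colorectal cancer condition, with stage, a risk
# factor (which the paper codes with MeSH), and links to the symptoms above.
ex:condition-crc a schema:MedicalCondition ;
    schema:name "Colorectal cancer" ;
    schema:code [ a schema:MedicalCode ;
        schema:codeValue "C18" ; schema:codingSystem "ICD-10" ] ;
    schema:stage [ a schema:MedicalConditionStage ; schema:stageAsNumber 2 ] ;
    schema:riskFactor [ a schema:MedicalRiskFactor ; schema:name "Colonic polyps" ] ;
    schema:signOrSymptom ex:symptom-fatigue, ex:symptom-bleeding, ex:symptom-diarrhea .
```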
Just as an example, considering that risk factors such as polyps or smoking habits can be linked to future recurrence of cancer, the coaching agent may choose to propose actions that reduce those risks. Notice that we can use different coding systems, as in the case of risk factors, where the MeSH [32] standard is employed.

Furthermore, during the program, a cancer survivor may suffer not only from physical problems but also from psychological issues. As an example, consider that the patient suffers from anxiety, mainly due to fear of recurrence. Using a self-reported questionnaire (e.g., through a mobile app), or supported by wearable devices that compute stress levels, an anxiety symptom can be established and encoded with ICD-10, as in Listing 3.

Having this information, the coaching agent can propose actions, in this case potential therapies and activities that could help the patient deal with his conditions. As an example, in Listing 4 we include both an exercise therapy (flexibility) and a psychological therapy (group psychotherapy).

**Listing 1** Example of symptoms encoded with ICD-10 and following schema.org, represented in RDF Turtle format. All prefixes omitted for brevity

**Listing 2** Example of colorectal cancer details described with schema.org

**Listing 3** Example of a medical condition –anxiety– for a cancer survivor

**Listing 4** Example of potential therapies for a cancer survivor –flexibility exercises and psychological group therapy
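Again, since the original listing bodies are missing from this extraction, the following is a minimal sketch of what Listings 3 and 4 could look like, with hypothetical `ex:` IRIs and an illustrative ICD-10 code.

```turtle
@prefix schema: <http://schema.org/> .
@prefix ex:     <http://example.org/patient#> .   # hypothetical namespace

# Listing 3 (sketch): anxiety as a medical condition, ICD-10 coded.
ex:condition-anxiety a schema:MedicalCondition ;
    schema:name "Anxiety disorder" ;
    schema:code [ a schema:MedicalCode ;
        schema:codeValue "F41.9" ;                # illustrative ICD-10 code
        schema:codingSystem "ICD-10" ] ;
    schema:possibleTreatment ex:therapy-flexibility, ex:therapy-group .

# Listing 4 (sketch): the two therapies proposed by the coaching agent.
ex:therapy-flexibility a schema:PhysicalTherapy ;
    schema:name "Flexibility exercises" .

ex:therapy-group a schema:PsychologicalTreatment ;
    schema:name "Group psychotherapy" .
```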
## Discussion and related work

The proposed conceptual architecture is based on two fundamental ideas: (i) the use of semantic representation models, and (ii) the multi-agent paradigm. Both show complementary properties allowing the establishment of decentralized networks of potentially independent agents, which can establish cooperation and negotiation mechanisms to achieve their goals. Although, at this stage, the proposed model has not yet materialized into an implementation, it already establishes the main guiding principles that should be observed. In particular, we emphasize the τ Agent basic structure, the types of roles that can be implemented, the usage of RDF for inter-agent communication, and the reliance on standard vocabularies such as schema.org and on medical ontologies like ICD-10 or MeSH. We believe that this approach can lead to promising results, especially for use-cases where patient trajectories can be exploited using large volumes of data while maintaining personal data preferences and guarantees. We identify several aspects in which further research is required in order to address the challenges identified above, and we relate them to existing work in the literature.

**Ontology agreement** Matching terms among ontologies is a long-studied topic, and in this case it will be necessary to align concepts from different vocabularies, and even data models [25]. For example, patient trajectories could be specified both using schema.org and the FHIR (http://hl7.org/fhir) specifications. Moreover, a large number of specific medical codes can make it hard to overcome potential coding discrepancies. Several works in the literature have used ontology-based approaches for health data integration [17, 30]. However, only a few works include the modeling of interactions, negotiation, and collaboration among intelligent and autonomous systems [11], as in τ Agents.

**Agent autonomy** We presented different profiles for τ Agents, including specialized sensor data acquisition agents. Nevertheless, given that sensing and wearable devices often have limited computation capabilities, it becomes challenging to deploy intelligent agents on such platforms. Although there have been recent proposals on how to adapt multi-agent systems to these environments, e.g., incorporating real-time support [12] or scheduling strategies [13], the integration of these data into semantic trajectories remains to be implemented.

**Implementation** The implementation of the proposed agent-based model is one of the key aspects to consider in the immediate future. This implementation will need to consider the communication interactions as described earlier in the paper, using ontologies such as schema.org as first-class citizens. The open nature of semantic vocabularies is at the same time advantageous for extensibility purposes, but problematic, as the models to integrate can be incompatible or hard to align. The implementation will also consider the issues of agent discovery, negotiation implementation, and the publishing of patient trajectories. Previous works have explored the integration of health agents through semantic services [11] and ontology-based approaches [23, 40], although lacking the concept of patient trajectories.

**Recommendation & support** The proposed architecture serves as a platform for eHealth support. Therefore, the high-level challenge is to provide useful recommendations and advice. We plan to implement the use-case for cancer survivors, following the principles and examples shown in this paper. Beyond existing works in the area, including eHealth support and Semantic Web architectures for patient support [5, 23], we combine both the modelling of trajectories and of agents' behaviors. An additional challenge will be to effectively assess the adequacy and accuracy of the recommendations with respect to the survivors' needs, goals, and expectations.

**Explainability** A general challenge regarding data analytics, and especially when using ML techniques, is explainability. This is even more important in eHealth, where decisions can have vital consequences. In this case, future work should also consider not only the extraction of symbolic knowledge from ML predictors but also the integration of heterogeneous knowledge and negotiation among explainability agents [15]. Agents may need to have reliable explanations of the analysis and decisions taken regarding a trajectory, before choosing a behavior change strategy [2].

**Evaluation and validation** Several indicators must be considered for the evaluation of this approach, including not only performance metrics for communication and decision making, but also the effectiveness of negotiations, accuracy of data analytics, response time of agent interactions, compliance with privacy policies, etc. While a number of ontology-based medical systems have been evaluated in the last decade [28, 35, 37, 40], the incorporation of trajectory and agent-based modelling requires a thorough assessment, e.g.
by running pilot studies.

## Conclusions

In this paper, we presented a novel approach based on multi-agent systems for managing patient trajectories, which are represented and exchanged using semantic models. We first identified a set of challenges in this context, for which we proposed a corresponding set of design principles. In turn, these principles guide our proposal for a conceptual architecture that defines what we call τ Agents, which can assume different roles. Furthermore, we exemplified how this architecture can be used to acquire patient trajectory data, aggregate them, and apply AI algorithms to provide input for coaching agents. The entire concept has been illustrated through a concrete use-case, i.e., cancer survivorship support. Finally, we have proposed a research agenda that continues addressing the different challenges described in the paper, targeting not only scientific but also societal impact through the development of decentralized eHealth applications.

**Funding Information** Open access funding provided by University of Applied Sciences and Arts Western Switzerland (HES-SO). This work is partially supported by the H2020 project PERSIST: Patient-centered survivorship care plan after cancer treatment (GA 875406).

### Compliance with Ethical Standards

**Conflict of interests** The authors declare that they have no conflicts of interest.

**Ethical approval** This article does not contain any studies with human participants or animals performed by any of the authors.

**Open Access** This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

## References

1. Foundation for Intelligent Physical Agents Standard. http://www.fipa.org/.
2. Abdulrahman, A., Richards, D., Ranjbartabar, H., and Mascarenhas, S., Belief-based agent explanations to encourage behaviour change. In: Proceedings of the 19th ACM International Conference on Intelligent Virtual Agents, pp. 176–178, 2019.
3. Alexander, G. L., The nurse—patient trajectory framework. Studies in Health Technology and Informatics 129(Pt 2):910, 2007.
4. Ardissono, L., Goy, A., Petrone, G., and Segnan, M., A multi-agent infrastructure for developing personalized web-based systems. ACM Trans. Internet Technol. (TOIT) 5(1):47–69, 2005.
5. Benyahia, A. A., Hajjam, A., Hilaire, V., and Hajjam, M., e-care: Ontological architecture for telemonitoring and alerts detection. In: 2012 IEEE 24th International Conference on Tools with Artificial Intelligence, Vol. 2, pp. 13–17: IEEE, 2012.
6. Berners-Lee, T., Bizer, C., and Heath, T., Linked data - the story so far. IJSWIS 5(3):1–22, 2009.
7. Berners-Lee, T., Hendler, J., and Lassila, O., The semantic Web. Scientific American 284(5):34–43, 2001.
8. Blumenthal, D., and Tavenner, M., The "meaningful use" regulation for electronic health records. New England Journal of Medicine 363(6):501–504, 2010.
9. Bray, F., Ferlay, J., Soerjomataram, I., Siegel, R. L., Torre, L. A., and Jemal, A., Global cancer statistics 2018: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA: A Cancer Journal for Clinicians 68(6):394–424, 2018.
10. Buonocunto, P., Giantomassi, A., Marinoni, M., Calvaresi, D., and Buttazzo, G., A limb tracking platform for tele-rehabilitation. ACM Trans. Cyber. Phys. Syst. 2(4):1–23, 2018.
11. Caceres, C., Fernández, A., Ossowski, S., and Vasirani, M., Agent-based semantic service discovery for healthcare: an organizational approach. IEEE Intel. Syst. 21(6):11–20, 2006.
12. Calvaresi, D., Cesarini, D., Sernani, P., Marinoni, M., Dragoni, A. F., and Sturm, A., Exploring the ambient assisted living domain: a systematic review. J. Amb. Intel. Humanized Comput. 8(2):239–257, 2017.
13. Calvaresi, D., Marinoni, M., Lustrissimini, L., Appoggetti, K., Sernani, P., Dragoni, A. F., Schumacher, M., and Buttazzo, G., Local scheduling in multi-agent systems: getting ready for safety-critical scenarios. In: Multi-agent Systems and Agreement Technologies, pp. 96–111: Springer, 2017.
14. Chung, J. Y., Lee, D. H., Park, J. H., Lee, M. K., Kang, D. W., Min, J., Kim, D. I., Jeong, D. H., Kim, N. K., Meyerhardt, J. A. et al., Patterns of physical activity participation across the cancer trajectory in colorectal cancer survivors. Supportive Care in Cancer 21(6):1605–1612, 2013.
15. Ciatto, G., Calegari, R., Omicini, A., and Calvaresi, D., Towards XMAS: explainability through multi-agent systems. In: Proceedings of the 1st Workshop on Artificial Intelligence and Internet of Things co-located with AI*IA 2019, pp. 40–53, 2019.
16. Cyganiak, R., Wood, D., and Lanthaler, M., RDF 1.1 concepts and abstract syntax. W3C Recommendation 25(02). https://www.w3.org/TR/rdf11-concepts/, 2014.
17. Dimitrieski, V., Petrović, G., Kovačević, A., Luković, I., and Fujita, H., A survey on ontologies and ontology alignment approaches in healthcare. In: International Conference on Industrial, Engineering and Other Applications of Applied Intelligent Systems, pp. 373–385: Springer, 2016.
18. Dubovitskaya, A., Urovi, V., Barba, I., Aberer, K., and Schumacher, M. I., A multiagent system for dynamic data aggregation in medical research. BioMed Research International 2016, 2016.
19. Eslami, M. Z., Zarghami, A., Sapkota, B., and Van Sinderen, M., Service tailoring: Towards personalized homecare services. ACT4SOC 2010:109–121, 2010.
20. Falcionelli, N., Sernani, P., Brugués, A., Mekuria, D. N., Calvaresi, D., Schumacher, M., Dragoni, A. F., and Bromuri, S., Indexing the event calculus: towards practical human-readable personal health systems. Artific. Intel. Medic. 96:154–166, 2019.
21. Finne, E., Glausch, M., Exner, A. K., Sauzet, O., Stoelzel, F., and Seidel, N., Behavior change techniques for increasing physical activity in cancer survivors: a systematic review and meta-analysis of randomized controlled trials. Cancer Manage. Res. 10:5125, 2018.
22. Guha, R. V., Brickley, D., and Macbeth, S., Schema.org: evolution of structured data on the web. Commun. ACM 59(2):44–51, 2016.
23. Hussain, S., Abidi, S. R., and Abidi, S. S. R., Semantic web framework for knowledge-centric clinical decision support systems. In: Conference on Artificial Intelligence in Medicine in Europe, pp. 451–455: Springer, 2007.
24. Jones, J. M., Olson, K., Catton, P., Catton, C. N., Fleshner, N. E., Krzyzanowska, M. K., McCready, D. R., Wong, R. K., Jiang, H., and Howell, D., Cancer-related fatigue and associated disability in post-treatment cancer survivors. Journal of Cancer Survivorship 10(1):51–61, 2016.
25. Khan, W. A., Khattak, A. M., Hussain, M., Amin, M. B., Afzal, M., Nugent, C., and Lee, S., An adaptive semantic based mediation system for data interoperability among health information systems. J. Med. Syst. 38(8):28, 2014.
26. Klimmek, R., and Wenzel, J., Adaptation of the illness trajectory theory to describe the work of transitional cancer survivorship. In: Oncology Nursing Forum, Vol. 39, p. e499: NIH Public Access, 2012.
27. Koutkias, V. G., Chouvarda, I., Triantafyllidis, A., Malousi, A., Giaglis, G. D., and Maglaveras, N., A personalized framework for medication treatment management in chronic care. IEEE Trans. Inform. Technol. Biomed. 14(2):464–472, 2009.
28. Lasierra, N., Roldán, F., Alesanco, A., and García, J., Towards improving usage and management of supplies in healthcare: An ontology-based solution for sharing knowledge. Expert Systems with Applications 41(14):6261–6273, 2014.
29. Liu, L., O'Donnell, P., Sullivan, R., Katalinic, A., Moser, E. C., de Boer, A., Meunier, F., et al., Cancer in Europe: Death sentence or life sentence? European Journal of Cancer 65:150–155, 2016.
30. Liyanage, H., Krause, P., and de Lusignan, S., Using ontologies to improve semantic interoperability in health data. BMJ Health & Care Informatics 22(2):309–315, 2015.
31. Murray, S. A., Kendall, M., Boyd, K., and Sheikh, A., Illness trajectories and palliative care. BMJ 330(7498):1007–1011, 2005.
32. Nelson, S. J., Schopen, M., Savage, A. G., Schulman, J. L. A., and Arluk, N., The MeSH translation maintenance system: structure, interface design, and implementation. In: Medinfo, pp. 67–69, 2004.
33. World Health Organization et al., ICD-10: international statistical classification of diseases and related health problems: tenth revision, 2004.
34. World Health Organization et al., Guide to cancer early diagnosis, 2017.
35. Paganelli, F., and Giuli, D., An ontology-based system for context-aware and configurable services to support home-based continuous care. IEEE Trans. Inform. Technol. Biomed. 15(2):324–333, 2010.
36. Pandit, H. J., Polleres, A., Bos, B., Brennan, R., Bruegger, B., Ekaputra, F. J., Fernández, J. D., Hamed, R. G., Kiesling, E., Lizar, M. et al., Creating a vocabulary for data privacy. In: OTM Confederated International Conferences on the Move to Meaningful Internet Systems, pp. 714–730: Springer, 2019.
37. Parry, D., Evaluation of a fuzzy ontology-based medical information system. International Journal of Healthcare Information Systems and Informatics 1(1):40–51, 2006.
38. Van Leeuwen, M., Husson, O., Alberti, P., Arraras, J. I., Chinot, O. L., Costantini, A., Darlington, A. S., Dirven, L., Eichler, M., Hammerlid, E. B. et al., Understanding the quality of life issues in survivors of cancer: towards the development of an EORTC QoL cancer survivorship questionnaire. Health and Quality of Life Outcomes 16(1):114, 2018.
Health and Quality of life\n_Outcomes 16(1):114, 2018._\n39. Vermunt, N. P., Harmsen, M., Westert, G. P., Rikkert, M. G. O.,\nand Faber, M. J., Collaborative goal setting with elderly patients\nwith chronic disease or multimorbidity: a systematic review. BMC\n_Geriatrics 17(1):167, 2017._\n40. Wang, M. H., Lee, C. S., Hsieh, K. L., Hsu, C. Y., Acampora,\nG., and Chang, C. C., Ontology-based multi-agents for intelligent\nhealthcare applications. Journal of Ambient Intelligence and\n_Humanized Computing 1(2):111–131, 2010._\n41. Wu, H. S., and Harden, J. K., Symptom burden and quality of\nlife in survivorship: a review of the literature. Cancer Nursing\n38(1):E29–E54, 2015.\n\n**Publisher’s Note Springer Nature remains neutral with regard to**\njurisdictional claims in published maps and institutional affiliations.\n\n\n-----\n\n"
| 10,450
|
{
"disclaimer": "Notice: Paper or abstract available at https://pmc.ncbi.nlm.nih.gov/articles/PMC7396405, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "CCBY",
"status": "HYBRID",
"url": "https://link.springer.com/content/pdf/10.1007/s10916-020-01620-8.pdf"
}
| 2,020
|
[
"JournalArticle"
] | true
| 2020-08-02T00:00:00
|
[
{
"paperId": "988da212382074480db7041447de8c4e41cb5d64",
"title": "Creating a Vocabulary for Data Privacy - The First-Year Report of Data Privacy Vocabularies and Controls Community Group (DPVCG)"
},
{
"paperId": "0bc6bb58bc81e2311700928887175a7696e21e08",
"title": "Belief-based Agent Explanations to Encourage Behaviour Change"
},
{
"paperId": "3147707bccfa9a6d299aa396cf0392b5a661c822",
"title": "Indexing the Event Calculus: Towards practical human-readable Personal Health Systems"
},
{
"paperId": "65d8a42915bd713b1f6822eccd6b682602777ce9",
"title": "Guide to Cancer Early Diagnosis"
},
{
"paperId": "7af0e89793e39505dbf64605d6a906c825a1e20a",
"title": "Behavior change techniques for increasing physical activity in cancer survivors: a systematic review and meta-analysis of randomized controlled trials"
},
{
"paperId": "40e2d723ab5aea9d4a1415eda49289ce2dc7099b",
"title": "A Limb Tracking Platform for Tele-Rehabilitation"
},
{
"paperId": "83ab5cf89399bca5449f4a7baf1b1b3c2e1178c7",
"title": "Global cancer statistics 2018: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries"
},
{
"paperId": "2ef546a40a7319a7474baacabe9c34645ed03da5",
"title": "Formal Verification of Medical CPS"
},
{
"paperId": "de7afc21bc9d83b7c5f2f714918aa8dfdd6fd9d0",
"title": "Understanding the quality of life (QOL) issues in survivors of cancer: towards the development of an EORTC QOL cancer survivorship questionnaire"
},
{
"paperId": "8a384e3ace208007eff9f31b131b144a623dd6fb",
"title": "Local Scheduling in Multi-Agent Systems: Getting Ready for Safety-Critical Scenarios"
},
{
"paperId": "1cc5e97434a55251b7890b69e40d58a08e15d173",
"title": "Collaborative goal setting with elderly patients with chronic disease or multimorbidity: a systematic review"
},
{
"paperId": "d250d04ffbb6aadfd22dde35ee91602222eef8a5",
"title": "Exploring the ambient assisted living domain: a systematic review"
},
{
"paperId": "5644f43a4816ba3beabade76dd01ffeb17c6a0aa",
"title": "Service Tailoring: Towards Personalized Homecare Services"
},
{
"paperId": "7916adc491e7df2bb5b475864e6fd54fc24835f6",
"title": "A Multiagent System for Dynamic Data Aggregation in Medical Research"
},
{
"paperId": "2598ed8817756c1f26df706fcf44e53657aeac5c",
"title": "Cancer in Europe: Death sentence or life sentence?"
},
{
"paperId": "df3c543c893645300b63f19509627f80d4738d62",
"title": "A Survey on Ontologies and Ontology Alignment Approaches in Healthcare"
},
{
"paperId": "d358495daf98cfd7e2f3bed887b5f958e17efd8f",
"title": "Preserving hybrid objects"
},
{
"paperId": "7ffe4eae5ddcd569e6f824711c73d3cc8e5e375a",
"title": "Cancer-related fatigue and associated disability in post-treatment cancer survivors"
},
{
"paperId": "2f43abe059978640adea641679dd0fe168d1b30e",
"title": "Schema.org"
},
{
"paperId": "9f58262626aeaab0b78750a7433f3f8017908c87",
"title": "Schema.org: Evolution of Structured Data on the Web"
},
{
"paperId": "841df50ecd50d27ffea9dada180e34a24aa12409",
"title": "Using ontologies to improve semantic interoperability in health data"
},
{
"paperId": "88342cb4ac24549c2eb8e5f210bf61b7d4bcfd12",
"title": "Symptom Burden and Quality of Life in Survivorship: A Review of the Literature"
},
{
"paperId": "13948b72279f06239dcb4f736187e33965b88cdf",
"title": "Towards improving usage and management of supplies in healthcare: An ontology-based solution for sharing knowledge"
},
{
"paperId": "aa72e1a9ddea60d0029bddbd229c34ea5b074bc5",
"title": "An Adaptive Semantic based Mediation System for Data Interoperability among Health Information Systems"
},
{
"paperId": "4707ba0e48cfacb394e122ce584c6adf7595a53d",
"title": "Patterns of physical activity participation across the cancer trajectory in colorectal cancer survivors"
},
{
"paperId": "b2a26ac825208f10d1bc771c4fb6d7a8a4c4dc29",
"title": "e-Care: Ontological Architecture for Telemonitoring and Alerts Detection"
},
{
"paperId": "27e2fcacae870dcca63b5c1e0947429692fc1519",
"title": "Adaptation of the illness trajectory framework to describe the work of transitional cancer survivorship."
},
{
"paperId": "72fa920c575a451dc693fdf7b539f7cd559a5736",
"title": "An Ontology-Based System for Context-Aware and Configurable Services to Support Home-Based Continuous Care"
},
{
"paperId": "7cb992e7373c5b797d5e76bbfb11035008f1afb3",
"title": "The \"meaningful use\" regulation for electronic health records."
},
{
"paperId": "c4e2469b2a544576154e3c561894f683a2f2fcfb",
"title": "Ontology-based multi-agents for intelligent healthcare applications"
},
{
"paperId": "ab39f9536a529a5d0955fe79cae0f34eb1de9a7a",
"title": "A Personalized Framework for Medication Treatment Management in Chronic Care"
},
{
"paperId": "9f54a0057d0694bc7d1dcf69d186e313ca92775c",
"title": "Linked Data - The Story So Far"
},
{
"paperId": "fb74369671ececb313b88ce24fff09323369f8d3",
"title": "Semantic Web Framework for Knowledge-Centric Clinical Decision Support Systems"
},
{
"paperId": "0118ab6a856ad431288b042ed2c0895246dc72de",
"title": "Agent-Based Semantic Service Discovery for Healthcare: An Organizational Approach"
},
{
"paperId": "daa287d877897ddd8dbaf2e3bd149f55ecbafb9b",
"title": "Illness trajectories and palliative care"
},
{
"paperId": "f4ee9f429f35ccc05e89482a081efd432c26db71",
"title": "A multi-agent infrastructure for developing personalized web-based systems"
},
{
"paperId": "3cdb93a3927b7006c9cf6967a238b0813cee70f8",
"title": "The mice that warred."
},
{
"paperId": "85c941157e16ce87ba1eb49ee6fe9d4603c99061",
"title": "Book Reviews : International Statistical Classification of Diseases and Related Health Problems 10th Revision, Vol 2. Instruction Manual. by World Health Organisation, 1993. 160 pp, Sw fr 40. Hardback. ISBN: 92-4-154420-1"
},
{
"paperId": "d9f16ebe9650e35c5027e0dd4549718613fec3c6",
"title": "Review of literature"
},
{
"paperId": "682effcfad70887c82ffc14a94fe01233f0feb4c",
"title": "The Semantic Web"
},
{
"paperId": "50fe6c0cb83a2f1bfd0fd625e9d461a7068b1bae",
"title": "Towards XMAS: eXplainability through Multi-Agent Systems"
},
{
"paperId": "41c719449db8bbe0688b8a9d5dc565bd94b97a54",
"title": "A systematic review and meta-analysis of randomized controlled trials"
},
{
"paperId": null,
"title": "Rdf 1.1 concepts and abstract syntax. W3c Recommendation 25(02)"
},
{
"paperId": "0bfc14efa5c65cfe8b7bc7edbe2940afb44d3216",
"title": "The Nurse - Patient Trajectory Framework"
},
{
"paperId": "6326c4c75f31ce30c73bcf7ec60a4d6536052a31",
"title": "Evaluation of a Fuzzy Ontology-Based Medical Information System"
},
{
"paperId": "e519e8c3c33f80ec65ef37ba87a4faee3fef7606",
"title": "The MeSH Translation Maintenance System: Structure, Interface Design, and Implementation"
},
{
"paperId": "34b9635d7779e219e9d60e0d3d33919ca9bc123c",
"title": "Publisher's Note"
},
{
"paperId": "3e713c24b17e960b5b6667ab046edfc614614832",
"title": "The Semantic Web: A new form of Web content that is meaningful to computers will unleash a revolutio"
},
{
"paperId": "e6515b9f72a177efe8e5c176fd8b6e2d3d040e77",
"title": "International statistical classification of diseases and related health problems. Tenth revision."
},
{
"paperId": null,
"title": "Foundation for Intelligent Physical Agents Standard"
}
] | 10,450
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Business",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/00873fd3b05f665571e8cd10b4dd147a65c827b2
|
[
"Computer Science"
] | 0.900592
|
A Loyalty System Incorporated with Blockchain and Call Auction
|
00873fd3b05f665571e8cd10b4dd147a65c827b2
|
Journal of Theoretical and Applied Electronic Commerce Research
|
[
{
"authorId": "9813289",
"name": "Shu-Fen Tu"
},
{
"authorId": "49674331",
"name": "Ching-Sheng Hsu"
},
{
"authorId": "2134115239",
"name": "Yanbing Wu"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"J Theor Appl Electron Commer Res"
],
"alternate_urls": [
"http://www.jtaer.com/"
],
"id": "890beb40-ba59-4681-9bb6-88ed97b7decb",
"issn": "0718-1876",
"name": "Journal of Theoretical and Applied Electronic Commerce Research",
"type": "journal",
"url": "http://www.scielo.cl/scielo.php?lng=en&pid=0718-1876&script=sci_serial"
}
|
A loyalty program is a type of incentive to reward customers’ perceived value and enhance their purchasing behavior. The key to the success of a loyalty program is to allow customers to more actively participate in the program. One possible solution is to allow customers to sell out idle loyalty points and buy in the points that they need. On the basis of a call auction, this study designs a peer-to-peer exchange mechanism for customers to realize the above trade. In addition, a blockchain-based system is developed to support the issuance, redemption, and exchange of loyalty points. In this study, Hyperledger Fabric is adopted as the underlying blockchain technology because it has some features that are beneficial to a cross-organizational coalition loyalty program. This study also proposes a feasible multi-host deployment scheme for the Hyperledger Fabric blockchain network that is suitable for our application scenario. Finally, some implementation results are given to demonstrate the system process from the perspective of the application layer. The mechanism proposed in this study is helpful to improve the likelihood of successfully exchanging points, thus accelerating the circulation and use of loyalty points.
|
_Article_
# A Loyalty System Incorporated with Blockchain and Call Auction
**Shu-Fen Tu [1], Ching-Sheng Hsu [2],* and Yan-Ting Wu [1]**
1 Department of Information Management, Chinese Culture University, Taipei 111, Taiwan
2 Department of Information Management, Ming Chuan University, Taoyuan 333, Taiwan
* Correspondence: [email protected]
**Abstract: A loyalty program is a type of incentive to reward customers’ perceived value and enhance**
their purchasing behavior. The key to the success of a loyalty program is to allow customers to
more actively participate in the program. One possible solution is to allow customers to sell out
idle loyalty points and buy in the points that they need. On the basis of a call auction, this study
designs a peer-to-peer exchange mechanism for customers to realize the above trade. In addition, a
blockchain-based system is developed to support the issuance, redemption, and exchange of loyalty
points. In this study, Hyperledger Fabric is adopted as the underlying blockchain technology because
it has some features that are beneficial to a cross-organizational coalition loyalty program. This
study also proposes a feasible multi-host deployment scheme for the Hyperledger Fabric blockchain
network that is suitable for our application scenario. Finally, some implementation results are given
to demonstrate the system process from the perspective of the application layer. The mechanism
proposed in this study is helpful to improve the likelihood of successfully exchanging points, thus
accelerating the circulation and use of loyalty points.
**Keywords: loyalty program; blockchain; Hyperledger Fabric; call auction**
**Citation:** Tu, S.-F.; Hsu, C.-S.; Wu, Y.-T. A Loyalty System Incorporated with Blockchain and Call Auction. _J. Theor. Appl. Electron. Commer. Res._ **2022**, _17_, 1107–1123. [https://doi.org/10.3390/jtaer17030056](https://doi.org/10.3390/jtaer17030056)
Academic Editors: Eduardo Álvarez-Miranda and Jani Merikivi
Received: 23 April 2022
Accepted: 18 July 2022
Published: 4 August 2022
**1. Introduction**
Customer relationship management (CRM) appeared in the 1970s and, since then,
CRM has become a popular tool to enhance customer interaction and knowledge management. CRM can help enterprises to better understand customers and has important
positive impacts on performance [1,2]. A loyalty program (LP) is an important component
of customer relationship management, which can be used to identify, reward, and retain
profitable customers [3]. Most LPs reward customers in the form of points, which can be
exchanged for goods or services. As an incentive, the LP generates perceived value and
guides customers to continue to purchase or use enterprise services related to the program.
Therefore, an LP is an effective way to strengthen customers’ purchasing behavior and
their relationship with the enterprise [4]. An LP not only brings benefits to customers, but
also creates additional revenue for enterprises [5]. At the same time, however, operating an LP imposes extra expenses on the enterprise. Let us take the frequent-guest program, which is a loyalty program widely adopted in the hotel industry, as an example. Its extra expenses include tangible costs, such as affiliation fees, rigid membership benefits, and
advertising fees, and intangible costs, such as management, administration, communication
efforts, and alternative uses of money [6,7]. The control of operating costs is a challenge
when designing loyalty programs. Moreover, the effectiveness of loyalty programs has
recently been questioned [8]. According to the Bond Brand Loyalty Report, the total global
loyalty programs expenditure in 2019 was approximately USD 323 billion. On average,
consumers belong to 14.8 loyalty programs, but are active in only 6.7 of them.
In other words, although the number of members continues to rise, only half of them
are active [9]. Therefore, improving the effectiveness of loyalty programs by increasing
customers’ participation is also an important issue.
Many industry experts and researchers have suggested that modern Information
and Communication Technology (ICT) can help to improve the effectiveness and attractiveness of loyalty programs [8,10–12]. Recently, an emerging information technology
called blockchain has attracted more and more attention to alleviate the issues of LPs [13].
Blockchain is a distributed ledger governed by a peer-to-peer network. Data stored in the
blockchain are immutable and traceable, so it is suitable for managing intangible assets,
such as loyalty points [14,15]. In addition to reducing the operating costs of LPs, blockchain-enabled LP schemes can also affect customers’ LP participation behaviors through three
characteristics: near-real-time transactions, a coalition loyalty program of multiple brands,
and peer-to-peer point exchange. These characteristics can cultivate customers’ motivation
to participate in LPs by delivering various values within LPs, and thus improve customer
participation [16]. There have been studies that propose blockchain-based LP systems
achieving these features. Some researchers designed a unified platform for companies
to form an alliance to issue electronic reward points and support cross-organizational
redemption. Moreover, the value of loyalty points is pegged to tokens, which in turn determines
the peer-to-peer point exchange rates [5,17–19]. Since the LP is not a new concept, most
companies may have their own LP systems, and the migration of the legacy system may
be very laborious and time-consuming, resulting in companies resisting change. Chen
et al. [20] designed a three-layer architecture for a blockchain-enabled LP system to make
minimal modifications to legacy systems. Customers can use points from company ‘A’ to
redeem goods or services from company B based on the pre-set exchange rate between
the points of the two companies. However, the rule to determine the exchange rate is
not specified in their paper. In addition, Chen et al.’s architecture does not seem to take
into account the exchange of points between customers. Pramanik et al. [21] proposed a
blockchain-based reward point exchange system that allowed users to directly exchange
points issued by different companies. A customer who owns points for company ‘B’ but
wants to redeem them on the goods and services of company ‘A’ can propose an exchange
of point ‘A’ for point ‘B’. The quantity of point ‘A’ and point ‘B’ to be exchanged is specified
by the person proposing the exchange. If another customer is satisfied with the quantity and accepts the proposal, the exchange is executed. Companies can benefit from such a system because
their respective redemption rules can be retained, and there is little change to their original
LP system. In addition, the exchange rate of points does not need to be priced in tokens, so
tokens can be eliminated. However, there must be one person who has a sufficient quantity
of point ‘B’ and is satisfied with the exchange rate, otherwise the exchange will not be
executed. It is reasonably foreseeable that some proposals may be pending for a long time.
If companies wish to form a consortium to offer a coalition loyalty program but do not
want to be hindered by legacy system migration, a blockchain-based platform supporting
peer-to-peer loyalty point exchange is a good solution. The aim of this paper is to propose
a blockchain-based platform enabling customers to exchange their own loyalty points.
Different from Pramanik et al. [21], we designed a fair trading mechanism to automatically
match the two sides of the exchange. Moreover, even if there is no single person possessing
a sufficient quantity of points, the exchange can still be performed because our mechanism
can gather the same points to fulfill the exchange order. The concept of the proposed
mechanism is borrowed from the call auction mechanism of the stock exchange market.
In addition, this study adopts Hyperledger Fabric as the underlying blockchain platform
because it has some features suitable for business applications. The remainder of this paper
is organized as follows: in Section 2, Hyperledger Fabric and the call auction mechanism
are briefly introduced, and research related to blockchain-based loyalty point systems is
reviewed; in Section 3, the proposed trading mechanism and the proposed blockchainbased LP system are described in detail; in Section 4, the experimental results are presented;
and, finally, the discussion and conclusions are provided in Section 5.
**2. Related Work**
_2.1. Blockchain and Hyperledger Fabric_
Initially, blockchain was used as the underlying technology of Bitcoin, proposed by
Satoshi Nakamoto [22]. Each transaction is recorded as a block, and these blocks are
linked securely to prevent any block from being changed or a block from being inserted
between two existing blocks. This makes the blockchain tamper-proof and provides the
key advantage of immutability. Moreover, blockchain is decentralized and maintained by
the collective, which makes the data stored on the blockchain more reliable [23]. According
to the mechanism of participation and access, blockchain can be divided into public,
consortium, and private [24,25]. A public blockchain is open to everyone. Anyone
can participate in the maintenance and data reading of the blockchain. It is completely
decentralized and not controlled by any organization. Bitcoin is a typical example of a
public blockchain. Because it is completely open and transparent, maintenance requires
huge computing power, and transactional privacy is lacking. Contrary to a public blockchain, a
private blockchain is open to an individual or entity. There is an organization that controls
who can participate in and maintain the shared ledger. Although limited decentralization
makes transactions faster and more efficient, a private blockchain fails to make full use of
the decentralized trust foundation, which limits its application. A consortium blockchain
is open to specific organizations and groups, and these pre-selected organizations decide
who can perform transactions or access data. In other words, a consortium blockchain
platform is governed by multiple organizations. No organization can engage in any illegal
activities because every other organization on the platform carries out monitoring and
checking. Therefore, a consortium blockchain can help enterprises to trust and collaborate
with each other. Such a blockchain can adopt a consensus algorithm that is more efficient
than the public blockchain. A consortium blockchain is very suitable for enterprises that
require all participants to obtain permission and share the responsibility of maintaining
the blockchain.
Hyperledger Fabric, founded by the Linux Foundation, is a consortium blockchain especially designed for large-scale enterprises [26]. Compared with other common distributed
ledger or blockchain platforms, Hyperledger Fabric offers some important differentiated
functions. First of all, Fabric has a highly modular and configurable structure, which
can provide versatility and optimization for various industries. Second, Fabric supports
the writing of smart contracts in general programming languages, such as Java, Go, and
Node.js, rather than domain-specific languages (DSL) [27]. This means that most companies
already have the skills needed to develop smart contracts and no extra training is needed to
learn a new language. Third, Fabric uses permissioned access, and participants know each
other and are not anonymous. In addition, one of the most important differences between
Fabric and other blockchain technologies is that it supports a hot-swappable consensus
mechanism, enabling the platform to more effectively adapt to specific use cases and trust
models. Finally, Fabric does not need a cryptocurrency consensus protocol to deal with expensive mining or promote the execution of smart contracts, which means that it only needs
to deploy the platform at approximately the same operating costs as other decentralized
systems. Moreover, compared with R3 Corda, which is also an enterprise-grade platform
and does not require cryptocurrency, Hyperledger Fabric achieves better throughput [28].
In view of these unique design features, Hyperledger Fabric is used as the underlying
blockchain platform in this study.
_2.2. Blockchain-Based Loyalty Point Systems_
Due to the features of immutability and traceability, blockchain has attracted attention
in managing intangible assets, such as loyalty points. According to [29], the inherent
properties of blockchain, such as immutability and distributed data storage, have significant positive impacts on institution-based trust, which is helpful in implementing loyalty
programs that can maintain long-lasting relationships between service providers and their
customers. The key reasons that customers may continue accumulating or using loyalty
points are the diversity of redeemable goods or services and the convenience of using
loyalty points [20]. Therefore, some researchers began to propose loyalty point systems
based on blockchain to support cross-company or cross-industry alliance.
Liao et al. [18] put forward a blockchain-based LP platform to promote the cooperative
issuance of electronic reward points and support cross-organizational redemption. In Liao
et al.’s system, the transactions involving point issuance and redemption were recorded
on the blockchain platform ‘Ethereum’. Technically, a company can launch an Ethereum
node on its own, or seek out an Ethereum node service provider to join the platform.
These Ethereum nodes are dedicated to mining to make the system more credible. Relying
on other mining nodes is also an option for the security of the network. Sönmeztürk
et al. [19] also developed a loyalty system based on the Ethereum platform. Sönmeztürk
et al.’s system allowed companies to issue ERC20-compliant tokens, called TECH, as
loyalty points. Since TECH is an ERC20-compatible token, it cannot only be used to
pay any company within the alliance for services or goods, but can also be traded in
the exchange market. Moreover, customers can exchange TECH with cryptocurrencies.
Customers can check their wallet and check their balance of Ether and TECH tokens.
Because Ethereum suffers from low transaction throughput and high fees, some researchers
adopted other blockchain platforms. Agrawal et al. also proposed a unified blockchain-based LP platform and adopted two different blockchain technologies: one is Stellar [30],
and the other is Hyperledger Fabric. Stellar is used to manage the loyalty points of the
companies and customers. Stellar is similar to Ethereum, but it sacrifices decentralization
and security to gain better transaction speed and cost. Hyperledger Fabric provides
confidentiality and transaction privacy between different groups of companies. Since Stellar
was originally a decentralized payment platform that supports cross-border transactions,
Agrawal et al.’s system allows customers to exchange loyalty points by making a trade
offer. An example of a trade offer is to “Buy company A’s loyalty coins with company B’s
20 loyalty coins”. If there is an offer, e.g., “Sell company A’s loyalty coins for company
B’s 20 loyalty coins”, then the two offers match, and the exchange is executed. In other
words, only one-to-one matching is possible, and trading part of the volume is not allowed.
Dominguez Perez et al. [5] proposed a loyalty program based on Waves blockchain, and,
similar to Sönmeztürk et al. [19], they used Waves tokens as loyalty points. Different from
Ethereum, using PoW (Proof of Work), Waves adopts lease PoS (Proof of Stake) as its
consensus mechanism. Thus, the authors stated that Waves provides more flexibility than
Ethereum [31].
Chen et al. [20] have conducted research on initiating a cross-industry horizontal
alliance among operators using a blockchain-based loyalty points system. They found that
replacing the legacy system with a brand-new system is very costly, so it was not accepted
by the operators because the legacy system already existed. Their research concluded that
minimum modification of the legacy system is a basic design principle for a blockchain-based loyalty point system. Chen et al. mentioned that one of the tasks related to the design
principle is to keep the original settlement rules of their respective companies. To achieve
the cooperation of a loyalty program and consistent settlement rules, Chen et al. devised
a method to set the exchange rate between different types of points. Assuming that the
exchange rate between point ‘A’ and point ‘B’ is 2:1 and that the redemption cost of an item
from company ‘B’ is 500 points ‘B’, then customers can exchange 1000 points ‘A’ for this
item. In this way, the legacy system and the exchange rate of each company can remain
stable. Pramanik et al. [21] proposed a blockchain-based loyalty point system that enables
customers to exchange points with each other. A customer can make an offer to specify the
details of the exchange transaction—for example, “buy 2000 points of company ‘A’ and sell
3000 points of company ‘B’”. If a customer wants to buy a certain number of points and
another wants to sell the same number of points, their orders match, thus completing the
exchange. Therefore, if a customer wants to redeem goods from company ‘A’ but only owns
points for company ‘B’, they can exchange the points from company ‘B’ for points from
company ‘A’. As a result, the system of Pramanik et al. can achieve the purpose of keeping
the original settlement rules. Furthermore, it is no longer necessary to pre-set the exchange
rate. However, as with the system of Agrawal et al., Pramanik et al.’s system only provides
one-to-one matching. In other words, the offer is either entirely transacted or not transacted
at all. Therefore, the probability of successfully matching offers may not be satisfactory. It
is reasonable to state that the probability would be improved if many-to-many matching
was allowable or the points of an offer could be partially traded.
_2.3. Call Auction Mechanism_
In the stock market, investors place their orders to buy or sell at a certain price. Orders
are periodically processed and executed using a price–time priority precedence hierarchy.
Call auction is a matching method to determine the final execution price of these orders.
During a call auction, buy and sell orders are put together in batches, and bids and offers are
aggregated and matched with each other. Call auctions typically match many buy orders
and many sell orders at one trade price. The trade price of a call auction is determined by
the following principles [32]:
1. Achieving the maximum trade volume such that buy orders with bid prices higher
than the determined price and sell orders with offer prices lower than the determined price shall all be satisfied;
2. If there are buy and sell orders with prices equal to the determined price, at least one
type of order shall be fully satisfied;
3. When more than two prices meet the above two principles, the price closest to the
latest traded price in the current trading period shall be used. If there is not yet any
traded price in the current period, the price closest to the auction reference price at
market opening shall be used.
According to the above principles, all the buy orders whose bid prices are higher
than the determined price, and all the sell orders whose offer prices are lower than the
determined price, are traded at the determined price. Orders with the same prices as the
determined price are traded according to the principle of time first. In addition, many-tomany matching is allowable. In other words, one or more buy orders may be matched with
one or more sell orders. If some orders cannot be matched at the end of the current auction,
they enter the next auction. Moreover, in a call auction, it is possible that only a part of
the trade volume of an order is matched. The rest of the unmatched volume also enters
the next auction. In summary, call auction leads to the maximum trading volume, and the
trade price is fair to both buy and sell orders.
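To make the price-determination principles concrete, the following Go sketch computes the call-auction price for a small batch of orders. The `Order` type and the `refPrice` argument are illustrative assumptions of our own; principle 2 falls out of the maximum-volume search, and time-priority allocation among orders at the determined price is omitted for brevity.

```go
package main

import (
	"fmt"
	"math"
)

// Order is a simplified exchange order after pre-processing (Section 3.1):
// a bid (buy) or ask (sell) price quoted in units of the counter point.
type Order struct {
	Price float64
	Qty   int
}

// auctionPrice applies the paper's principles: it picks the price that
// maximizes the matched volume, breaking ties by closeness to refPrice
// (the latest traded price, or the opening reference price).
func auctionPrice(buys, sells []Order, refPrice float64) (float64, int) {
	// Candidate prices are the prices appearing in the order book.
	candidates := map[float64]bool{}
	for _, o := range append(append([]Order{}, buys...), sells...) {
		candidates[o.Price] = true
	}
	bestPrice, bestVol := refPrice, -1
	for p := range candidates {
		demand, supply := 0, 0
		for _, b := range buys {
			if b.Price >= p { // principle 1: bids at or above p are eligible
				demand += b.Qty
			}
		}
		for _, s := range sells {
			if s.Price <= p { // offers at or below p are eligible
				supply += s.Qty
			}
		}
		vol := demand
		if supply < vol {
			vol = supply
		}
		// Principle 3: among maximizing prices, prefer the one closest to refPrice.
		if vol > bestVol || (vol == bestVol && math.Abs(p-refPrice) < math.Abs(bestPrice-refPrice)) {
			bestPrice, bestVol = p, vol
		}
	}
	return bestPrice, bestVol
}

func main() {
	buys := []Order{{1.5, 20}, {1.4, 10}}
	sells := []Order{{1.3, 15}, {1.5, 10}}
	p, v := auctionPrice(buys, sells, 1.4)
	fmt.Printf("trade price %.2f, volume %d\n", p, v) // trade price 1.50, volume 20
}
```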
Similar to a stock exchange, a point exchange also pairs buy and sell orders to make
a trade. Therefore, this study proposes a point trading platform that simulates the stock
exchange market and uses call auction as the matching method. As mentioned earlier, call
auction can bring benefits to our system. First of all, exchange orders are not limited to
one-to-one matching. A buy order can be matched with multiple sell orders, and vice versa.
Second, even if the total quantity in the market cannot meet the exchange order, partial
matching can be obtained. For example, if there is one order to give point ‘B’ for point ‘A’
and three orders to give point ‘A’ for point ‘B’, the quantity of the former order is 20 ‘B’s for
30 ‘A’s, and the accumulated quantity of the latter three orders is 20 ‘A’s for 20 ‘B’s. In this
case, the exchange of these four orders can still be executed, and only 20 ‘A’s of the former
order can be satisfied. Third, neither the buyer nor the seller suffers losses with the trade
price determined by the call auction. If a buy order is traded, the trade price will be equal
to or less than the bid price. Similarly, if a sell order is traded, the trade price will be equal
to or greater than the offer price. Nevertheless, it remains unknown whether call auction
can be successfully employed as a matching method for point exchange. The content of
a point exchange order is not entirely the same as the content of a stock exchange order.
Therefore, every point exchange order needs to be converted into a buy or sell order before
entering a call auction. The detailed conversion method will be explained in Section 3.1.
**3. The Proposed Method**
This section first describes the proposed peer-to-peer trading mechanism in detail and then presents the blockchain-based loyalty point system that supports it.
_3.1. The Trading Mechanism_
In the stock exchange market, a trade order is an instruction given by an investor
to indicate how many shares of a stock to buy or sell at a specific price. As regards the
peer-to-peer point exchange, a trade order indicates that a quantity of one point is given to
receive a quantity of another point in return. Obviously, the information provided by an
order in stock exchange is different from the information provided by an order in point
exchange. Therefore, a point exchange order requires pre-processing before entering the
auction phase, and post-processing after the call auction is completed. We will explain
in detail the items included in a stock exchange order and point exchange order and the
pre-processing and post-processing.
A. Stock Exchange Order
The items in a trade order of stock exchange include stock code, trade type, trade
quantity, price, and order time. The trade type is either buy or sell, so the price is a bid
price for a buy order and an ask price for a sell order. Using a matching method, a buy
order in the stock exchange market is matched with one or more sell orders, or vice versa.
For cash trading, the investor who places a buy order needs to deposit cash at the trade
price. Alternatively, the investor who places a sell order receives cash at the trade price. In
the domestic stock exchange market, the price is in domestic currency.
B. Point Exchange Order
The behavior of peer-to-peer point exchange involves taking one point in return for
another point. It can be seen as selling one point and buying another point at one time.
Suppose that each type of point is coded: the items in a trade order of point exchange
include the code of the points to buy, quantity of points to buy, code of points to sell,
quantity of points to sell, and order time. If one customer wants to obtain a certain number
of points and another customer wants to obtain the same number of points, their orders can
be matched, and both parties will receive the points that they need. Since point exchange is
similar to stock exchange to a certain extent, this study aims to apply call auction as the
matching method. However, the items of a point exchange order need to be pre-processed
to correspond to the items of a stock exchange order.
C. Pre-Processing
Let X and Y denote the codes of two different points, and X is lexicographically less
than Y. Let Qty_X and Qty_Y denote the order quantities of X and Y, respectively. A point
exchange order submitted at time T is pre-processed according to the following two cases.
Case 1: Sell X for Y
In this case, the point exchange order corresponds to a sell order as follows:
• Stock code = X
• Trade type = sell
• Trade quantity = Qty_X
• Ask price = (Qty_Y/Qty_X) in currency units of Y
• Time = T
Case 2: Sell Y for X
In this case, the point exchange order corresponds to a buy order as follows:
• Stock code = X
• Trade type = buy
• Trade quantity = Qty_X
• Bid price = (Qty_Y/Qty_X) in currency units of Y
• Time = T
In other words, the price of the point X is derived from the exchange rate against the
point Y, and in this way we can obtain a point exchange order corresponding to a buy or
sell order of stock exchange. All the buy and sell orders with the same trade target and
currency unit are collected for call auction. For example, a customer wants to give 30 points
coded as 1200 in exchange for 20 points coded as 1100. Suppose that the customer places
the order at 10:10:10 on 15 September 2021; the order is then converted into a buy order
containing the following information:
• Stock code = 1100
• Trade type = buy
• Trade quantity = 20
• Bid price = 1.5 (currency unit: 1200)
• Time = 15 September 2021 10:10:10
All the orders for buying and selling point 1100 at a certain price in units of point
1200 are collected for call auction.
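Because the pre-processing is purely mechanical, it can be expressed compactly in code. The following Go sketch converts a point exchange order into a buy or sell auction order according to the two cases above; the struct and field names are illustrative choices of our own, not the system’s actual data model.

```go
package main

import (
	"fmt"
	"time"
)

// ExchangeOrder: give QtySell of PointSell to receive QtyBuy of PointBuy.
type ExchangeOrder struct {
	PointBuy, PointSell string
	QtyBuy, QtySell     int
	Time                time.Time
}

// AuctionOrder mirrors a stock exchange order: trade the point Code at
// Price, quoted in units of the lexicographically larger point (Unit).
type AuctionOrder struct {
	Code, Unit string
	Buy        bool    // true = buy order, false = sell order
	Qty        int
	Price      float64 // bid price for buys, ask price for sells
	Time       time.Time
}

// preProcess implements the two cases of Section 3.1: the point with the
// lexicographically smaller code plays the role of the "stock".
func preProcess(o ExchangeOrder) AuctionOrder {
	if o.PointSell < o.PointBuy { // Case 1: sell X (= PointSell) for Y
		return AuctionOrder{
			Code: o.PointSell, Unit: o.PointBuy, Buy: false,
			Qty: o.QtySell, Price: float64(o.QtyBuy) / float64(o.QtySell),
			Time: o.Time,
		}
	}
	// Case 2: sell Y (= PointSell) for X (= PointBuy): a buy order on X.
	return AuctionOrder{
		Code: o.PointBuy, Unit: o.PointSell, Buy: true,
		Qty: o.QtyBuy, Price: float64(o.QtySell) / float64(o.QtyBuy),
		Time: o.Time,
	}
}

func main() {
	// The paper's example: give 30 of point 1200 for 20 of point 1100.
	o := ExchangeOrder{PointBuy: "1100", PointSell: "1200", QtyBuy: 20, QtySell: 30, Time: time.Now()}
	fmt.Printf("%+v\n", preProcess(o)) // buy 20 of 1100 at bid 1.5 in units of 1200
}
```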
D. Post-processing
If a buy or sell order is successfully matched, the order will be traded at the price
determined by call auction. Then, post-processing is used to restore the order to the original
exchange information. The post-processing of a matched order can be divided into the
following two cases.
Case 1: Sell a quantity Q of X at the trade price P in units of Y
• Point to sell = X
• Quantity to sell = Q
• Point to buy = Y
• Quantity to buy = P × Q
Case 2: Buy a quantity Q of X at the trade price P in units of Y
• Point to sell = Y
• Quantity to sell = P × Q
• Point to buy = X
• Quantity to buy = Q
Take a sell order of point 1100 as an example. If the trade quantity is 40 and the trade
price is 1.4 units of point 1200, then 40 of point 1100 will be exchanged for 56 of point 1200.
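A matching sketch of the post-processing, continuing the `ExchangeOrder` and `AuctionOrder` types from the previous snippet (and assuming "math" is added to its imports); note the rounding guard, since P × Q is computed in floating point:

```go
// postProcess restores a matched auction order, traded at tradePrice for
// tradedQty units (possibly only part of the original quantity), to the
// original exchange view of Section 3.1.
func postProcess(a AuctionOrder, tradedQty int, tradePrice float64) ExchangeOrder {
	// P × Q units of the quote point; rounded to avoid floating-point drift
	// (e.g., 1.4 × 40 must come out as exactly 56, not 55.999...).
	counter := int(math.Round(tradePrice * float64(tradedQty)))
	if a.Buy { // Case 2: bought Q of X, so P × Q of Y is given away
		return ExchangeOrder{PointBuy: a.Code, QtyBuy: tradedQty,
			PointSell: a.Unit, QtySell: counter, Time: a.Time}
	}
	// Case 1: sold Q of X, so P × Q of Y is received
	return ExchangeOrder{PointSell: a.Code, QtySell: tradedQty,
		PointBuy: a.Unit, QtyBuy: counter, Time: a.Time}
}
```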
_3.2. Blockchain-Based Loyalty Point System_
The main functions of the proposed system include registration, issuance, redemption,
and exchange. Before explaining the system process in detail, we define the following symbols:
• X, Y: company and also code of point;
• A, B: customer;
• idu: identity of a customer u;
• ecc_pku: ECC (Elliptic Curve Cryptography) public key of role u (ecc_sku denotes the corresponding private key);
• DSu: digital signature of role u;
• Enc(m, k): encryption function, which encrypts message m with key k;
• Dec(m, k): decryption function, which decrypts message m with key k;
• H(m): SHA-256 hash function, which generates the digital digest of message m.
Figure 1 illustrates the whole system process from the point of view of system users.
Users of the proposed system are mainly companies and customers. The issuance and
redemption occur between a company and a customer, but in order to express the peer-to-peer point exchange, we add two companies and two customers in Figure 1. The following
is a detailed description of Figure 1. The functions of registration, issuance, and redemption
are only described for company X and customer A.
**Figure 1. The whole process of the proposed system.**
(1) Registration

In this stage, a pair of ECC public and private keys is generated for company X and customer A, respectively. The secret keys, ecc_skX and ecc_skA, are kept secret, and the public keys, ecc_pkX and ecc_pkA, are recorded in the blockchain. The key pairs are used for digital signatures in a later phase.

(2) Issuance

After customer A interacts with company X, point X and its quantity QX associated with customer A’s identity idA are recorded in the blockchain. The quantity QX is determined according to the issuance rule set by company X.

(3) Redemption

When customer A wants to purchase the products of company X, customer A withdraws the required quantity QX of point X from the blockchain. Customer A generates the digital signature DSA by calculating Enc(H(X|QX), ecc_skA), where ‘|’ represents a concatenation operator. Then, customer A submits X, QX, and DSA to company X, and company X queries ecc_pkA from the blockchain and checks the authenticity of X and QX by comparing Dec(DSA, ecc_pkA) with H(X|QX). If they match, customer A will successfully obtain products from company X.

(4) Exchange
If the point exchange order of customer A is matched with that of customer B, the
exchange is executed as follows. At first, customer A and customer B generate digital signatures DSA and DSB, respectively, where DSA = Enc(H(X|QX), ecc_skA) and
_DSB = Enc(H(Y|QY), ecc_skB). Then, both customer A and customer B query each other’s_
public keys, ecc_pkA and ecc_pkB, from the blockchain and perform authenticity verification
by comparing Dec(DSA, ecc_pkA) with H(X|QX) and Dec(DSB, ecc_pkB) with H(Y|QY). If
the comparisons show no difference, then point Y and quantity QY associated with idA and
point X and quantity QX associated with idB are recorded in the blockchain.
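In practice, the Enc(H(·), ecc_sk)/Dec(·, ecc_pk) pair described above corresponds to an ordinary ECDSA signature over a SHA-256 digest. A minimal Go sketch of the redemption check follows, with the message layout X|QX encoded as a plain string (an assumption of ours, not the paper’s wire format):

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/sha256"
	"fmt"
)

// signClaim produces DS_A = Sign(H(X|Q_X)) with the customer's ECC secret key.
func signClaim(sk *ecdsa.PrivateKey, pointCode string, qty int) ([]byte, error) {
	digest := sha256.Sum256([]byte(fmt.Sprintf("%s|%d", pointCode, qty)))
	return ecdsa.SignASN1(rand.Reader, sk, digest[:])
}

// verifyClaim is run by the company: it recomputes H(X|Q_X) and checks DS_A
// against the customer's public key ecc_pk_A fetched from the blockchain.
func verifyClaim(pk *ecdsa.PublicKey, pointCode string, qty int, sig []byte) bool {
	digest := sha256.Sum256([]byte(fmt.Sprintf("%s|%d", pointCode, qty)))
	return ecdsa.VerifyASN1(pk, digest[:], sig)
}

func main() {
	sk, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader) // customer A's key pair
	sig, _ := signClaim(sk, "1100", 500)                     // redeem 500 of point 1100
	fmt.Println("valid:", verifyClaim(&sk.PublicKey, "1100", 500, sig))
}
```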
In this study, Hyperledger Fabric is used as the underlying blockchain platform,
and system users interact with Hyperledger Fabric through the applications, as shown
in Figure 2. Hyperledger Fabric provides a number of SDKs (Software Development
Kits) for several common programming languages. The Hyperledger Fabric Client SDK
provides various APIs (Application Programming Interfaces), enabling applications to send
requests to, and receive responses from, the Hyperledger Fabric blockchain network. The
deployment of the Hyperledger Fabric blockchain network includes an orderer cluster
and a consortium composed of two or more organizations, as shown in Figure 3. At
present, some implementations of the ordering service are available. In this study, Raft, officially recommended by Hyperledger Fabric, was adopted. The organizations refer to the companies that participate in the collaborative loyalty program. The nodes in the Hyperledger Fabric architecture play a variety of roles, including endorser, committer, orderer, and CA (Certificate Authority). An orderer provides services to arrange and package transactions into blocks and also provides a service of crash fault tolerance (CFT). The Hyperledger Fabric CA is responsible for the registration of identities and certificate renewal and revocation. The client application signs and submits a transaction proposal to the endorsement peer. The endorsement peer is responsible for verifying the identity and authority of the submitting client, approving the execution results of the chaincode, and returning the verification output to the client. Committer is the default role of each peer in the Hyperledger Fabric architecture and is responsible for committing transactions and maintaining the ledger and state. The services of the endorser and committer are provided by the organizations in the consortium. In addition to the various roles of nodes mentioned above, the Hyperledger Fabric architecture includes a communications mechanism, called a channel, which is used to define access control between organizations in the consortium. The system channel is created at the beginning to define the set of ordering nodes and store the consortium configuration. Whenever a new organization joins or an organization exits a consortium, the consortium configuration in the system channel needs to be updated to reflect these changes. Application channels are used to define the private communication among consortium members, in which members share the same ledger and chaincode for a specific business purpose.
**Figure 2. Interactions between users and the blockchain network.**

**Figure 3. Multi-host deployment of Hyperledger Fabric network.**
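To illustrate how such records could be maintained on-chain, the following minimal chaincode sketch uses the fabric-contract-api-go package. The contract name, key layout, and functions are simplifications of our own, not the system’s actual chaincode; signature verification and the exchange logic are omitted.

```go
package main

import (
	"fmt"
	"strconv"

	"github.com/hyperledger/fabric-contract-api-go/contractapi"
)

// PointsContract manages per-customer balances of each company's points.
type PointsContract struct {
	contractapi.Contract
}

// balanceKey is an illustrative world-state key: one entry per (customer, point).
func balanceKey(customerID, pointCode string) string {
	return "balance~" + customerID + "~" + pointCode
}

// Balance reads a customer's balance of one point type from the ledger.
func (c *PointsContract) Balance(ctx contractapi.TransactionContextInterface,
	customerID, pointCode string) (int, error) {
	raw, err := ctx.GetStub().GetState(balanceKey(customerID, pointCode))
	if err != nil || raw == nil {
		return 0, err
	}
	return strconv.Atoi(string(raw))
}

// Issue credits qty of the company's point to a customer (the issuance rule
// converting a consumption amount to points would live in a richer contract).
func (c *PointsContract) Issue(ctx contractapi.TransactionContextInterface,
	customerID, pointCode string, qty int) error {
	bal, err := c.Balance(ctx, customerID, pointCode)
	if err != nil {
		return err
	}
	return ctx.GetStub().PutState(balanceKey(customerID, pointCode),
		[]byte(strconv.Itoa(bal+qty)))
}

func main() {
	cc, err := contractapi.NewChaincode(&PointsContract{})
	if err != nil {
		panic(fmt.Sprintf("error creating chaincode: %v", err))
	}
	if err := cc.Start(); err != nil {
		panic(fmt.Sprintf("error starting chaincode: %v", err))
	}
}
```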
**4. Implementation Results**
_4.1. System Interface_
As mentioned in Section 3.2, the proposed system includes four functions: registration, issuance, redemption, and exchange. Below, we introduce some interfaces of these four functions. In addition, the blockchain keeps the entire history of transactions and customers’ loyalty points in full detail. Therefore, the proposed system also provides a function for customers to examine the transaction history and the balance of points in their accounts.

A. Registration

Initially, customers need to fill in their identity number, name, and password to register in the system (see Figure 4a). After this, the proposed system generates a pair of ECC private and public keys for the customer, and the public key is recorded in the blockchain. Members of the consortium are assigned identities and passwords by default, and they can log in to the administrative platform using the identities and passwords, as shown in Figure 4b.
**Figure 4. User interface of registration.**
B. Issuance

After logging into the system, the issuer can set the consumption amount corresponding to one loyalty point (see Figure 5a). Given the consumer’s ID number and consumption amount (see Figure 5b), the system converts the points that should be issued to the consumer according to the issuance rule, and writes the issuance record in the blockchain.
**Figure 5. User interface of issuance.**
C. Redemption

Customers can view the redeemable products of each company and the points that they need to spend on each product, as shown in Figure 6a. Recall the system process of redemption described in Section 3.2. The customer needs to read points from the blockchain and generate a digital signature and then submit these to the company. The company then uses the customer’s public key to verify the submitted information. Corresponding to the above system process of redemption, the overall operations of system users are described as follows. The customer presses the redemption button of the product to be redeemed, and then a QR code is generated, as shown in Figure 6b. Next, the customer presents the QR code to the company, and the company scans the QR code to complete the verification. If the verification is passed, the system provides a message, as shown in Figure 6c, and the company can provide the product to the customer.
**Figure 6. User interface of redemption.**
D. Exchange

A customer can make an exchange order using the procedure shown in Figure 7a and check the transaction status, as shown in Figure 7b. Figure 7b lists two transactions, in which the first one has succeeded and been completed, and the second one is waiting for matching. By pressing the cancel button, the customer can cancel a transaction that is waiting for matching.
**Figure 7. User interface of exchange.**

E. List

The entire history of all transactions is kept on the blockchain, so the proposed system provides an interface for customers to inquire about the list of loyalty point transactions, as shown in Figure 8a. In addition, a customer can also check the balance of each type of point, as shown in Figure 8b.

**Figure 8. User interface of list.**
_4.2. Performance Evaluation_

Our Raft-based multi-host blockchain network comprised five orderers and four peers and adopted Hyperledger Fabric 2.1.1 with parameters BatchTimeout = 100 ms and MaxMessageCount = 10. The parameter BatchTimeout is the amount of time to wait after the first transaction arrives for additional transactions before cutting a block, and MaxMessageCount is the maximum number of messages permitted in a batch [33]. For the blockchain, there are two operations in a proposal transaction: one is query, and the other is invoke. The former means reading data from the ledger, and the latter means writing data into the ledger. In this research, we carried out a system test by sending multiple requests of query or invoke to the blockchain at the same time to demonstrate the performance of our system.
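The test procedure, firing a batch of simultaneous requests and timing the round, can be sketched as follows; `sendQuery` is a placeholder for the application’s actual call through the Fabric client SDK, and the batch sizes and repetition count mirror the tests below.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// sendQuery stands in for the application-layer call that submits one query
// (or invoke) proposal via the Hyperledger Fabric client SDK.
func sendQuery() {
	time.Sleep(8 * time.Millisecond) // placeholder latency
}

// runBatch fires n concurrent requests and returns the wall-clock time from
// the first request until the last response, as in the paper's tests.
func runBatch(n int) time.Duration {
	var wg sync.WaitGroup
	start := time.Now()
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			sendQuery()
		}()
	}
	wg.Wait()
	return time.Since(start)
}

func main() {
	const repeats = 30 // each test was repeated 30 times
	for _, n := range []int{10, 30, 50} {
		var total time.Duration
		for r := 0; r < repeats; r++ {
			total += runBatch(n)
		}
		fmt.Printf("%d requests: mean %v over %d runs\n", n, total/repeats, repeats)
	}
}
```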
A. Query

We wrote automatic scripts to perform three different tests with 10, 30, and 50 query operations, respectively. Each test was repeated 30 times, and the required time from receiving the query to returning the result was recorded. The bar charts and statistics of the three tests are shown in Figure 9.
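A load test of this kind can be sketched as below; `send_query` is a hypothetical placeholder for whatever client call reaches the ledger (the paper does not publish its scripts), and the timing mirrors the described procedure of firing a batch of concurrent requests and repeating each test 30 times.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def send_query(i: int) -> None:
    """Hypothetical stand-in for one query against the ledger,
    e.g. an SDK 'evaluate transaction' call."""
    time.sleep(0.01)  # stand-in for network + peer processing

def timed_batch(n_requests: int) -> float:
    """Send n_requests queries concurrently; return elapsed ms."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=n_requests) as pool:
        list(pool.map(send_query, range(n_requests)))
    return (time.perf_counter() - start) * 1000

for n in (10, 30, 50):
    samples = [timed_batch(n) for _ in range(30)]  # 30 repetitions
    print(n, max(samples), min(samples), sum(samples) / len(samples))
```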
**Figure 9. Test results of query operations. (a) 10 queries (max: 153; min: 80; mean: 94); (b) 30 queries (max: 280; min: 232; mean: 253); (c) 50 queries (max: 424; min: 364; mean: 393).**
B. Invoke

Similar to the above test, we also performed three different tests with 10, 30, and 50 invoke operations, respectively. Each test was repeated 30 times, and the required time from receiving the invoke request to finishing the write was recorded. The bar charts and statistics of the three tests are shown in Figure 10.
**Figure 10. Test results of invoke operations. (a) 10 invokes (max: 2150; min: 1950; mean: 1998.1); (b) 30 invokes (max: 6090; min: 5804; mean: 5911.1); (c) 50 invokes (max: 9957; min: 9650; mean: 9773.5).**
**5. Discussion and Conclusions**

This research proposed a loyalty point management system based on blockchain.
Having the features of decentralization and immutability, blockchain is very suitable for
managing intangible assets, such as loyalty points. The key for customers to accumulate
loyalty points and actively participate in loyalty programs is to provide a large variety of
items on which to redeem points. Therefore, the coalition loyalty program has become a
trend. As a consortium blockchain platform, Hyperledger Fabric provides various frameworks, tools, and libraries for enterprise-grade and cross-industry blockchain deployments.
In view of these designs, suitable for enterprise alliances, the proposed system adopted
Hyperledger Fabric as the underlying blockchain platform. According to Chen et al.’s
research, a blockchain-enabled loyalty points system should induce minimum modifications to legacy systems. Chen et al. suggested setting an exchange rate between points
of different companies so that customers could exchange the points of one company for
the goods of another company as long as they gave the equivalent points according to the
exchange rate. By setting the exchange rate, companies do not need to change their original
settlement rules. However, it may take some time to reach an agreement on the exchange
rate because companies may have different opinions on the value of points [20]. Therefore,
this paper proposed a blockchain-based system that enables customers to exchange the
points that they obtain from different companies. With the help of peer-to-peer exchange,
customers can purchase the items of a company with the points issued by this company.
As a result, companies need not change their original settlement rules, nor do they need to
set the exchange rate in advance.
Although a few studies also propose peer-to-peer point exchange, they all use one-to-one matching. In other words, two orders can only be traded when the type and quantity
of points to be exchanged exactly match, which means that the probability that one order
can be traded may not be high. The most important contribution of this study is to employ
the call auction method of the stock exchange market to realize many-to-many matching.
However, the content of a point exchange order is not exactly the same as that of a stock
exchange order, so it is not practicable to directly apply call auction to match point exchange
orders. In this study, an innovative technique to convert a point exchange order is proposed,
which ensures that the converted point exchange order corresponds to a buy or sell order
in the stock exchange market. After this, it becomes practicable to use call auction to match
orders in a loyalty point exchange. In addition to many-to-many matching, call auctions can
also allow the partial trading of orders. In other words, if the total amount on the market
only satisfies a part of an order, then the satisfied part can be traded, and the remaining
part remains not traded. Consequently, the introduction of call auction in this study can
increase the probability of orders being traded.
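As a simplified sketch of this idea (ours, not the authors' exact conversion or matching algorithm), the following Python fragment crosses converted buy and sell orders in call-auction style, allowing partial fills so that one order can be satisfied by several counterparties:

```python
# Simplified call-auction matching with partial fills.
# Each order: (order_id, limit_price, quantity); the price is the
# exchange ratio produced by the order-conversion step.
def match(buys, sells):
    """Cross best bid against best ask; returns trades as
    (buy_id, sell_id, quantity). Unfilled remainders stay open."""
    buys = [list(o) for o in sorted(buys, key=lambda o: -o[1])]   # highest bid first
    sells = [list(o) for o in sorted(sells, key=lambda o: o[1])]  # lowest ask first
    trades, b, s = [], 0, 0
    while b < len(buys) and s < len(sells) and buys[b][1] >= sells[s][1]:
        qty = min(buys[b][2], sells[s][2])  # partial fill allowed
        trades.append((buys[b][0], sells[s][0], qty))
        buys[b][2] -= qty
        sells[s][2] -= qty
        if buys[b][2] == 0:
            b += 1
        if sells[s][2] == 0:
            s += 1
    return trades

# One buy of 100 is satisfied by two smaller sells (many-to-many).
print(match([("B1", 1.0, 100)], [("S1", 0.9, 60), ("S2", 1.0, 50)]))
# -> [('B1', 'S1', 60), ('B1', 'S2', 40)]
```

Note how the partial-fill step is what lifts the probability of execution: neither sell order alone matches the buy order's quantity, yet both trade.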
The matching method proposed in this study can bring benefits to our system. First
of all, we do not need tokens or coins to value loyalty points, so it is not necessary to use
a blockchain platform with built-in cryptocurrency. For this reason, we can adopt Hyperledger Fabric and take advantage of its enterprise-level functions. Secondly, companies
can eliminate the negotiation process regarding exchange rates between different types
of points. In fact, our method implies that it is actually the customers who decide on the
exchange rate of points. In addition to these benefits brought by the matching method,
the point exchange platform also has some advantages. From the customers’ point of
view, the platform can prevent their loyalty points from becoming idle and speed up the
circulation of points. It is also reasonable to state that the platform can increase customers’
willingness to accrue points. From the companies’ point of view, companies may have
complementary advantages and form a cooperative alliance to share common benefits.
Over time, the opportunity for the growth of users within each company may increase
through cross-company promotion. It is foreseeable that the point exchange platform will
become a market mechanism under which the power of demand and supply determines
the value of points.
Peer-to-peer point exchange is the reciprocal transfer of an asset. In the future, the nonreciprocal transfer of a loyalty point is worth considering, that is, loyalty point donation. In addition, there are other means to increase the activation of loyalty points. For
example, multiple customers can gather loyalty points together, so that the number of
loyalty points can reach the redemption threshold as soon as possible. These customers can
agree in advance how to share the products after redemption. When multiple customers
cooperate, trust and transparency become the keys to successful cooperation. Some important information, such as the number of points collected by each person, needs to be
accessible by everyone within the group. In this case, blockchain is also very suitable for
storing such information. Therefore, efforts will be made in future studies to develop a
blockchain-based platform to support the joint collection of loyalty points.
**Author Contributions: Conceptualization, formal analysis, and methodology, C.-S.H. and S.-F.T.;**
software, validation, data curation, and investigation, C.-S.H. and Y.-T.W.; writing—original draft
preparation, and writing—review and editing, S.-F.T. and C.-S.H.; resources, supervision, and project
administration, S.-F.T. All authors have read and agreed to the published version of the manuscript.
**Funding: This research received no external funding.**
**Institutional Review Board Statement: Not applicable.**
**Informed Consent Statement: Not applicable.**
**Data Availability Statement: Not applicable.**
**Conflicts of Interest: The authors declare no conflict of interest.**
**References**
1. Gil-Gomez, H.; Guerola-Navarro, V.; Oltra-Badenes, R.; Lozano-Quilis, J.A. Customer relationship management: Digital transformation and sustainable business model innovation. Econ. Res.-Ekon. Istraživanja 2020, 33, 2733–2750. [[CrossRef]](http://doi.org/10.1080/1331677X.2019.1676283)
2. Luck, D.; Lancaster, G. The significance of CRM to the strategies of hotel companies. Worldw. Hosp. Tour. Themes 2013, 5, 55–66.
3. [Liu, Y. The long-term impact of loyalty programs on consumer purchase behavior and loyalty. J. Mark. 2007, 71, 19–35. [CrossRef]](http://doi.org/10.1509/jmkg.71.4.019)
4. Chen, Y.; Mandler, T.; Meyer-Waarden, L. Three decades of research on loyalty programs: A literature review and future research
[agenda. J. Bus. Res. 2021, 124, 179–197. [CrossRef]](http://doi.org/10.1016/j.jbusres.2020.11.057)
5. Dominguez Perez, L.; Ibarra, L.; Alejandro, G.; Rumayor, A.; Lara-Alvarez, C. A loyalty program based on Waves blockchain and
[mobile phone interactions. Knowl. Eng. Rev. 2020, 35, E12. [CrossRef]](http://doi.org/10.1017/S0269888920000181)
6. Shanshan, N.; Wilco, C.; Eric, S. A study of hotel frequent-guest programs: Benefits and costs. J. Vacat. Mark. 2011, 17, 315–327.
[[CrossRef]](http://doi.org/10.1177/1356766711420836)
7. Xie, L.; Chen, C.C. Hotel loyalty programs: How valuable is valuable enough? Int. J. Contemp. Hosp. Manag. 2014, 26, 107–129.
[[CrossRef]](http://doi.org/10.1108/IJCHM-08-2012-0145)
8. Purohit, A.; Thakar, U. Role of information and communication technology in improving loyalty program effectiveness: A
[comprehensive approach and future research agenda. Inf. Technol. Tour. 2019, 21, 259–280. [CrossRef]](http://doi.org/10.1007/s40558-018-00139-6)
9. Sorrells, M. How Blockchain Is Reinventing Travel Loyalty Programs for Both Brands and Consumers. Available online:
[https://www.phocuswire.com/travel-blockchain-loyalty-programs (accessed on 10 November 2021).](https://www.phocuswire.com/travel-blockchain-loyalty-programs)
10. Peñalba, J.E.M.; Guzmán, G.M.; de Mojica, E.G. The effect of information and communication technology in innovation level: The
Panama SMEs case. J. Bus. Econ. Policy 2015, 2, 124–131.
11. Related Team. Beyond Stand-Alone Loyalty Programs: The Reward Marketplace. Available online: [https://related.me/rewardmarketplace-post/](https://related.me/rewardmarketplace-post/) (accessed on 10 November 2021).
12. Šerić, M.; Gil-Saura, I.; Molla-Descals, A. Loyalty in high-quality hotels of Croatia: From marketing initiatives to customer brand
[loyalty creation. J. Relatsh. Mark. 2013, 12, 114–140. [CrossRef]](http://doi.org/10.1080/15332667.2013.794101)
13. Lim, Y.H.; Hashim, H.; Poo, N.; Poo, D.C.C.; Nguyen, H.D. Blockchain technologies in e-commerce: Social shopping and loyalty
program applications. In Social Computing and Social Media. Communication and Social Communities. HCII 2019. Lecture Notes in
_[Computer Science; Meiselwitz, G., Ed.; Springer: Cham, Switzerland, 2019; Volume 11579. [CrossRef]](http://doi.org/10.1007/978-3-030-21905-5_31)_
14. Brandon, D. The blockchain: The future of business information systems. Int. J. Acad. Bus. World 2016, 10, 33–40.
15. Gatteschi, V.; Lamberti, F.; Demartini, C. Blockchain Technology Use Cases. In Advanced Applications of Blockchain Technology;
[Studies in Big Data; Kim, S., Deka, G., Eds.; Springer: Singapore, 2020; Volume 60. [CrossRef]](http://doi.org/10.1007/978-981-13-8775-3_4)
16. Wang, L.; Luo, X.R.; Xue, B. Too good to be true? understanding how blockchain revolutionizes loyalty programs. In 24th Americas
_Conference on Information Systems; Association for Information Systems: New Orleans, LA, USA, 2018._
17. Agrawal, M.; Amin, D.; Dalvi, H.; Gala, R. Blockchain-based universal loyalty platform. In Proceedings of the 2019 International
Conference on Advances in Computing, Communication and Control, Mumbai, India, 20–21 December 2019; pp. 1–6.
18. Liao, C.H.; Teng, Y.W.; Yuan, S.M. Blockchain-based cross-organizational integrated platform for issuing and redeeming reward
points. In Proceedings of the 10th International Symposium on Information and Communication Technology, Hanoi-Halong Bay,
Vietnam, 4–6 December 2019; pp. 407–411.
19. Sönmeztürk, O.; Ayav, T.; Erten, Y.M. Loyalty program using blockchain. In Proceedings of the 2020 IEEE International Conference
on Blockchain, Rhodes Island, Greece, 2–6 November 2020; pp. 509–516.
20. Chen, J.; Ying, W.; Chen, Y.; Wang, Z. Design principles for blockchain-enabled point exchange systems: An action design research
on a polycentric collaborative network for loyalty programs. In Proceedings of the 21st IFIP WG 5.5 Working Conference on
Virtual Enterprises, Valencia, Spain, 23–25 November 2020; pp. 155–166.
21. Pramanik, B.K.; Rahman, A.S.; Li, M. Blockchain-based reward point exchange systems. Multimed. Tools Appl. 2020, 79, 9785–9798.
[[CrossRef]](http://doi.org/10.1007/s11042-019-08341-2)
22. Tasatanattakool, P.; Techapanupreeda, C. Blockchain: Challenges and applications. In Proceedings of the 2018 International
Conference on Information Networking (ICOIN), Chiang Mai, Thailand, 10–12 January 2018; pp. 473–475.
23. Zheng, Z.; Xie, S.; Dai, H.; Chen, X.; Wang, H. An overview of blockchain technology: Architecture, consensus, and future
trends. In Proceedings of the 2017 IEEE international congress on big data (BigData congress), Boston, MA, USA, 25–30 June 2017;
pp. 557–564.
24. Dib, O.; Brousmiche, K.L.; Durand, A.; Thea, E.; Hamida, E.B. Consortium blockchains: Overview, applications and challenges.
_Int. J. Adv. Telecommun. 2018, 11, 51–64._
25. Zheng, Z.; Xie, S.; Dai, H.N.; Chen, X.; Wang, H. Blockchain challenges and opportunities: A survey. Int. J. Web Grid Serv. 2018, 14,
[352–375. [CrossRef]](http://doi.org/10.1504/IJWGS.2018.095647)
26. Androulaki, E.; Barger, A.; Bortnikov, V.; Cachin, C.; Christidis, K.; De Caro, A.; Enyeart, D.; Ferris, C.; Laventman, G.; Manevich,
Y.; et al. Hyperledger fabric: A distributed operating system for permissioned blockchains. In Proceedings of the Thirteenth
EuroSys Conference 2018, Porto, Portugal, 23–26 April 2018; pp. 1–15.
27. Sajana, P.; Sindhu, M.; Sethumadhavan, M. On blockchain applications: Hyperledger Fabric and Ethereum. Int. J. Pure Appl.
_Math. 2018, 118, 2965–2970._
28. Nelaturu, K.; Du, H.; Le, D.P. A Review of Blockchain in Fintech: Taxonomy, Challenges, and Future Directions. Cryptography
**[2022, 6, 18. [CrossRef]](http://doi.org/10.3390/cryptography6020018)**
29. Utz, M.; Johanning, S.; Roth, T.; Bruckner, T.; Strüker, J. From ambivalence to trust: Using blockchain in customer loyalty programs.
_[Int. J. Inf. Manag. 2022, 102496, in press. [CrossRef]](http://doi.org/10.1016/j.ijinfomgt.2022.102496)_
30. Lokhava, M.; Losa, G.; Mazières, D.; Hoare, G.; Barry, N.; Gafni, E.; Jove, J.; Malinowsky, R.; McCaleb, J. Fast and secure global
payments with Stellar. In Proceedings of the ACM SIGOPS 27th Symposium on Operating Systems Principles, Huntsville, ON,
Canada, 27–30 October 2019; pp. 80–96.
31. Quasim, M.T.; Khan, M.A.; Algarni, F.; Alharthy, A.; Alshmrani, G.M.M. Blockchain Frameworks. In Decentralised Internet of
_Things; Studies in Big Data; Khan, M., Quasim, M., Algarni, F., Alharthi, A., Eds.; Springer: Cham, Switzerland, 2020; Volume 71._
[[CrossRef]](http://doi.org/10.1007/978-3-030-38677-1_4)
32. Taiwan Stock Exchange. Operating Rules of the Taiwan Stock Exchange Corporation. Available online: [https://twse-regulation.twse.com.tw/ENG/EN/law/DOC01.aspx?FLCODE=FL007304&FLNO=58-3](https://twse-regulation.twse.com.tw/ENG/EN/law/DOC01.aspx?FLCODE=FL007304&FLNO=58-3) (accessed on 10 January 2022).
33. Hyperledger. A Blockchain Platform for the Enterprise. Available online: [https://hyperledger-fabric.readthedocs.io/en/latest](https://hyperledger-fabric.readthedocs.io/en/latest) (accessed on 10 January 2022).
| 15,391
|
{
"disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.3390/jtaer17030056?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.3390/jtaer17030056, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "CCBY",
"status": "GOLD",
"url": "https://www.mdpi.com/0718-1876/17/3/56/pdf?version=1659592650"
}
| 2,022
|
[
"JournalArticle"
] | true
| 2022-08-04T00:00:00
|
[
{
"paperId": "2787870d9316c291c4b304a7ad21a0bc7ccc6b7a",
"title": "A Review of Blockchain in Fintech: Taxonomy, Challenges, and Future Directions"
},
{
"paperId": "a0b79131324aee519af4597935c959c0345e6776",
"title": "From ambivalence to trust: Using blockchain in customer loyalty programs"
},
{
"paperId": "fe5792f706072b82e8a5918dd3fd085d3e9ab07b",
"title": "Three decades of research on loyalty programs: A literature review and future research agenda"
},
{
"paperId": "d14332534980199bad7103dac048f4110cd843b4",
"title": "Loyalty Program using Blockchain"
},
{
"paperId": "3cadfba9550ed02d784f5be0b124adeb7e727198",
"title": "A loyalty program based on Waves blockchain and mobile phone interactions"
},
{
"paperId": "f4fd98095bd0640bcdc49c31fea4e94117751314",
"title": "Blockchain-based reward point exchange systems"
},
{
"paperId": "14a6d004886c2a686f986db26e930932aaf25696",
"title": "Customer relationship management: digital transformation and sustainable business model innovation"
},
{
"paperId": "247060f6035404257ac916363d233428c0c002d2",
"title": "Blockchain-Based Cross-Organizational Integrated Platform for Issuing and Redeeming Reward Points"
},
{
"paperId": "ca7d3778c4b23d428d6e62c9ffef3939c5b13cf4",
"title": "Blockchain-based Universal Loyalty Platform"
},
{
"paperId": "c876007e7796dff7335b862fe3c38863f6c65c67",
"title": "Fast and secure global payments with Stellar"
},
{
"paperId": "0aed7b7f29a68d13e8535e91ac1f969409652e57",
"title": "Blockchain Technologies in E-commerce: Social Shopping and Loyalty Program Applications"
},
{
"paperId": "bea90556c6d75ff547e3d95eb1be44c092738fd8",
"title": "Role of information and communication technology in improving loyalty program effectiveness: a comprehensive approach and future research agenda"
},
{
"paperId": "305edd92f237f8e0c583a809504dcec7e204d632",
"title": "Blockchain challenges and opportunities: a survey"
},
{
"paperId": "1b4c39ba447e8098291e47fd5da32ca650f41181",
"title": "Hyperledger fabric: a distributed operating system for permissioned blockchains"
},
{
"paperId": "ee177faa39b981d6dd21994ac33269f3298e3f68",
"title": "An Overview of Blockchain Technology: Architecture, Consensus, and Future Trends"
},
{
"paperId": "354a0603a26a45fb12e5a2d661b23cd699b41b77",
"title": "Hotel loyalty programs: how valuable is valuable enough?"
},
{
"paperId": "eb80770dceaca0b7c17a7a40afa4a9ca6409d55f",
"title": "The significance of CRM to the strategies of hotel companies."
},
{
"paperId": "286a01c5ba8d2508f48516879577da669018749c",
"title": "Loyalty in High-Quality Hotels of Croatia: From Marketing Initiatives to Customer Brand Loyalty Creation"
},
{
"paperId": "4b3a337813975a54910dad265be48791a95ebc4c",
"title": "A study of hotel frequent-guest programs: Benefits and costs"
},
{
"paperId": "b41dee39256c1aae27cec98a9b69022d8f517bd2",
"title": "The Long-Term Impact of Loyalty Programs on Consumer Purchase Behavior and Loyalty"
},
{
"paperId": "765c15e34f23c75909743a54fc3cc1f6485d6e7d",
"title": "Design Principles for Blockchain-Enabled Point Exchange Systems: An Action Design Research on a Polycentric Collaborative Network for Loyalty Programs"
},
{
"paperId": "e8fec3b9001cc3cc066f2d2a3f2551e8a513d189",
"title": "Blockchain Frameworks"
},
{
"paperId": "7a960ee047523e067e0b79a9a3ad48f16d04d9b2",
"title": "Blockchain Technology Use Cases"
},
{
"paperId": "ade480359d082bb3f4a8c8ff4358f0e72775d538",
"title": "Blockchain: Challenges and applications"
},
{
"paperId": "2dc3f16404739c153ce6d45bf370e295623f6714",
"title": "Consortium Blockchains: Overview, Applications and Challenges"
},
{
"paperId": "767410f40ed2ef1b8b759fec3782d8a0f2f8ad40",
"title": "On Blockchain Applications : Hyperledger Fabric And Ethereum"
},
{
"paperId": "4798100a4dec8eaff623eb29ceabcb518f4722b3",
"title": "Too Good to Be True? Understanding How Blockchain Revolutionizes Loyalty Programs"
},
{
"paperId": "38019bc69b36e8d360e98febb8aaf618e1861b28",
"title": "The Effect of Information and Communication Technology in Innovation Level : The Panama SMEs Case"
},
{
"paperId": null,
"title": "How Blockchain Is Reinventing Travel Loyalty Programs for Both Brands and Consumers"
},
{
"paperId": null,
"title": "https://twse-regulation"
},
{
"paperId": null,
"title": "The blockchain: The future of business information systems"
},
{
"paperId": null,
"title": "A Blockchain Platform for the Enterprise"
},
{
"paperId": null,
"title": "Operating rules of the Taiwan Stock Exchange Corporation"
},
{
"paperId": null,
"title": "The Reward Marketplace"
}
] | 15,391
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Medicine",
"source": "s2-fos-model"
},
{
"category": "Engineering",
"source": "s2-fos-model"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Environmental Science",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/008a8e8bac5d207c912d9bb5d29774d252761844
|
[
"Computer Science"
] | 0.906614
|
Evaluation and Quality Assurance of Fog Computing-Based IoT for Health Monitoring System
|
008a8e8bac5d207c912d9bb5d29774d252761844
|
Wireless Communications and Mobile Computing
|
[
{
"authorId": "2007557945",
"name": "Qing QingChang"
},
{
"authorId": "2074279197",
"name": "Iftikhar Ahmad"
},
{
"authorId": "14896824",
"name": "Xiaoqun Liao"
},
{
"authorId": "3195938",
"name": "S. Nazir"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"Wirel Commun Mob Comput"
],
"alternate_urls": [
"https://onlinelibrary.wiley.com/journal/15308677",
"http://www.interscience.wiley.com/jpages/1530-8669/"
],
"id": "501c1070-b5d2-4ff0-ad6f-8769a0a1e13f",
"issn": "1530-8669",
"name": "Wireless Communications and Mobile Computing",
"type": "journal",
"url": "https://www.hindawi.com/journals/wcmc/"
}
|
Computation and data sensitivity are the metrics of the current Internet of Things (IoT). In cloud data centers, current analytics are often hosted and reported on suffering from high congestion, limited bandwidth, and security mechanisms. Various platforms are developed in the area of fog computing and thus implemented and assessed to run analytics on multiple devices, including IoT devices, in a distributed way. Fog computing advances the paradigm of cloud computing on the network edge, introducing a number of options and facilities. Fog computing enhances the processing, verdicts, and interventions to occur through IoT devices and spreads only the necessary details. The ideas of fog computing based on IoT in healthcare frameworks are exploited by shaping the disseminated delegate layer of insight between sensor hubs and the cloud. The cloud proposed a system adapted to overcome various challenges in omnipresent medical services frameworks, such as portability, energy efficiency, adaptability, and unwavering quality issues, by accepting the right to take care of certain weights of the sensor network and a distant medical service group. An overview of e-health monitoring system in the context of testing and quality assurance of fog computing is presented in this paper. Relevant papers were analyzed in a comprehensive way for the identification of relevant information. The study has compiled contributions of the existing methodologies, methods, and approaches in fog computing e-healthcare.
|
Hindawi
Wireless Communications and Mobile Computing
Volume 2021, Article ID 5599907, 12 pages
[https://doi.org/10.1155/2021/5599907](https://doi.org/10.1155/2021/5599907)
# Review Article Evaluation and Quality Assurance of Fog Computing-Based IoT for Health Monitoring System
## QingQingChang,[1] Iftikhar Ahmad,[2] Xiaoqun Liao,[3] and Shah Nazir[2]
1School of Information Management, Shanghai Linxin University of Accounting and Finance, 995 Shangchuan Road,
Pudong New District, Shanghai 201209, China
2Department of Computer Science, University of Swabi, Khyber Pakhtunkhwa, Pakistan
3Information and Network Center, Xi’an University of Science and Technology, Xi’an 710054, China
Correspondence should be addressed to Xiaoqun Liao; [email protected] and Shah Nazir; [email protected]
Received 25 January 2021; Revised 25 March 2021; Accepted 13 April 2021; Published 23 April 2021
Academic Editor: Ihsan Ali
[Copyright © 2021 QingQingChang et al. This is an open access article distributed under the Creative Commons Attribution](https://creativecommons.org/licenses/by/4.0/)
[License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is](https://creativecommons.org/licenses/by/4.0/)
properly cited.
Computation and data sensitivity are central concerns for the current Internet of Things (IoT). Current analytics are often hosted in cloud data centers and consequently suffer from high congestion, limited bandwidth, and weak security mechanisms. Various platforms have been developed in the area of fog computing and have been implemented and assessed to run analytics on multiple devices, including IoT devices, in a distributed way. Fog computing extends the paradigm of cloud computing to the network edge, introducing a number of options and facilities. Fog computing enables processing, decisions, and interventions to occur at IoT devices and forwards only the necessary details. The ideas of fog computing based on IoT in healthcare frameworks are exploited by shaping a distributed delegate layer of intelligence between sensor nodes and the cloud. The proposed system adapts to various challenges in ubiquitous medical service frameworks, such as mobility, energy efficiency, scalability, and reliability issues, by taking over certain burdens of the sensor network and a remote medical service team. An overview of e-health monitoring systems in the context of testing and quality assurance of fog computing is presented in this paper. Relevant papers were analyzed in a comprehensive way to identify relevant information. The study has compiled the contributions of the existing methodologies, methods, and approaches in fog computing e-healthcare.
## 1. Introduction
Fog computing is an infrastructure located between the data source and the cloud, in which computing, storage, and applications are placed to process data and information. Like edge computing, fog computing brings the cloud’s benefits and power closer to where information is produced and consumed, and the two terms are often used interchangeably, since both involve placing knowledge and computation adjacent to where the information is created. This is mostly done to enhance reliability, but it may also be done for reasons of security and compliance. The distributed approach of fog computing addresses IoT needs, in particular the enormous volume of information produced by smart sensors and IoT devices, which would be time consuming and expensive to send to the cloud for analysis and processing. Fog computing decreases the required bandwidth and reduces the back-and-forth between sensors and the cloud that can have a detrimental impact on IoT results, and it offers a server-side counterpart to the IoT for managing the information gathered on a daily basis. By offloading gigabytes of Internet traffic from the core network, it eliminates the need for expensive bandwidth additions [1, 2].

Researchers have developed many structures that depend on optimized and automated workflows, in the hope that current patient-care techniques can be strengthened and new capabilities generated from the enormous growth of data, provided the framework is intelligent. Accordingly, a simple technique and a novel smart flow model for smart healing centers provide a systematic mechanism for needs assessment in the workflow of the clinical team, drawing on several elicitation methods used to gather requirements. Moreover, this research offers a better way of understanding the complex coordination of a healing center and smooths out the demanding office workflow of the specialist. Simulation results show that the proposed quick flow model performs better than the current workflows [3].
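As a toy illustration of the bandwidth-saving idea above (our sketch, with an assumed normal range; it is not a system from the literature reviewed here), a fog node can pre-filter a vital-sign stream and forward only out-of-range readings to the cloud:

```python
# Minimal sketch: a fog node filters a sensor stream and forwards
# only anomalous readings to the cloud, cutting upstream bandwidth.
NORMAL_RANGE = (60, 100)  # assumed resting heart-rate band (bpm)

def forward_to_cloud(reading):
    print("uplink:", reading)  # stand-in for a real cloud upload

def fog_filter(stream):
    sent = 0
    for reading in stream:
        if not (NORMAL_RANGE[0] <= reading <= NORMAL_RANGE[1]):
            forward_to_cloud(reading)
            sent += 1
    return sent

readings = [72, 75, 71, 130, 74, 55, 73]  # two anomalies
print("forwarded", fog_filter(readings), "of", len(readings))
```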
Modern healthcare approaches are challenging undertakings that are attracting increasing research attention. The adoption of Healthcare 4.0 techniques exposes medical care information to intrusion, where attackers can obtain complete access to patients’ email records, texts, and reports. In reality, an assured modern healthcare strategy must serve all stakeholders, including patients and caregivers. In addition, the research provides a broad literature review, investigating state-of-the-art guidelines for preserving security and safety in modern healthcare. It has also explored blockchain-based responses, offering insights to both specialists and expert networks. Finally, current issues and potential directions for protection and security research in modern healthcare are added [4].
The contribution of the proposed study is to present an overview of e-health monitoring systems in the context of testing and quality assurance of fog computing. Several relevant papers associated with the proposed study were analyzed in a comprehensive way. The study has compiled the contributions of the existing methodologies, methods, and approaches to fog computing in e-healthcare.
The organization of the paper is as follows: Section 2 presents the literature study of the proposed research. Section 3 shows the approaches for evaluation and quality assurance of fog computing-based IoT for health monitoring. Section 4 presents statistics on the research done in the area. The paper concludes in Section 5.
## 2. Literature Study
Research in the areas of healthcare and IoT has gained increasing attention for devising new algorithms, approaches, techniques, and mechanisms for solving different problems. The integration of IoT into medical care is discussed through a comprehensive literature review, motivated by the shortage of convincing medical care services to meet the rising demands of a growing population with chronic diseases. It is recommended that this involves a move from facility-driven care to patient-driven medical services in which every actor, for example, the medical unit, patients, and administration, is regularly aligned with the others. This fog-driven, patient-centric IoT e-health model comprises a multilayer infrastructure. Various cases of services and applications deployed on particular layers adopt this fog-driven IoT architecture; these range from mobile health, assisted living, e-medicine, and implants to early-warning structures and population management in smart cities. Finally, IoT e-healthcare challenges are identified, such as data management, scalability, regulation, interoperability, device-network-human interfaces, security, and safety [5]. Hartmann et al.
[6] presented a report describing existing and evolving edge computing systems and processes for medical care applications, in order to differentiate system requirements and difficulties for various use cases. The applications for connected devices focus particularly on the gathering of health information, including vital-sign monitoring and fall detection. Other low-latency applications conduct explicit symptom scans for illnesses, such as gait irregularities in patients with Parkinson’s disease. In addition, it presents a detailed audit of edge-computing data operations, including transmission, encryption, validation, classification, reduction, and forecasting. Even with these advantages, edge computing has some associated problems, including the need for refined privacy and data-reduction techniques that allow edge nodes to perform comparably to their cloud-based counterparts, but with smaller capacity. Potential research directions in edge computing for medical facilities are acknowledged to give consumers broader benefits wherever they can be achieved. In the concept of the data lake, all information is collected regardless of its volume, variety, and velocity. Storing all these data can be a challenge, even though technology provides several arrangements, for example, on premises, on the cloud, or on hybrid clouds, as well as the supporting infrastructure and environment. The Internet of Things has modified the concept of securing information in the data-lake environment, and volume limits could be reached sooner rather than later for certain data lakes. Recently, a novel concept, called fog computing, has been introduced. A fundamental feature of fog computing is the exchange of data-ingestion steps between the sensor that provides knowledge and the data lake that consumes information. This section first discusses the principle of fog computing and the associated difficulties, and then explores alternative options to be considered when managing a data lake [7].
Jaimes et al. [8] presented a study in which a crowd-sensing measure is illustrated and evaluated that involves effective collaboration between crowd-sensing participants in smart contexts, using a simple fog-computing-enabled Internet of Things. A fog-computing IoT model involves a layer of computing nodes that reside closer to the sensing devices, with this layer of fog nodes lying in the network between mobile and sensing devices and the cloud. This motivates a model for crowd sensing in smart environments that involves both competition and cooperation between members of the edge network who are close to the sensed phenomena. Simulations are added to test the performance of the proposal. The work demonstrates desirable attributes regarding the number of active participants, the number of samples obtained, and coverage within a given budget, considering the limited involvement of crowd-sensing members at the edge layer that can serve various applications. One of the new research areas investigates the critical theory, challenges, and innovations of continuous queries over streaming data for fog computing. This review describes the related techniques based on random hashing, learning to hash, and summarization; investigates the problems and difficulties of continuous queries in resource-restricted fog-computing environments; and finally analyzes in detail the vital methodologies and techniques for the issue, including reducing estimation cost, learning-based encoding techniques, the development of systematic strategies for queries over web-based Internet of Things data, and related research directions. In addition, a Hybrid Dynamic Quantization approach for learning to hash has been proposed; studies show that other quantization methods are beaten by DAQ [9].
Kelati et al. [10] discussed recent advances in using metered energy-usage knowledge in home-based services. The study also analyzes intrusive and nonintrusive load-monitoring strategies that identify appliance loads from current and power measurements. The proposed engineering utilizes advances in smart-meter strategy and the fog-computing paradigm for processing raw data. The framework addresses the growing need for everyday comfort in metropolitan networks and for healthcare services that are practical and competent: patients with intellectual disabilities can be assessed by analyzing the power usage of home devices. The article then describes a simulation-based execution stage to create distinctive models of household devices and to check the AI measurement for the identification operation. Kumari et al. [11] presented an approach that addresses the basic nature and difficulty of fog data analytics; a point-by-point taxonomy of fog data analytics is organized around the process model. Efficient and persuasive arrangements are needed to handle such big data, with information mining, analysis, and reduction distributed on the cloud and at the edge on fog devices. For the most part, current research and development efforts around big data analytics have not met the challenge of supporting fog data analysis. The proposed model tackles numerous research challenges, such as availability, adaptability, interaction with fog nodes, nodal coordination, variability, efficiency, and quality-of-service needs, and two case studies are presented to illustrate the proposed process model. Li et al. [12] examined the production processes for an edge-fog IoT platform from beginning to end. These models are applied to a concrete situation: the analysis of information streams provided by embedded cameras. The services rely on cloud storage and computing resources, transforming their architecture into a more distributed one that depends on edge facilities provided by Internet service providers. It is unclear which part, the IoT equipment, the network, or the cloud system, accounts for the largest portion of energy utilization. The validation combines predictions for a growing array of IoT devices on real testbeds running the application with simulations using prominent test systems to discuss scaling up. The outcome for this case is that the portion of the cloud infrastructure hosting the processing assets consumes multiple times more than the IoT part containing the IoT equipment and the wireless access point.
Liu et al. [13] proposed a framework for a hybrid privacy-preserving clinical decision support system in fog-cloud services, called HPCS. In HPCS, a fog server uses a lightweight data-mining technique to securely screen patients’ diseases in real time. Newly found abnormal symptoms can then be securely shipped to the cloud server for further diagnosis. In particular, the goal is to design a new secure outsourced inner-product protocol for fog servers to realize a lightweight single-layer neural network. In addition, a privacy-preserving piecewise-polynomial approximation protocol allows cloud servers to securely execute any activation function in the neural network layers. Besides that, another protocol, called the privacy-preserving division approximation protocol, is designed to handle the computation-overflow issue. By varying the parameters of the simulations, the authors show that HPCS achieves patient status monitoring without leakage to unpermitted parties. To deliver convenience, capability, and digitalization to consumers, current and forthcoming IoT services are exceptionally promising, and completing such an environment requires high security, assurance, authentication, and recovery from attacks. A stable IoT structure, joining the crucial end-to-end reforms in IoT structures, is important at present. A detailed analysis of security-related problems and threat sources in IoT properties and applications is provided. On privacy concerns specifically, recent progress appears to have been made in maintaining a high level of confidence in IoT applications. Four basic enablers are assessed to extend the degree of IoT security: cryptography, fog computing, edge processing, and machine learning [14].
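Setting the cryptography aside, the fog node’s screening step reduces to a single-layer neural network, that is, an inner product followed by an activation. The sketch below shows that plain computation (our simplification; HPCS performs the inner product over masked data via its secure protocols, so the fog never sees raw inputs):

```python
import math

# Plain (non-private) version of the fog node's screening step:
# score = sigmoid(w . x + b). In HPCS the inner product is computed
# on masked data; this sketch omits that protective layer.
def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def screen(weights, bias, features, threshold=0.5):
    score = sigmoid(sum(w * x for w, x in zip(weights, features)) + bias)
    return score, score >= threshold  # True -> escalate to the cloud

w, b = [0.8, -0.5, 1.2], -0.3          # illustrative trained parameters
print(screen(w, b, [1.0, 0.2, 0.9]))   # -> (approx. 0.81, True)
```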
## 3. Approaches for Evaluation and Quality Assurance of Fog Computing-Based IoT for Health Monitoring
Numerous platforms, approaches, and techniques have been established in the field of fog computing and have been implemented and evaluated to run analytics on multiple devices, such as IoT devices, in a distributed way. Fog computing improves on the paradigm of cloud computing at the network edge, introducing a number of options and facilities. Manocha et al. [15] presented a novel fog-assisted analytical scheme to upgrade an individual’s living standard through a deep-learning-enabled, real-time, position-based anomaly-recognition structure. An effort was made to record predicted movement scores in the cloud to extend the efficacy of the proposed therapy by pursuing a continuous time-series plan that provides health references to an approved clinical expert. In addition, a smart risk-profile generation structure is proposed to promptly inform clinical experts and managers about an individual’s actual physical status. The generation of an alert is directly related to the predicted physical abnormality and the severity of the health condition. The reported results justify the superiority of the proposed monitoring arrangement over traditional cloud-based monitoring arrangements by achieving high movement-prediction precision and a lower latency rate. Mutlag et al. [16] offered a study with the purpose of conducting a systematic literature review of fog-computing developments in the field of IoT frameworks for medical services. The resulting taxonomy was divided into three main classes: systems and models, frameworks, and reviews and surveys. For demanding applications with low-latency and high-response-time requirements, particularly in medical services, fog computing is considered necessary. Compared with distributed cloud computing, fog processing clearly decreased latency; the specialists note that extended simulations and research would ensure that a detailed picture of latency is provided.
Fog computing is still in its infancy and needs strong preparation to become a successful, productive, and easily deployable replacement for the now-prevalent cloud at an essentially achievable cost [17]. In this article, a new resource-efficient framework is presented for a multiregion fog-computing paradigm for distributed video summarization. The gateways of the sensor field are based on Raspberry Pi devices. Surveillance videos are distributed over different nodes, and a summary is produced over the fog structure, which is periodically pushed to the cloud to decrease the consumption of data-transfer resources. To test the proposed system, a number of realistic workloads are used as surveillance recordings. Trial results indicate that the proposed device has virtually no overhead, with great adaptability compared with off-the-shelf, expensive database arrangements, even while using an extremely resource-restricted single board, confirming its adequacy for IoT-assisted smart cities [17]. Olakanmi and Odeyemi [18] presented a security scheme that provides viable data management and safe access to patient data in an e-health setting. In addition, the methodology underpins the useful delivery of medical services among carers through compelling automation of data sharing. It helps clinical centers and carers to function more effectively and patients to receive better treatment. Adopting wearable clinical devices and distributed computing offers an immense amount of data for quick, on-demand access; nevertheless, it introduces bottlenecks and security and safety challenges for managers. Using a symmetric key and modified ciphertext-policy attribute-based encryption, a two-layer security approach is obtained to provide fine-grained access control, time-sensitive revocation, and agreeable assignment of health management among caregivers.
3.1. E-Health Approaches in Pandemic. Otoom et al. [19] presented a study suggesting a real-time system for COVID-19 detection and monitoring. The proposed structure uses the IoT to collect real-time symptom information from clients, to identify suspected cases of COVID-19 early, to monitor the treatment response of people who have recovered from the infection, and to gather and analyze significant information to understand the nature of the infection. The platform consists of five main segments: Symptom Data Collection and Uploading, Isolation Focus, Data Analysis Center (AI), Health Advisors, and Network Equipment. The study applies eight machine-learning algorithms, including Support Vector Machine, naive Bayes, and nearest-neighbor and regression-based classifiers, testing them on a real COVID-19 symptom dataset with respect to the relevant symptoms. The results indicate that five of these eight algorithms achieved an accuracy of more than 90 percent.
Parasuraman and Sangaiah [20] presented a study that explores how systems once required vast spaces and consumed massive amounts of power on needless electronic processes. The coordinated trend of recent years has been to form distributed structures with higher efficiency. Conventional computing has become more expensive and harder to oversee as information demands and online customers have rapidly expanded, and it is unsuitable for accessing data anywhere at any time. Cloud computing is a web-based paradigm with comprehensive effects across companies, partnerships, information technology, architecture, programming, and data storage, providing easy and up-to-date planning tools and on-demand provisioning of resources. Vendors may assume that the customer information placed on their infrastructure is safe and well guaranteed, but the strongest security efforts are needed to deal with the difficulties of storing data at a third-party data center.
In light of compact IoT and cloud-side administration, the authors created a two-layer arrangement in this paper. ITaaS contains arrangements for (a) the IoT side, to regularly support data collection from IoT devices to a gateway, and (b) the cloud back-end side, to support the exchange, storage, and preparation of information. ITaaS provides leading-edge innovation to allow fast application deployment in the space of interest. E-health and remote tracking are conspicuous and promising applications of this breakthrough. A remote patient-observation framework is given as a proof of concept, and the integration of the proposed scheme uses a pulse oximeter and devices for heart-rate monitoring. Similarly, the backbone system was stressed with high client concurrency and high information streams, and the authors show that requests are served in around 1 second, a number that indicates good performance considering the number of requests, the network latency, and the overall resources (two GB of RAM) [21].
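A rough sketch of the two-layer flow is given below; `read_oximeter` and `upload_batch` are hypothetical stand-ins (ITaaS’s actual interfaces are not reproduced here) for the IoT-side collection at the gateway and the cloud back-end upload, respectively:

```python
import time

def read_oximeter():
    """Hypothetical stand-in for one pulse-oximeter reading."""
    return {"spo2": 97, "pulse": 72, "ts": time.time()}

def upload_batch(batch):
    """Hypothetical stand-in for a cloud back-end upload."""
    print(f"uploading {len(batch)} readings")

def gateway_loop(batch_size=5, n_readings=12):
    batch = []
    for _ in range(n_readings):
        batch.append(read_oximeter())
        if len(batch) == batch_size:   # IoT side: collect at the gateway
            upload_batch(batch)        # cloud side: store and process
            batch = []
    if batch:
        upload_batch(batch)            # flush the remainder

gateway_loop()
```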
Figure 1: Paper types (chapters, articles, conference papers, books, early access articles, journals, magazines).

Figure 2: Disciplines in the area (business and management, computer science, engineering; number of papers).

3.2. Geo-Based Dissemination. The concept of fog computing in healthcare frameworks is exploited by shaping a geo-distributed delegate layer of intelligence between the sensor nodes and the cloud. The proposed system adapts to various challenges in ubiquitous medical service frameworks, such as mobility, energy efficiency, scalability, and reliability issues, by taking over certain burdens of the sensor network and a remote medical service team. Particularly in clinical conditions, the successful use of such gateways will enable enormous deployments of pervasive monitoring frameworks. A model is presented for a smart e-health gateway known as UT-GATE, in which a portion of the higher-level features reviewed has been implemented. In addition, an Internet of Things early warning score check was conducted to demonstrate the efficacy and validity of the system for clinical contextual studies. The proof-of-concept configuration demonstrates an Internet of Things monitoring system with improved platform intelligence, energy capability, accessibility, operation, connectivity, stability, and durability [22]. A related study advocates the critical role of modern guidelines and edge authentication components for diffusing a greatly expanded consumer experience, surveys the modern insights that can be gained from both the IoT and the edge-processing situation, and discusses each of the taxonomic segments in depth. Second, it presents two practically executed use cases that have recently combined the edge-IoT paradigm to fix metropolitan smart-living problems and, third, for e-medical services, a proposed novel fog-based architecture and a developed demonstration testbed. The test results showed promise in limiting the load on the IoT cloud or gateway. The study concludes with discussions of various factors, such as architecture, capacity prerequisites, open problems, and selection rules, associated with the success of layer integration [23].
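For orientation, an early warning score of the kind checked in [22] is typically a sum of per-vital-sign subscores; the sketch below is a deliberately simplified illustration with assumed bands, not the actual scoring used by UT-GATE:

```python
# Illustrative (not UT-GATE's algorithm): a simplified early
# warning score summing per-vital subscores; higher totals would
# trigger an alert from the gateway.
def band_score(value, bands):
    """bands: list of (low, high, score); first matching band wins."""
    for low, high, score in bands:
        if low <= value <= high:
            return score
    return 3  # outside all listed bands: worst subscore

HEART_RATE_BANDS = [(51, 90, 0), (91, 110, 1), (111, 130, 2), (41, 50, 1)]
RESP_RATE_BANDS = [(12, 20, 0), (21, 24, 2), (9, 11, 1)]

def early_warning_score(heart_rate, resp_rate):
    return band_score(heart_rate, HEART_RATE_BANDS) + \
           band_score(resp_rate, RESP_RATE_BANDS)

print(early_warning_score(heart_rate=118, resp_rate=22))  # -> 4
```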
Figure 3: Paper types.
Rehman et al. [24] note that complete genome datasets of different organisms are readily available, and many more are being sequenced. These genomic resources are of utmost importance in understanding the functioning of living beings and have many applications in our everyday lives. Controlling this gigantic amount of knowledge with conventional methods is a daunting job: analysis of such data may take hours or days to produce results, which has caused current distributed-computing paradigms to face various difficulties. Among the indicated qualities, fog processing is commonly used by specialists around the world for flexible resource distribution. Fog computing uses the cloud at the back end, expanding the spectrum of the cloud to things by taking resources close to the edge devices, thus overcoming various impediments of the distributed-computing paradigm. In view of the interesting properties of fog, such as low jitter, low latency, and enhanced protection, it is argued that the philosophy of fog computing has extraordinary potential for highly embedded data and information platforms. Sanchez-Gallegos et al. [25] presented a study on the design and implementation of an engineering model to build on-demand edge-fog-cloud processing frameworks that deal consistently with enormous data while satisfying non-functional requirements. Effective building blocks, realized as microservices and nanoservices, are recursively interconnected in this model to construct edge-fog-cloud processing systems as an agnostic administrative framework. Coherence plans route information through the cloud and edge building blocks, and a prototype developed using this model demonstrates its accomplishment; it was tested in a case study based on processing data to support a simple decision-making methodology in remote patient observation. This research examines situations in which end-clients and clinical staff received insights when processing electrocardiograms provided by sensors in remote IoT devices, and in which doctors were accommodated and alerted when examining and identifying anomalies in the analyzed ECG content on the web. It also considered a situation in which organizations deal with different concurrent edge-fog-cloud systems for preparing information and material transmitted to internal and external workers.

Figure 4: Conference locations.
3.3. Real-Time Mobility and Robust Streaming. García-Valls et al. [26] presented the design and validation of a system that improves the service time of the fog workers’ selected tasks; most of those tasks are requested by remote patients. It harnesses the capabilities of current processors to parallelize specific tasks that can run on reserved cores; moreover, it relies on the quality-of-service guarantees of data-distribution platforms to improve communication and response times for mobile patients. A significant challenge of e-health services on the cloud, compared with other services running in smart cities, is that they typically conduct various computational tasks with broad data handling on realistic information that must be protected. The servicing of remote patient nodes can be enhanced by using the capabilities of current processors. The proposed approach is validated on a prototype execution of simulated, computationally intensive e-health interactions, reducing the response time by 4x when core reservation is enabled. In comparison with the cloud, the latest paradigms of edge and fog computing offer innovative arrangements by bringing assets closer to the customer and offering low-latency and energy-efficient responses for data processing. Nevertheless, there are various limitations in the latest fog models. This study proposes a new structure called HealthFog to integrate deep learning into edge computing devices, deployed for real use in automated heart-disease analysis on a fog-enabled cloud system. FogBus is used to deploy and evaluate the performance of the proposed monitoring. In various cloud-computing situations and for different customer needs, HealthFog is configurable for different operation modes that offer the best quality of service or prediction accuracy, as necessary [27]. To minimize the spread of infection and protect the health of patients who would otherwise need to stay in a hospital, home hospitalization stands out among the alternative arrangements. This paper proposes a system for home hospitalization based on IoT, fog, and cloud processing, which are among the key developments that have greatly improved the field of medical services. These systems enable patients to recover and obtain care in their homes and among their families, where the awareness and the environmental condition of the hospitalization room are observed, encouraging specialists to follow the hospitalization cycle and to make recommendations through control units and mobile applications created for this purpose, for patients and their supervisors. The results of the test have shown a remarkable appreciation of this framework by patients and specialists alike [28].

Figure 5: Publication topics (Internet of Things, cloud computing, health care, patient monitoring, mobile computing, medical computing, telemedicine, and related topics, with paper counts).
The use of IoT devices for ML inference avoids the cloud disadvantage of high latency, which is unsuitable for delay-sensitive apps such as fall detectors. The present fall-recognition structures, however, perform inference in the fog, and there is no evidence of operation under real circumstances, nor documentation regarding the dynamic challenges of the structure. To collect patient-monitoring data, a wearable tri-axial accelerometer is used. This study suggests a cloud-assisted Open IoT architecture to support remote deployment and management of the DL model. Two DL models have been deployed on edge assets, and their performance and inference time using virtualization are analyzed. The results show the adequacy of the fall system, which offers a more convenient and accurate solution than traditional fall-finder frameworks, with greater competence, 98.75 percent accuracy, lower delay, and improvement in service [29].
comprehensive AI-driven IoT e-health engineering focused
on the concept of a collective machine learning method in
which insight is transmitted through devices Despite the
energizing advances in the shift from center-driven to
understanding-driven medical care, the device enables medical service professionals to continuously screen the associated
data of subjects anywhere anytime and has constant noteworthy interactions that ultimately strengthen the dynamic force.
Using a comprehensive ECG-based arrhythmia position contextual analysis, the plausibility of such engineering is tested.
From plan recommendations, for example, relating to overheads, energy usage, inertia, and implementation, to designing
and conveying advanced AI strategies to such engineering, this
illustrative model explores and discusses immeasurably
important parts of the proposed engineering. Yacchirema
et al. [30] introduced an innovative system based on distributed and cloud computing technologies that provides new
opportunities to assemble novel and inventive administrations
to support the rest of apnea and to resolve the current constraints in combination with IoT and large knowledge levels.
In particular, the structure is focused on a few remote lowpower organizations with brilliant heterogeneous gadgets.
An edge center offers IoT association and interoperability in
cloud computing and prehandling IoT information to continuously recognize occasions that can jeopardize the elderly and
function similarly. In the cloud, for additional handling and
investigation, a generic motivating agent background broker
supervises, stores, and infuses information into the massive
information analyzer. The presentation and emotional appropriateness of the system were evaluated separately using more
than thirty GB size datasets and a poll satisfied by medical professionals educated. Results show that the system knowledge
study enhances the dynamics of the experts to screen and
direct rest apnea care, as well as improving the personal satisfaction of older people.
Figure 6: Publication title.

Figure 7: Year of publication.

Figure 8: Publication type.
Figure 9: Publication titles.
## 4. Statistics of the Research in the Area
It is difficult to guarantee the security of sensitive information in an information-centric setting, because once the data is delivered to the data-centric entity in the form of content, it is no longer controlled by the data distributor. In addition, in certain real-time health applications, terminal clinical sensors are typically resource-constrained, which rules out the direct adoption of expensive cryptographic primitives. To overcome these challenges, a resource-efficient secure data sharing strategy was proposed for the data-centric e-health system; it adopts attribute-based encryption from the related literature and adapts it to the stated system with respect to the essential security needs. It likewise exploits the computational resources of fog nodes and uses outsourced cryptography to boost system efficiency. The evaluation shows that the strategy can substantially reduce the computational overhead of resource-restricted terminal clinical devices and can more effectively support real-time e-health applications [31]. Aladwani [32] proposed to use fog computing between the sensors and cloud computing to efficiently collect measurement data, reduce the amount of data transferred between the cloud and the sensors, and increase the efficacy of the whole system. Wireless sensor networks used for healthcare monitoring in the field continuously send a large number of tasks of varying importance and length to the fog layer. Ultimately, the primary factor in prioritizing tasks is their importance, regardless of their duration. The study aims at enhancing the performance of static task scheduling algorithms by using a technique called classification of tasks and categorization of virtual machines based on task importance. IoT tasks are divided by their importance into three classes: high-importance tasks, medium-importance tasks, and low-importance tasks, depending on the status of the
patient. These classes are then fed into the MAX-MIN scheduling algorithm to measure the performance achieved by these techniques.

Figure 10: Subject area.
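The classification-plus-scheduling idea just described can be sketched in a few lines of Python. This is a toy illustration only; the task fields, the grouping rule, and the Max-Min-style assignment below are my own assumptions, not Aladwani's actual algorithm [32].

```python
# Toy sketch: group tasks by patient-derived importance, then assign the
# longest pending task in the most important group to the virtual machine
# that can finish it earliest (a Max-Min-style rule).
def classify(tasks):
    groups = {"high": [], "medium": [], "low": []}
    for task in tasks:  # each task: {"name", "length", "importance"}
        groups[task["importance"]].append(task)
    return groups

def max_min_schedule(tasks, vm_ready, vm_speed):
    schedule, groups = [], classify(tasks)
    for level in ("high", "medium", "low"):
        for task in sorted(groups[level], key=lambda t: -t["length"]):
            finish = [vm_ready[v] + task["length"] / vm_speed[v]
                      for v in range(len(vm_speed))]
            best = min(range(len(vm_speed)), key=finish.__getitem__)
            vm_ready[best] = finish[best]
            schedule.append((task["name"], best))
    return schedule

tasks = [
    {"name": "ecg_alert", "length": 8, "importance": "high"},
    {"name": "daily_log", "length": 20, "importance": "low"},
    {"name": "temp_read", "length": 5, "importance": "medium"},
]
print(max_min_schedule(tasks, vm_ready=[0.0, 0.0], vm_speed=[1.0, 2.0]))
```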
Karatas and Korpeoglu [33] proposed a geographically distributed, multi-level cloud and fog computing-based IoT architecture, together with procedures for placing IoT data in the tiers of the proposed architecture. Data is considered in various kinds, and different applications can use each kind of data.
Figure 11: Subjects of the area.

Figure 12: Publication types.
The data placement problem is modeled as an optimization problem, and algorithms are proposed for the effective, feasible placement of data generated and consumed by IoT nodes that are geographically related. Data used for different applications is stored in a location that is primarily accessed by the applications using that type of data within a single period. To test the scheme, comprehensive simulation experiments were conducted, and the results show that the proposed design and placement techniques can efficiently position and store data while providing good performance to applications and the network in terms of access latency and bandwidth [33]. Today's devices are also becoming ever more powerful in terms of features and capabilities, but they are still not equipped to perform the smart, autonomous, and intelligent tasks often needed for smart medical services, assisted living, virtual reality, and augmented reality; another entity is needed to perform such tasks for emerging IoT and cloud computing applications, which makes task offloading desirable. Offloading can happen between IoT nodes, sensors, and edge devices, and it can be based on different factors such as an application's computational needs, load variation, energy budget, latency requirements, and so on. The review in [34] presents a taxonomy of recently proposed offloading schemes spanning cloud and edge computing and IoT. It also discusses the middleware developments that enable offloading in a cloud-IoT scenario and the factors that are critical for offloading in a particular scenario. Additionally, it identifies open research issues concerning offloading in edge and cloud computing [34].
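The multi-factor offloading decision described above can be made concrete with a toy cost model. The weighted linear cost, the site parameters, and all names below are invented for illustration and are not taken from the survey [34].

```python
# Toy offloading decision: score each candidate site by a weighted sum of
# latency (transfer + compute), energy, and current load, then pick the
# cheapest. Weights and units are arbitrary illustrative choices.
def offload_cost(site, task, w_latency=0.5, w_energy=0.3, w_load=0.2):
    transfer_s = task["data_mb"] * 8 / site["bandwidth_mbps"] + site["rtt_s"]
    compute_s = task["cycles"] / site["cpu_hz"]
    energy_j = task["data_mb"] * site["joule_per_mb"]
    return (w_latency * (transfer_s + compute_s)
            + w_energy * energy_j
            + w_load * site["load"])

def choose_site(task, sites):
    return min(sites, key=lambda name: offload_cost(sites[name], task))

sites = {
    "local": {"bandwidth_mbps": 1e9, "rtt_s": 0.0, "cpu_hz": 1e9,
              "joule_per_mb": 0.0, "load": 0.9},
    "edge":  {"bandwidth_mbps": 100, "rtt_s": 0.01, "cpu_hz": 1e10,
              "joule_per_mb": 0.05, "load": 0.4},
    "cloud": {"bandwidth_mbps": 50, "rtt_s": 0.08, "cpu_hz": 1e11,
              "joule_per_mb": 0.1, "load": 0.1},
}
print(choose_site({"data_mb": 4, "cycles": 2e10}, sites))
```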
The search process of the proposed research was carried out in various popular libraries, including Springer, ScienceDirect, IEEE, and Wiley Online. The key reason for searching these libraries was to identify the materials most relevant to the analysis. The analysis was done from different perspectives, such as identifying the publications on a year-wise basis and identifying the type, title, topics, and location of publications, and so on. Figure 1
depicts the paper types in the library of Springer.
Figure 2 represents the disciplines of the area in the given
library. More papers are published in the area of engineering.
Figure 3 shows the types of papers in the IEEE library. In
this library, more articles were published as conference
papers.
Figure 4 shows the conference location in the same
library.
Figure 5 depicts the topics of publication in the library
where more papers are published in the area of IoT.
Figure 13: Papers published.
Figure 6 depicts the publication title.
Figure 7 graphically represents the number of publications done in a given year in the Library of ScienceDirect.
The publication types are given in Figure 8 for the given
library.
The publication titles are presented in Figure 9. More
publications regarding the area of research were done in
“Future Generation Computer Systems.”
The subject areas are presented in Figure 10. The figure shows that most publications are in the field of "Computer Science."
The Wiley Online library was searched to identify associated materials. Figure 11 depicts the subject areas of research in the library.
The publication types are shown in Figure 12. Most publications fall into the journal category.
Figure 13 graphically demonstrates the articles published.
## 5. Conclusion

Fog computing is a computing infrastructure located near data sources and the cloud, in which information computing, storage, and applications are positioned to process data and information. Fog computing advances the paradigm of cloud computing to the network edge, introducing a number of options and facilities. It enables processing, decisions, and interventions to occur through IoT devices while transmitting only the necessary details. The ideas of fog computing based on IoT in healthcare frameworks are exploited by shaping a distributed intermediate layer of intelligence between sensor nodes and the cloud. An overview of e-health monitoring systems in the context of testing and quality assurance of fog computing is presented in the study under consideration. Relevant materials were searched and analyzed in a widespread manner. The study has compiled the contributions of the existing methodologies, methods, and approaches in fog computing in e-healthcare. This review will serve as evidence for researchers to devise new approaches and platforms for handling and managing various situations associated with research in the area.

## Data Availability

The data will be provided upon request.

## Conflicts of Interest

The authors declare no conflict of interest.

## References
[1] S. Khan, S. Nazir, I. García-Magariño, and A. Hussain, “Deep
learning-based urban big data fusion in smart cities: towards
traffic monitoring and flow-preserving fusion,” Computers &
Electrical Engineering, vol. 89, article 106906, 2021.
[2] B. Wu, S. Nazir, and N. Mukhtar, “Identification of attack on
data packets using rough set approach to secure end to end
communication,” Complexity, vol. 2020, Article ID 6690569,
12 pages, 2020.
[3] M. Rath and V. K. Solanki, “Performance improvement in
contemporary health care using IoT allied with big data,” in
Handbook of Data Science Approaches for Biomedical Engineering, V. E. Balas, V. K. Solanki, R. Kumar, and M. Khari,
Eds., pp. 103–119, Academic Press, 2020.
[4] J. J. Hathaliya and S. Tanwar, “An exhaustive survey on security and privacy issues in Healthcare 4.0,” Computer Communications, vol. 153, pp. 311–335, 2020.
[5] B. Farahani, M. Barzegari, F. Shams Aliee, and K. A. Shaik,
“Towards collaborative intelligent IoT eHealth: from device
to fog, and cloud,” Microprocessors and Microsystems, vol. 72,
article 102938, 2020.
[6] M. Hartmann, U. S. Hashmi, and A. Imran, “Edge computing
in smart health care systems: review, challenges, and research
directions,” Transactions on Emerging Telecommunications
Technologies, no. article e3710, 2019.
[7] A. Laurent, D. Laurent, and C. Madera, Data Lakes, First Edition, ISTE Ltd and John Wiley & Sons, Inc., 2020.
[8] L. G. Jaimes, A. Chakeri, and R. Steele, “Localized cooperation
for crowdsensing in a fog computing-enabled internet-of-things," Journal of Ambient Intelligence and Humanized Computing, 2018.
[9] X. Jiang, P. Hu, Y. Li et al., “A survey of real-time approximate
nearest neighbor query over streaming data for fog computing,” Journal of Parallel and Distributed Computing, vol. 116,
pp. 50–62, 2018.
[10] A. Kelati, I. B. Dhaou, A. Kondoro, D. Rwegasira, and
H. Tenhunen, “IoT based appliances identification techniques
with fog computing for e-health,” in 2019 IST-Africa Week
Conference (IST-Africa), pp. 1–11, Nairobi, Kenya, May 2019.
[11] A. Kumari, S. Tanwar, S. Tyagi, N. Kumar, R. M. Parizi, and
K.-K. R. Choo, “Fog data analytics: a taxonomy and process
model,” Journal of Network and Computer Applications,
vol. 128, pp. 90–104, 2019.
[12] Y. Li, A.-C. Orgerie, I. Rodero, B. L. Amersho, M. Parashar,
and J.-M. Menaud, “End-to-end energy models for edge
cloud-based IoT platforms: application to data stream analysis
in IoT,” Future Generation Computer Systems, vol. 87,
pp. 667–678, 2018.
[13] X. Liu, R. H. Deng, Y. Yang, H. N. Tran, and S. Zhong, “Hybrid
privacy-preserving clinical decision support system in fog-cloud computing," Future Generation Computer Systems,
vol. 78, pp. 825–837, 2018.
[14] M. Mahbub, “Progressive researches on IoT security: an
exhaustive analysis from the perspective of protocols, vulnerabilities, and preemptive architectonics,” Journal of Network
and Computer Applications, vol. 168, article 102761, 2020.
[15] A. Manocha, G. Kumar, M. Bhatia, and A. Sharma, "Video-assisted smart health monitoring for affliction determination
based on fog analytics,” Journal of Biomedical Informatics,
vol. 109, article 103513, 2020.
[16] A. A. Mutlag, M. K. Abd Ghani, N. Arunkumar, M. A.
Mohammed, and O. Mohd, “Enabling technologies for fog
computing in healthcare IoT systems,” Future Generation
Computer Systems, vol. 90, pp. 62–78, 2019.
[17] M. Nasir, K. Muhammad, J. Lloret, A. K. Sangaiah, and
M. Sajjad, “Fog computing enabled cost-effective distributed
summarization of surveillance videos for smart cities,” Journal
of Parallel and Distributed Computing, vol. 126, pp. 161–170,
2019.
[18] O. Olakanmi and K. Odeyemi, “FEACS: a fog enhanced
expressible access control scheme with secure services delegation among carers in E-health systems,” Internet of Things,
vol. 12, article 100278, 2020.
[19] M. Otoom, N. Otoum, M. A. Alzubaidi, Y. Etoom, and
R. Banihani, “An IoT-based framework for early identification
and monitoring of COVID-19 cases,” Biomedical Signal Processing and Control, vol. 62, article 102149, 2020.
[20] S. Parasuraman and A. K. Sangaiah, "Fog-driven healthcare framework for security analysis," in Computational Intelligence for Multimedia Big Data on the Cloud with Engineering Applications, pp. 253–270, Elsevier, 2018.
[21] E. G. M. Petrakis, S. Sotiriadis, T. Soultanopoulos, P. T. Renta,
R. Buyya, and N. Bessis, “Internet of Things as a Service
(iTaaS): challenges and solutions for management of sensor
data on the cloud and the fog,” Internet of Things, vol. 3-4,
pp. 156–174, 2018.
[22] A. M. Rahmani, T. N. Gia, B. Negash et al., “Exploiting smart
e-health gateways at the edge of healthcare Internet-of-Things: a fog computing approach," Future Generation Computer Systems, vol. 78, pp. 641–658, 2018.
[23] P. P. Ray, D. Dash, and D. De, “Edge computing for Internet of
Things: a survey, e-healthcare case study and future direction,”
Journal of Network and Computer Applications, vol. 140,
pp. 1–22, 2019.
[24] H. U. Rehman, A. Khan, and U. Habib, “Fog computing for
bioinformatics applications,” in Book Chapter, pp. 529–545,
Elsevier, 2020.
[25] D. D. Sanchez-Gallegos, A. Galaviz-Mosqueda, J. L. Gonzalez-Compean et al., "On the continuous processing of health data
in edge-fog-cloud computing by using micro/nanoservice
composition,” IEEE Access, vol. 8, pp. 120255–120281, 2020.
[26] M. García-Valls, C. Calva-Urrego, and A. García-Fornes,
“Accelerating smart eHealth services execution at the fog computing infrastructure,” Future Generation Computer Systems,
vol. 108, pp. 882–893, 2020.
[27] S. Tuli, N. Basumatary, S. S. Gill et al., “HealthFog: an ensemble deep learning based Smart Healthcare System for Automatic Diagnosis of Heart Diseases in integrated IoT and fog
computing environments,” Future Generation Computer Systems, vol. 104, pp. 187–200, 2020.
[28] H. Ben Hassen, N. Ayari, and B. Hamdi, “A home hospitalization system based on the Internet of things, fog computing and
cloud computing,” Informatics in Medicine Unlocked, vol. 20,
article 100368, 2020.
[29] D. Sarabia-Jácome, R. Usach, C. E. Palau, and M. Esteve,
“Highly-efficient fog-based deep learning AAL fall detection
system,” Internet of Things, vol. 11, article 100185, 2020.
[30] D. Yacchirema, D. Sarabia-Jácome, C. E. Palau, and M. Esteve,
“System for monitoring and supporting the treatment of sleep
apnea using IoT and big data,” Pervasive and Mobile Computing, vol. 50, pp. 25–40, 2018.
[31] L. Dang, M. Dong, K. Ota, J. Wu, J. Li, and G. Li, "Resource-efficient secure data sharing for information centric E-health
system using fog computing,” in 2018 IEEE International Conference on Communications (ICC), pp. 1–6, Kansas City, MO,
USA, May 2018.
[32] T. Aladwani, “Scheduling IoT healthcare tasks in fog computing based on their importance,” Procedia Computer Science,
vol. 163, pp. 560–569, 2019.
[33] F. Karatas and I. Korpeoglu, “Fog-based data distribution service (F-DAD) for Internet of Things (IoT) applications,”
Future Generation Computer Systems, vol. 93, pp. 156–169,
2019.
[34] M. Aazam, S. Zeadally, and K. A. Harras, “Offloading in fog
computing for IoT: review, enabling technologies, and
research opportunities,” Future Generation Computer Systems,
vol. 87, pp. 278–289, 2018.
| 11,943
|
{
"disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.1155/2021/5599907?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1155/2021/5599907, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "CCBY",
"status": "HYBRID",
"url": "https://downloads.hindawi.com/journals/wcmc/2021/5599907.pdf"
}
| 2,021
|
[
"JournalArticle",
"Review"
] | true
| 2021-04-22T00:00:00
|
[
{
"paperId": "5cc129602d6ad0bd25b7092018fefff440902a51",
"title": "FEACS: A fog enhanced expressible access control scheme with secure services delegation among carers in E-health systems"
},
{
"paperId": "876f7e736f449814579d04512fecd8c6a6238d11",
"title": "Identification of Attack on Data Packets Using Rough Set Approach to Secure End to End Communication"
},
{
"paperId": "b2e33e85fc693601b407563ee5c84f8d1507c223",
"title": "Progressive researches on IoT security: An exhaustive analysis from the perspective of protocols, vulnerabilities, and preemptive architectonics"
},
{
"paperId": "714e0f9e9f7427bc7faf64196b441e6f6a18cd8e",
"title": "Highly-efficient fog-based deep learning AAL fall detection system"
},
{
"paperId": "c02daecd99104df69ece23030df56beff0b37283",
"title": "An IoT-based framework for early identification and monitoring of COVID-19 cases"
},
{
"paperId": "7010798c137dd757b4af5dea6934899f0a7fc993",
"title": "Video-assisted smart health monitoring for affliction determination based on fog analytics"
},
{
"paperId": "844ee166ad16dbaa45d00824cbed71960aa98ac1",
"title": "Accelerating smart eHealth services execution at the fog computing infrastructure"
},
{
"paperId": "351bb5c31c2f060272617d53d86992760f6fb335",
"title": "A home hospitalization system based on the Internet of things, Fog computing and cloud computing"
},
{
"paperId": "e453d1a6dc899d0b8f669384a4db8591bea5561a",
"title": "Fog Computing for Bioinformatics Applications"
},
{
"paperId": "ca6d338c18c99241ec42389a8884e73317fc62b3",
"title": "An exhaustive survey on security and privacy issues in Healthcare 4.0"
},
{
"paperId": "89e425049b21f5beb01d8c100c1e19fcf18a094f",
"title": "Towards collaborative intelligent IoT eHealth: From device to fog, and cloud"
},
{
"paperId": "f7ca07c4cbc9c9affb620dd9762b582d3b203121",
"title": "HealthFog: An Ensemble Deep Learning based Smart Healthcare System for Automatic Diagnosis of Heart Diseases in Integrated IoT and Fog Computing Environments"
},
{
"paperId": "1cedfb5fb33faa125a14a0f7b892a08de28e80c2",
"title": "Edge computing in smart health care systems: Review, challenges, and research directions"
},
{
"paperId": "bb72ffc2294213c2325e902bddacb9d402911629",
"title": "Edge computing for Internet of Things: A survey, e-healthcare case study and future direction"
},
{
"paperId": "96a1438e430f648566bad5d66cadc70099bfd16e",
"title": "IoT based Appliances Identification Techniques with Fog Computing for e-Health"
},
{
"paperId": "282f8d7ae4b7a5071337563a72bb087f029f1964",
"title": "Fog-Based Data Distribution Service (F-DAD) for Internet of Things (IoT) applications"
},
{
"paperId": "4a52aaeb8f31c4113da544e59856848ed79f6d38",
"title": "Fog computing enabled cost-effective distributed summarization of surveillance videos for smart cities"
},
{
"paperId": "450e40a7b88f3b4dd02da9e778f8ad5d314b0894",
"title": "Fog data analytics: A taxonomy and process model"
},
{
"paperId": "4031b68c7d32f53f73a55292a214d103096175d0",
"title": "Enabling technologies for fog computing in healthcare IoT systems"
},
{
"paperId": "51d02a9fa17d7c295fcf268bc4b6e74f21f413c1",
"title": "System for monitoring and supporting the treatment of sleep apnea using IoT and big data"
},
{
"paperId": "dbdee91ce7a3311d4e3a448b43a54ca4a699712d",
"title": "Internet of Things as a Service (iTaaS): Challenges and solutions for management of sensor data on the cloud and the fog"
},
{
"paperId": "749b1937cb9942715cc0f4d72cfabf0e2cceee59",
"title": "A survey of real-time approximate nearest neighbor query over streaming data for fog computing"
},
{
"paperId": "6e2047a101ee2a3bb047bcdd782e29f35469477a",
"title": "Localized cooperation for crowdsensing in a fog computing-enabled internet-of-things"
},
{
"paperId": "16063e9432424c8304723fad50d235b46565016e",
"title": "Resource-Efficient Secure Data Sharing for Information Centric E-Health System Using Fog Computing"
},
{
"paperId": "7a6584c077a16f339368176ccfaeb45a83712a57",
"title": "End-to-end energy models for Edge Cloud-based IoT platforms: Application to data stream analysis in IoT"
},
{
"paperId": "583cbce7d77e80cd2cbc1ff217575b9f0b7efd75",
"title": "Deep learning-based urban big data fusion in smart cities: Towards traffic monitoring and flow-preserving fusion"
},
{
"paperId": "02b8baed0c44c035351d92ee183d9167a7090aad",
"title": "Performance improvement in contemporary health care using IoT allied with big data"
},
{
"paperId": "871e7acd61fd74de79e26298424a1f8316eae8e1",
"title": "On the Continuous Processing of Health Data in Edge-Fog-Cloud Computing by Using Micro/Nanoservice Composition"
},
{
"paperId": "5d28443d84b828248652e922f190dbebd5bcd2cb",
"title": "Scheduling IoT Healthcare Tasks in Fog Computing Based on their Importance"
},
{
"paperId": "19774ca5d9acea9b9f69d894f721f74b5fa7c2a5",
"title": "Hybrid privacy-preserving clinical decision support system in fog-cloud computing"
},
{
"paperId": "6d39881703977192cfd0fd5b47e07fcc2fd53244",
"title": "Fog – Driven Healthcare Framework for Security Analysis"
},
{
"paperId": "61f5b7704281241d34f79b03061c0e82d75ff158",
"title": "Exploiting smart e-Health gateways at the edge of healthcare Internet-of-Things: A fog computing approach"
},
{
"paperId": "c3c21a170c607b5daa3bbaf3ffedc7ec742e7a8e",
"title": "Offloading in fog computing for IoT: Review, enabling technologies, and research opportunities"
},
{
"paperId": null,
"title": "Data Lakes, First Edition"
},
{
"paperId": null,
"title": "A survey of real - time approximate nearest neighbor query over streaming data for fog comput"
}
] | 11,943
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/0091c96d9792db459df5837b096ba18308dfd3fe
|
[
"Computer Science"
] | 0.940195
|
Challenges of Proof-of-Useful-Work (PoUW)
|
0091c96d9792db459df5837b096ba18308dfd3fe
|
2022 IEEE 1st Global Emerging Technology Blockchain Forum: Blockchain & Beyond (iGETblockchain)
|
[
{
"authorId": "48131797",
"name": "F. Hoffmann"
}
] |
{
"alternate_issns": null,
"alternate_names": null,
"alternate_urls": null,
"id": null,
"issn": null,
"name": null,
"type": null,
"url": null
}
|
Proof-of-Work is a popular blockchain consensus algorithm that is used in cryptocurrencies like Bitcoin in which hashing operations are repeated until the resulting hash has certain properties. This approach uses lots of computational power and energy for the sole purpose of securing the blockchain. In order to not waste energy on hashing operations that do not have any other purpose than enabling consensus between nodes and therefore securing the blockchain, Proof-of-Useful-Work is an alternative approach which aims to replace excessive usage of hash functions with tasks that bring additional real-world benefit, e.g. supporting scientific experiments that rely on computationally heavy simulations. In this publication theoretical PoUW concepts such as Coinami, CoinAI and the cryptocurrency Primecoin are analyzed with respect to how PoW properties can be retained while doing useful work.
|
# Challenges of Proof-of-Useful-Work (PoUW)
### Felix Hoffmann
Johann Wolfgang Goethe-Universität Frankfurt am Main
[[email protected]](mailto:[email protected])
### September 9, 2022
**Abstract**
Proof-of-Work (PoW) is a popular blockchain consensus algorithm that is used in cryptocurrencies like
Bitcoin in which hashing operations are repeated until the resulting hash has certain properties. This approach uses lots of computational power and energy for the sole purpose of securing the blockchain. In order
to not waste energy on hashing operations that do not have any other purpose than enabling consensus between nodes and therefore securing the blockchain, Proof-of-Useful-Work (PoUW) is an alternative approach
which aims to replace excessive usage of hash functions with tasks that bring additional real-world benefit,
e.g. supporting scientific experiments that rely on computationally heavy simulations. This publication
consists of two parts: In the first part, important properties of conventional hash-based PoW are described.
In the second part, theoretical PoUW concepts such as Coinami, CoinAI and the first successful PoUW
cryptocurrency Primecoin are analyzed with respect to how PoW properties can be retained while doing
useful work.
### I. Introduction
Traditional proof-of-work cryptocurrencies
have been widely criticized for using up lots
of energy in order to run and secure the underlying blockchain. In the past few years, research has been done with the goal of replacing repeated hash operations with useful work. Notable projects such as Primecoin [Kin13], Coinami [IOG+16] and CoinAI [BS19] use the search for certain kinds of prime number chains, multiple sequence alignment of protein sequences, or the training of deep learning models as useful-work consensus algorithms.
In order to give an overview of the challenges
these projects have to overcome, the following
part gives an outline of important properties
that hash-based PoW solutions have.
### II. Properties of hash-based PoW
Hash-based PoW consensus algorithms use
cryptographically secure hash functions such
as SHA256 or KECCAK256 in order generate
hex strings of fixed size. The proof-of-work
hash puzzle consists of finding a result that
is smaller than a given number which defines
the difficulty of the problem. Hash functions
are one-way functions which also allow for
quick verification of a proposed solution’s validity. If a hash function is not considered to
be broken, there is no known way to manipulate the hash function’s input to influence its
output in a preferred direction. Therefore, the
nodes of the blockchain have to brute-force different inputs until they solve the hash puzzle
by luck. This leads to an arms race between miners, in which more hardware is acquired to increase one's hash rate. For non-ASIC-resistant cryptocurrencies such as Bitcoin, specialized hardware built solely for efficient mining can be used.
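To make the puzzle concrete, the following minimal Python sketch (illustrative only; the function names, the string encoding of the block, and the toy target are my own assumptions, not any client's actual code) brute-forces a nonce and checks it. It also previews two properties discussed below: the previous block hash enters the input (block sensitivity), and verification takes a single hash call (fast verification).

```python
# Minimal hash-puzzle sketch: find a nonce so that
# SHA256(prev_hash || block_data || nonce) < target.
import hashlib

def block_hash(prev_hash: str, block_data: str, nonce: int) -> int:
    digest = hashlib.sha256(f"{prev_hash}{block_data}{nonce}".encode()).hexdigest()
    return int(digest, 16)

def mine(prev_hash: str, block_data: str, target: int) -> int:
    """Brute-force nonces until the block hash falls below `target`."""
    nonce = 0
    while block_hash(prev_hash, block_data, nonce) >= target:
        nonce += 1  # solved "by luck", as described above
    return nonce

def verify(prev_hash: str, block_data: str, nonce: int, target: int) -> bool:
    """A single hash call suffices: the 'fast verification' property."""
    return block_hash(prev_hash, block_data, nonce) < target

# Toy target requiring roughly 16 leading zero bits (~65k attempts on average).
target = 1 << (256 - 16)
nonce = mine("00" * 32, "tx1;tx2", target)
assert verify("00" * 32, "tx1;tx2", nonce, target)
```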
In the following, advantageous properties of
hash-based PoW algorithms are outlined and
it is described why these properties are useful
in the context of blockchains.
### i. Block sensitivity & non re-usability
In order to prevent the re-usage of existing
proofs-of-work, it is necessary to bind the validity of a PoW to the block it validates. This
means that the computational work has to
factor in information that is not known beforehand to prevent pre-calculation of future
blocks. A common strategy to retain the block
sensitivity property is to require the hash of
the previous block to be used as part of the
hash function’s input for the PoW of the next
block. Since the hash of the previous block
is only known when it is successfully added
to the blockchain, this makes it infeasible to
pre-calculate future blocks as long as the hash
function used is not fundamentally broken.
However, there exists an attack called Selfish
Mining in which a group of malicious miners that solved the current hash puzzle do not
broadcast their solution to all miners but instead continue mining additional blocks in secret until they decide to publish a long chain of new blocks. Since common blockchain implementations obey the longest chain rule, all
other valid blocks that were mined by other
nodes during this time will be discarded if the
public state of the chain is shorter than the
secretly mined chain. Selfish Miners always
run the risk of not being able to keep up with
the speed of the chain they compete against,
in which case block rewards that could have
been collected by Selfish Miners are lost. As a
result, Selfish Mining strategies are only feasible if a large share of a blockchain network's computational power is controlled by an organized group of miners.
### ii. Adjustable problem hardness
The difficulty of the PoW needs to be adjustable. This property is required so that
block intervals can be regulated (e.g. Bitcoin’s
average block time is around 10 minutes
[Nak08]). The goal is to both counteract inflation (miners of blocks are financially compensated for their work; additionally, in the case of Bitcoin, block rewards are decreased over time to further prevent inflation) and to guarantee a stable transaction throughput. Problem
hardness commonly is dynamically adjusted
over time depending on the current total
hash rate of the network. It should be noted
that the difficulty of PoW problems need to
have a lower bound: Trivial problems that
can be solved instantly are not suitable for a
consensus algorithm because then miners are
incentivized to mine empty blocks instead
of filling them with pending transactions.
Using hash-based PoW approaches has the
advantage that the difficulty of problems
can trivially be adjusted in both directions
by lowering/increasing the required upper
bound of accepted hash values.
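A sketch of such dynamic retargeting, loosely modeled on Bitcoin-style adjustment (the clamping factor and all names here are illustrative assumptions, not taken from a specific client):

```python
# If blocks arrived faster than desired, shrink the target (harder puzzle);
# if slower, grow it (easier). Real deployments clamp the adjustment.
def retarget(old_target: int, actual_seconds: float, expected_seconds: float,
             max_factor: float = 4.0) -> int:
    ratio = actual_seconds / expected_seconds
    ratio = max(1.0 / max_factor, min(max_factor, ratio))  # clamp wild swings
    return max(1, int(old_target * ratio))

# Example: a retarget window took 8 days instead of the expected 14,
# so the target shrinks and the puzzle becomes harder.
new_target = retarget(1 << 240, 8 * 24 * 3600, 14 * 24 * 3600)
assert new_target < 1 << 240
```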
Another property of hash-based PoW is
that miners with limited computational power
have a non-zero chance of quickly solving
the hash puzzle by luck. If an entity with
computational power *α* would always lose against entities with computational power *β* > *α*, then the blockchain would be dictated
by the single largest group of miners, which
as a result would disincentivize miners from
participating in the blockchain. This would
be the death of the blockchain since known
attacks like the 51% attack would become feasible. Therefore, it is not only important that
in the long term a miner earns block rewards
that are proportional to that miner's hash
rate contribution to the overall network, but
also that any miner has a non-zero chance to
successfully solve the proof-of-work puzzle.
### iii. Fast verification
In order to be able to find consensus and be
protected from spam attacks, miners need to
be able to quickly verify the validity of blocks
proposed by other miners. Therefore, it is crucial that the proof-of-work can efficiently be
verified in reasonable time without demanding excessive computational resources. The
need for fast verification mechanisms is the
main factor why hash functions are commonly
used in proof-of-work algorithms. Executing
one SHA256 or KECCAK256 function call on
a small input barely uses any computational
power, since the main difficulty is finding an
input for such a function that produces the required output. Thus, the term one-way function.
### iv. Problem is parallelizable
In order to make efficient use of existing hardware, it is preferred to use proof-of-work problems that can be parallelized. For instance,
finding hash function outputs that have a certain amount of leading zeroes is called an
embarrassingly parallel problem since there is no
need for communication between threads.
Further, parallelizable problems enable the formation of mining pools: Depending on the difficulty of the hash puzzle, low hash rate miners might have a probability close to zero to
mine a new block alone. By joining existing
mining pools in which computational power
of multiple entities is combined and block rewards are shared proportionally to the provided hash rate of every pool member, weak
miners can collect small amounts of financial
compensation in regular intervals.
All in all, while a proof-of-work consensus algorithm does not necessarily have to be parallelizable, this property makes mining more
accessible for a wider range of participants
which positively affects network diversity and
strengthens the blockchain’s overall security.
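A sketch of this property, building on the mining sketch from Section II (the chunk size, worker count, and helper names are illustrative assumptions): disjoint nonce ranges are searched by separate processes with no communication between them.

```python
# Embarrassingly parallel search: each worker scans its own nonce range.
import hashlib
from concurrent.futures import ProcessPoolExecutor

def search_range(args):
    prev_hash, data, target, start, stop = args
    for nonce in range(start, stop):
        digest = hashlib.sha256(f"{prev_hash}{data}{nonce}".encode()).hexdigest()
        if int(digest, 16) < target:
            return nonce
    return None  # no solution in this range

def parallel_mine(prev_hash, data, target, workers=4, chunk=100_000):
    base = 0
    with ProcessPoolExecutor(max_workers=workers) as pool:
        while True:
            ranges = [(prev_hash, data, target,
                       base + i * chunk, base + (i + 1) * chunk)
                      for i in range(workers)]
            for found in pool.map(search_range, ranges):
                if found is not None:
                    return found
            base += workers * chunk

if __name__ == "__main__":
    # ~16 leading zero bits: easy enough to find within a few chunks.
    print(parallel_mine("00" * 32, "tx1;tx2", 1 << (256 - 16)))
```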
### III. Proof-of-Useful-Work (PoUW)
This section consists of two parts: In the first
part, existing PoUW approaches and ideas are
briefly introduced. In the second part, they
are analyzed with regards to how the properties of hash-based PoW consensus algorithms
are retained and which issues might occur.
Even though exotic consensus algorithm
classes like Proof-of-Storage can be considered useful, the focus in this publication is on
computationally-heavy PoUW which shares
lots of similarities with hash-based PoW.
### i. Primecoin
Primecoin is a PoUW cryptocurrency that was
launched in 2013 by Sunny King [Kin13].
Its PoUW consists of finding certain types of
prime number chains, so-called Cunningham
and bi-twin chains. Cunningham chains are
a series of prime numbers that nearly double
each time. In mathematical terms, a prime
chain of length n ∈ **N** must fulfill
p i + 1 = 2p i + 1 (1)
to be considered a first order chain or
p i + 1 = 2p i − 1 (2)
to be considered a second order chain for all
1 ≤ i < n. For instance, { 41, 83, 167 } is a first
order chain of length n = 3 and { 7, 13 } is a
second order chain of length n = 2.
In addition to Cunningham chains, the
third type of chain that Primecoin allows as
proof-of-work are bi-twin chains. These are
prime chains that consist of a strict combination of first and second order Cunningham
primes. The mathematical definition of a
bi-twin chain of length k + 1 is the sequence
{ n − 1, n + 1, 2n − 1, 2n + 1, 2 [2] n − 1, 2 [2] n +
1, ..., 2 [k] n − 1, 2 [k] n + 1 } .
For instance, choosing n = 6 leads to
{ 5, 7, 11, 13 } which is a bi-twin chain of length
2 that consists of 4 prime numbers.
As of writing this publication, a Primecoin is
traded for about $0.04 and the currency’s total
market capitalization is around $1.7 million.
[Coi22] The success of Primecoin can be seen
as evidence that PoUW is a viable concept
with real-world applications.
### ii. Coinami
In 2016, a theoretical proposal of a mediator interface for a volunteer grid similar to
BOINC middleware that can be connected to
a cryptocurrency was published and named
Coinami [IOG+16]. The PoUW of Coinami
is built on DNA sequence alignment (HTS
read mapping in particular) and aims to
generate and analyze huge datasets of disease
signatures which can help us to gain a better
understanding of diseases such as different
cancer variants.
The authors of Coinami describe their approach as a three-level multi-centric system
which consists of a root authority, subauthorities and miners. Miners download
problem sets from sub-authorities, map HTS
reads to a reference genome and send the
results back to sub-authorities for verification.
Sub-authorities are certified by the root authority. [IOG [+] 16]
As a result, this approach can be seen as
a hybrid of Proof-of-Authority (PoA) and
Proof-of-Useful-Work (PoUW) consensus
algorithms. As of writing, while Coinami
does have a prototype implementation on
Github [Coi16], there currently exists no cryptocurrency that is connected to this academic
proposal.
### iii. CoinAI
In 2019, a theoretical proposal of PoUW
consensus that is built on training and hyperparameter optimization of deep learning
models was published and named CoinAI.
[BS19]
The goal of CoinAI is to secure a blockchainbased cryptocurrency with a consensus
algorithm that both secures the underlying
blockchain while also producing deep learning models that solve real-world problems.
The proposed proof-of-work consists of training a model that passes a certain performance
threshold in order for it to be considered valid.
In addition to the training of deep learning
models, the CoinAI proposal features another financial incentive to participate in the
blockchain: Nodes can rent out available hard
drive storage to provide distributed storage
for the resulting deep learning models of the
blockchain. [BS19]
Thus, CoinAI’s approach can be described
as a hybrid of Proof-of-Useful-Work (PoUW)
and Proof-of-Storage (PoS). As of writing,
CoinAI remains an academic proposal that
has not yet been implemented to secure a
tradeable cryptocurrency.
### Analysis of PoUW approaches
**PoUW: Non re-usability**
To prevent the pre-calculation and re-use of proofs-of-work, a given problem must
involve information or parameters that can
not reliably be guessed beforehand. All nodes
must be able to agree on how these parameters are to be adjusted over time so that the problem sets change accordingly and it can
be decided whether a given proof-of-work
is valid for some time interval. A common
approach here is to involve the hash of the
previous block as a parameter as part of the
next problem. However, since this directly
influences the result of the calculations, it
must be decided on a case-per-case basis
whether the resulting information can still be
considered to be useful.
If incorporating hashes into the calculations
is not possible, then another approach must
be found to bind the PoUW to a given period
in time. Relying on an external (as in information taken from outside the blockchain)
source that continuously publishes new information over time is not desirable, since
this approach leads to a high degree of
centralization which not only opposes core
principles of a decentralized blockchain but
which also has the potential to create security
issues and conflicts of interest, especially if
the underlying blockchain is connected to a
cryptocurrency.
⊲ Primecoin retains the property of block
sensitivity by requiring the origin of the
prime chains to be divisible by the hash of the
previous block. In this case, the resulting quotient is defined as a so-called PoW certificate.
[Kin13] This guarantees that pre-calculation
of future blocks is not a viable strategy as
long as there is no scientific breakthrough in
efficiently calculating certain chains of large
primes.
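A schematic reading of this binding rule (an illustration of the idea from [Kin13], not Primecoin's actual consensus code; the names and the use of SHA256 here are assumptions):

```python
# A chain origin is only accepted if it is divisible by the previous
# block's hash, interpreted as an integer; the quotient plays the role
# of the PoW certificate.
import hashlib

def chain_origin_is_bound(origin: int, prev_block_bytes: bytes) -> bool:
    h = int.from_bytes(hashlib.sha256(prev_block_bytes).digest(), "big")
    return origin % h == 0

def pow_certificate(origin: int, prev_block_bytes: bytes) -> int:
    h = int.from_bytes(hashlib.sha256(prev_block_bytes).digest(), "big")
    assert origin % h == 0, "origin not bound to this block"
    return origin // h  # a miner searches multipliers k with origin = k * h
```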
⊲ The theoretical Coinami approach tries
to evade re-usability and pre-calculation problems by relying on an authority approach,
in which miners must request tasks from
(sub)-authority nodes. Since miners can not
guess which task they might be given next,
pre-calculation of future blocks is not feasible.
Since sub-authorities know which problems
have already been given out, re-usability is
not an issue either. The main issue of this
solution can be seen as a high degree of
centralization which forces miners to trust
any (sub)-authority.
⊲ The CoinAI proposal concatenates information such as previous block hash, a
random number called nonce and a list of
pending transactions which then is hashed.
This hash result then is used to determine
the initial hyperparameter structure of a deep
learning architecture which must be trained
until it satisfies performance requirements.
An issue that potentially arises with this
approach is that if the goal is to produce
useful deep learning models, then starting
the training with an inadequate initial hyperparameter configuration affects the amount
of training required to reach acceptable
model performance which can be seen as
wasted energy. Assuming that the space of
all allowed hyperparameter configurations
is limited to prevent this from happening,
the next problem that might arise is that
now hash-to-hyperparameter-configuration
mapping collisions are bound to happen
more frequently, which in this case means
that multiple hashes lead to the same initial
hyperparameter configuration which as a
result could make pre-calculation strategies
feasible.
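The collision concern can be illustrated with a toy mapping (the search space, field names, and the byte-wise selection rule below are invented for illustration and are not from the CoinAI paper):

```python
# A block hash seeds the choice of an initial hyperparameter configuration.
# With only 3*3*3 = 27 configurations, distinct hashes must collide on the
# same configuration (pigeonhole) -- the pre-calculation concern above.
import hashlib

SEARCH_SPACE = {
    "layers": [2, 3, 4],
    "units": [64, 128, 256],
    "learning_rate": [1e-2, 1e-3, 1e-4],
}

def hash_to_config(prev_hash: str, nonce: int, txs: str) -> dict:
    digest = hashlib.sha256(f"{prev_hash}{nonce}{txs}".encode()).digest()
    return {key: options[digest[i] % len(options)]
            for i, (key, options) in enumerate(SEARCH_SPACE.items())}

config = hash_to_config("00" * 32, nonce=7, txs="tx1;tx2")
```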
**PoUW: Adjustable hardness**
Since miners might join or leave the network
of nodes at any time, the blockchain’s total
computational power fluctuates over time. In
order to provide regular block intervals which
in the case of a cryptocurrency is necessary
to stabilize the transaction throughput, there
must be consensus between nodes with
respect to how the difficulty of problems is
to be adjusted over time. Hash-based PoW
approaches control the problem difficulty by
dynamically adjusting the amount of leading
zeroes that the resulting hash must have in
order to be valid depending on the current
hash rate of the network. Increasing the
amount of required leading zeroes by just
one increases the difficulty of the hash puzzle
exponentially, which is why softer variations
of this approach can be used (such as e.g.
amount of leading digits smaller than eight)
to provide a more fine-grained control of the
problem difficulty.
For useful work approaches, it needs to be
decided on a case-per-case basis how the
hardness of a given problem can dynamically
be adjusted without jeopardizing usefulness
of results.
⊲ In the context of Primecoin, two intuitive mechanics to control problem difficulty
come to mind: First of all, the size of prime
numbers that start a chain could be increased
over time. However, the prime number
theorem states that
$$\lim_{x \to \infty} \frac{\pi(x)}{x/\ln(x)} = 1 \tag{3}$$
with $\pi(x)$ being the so-called prime-counting function. The interpretation useful for our context is that the prime density approaches zero, which means that the proof-of-work difficulty might over time become too high to sustain a stable transaction throughput in the long term.
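As a quick numerical illustration of equation (3) (the numbers below are standard values, computed independently rather than taken from [Kin13]):

```latex
% pi(10^6) = 78498 primes below one million; ln(10^6) ~ 13.8155.
\[
  \pi(10^6) = 78498, \qquad
  \frac{10^6}{\ln(10^6)} \approx 72382, \qquad
  \frac{\pi(10^6)}{10^6} \approx 7.85\%,
\]
% and the density keeps shrinking as x grows, per equation (3).
```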
The second intuitive approach that comes to
mind is to dynamically adjust the required
length of valid prime number chains to
control the problem difficulty. This is the
approach Primecoin takes: Given a prime
chain of some length, Primecoin dynamically
adjusts its Fermat primality test which results
in a relatively linear continuous difficulty
function (as opposed to the non-linear difficulty function of the first approach) that is
claimed to be accurate enough to adjust the
problem hardness appropriately over time.
[Kin13]
⊲ The Coinami authors have not yet defined how the difficulty of the DNA sequence
alignment problems can be dynamically
adjusted over time. The issue here is that
the network must rely on an external source
for HTS data and simply increasing the size
of assignments potentially leads to issues
with resulting data size and networking
bottlenecks. An idea here is to let miners
solve multiple problems at once and then let
authority nodes randomly select one of these
solutions and discard the others. While this
can be seen as a waste of useful work it might
be necessary sacrifice to control problem
difficulty without increasing data sizes.
⊲ CoinAI aims to adjust the PoUW difficulty over time by dynamically adjusting
the required performance requirements of
resulting deep learning models over time. The
idea behind this approach is that validating
the performance of a given model is less
computationally expensive than training
the model. An issue with this approach is
that even when knowing the network’s total
computational power, it would be difficult
to estimate an adequate performance threshold. With respect to this problem, Coinami
authors note that even slightly increasing the
difficulty can potentially result in unsolvable
problems. Another problem here is that a
centralized entity is supposed to collect all
submitted models, test their performance and
then announce the winner. A negative aspect
here is that miners would be forced to trust
a centralized authority. If no such authority
were to be involved, then other issues would
occur: Deep learning models that solve nontrivial problems can have a size from a few
megabytes to many gigabytes. If there were
no centralized authority, then every node
would be forced to download the models of
all other nodes and test the performance of
all of them in order to determine the winner
model. As a result bandwidth limitations,
spam and sybil attacks potentially make this
approach infeasible.
**PoUW: Verification**
A core principle of consensus algorithms in
public blockchains is that they are used in
order to provide nodes with a method that
enables them to form consensus about the
current state of the blockchain without having
to rely on trust. Hash functions are useful in
this regard since the validity of a proposed
(input, output) tuple can quickly be verified. As soon as hash-based approaches are
discarded in favor of methods that perform
useful work, it can become difficult to find a
verification method that does not have to rely
on a verification-by-replication approach in
which the entire useful work process has to be
repeated by many nodes. For a given problem
there might or might not exist a probabilistic
verification approach in which the likelihood
of some proposed solution being valid can be
estimated efficiently. Therefore, it needs to
be decided on a case-per-case basis what is
the best way to formulate a PoUW problem
in such a way that verification of results can
happen quickly and with reasonable amounts
of computational effort.
⊲ In the case of Primecoin, probable primality of prime chains is verified using a
combination of both the Fermat and the
Euler-Lagrange-Lifchitz test for prime numbers.
These are proven mathematical methods that
can be used to efficiently verify the primality
of a given number with the downside that
there exist so-called pseudoprimes that pass
those prime tests but which are in fact not
prime numbers. The authors of Primecoin
have concluded that the probability of pseudoprimes occurring is low enough that this issue
can be traded in favor of being able to provide
a fast and efficient verification mechanism.
[Kin13]
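For concreteness, a minimal Fermat probable-prime test looks as follows (a generic textbook sketch, not Primecoin's implementation, which additionally applies the Euler-Lagrange-Lifchitz test):

```python
# Fermat test: n passes if a^(n-1) = 1 (mod n) for random bases a.
# Pseudoprimes (e.g. Carmichael numbers) can fool it, which is exactly
# the trade-off accepted in Primecoin [Kin13].
import random

def fermat_probable_prime(n: int, rounds: int = 8) -> bool:
    if n < 4:
        return n in (2, 3)
    if n % 2 == 0:
        return False
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        if pow(a, n - 1, n) != 1:
            return False  # definitely composite
    return True  # probably prime (pseudoprimes may slip through)

assert fermat_probable_prime(167)
assert not fermat_probable_prime(168)
assert pow(2, 560, 561) == 1  # 561 = 3*11*17 fools the base-2 Fermat test
```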
⊲ In the Coinami proposal, sub-root authorities collect results from miners and verify
the validity of alignments using decoy reads
that have been placed into the problem. These
decoys are planned to make up around 5% of
each problem and they can be pre-calculated
by the sub-authorities. After verification,
decoy data is removed from the results. The
main challenge here is to place decoy data
in such a way that miners are not able to
spot these segments in their assignments. If a
sub-authority has validated a miner's solution,
then the data is signed and sent back so that
it can be added to the blockchain.
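The decoy mechanism can be sketched as follows (a toy illustration of the idea; the data structures and all names are placeholders rather than Coinami's actual format, while the 5% default follows the figure given above):

```python
# A sub-authority mixes reads with known correct alignments into each
# assignment and accepts a miner's result only if the decoys were aligned
# correctly; decoy data is stripped before the result is published.
import random

def make_assignment(real_reads, decoys, decoy_fraction=0.05):
    # `decoys` maps read -> expected alignment, pre-computed by the authority.
    n_decoys = max(1, int(decoy_fraction * len(real_reads)))
    chosen = random.sample(list(decoys.items()), n_decoys)
    reads = real_reads + [read for read, _ in chosen]
    random.shuffle(reads)  # decoys must not be recognizable by position
    return reads, dict(chosen)

def verify_result(result, expected_decoys):
    # `result` maps read -> alignment produced by the miner.
    if all(result.get(r) == aln for r, aln in expected_decoys.items()):
        return {r: a for r, a in result.items() if r not in expected_decoys}
    return None  # at least one decoy was misaligned: reject the solution
```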
⊲ In CoinAI resulting deep learning models are considered to be valid proofs-of-work
only if they pass the current performance
threshold. The authors provide no concrete
plans about whether a centralized entity is
responsible for verification or if every miner
has to verify all submitted models by other
nodes. Potential issues that might occur in
either case have already been presented in the
adjustable hardness section of this publication.
A common approach to validate the performance of a deep learning model is to use two
separate datasets, one containing training data
used for training the model and the second
dataset being the validation/test dataset.
CoinAI gives no specifics on how nodes
acquire required training datasets which
potentially poses a challenge in overcoming
issues such as networking bottlenecks due to
large datasets that need to be downloaded.
The current state-of-the-art in training of deep
learning models boils down to the fact that
you need more and more training data to
improve your model over time, since hyperparameter tuning of a model that was trained on
a small dataset alone rarely results in a robust
model that can reliably solve non-trivial
problems. As a result, the training dataset
would have to be extended over time which
raises further questions about who provides
this data, how this affects centralization and
who is willing to sacrifice computational
power and network bandwidth to test the
performance of all submitted models. Even
if all of these potential issues were to be
resolved, assuming the same model is trained
over many blocks one could argue that as
soon as better performing models for a given
task are discovered all previously published
models lose their usefulness since they perform worse than the newer models. This
raises the question if such an approach can
be considered to be useful work in the first
place. If, however, completely different deep
learning models are to be trained at regular
block intervals, potential problems of continuously broadcasting new training data sets and
generating robustly performing models might
become overwhelming.
**PoUW: Parallelizability**
In order to enable the efficient usage of
multi-core CPUs, GPUs and facilitate the
existence of mining pools, a PoUW consensus
algorithm preferable should be of embarrassingly parallel nature. An intuitive example of
such a problem is any form of processing or
generation of unrelated data, like it is done in
e.g. brute-force searches.
There are many non hash-based approaches
that fulfill this property: For instance, Monte
Carlo event generation and reconstruction
in particle physics, pattern matching over
DNA sequences in bioinformatics and hyperparameter tuning in deep learning can all
be considered to be embarrassingly parallel
problems.
⊲ In Primecoin the search for prime chains can
trivially be implemented in a parallelizable
way.
⊲ Pattern matching over DNA sequences
in bioinformatics like proposed in Coinami is
of embarrassingly parallel nature.
⊲ Training deep learning models and hyperparameter tuning like proposed in CoinAI
is an embarrassingly parallel problem.
All in all, it can be concluded that retaining the parallelizability property is not an
issue for PoUW approaches.
### IV. Conclusion
This publication has provided an overview
over essential properties that conventional
hash-based proof-of-work consensus algorithms possess. Additionally, an analysis of
which measures were taken by existing PoUW
approaches such as Primecoin, Coinami and
CoinAI in order to retain hash-based PoW
properties while rewarding useful work was
provided. It was concluded that domain-specific knowledge is required to make PoUW
consensus possible and that implementation
details must be decided on a case-by-case
basis using domain knowledge from that area
of research.
The main weakness that all presented
PoUW approaches have in common is the
verification of results. While the author of
Primecoin was able to find an elegant probabilistic solution of this problem, theoretical
publications like Coinami and CoinAI had
to make both efficiency and decentralization
sacrifices to prevent potential problems.
A common issue with designing new PoUW
consensus approaches is that the size of
resulting data can be significant compared
to hash-based approaches which leads to
situations in which data must either be stored
externally or on-chain which negatively
affects not only storage requirements of full
nodes but also sync times of new nodes
which effectively raises the entry barriers of
participating in the blockchain.
All in all, problems of mathematical nature seem to be best suited for PoUW.
These problems have the advantage that a
large repertoire of probabilistic verification
methods already exists for a wide range of
problems, which in addition to a generally
asymmetrical ratio of computational effort
and size of resulting output make this class of
problems potentially suitable for making PoUW
consensus mainstream.
It remains to be seen whether the con
cept of Proof-of-Work itself will survive the
surge of alternative blockchain consensus
algorithms like Proof-of-Stake which do not
require notable amounts of computational
effort to efficiently form consensus and
therefore secure the underlying blockchain.
### References
[BS19] Alejandro Baldominos and Yago
Saez. Coin.AI: A proof-of-useful-work scheme for blockchain-based
distributed deep learning. Entropy,
21(8):723, 2019.
[Coi16] Coinami. Coinami prototype.
[https://github.com/coinami/coinami-pro,](https://github.com/coinami/coinami-pro)
03 2016.
[Coi22] Coinmarketcap. Primecoin.
[https://coinmarketcap.com/currencies/primecoin/,](https://coinmarketcap.com/currencies/primecoin/)
08 2022.
[IOG+16] Atalay Mert Ileri, Halil I. Ozercan,
Alper Gundogdu, Ahmet K. Senol,
M. Yusuf Oezkaya, and Can Alkan.
Coinami: A cryptocurrency with
DNA sequence alignment as proof-of-work. CoRR, abs/1602.03031,
2016.
[Kin13] Sunny King. Primecoin:
Cryptocurrency with prime
number proof-of-work.
[https://primecoin.io/primecoin-paper.pdf,](https://primecoin.io/primecoin-paper.pdf)
07 2013.
[Nak08] Satoshi Nakamoto. Bitcoin: A
peer-to-peer electronic cash system.
[https://nakamotoinstitute.org/literature/bitcoin](https://nakamotoinstitute.org/literature/bitcoin/)
10 2008.
| 6,801
|
{
"disclaimer": "Notice: Paper or abstract available at https://arxiv.org/abs/2209.03865, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": null,
"status": "GREEN",
"url": "https://arxiv.org/pdf/2209.03865"
}
| 2,022
|
[
"JournalArticle"
] | true
| 2022-09-08T00:00:00
|
[
{
"paperId": "c172b3429e70fc8e4489475a2f09da9234189d98",
"title": "Coin.AI: A Proof-of-Useful-Work Scheme for Blockchain-Based Distributed Deep Learning"
},
{
"paperId": "2d0ac00635f2c4cafc4c22ae15df9ddaaa1b697d",
"title": "Coinami: A Cryptocurrency with DNA Sequence Alignment as Proof-of-work"
},
{
"paperId": "4e9ec92a90c5d571d2f1d496f8df01f0a8f38596",
"title": "Bitcoin: A Peer-to-Peer Electronic Cash System"
}
] | 6,801
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Engineering",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/0093f965957eceddf5604daf41ea9ae7a48ab245
|
[
"Computer Science"
] | 0.889887
|
A Fully Privacy-Preserving Solution for Anomaly Detection in IoT using Federated Learning and Homomorphic Encryption
|
0093f965957eceddf5604daf41ea9ae7a48ab245
|
Inf. Syst. Frontiers
|
[
{
"authorId": "2166504589",
"name": "Marco Arazzi"
},
{
"authorId": "1706945",
"name": "S. Nicolazzo"
},
{
"authorId": "1840213",
"name": "Antonino Nocera"
}
] |
{
"alternate_issns": null,
"alternate_names": null,
"alternate_urls": null,
"id": null,
"issn": null,
"name": null,
"type": null,
"url": null
}
|
Anomaly detection for the Internet of Things (IoT) is a very important topic in the context of cyber-security. Indeed, as the pervasiveness of this technology is increasing, so is the number of threats and attacks targeting smart objects and their interactions. Behavioral fingerprinting has gained attention from researchers in this domain as it represents a novel strategy to model object interactions and assess their correctness and honesty. Still, there exist challenges in terms of the performance of such AI-based solutions. The main reasons can be alleged to scalability, privacy, and limitations on adopted Machine Learning algorithms. Indeed, in classical distributed fingerprinting approaches, an object models the behavior of a target contact by exploiting only the information coming from the direct interaction with it, which represents a very limited view of the target because it does not consider services and messages exchanged with other neighbors. On the other hand, building a global model of a target node behavior leveraging the information coming from the interactions with its neighbors, may lead to critical privacy concerns. To face this issue, the strategy proposed in this paper exploits Federated Learning to compute a global behavioral fingerprinting model for a target object, by analyzing its interactions with different peers in the network. Our solution allows the training of such models in a distributed way by relying also on a secure delegation strategy to involve less capable nodes in IoT. Moreover, through homomorphic encryption and Blockchain technology, our approach guarantees the privacy of both the target object and the different workers, as well as the robustness of the strategy in the presence of attacks. All these features lead to a secure fully privacy-preserving solution whose robustness, correctness, and performance are evaluated in this paper using a detailed security analysis and an extensive experimental campaign. Finally, the performance of our model is very satisfactory, as it consistently discriminates between normal and anomalous behaviors across all evaluated test sets, achieving an average accuracy value of 0.85.
|
https://doi.org/10.1007/s10796-023-10443-0

# A Fully Privacy-Preserving Solution for Anomaly Detection in IoT using Federated Learning and Homomorphic Encryption

**Marco Arazzi¹ · Serena Nicolazzo² · Antonino Nocera¹**

Accepted: 17 October 2023
© The Author(s) 2023

**Abstract**

Anomaly detection for the Internet of Things (IoT) is a very important topic in the context of cyber-security. Indeed, as the pervasiveness of this technology is increasing, so is the number of threats and attacks targeting smart objects and their interactions. Behavioral fingerprinting has gained attention from researchers in this domain as it represents a novel strategy to model object interactions and assess their correctness and honesty. Still, there exist challenges in terms of the performance of such AI-based solutions. The main reasons can be alleged to scalability, privacy, and limitations on adopted Machine Learning algorithms. Indeed, in classical distributed fingerprinting approaches, an object models the behavior of a target contact by exploiting only the information coming from the direct interaction with it, which represents a very limited view of the target because it does not consider services and messages exchanged with other neighbors. On the other hand, building a global model of a target node behavior leveraging the information coming from the interactions with its neighbors may lead to critical privacy concerns. To face this issue, the strategy proposed in this paper exploits Federated Learning to compute a global behavioral fingerprinting model for a target object, by analyzing its interactions with different peers in the network. Our solution allows the training of such models in a distributed way by relying also on a secure delegation strategy to involve less capable nodes in IoT. Moreover, through homomorphic encryption and Blockchain technology, our approach guarantees the privacy of both the target object and the different workers, as well as the robustness of the strategy in the presence of attacks. All these features lead to a secure fully privacy-preserving solution whose robustness, correctness, and performance are evaluated in this paper using a detailed security analysis and an extensive experimental campaign. Finally, the performance of our model is very satisfactory, as it consistently discriminates between normal and anomalous behaviors across all evaluated test sets, achieving an average accuracy value of 0.85.

**Keywords** Internet of Things · Federated Learning · Blockchain · Autonomy · Reliability · Machine Learning · Privacy · Homomorphic Encryption

These authors contributed equally to this work.

Corresponding author: Antonino Nocera ([email protected])

Extended author information available on the last page of the article.

### 1 Introduction

The massive distribution of smart and interconnected devices is making us spectators and actors, at the same time, of a new world of application scenarios inside the Internet of Things (IoT, hereafter). However, as the pervasiveness and autonomy of smart things grow, cyber attacks are becoming more and more dangerous and complex (Adat et al., 2018), demanding security approaches based on increasingly refined and sophisticated techniques.
This crucial aspect has to be tackled because security and privacy concerns act as inhibitors of this market's future expansion and evolution (Al-Sarawi et al., 2020).

A recent solution to make IoT more robust to possible security threats and misuse is the computation of device _fingerprints_, used to detect object anomalies caused by attacks, hardware deterioration, or malicious software modifications (Sánchez et al., 2021). Previous strategies in this context leveraged features derived from device information (i.e., device name, device type, manufacturer information, serial number, and so forth) and other basic networking data to model the identity of an IoT node (Oser et al., 2018; Kohno et al., 2005). More recent approaches, based on Machine Learning (ML, hereafter) and Deep Learning (DL, hereafter) techniques, aim at modeling a complete profile of a thing, composed not only of device and network information but also of the hidden and unique patterns in the behavior that a node reveals when it interacts with other peers. This so-called behavioral fingerprint is more difficult to forge by a malicious adversary, increasing the probability of detecting potential misbehavior that may arise due to cyber attacks, system faults, or misconfigurations (Aramini et al., 2022; Bezawada et al., 2018; Ferretti et al., 2021; Celdrán et al., 2022).

Most of the approaches based on behavioral fingerprinting fall into two different groups. The first set is composed of centralized solutions in which a single hub is in charge of training and executing ML algorithms to assess the fingerprint of all the devices of the network. Therefore, due to the use of end-to-end encryption, these solutions cannot take into consideration features obtainable by analyzing private message payloads exchanged between every pair of nodes (Hamad et al., 2019; Miettinen et al., 2017). A second group consists of distributed approaches in which a comprehensive profile can be built, but only from a single node's point of view (i.e., the ML model is trained and executed by a node, based on its direct interactions with a target node) (Aramini et al., 2022; Ferretti et al., 2021).

To overcome these limitations, in this paper, we face the challenge of designing a global model for behavioral fingerprinting considering the information from multiple nodes without centralizing the solution in a single super-node. To do so, we leverage the novel paradigm of Federated Learning (FL, for short) (Yang et al., 2019). Generally, FL is a distributed collaborative AI approach that allows the training of models through the coordination of multiple devices with a central server, acting as an aggregator, without the need to share the actual datasets (Nguyen et al., 2021).

In particular, in an IoT scenario, an aggregator can coordinate multiple objects, called workers, to perform neural network training. The main steps can be summarized as follows. First, the aggregator initializes a shared global model with random parameters and broadcasts it to the worker nodes. Secondly, for several iterations, each worker computes its individual model update, leveraging its local dataset. Once the gradient is computed, the aggregator receives all model updates and combines them into an aggregated global model. Finally, this global update is downloaded by the workers to compute their next local update.
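As a concrete illustration, this loop can be sketched in a few lines. The toy linear model, the FedAvg-style weighted averaging, and all names below are our own assumptions rather than details fixed by this paper:

```python
# Schematic federated loop: broadcast -> local update -> aggregate -> repeat.
import numpy as np

def local_update(global_weights, local_data, lr=0.1):
    """One worker's gradient step on its private dataset (toy linear model)."""
    X, y = local_data
    grad = 2 * X.T @ (X @ global_weights - y) / len(y)
    return global_weights - lr * grad

def aggregate(updates, sizes):
    """Aggregator combines local models, weighted by local dataset size."""
    total = sum(sizes)
    return sum(w * (n / total) for w, n in zip(updates, sizes))

rng = np.random.default_rng(0)
true_w = np.array([1.5, -2.0])
workers = []
for _ in range(3):  # three workers, each with a private local dataset
    X = rng.normal(size=(50, 2))
    workers.append((X, X @ true_w + rng.normal(scale=0.1, size=50)))

global_w = np.zeros(2)      # aggregator initializes the shared model
for epoch in range(100):
    updates = [local_update(global_w, d) for d in workers]
    global_w = aggregate(updates, [len(d[1]) for d in workers])

print("recovered weights:", global_w.round(2))
```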
The steps above are\nrepeated until the global training is complete.\nIn our paper, we apply this approach to an IoT scenario\nin which devices with different computational capabilities\ncan cooperate. In particular, the worker devices, in charge of\ntraining local ML models, should be powerful devices with\nsufficient computational capability, memory, and stability.\nThe role of the aggregators, instead, is distributed among\nmultiple devices that can have high or medium computational\n\n## 1 3\n\n\ncapabilities. Observe that, each aggregator collects information from workers to create a global model for one or more\ntargets, but a target node can have only one aggregator. In\nthis way, FL can be simply applied to an IoT environment\nin the form of a “distributed aggregation” architecture, that\ninvolves multiple aggregation servers receiving local learning model updates from their associated devices (Khan et al.,\n2021).\nThis approach presents several points of strength. First off,\nglobal behavioral fingerprints can be computed for a target\nnode by considering aspects captured and modeled by all its\npeers. This strategy allows for enhanced learning accuracy\nrates. Approach scalability is also improved due to the distributed learning nature of FL. Moreover, the raw data are not\nrequired for the training on the aggregator side, thus minimizing the leakage of sensitive information to external third\nparties.\nHowever, the application of this strategy can introduce\nfurther privacy concerns arising from the exposure of sidechannel information. For instance, all the workers involved\nin the learning task would expose their interactions with the\ntarget, and the aggregator would know the identity of the\nmonitored objects.\nIn this paper, we try to face this further issue by designing a Secure Multi-party Computation (SMC, for short)\nscheme based on Homomorphic Encryption (HE, for short)\nanditsproperties.Unlikeconventionalencryptionalgorithms\nsuch as Advanced Encryption Standard (AES) or RivestShamir-Adleman (RSA), HE has been designed to perform\noperations over encrypted data (Gentry, 2009), proving endto-end IoT dataflow privacy. In general, HE has been applied\nto IoT scenarios to securely store data in public clouds, where\ncomputations, such as the training and execution of ML algorithms, can be performed without deciphering and accessing\nthe user’s data (Kim et al., 2018). In our approach, we make\nuse of HE during a safe starting phase. We assume that this\nphase has a sufficient duration to gather enough data to train\nML models in an environment in which the target node is\nfree from possible attacks. Specifically, the main steps of\nthis stage can be summarized as follows.\nEvery node with sufficient computation capability to train\nan ML model contacts the target node (for which it wants to\ncompute the behavioral fingerprint) to exchange a message\ncontainingthenecessaryidentifierparametersencryptedwith\na homomorphic hash function.\nAfter this step, the worker nodes query the Blockchain\nto discover the identity of the aggregator node for the considered target. In our solution, we leverage Blockchain and\nsmart contracts technology for a number of tasks to make\nit fully distributed. 
In particular, Blockchain is exploited to\nimplement a reputation mechanism to: (i) monitor aggregator nodes at a global level and (ii) store malicious nodes’\ninformation resulting from the application of our strategy.\n\n\n-----\n\nTo achieve this goal, our approach leverages a consolidated\npractice, indeed, Blockchain smart contracts are already\nbeing used to control and secure IoT devices (Christidis &\nDevetsikiotis, 2016; Khan & Salah, 2018), and, in addition,\nlightweight adaptations of a Blockchain have been designed\ntosupportresource-constrainedsmartthings(Corradinietal.,\n2022). As for the reputation mechanism, although this function is orthogonal to our approach, several proposals can be\nused to provide forms of trust in an IoT network (Corradini\net al., 2022; Dedeoglu et al., 2019; Pietro et al., 2018). Nevertheless, in our solution, we adapt an existing schema by\nallowing nodes to assign a trust score (i) to their peers based\non the analysis of their behavior through the proposed behavioral fingerprinting model, and (ii) to an aggregator according\nto its performance during the training phase.\nWith that said, leveraging information exchanged through\na refined use of HE properties, worker nodes can identify a\ncommon aggregator and, this last can, then, group together\nthe ones with common learning tasks. In our solution, the\nsteps above are carried out by maintaining private all the side\ninformation, as a matter of fact, to realize a fully privacypreserving solution, neither the aggregator must know the\nidentity of the target node, nor the different workers should\nknow each other. Finally, as stated before, in our heterogeneous IoT environment all these devices, even less powerful\nones, can benefit from our approach by delegating several\ntasks of our schema to more capable devices. In our strategy,\nalso this additional facility must be privacy-preserving.\nThe outline of this paper is as follows. In Section 2, we\nillustrate the literature related to our approach. In Section\n3, we give a general overview of our reference IoT model\nand describe the proposed framework in detail. In Section\n4, we analyze our security model. In Section 5, we present\nthe set of experiments carried out to test our approach and\nshow its performance. Finally, in Section 6, we discuss the\nlimitations of our paper, draw our conclusions, and present\npossible future works related to our research efforts. In the\nfollowing, we list the main challenges faced and describe the\ninsightful contributions provided.\n\n#### 1.1 Challenges and Contribution\n\nAs described above, the challenges faced by our proposal and\nits main contributions are numerous and we can summarize\nthem as follows:\n\nDynamic threat landscape. IoT devices are constantly\n\n updated and released. Nevertheless, vulnerability exploitation is developed at a similarly high pace. This makes the\nthreats against this context highly dynamic and difficult\nto foresee. We tackle this issue by proposing a behavioral fingerprinting model able to monitor the hidden and\nunique patterns of the behavior of a node in a network.\n\n\nThis tailored countermeasure appears suitable for a constantly changing attack surface.\nIncrease security. We improve the accuracy of behav\n ioral fingerprinting models by building a comprehensive\nobject profile. 
Indeed, adopting a solution based on FL\nallows us to evaluate the behavior of an object across different services and leverage the interaction with multiple\npeers.\nSolution scalability. Scalability is an issue that affects\n\n various aspects of behavior monitoring approaches, especially in the context of IoT. We face this problem by\nadopting a FL strategy aiming at distributing the monitoring tasks across the nodes of the network.\nLack of interaction data. IoT devices generate traffic by\n\n infrequent user interactions. FL strategy empowers nodes\nwith global models generated from the aggregation of\ndifferent contributions.\nAutonomy. The IoT scenario demands a growing num\n ber of tasks carried out without the need for human\nintervention. We leverage Blockchain and smart contract\ntechnology for several steps in our approach to distribute\nthe computation and increase object autonomy.\nPrivacy of data. IoT devices exchange sensitive informa\n tion, hence the privacy aspects related to behavioral data\nandcorrespondingmodelsplayakeyrole.WeadoptFLto\nsecure data during the training of behavioral fingerprinting models. More importantly, we take a step forward in\nmaintaining the private identity of target nodes and workers leveraging a homomorphic encryption-based strategy.\nIoT device heterogeneity. Many IoT devices have lim\n ited capabilities in terms of available memory, computing\nresources, and energy and, therefore, they are not capable\nof performing complex algorithms. Through our secure\ndelegation solution also less capable devices can benefit\nfrom our approach in a privacy-preserving way.\n\n### 2 Related Works\n\nWith the growing complexity and pervasiveness of IoT-based\nsolutions, the surface and the impact of possible attacks\nagainst this scenario are increasing as well (Hassija et al.,\n2019; Li et al., 2015). In the last years, researchers have\nstudied novel countermeasures to the most disparate type\nof threats to IoT devices (Buccafurri et al., 2016; Kozlov\net al., 2012; Sicari et al., 2016; Tweneboah-Koduah et al.,\n2017), and the latest ones are involving also Machine Learning and Deep Learning techniques (Al-Garadi et al., 2020;\nCauteruccio et al., 2019). In this context, a recent trend is\nto develop ML and DL algorithms to model peculiar characteristics of target objects to detect compromised devices\nwithin a network. The ensemble of these features, that an\nIoT device possesses and reveals when it interacts with other\n\n## 1 3\n\n\n-----\n\nobjects over a network, represents the so called fingerprint.\nClassical device fingerprinting comprehends soft identities,\nsuch as: device name, device type, manufacturer information, serial number, network address, and other features that\ncan be derived from different types of networking information. For instance, the authors of (Oser et al., 2018) identified\n19 features that can be used to assess the security level of\nan object directly from the data-link header of 802.11 messages. Also physical layer information is used, for instance,\nthe work illustrated in (Radhakrishnan et al., 2014) focuses\non the analysis of the physical aspects of devices, like interarrival times of different packets, to fingerprint them. An\nevolution of such an approach that cannot be very easily\ncloned by a malicious adversary, is represented by behavioral fingerprinting (Aramini et al., 2022; Bezawada et al.,\n2018; Celdrán et al., 2022; Ferretti et al., 2021). 
This type of\ntechnique leverages application-level information to extract\nfeatures concerning the interaction among the devices and,\nhence, their networking behavior. In particular, in (Bezawada et al., 2018) the authors leverage a number of features\nextracted from the network traffic of the device to train an\nML model that can be used to detect similar device types. The\nwork presented in (Celdrán et al., 2022) illustrates a detection\nframework that applies device behavioral fingerprinting and\nML to detect anomalies and classify different threats, such as:\nbotnets, rootkits, backdoors, and ransomware affecting real\nIoT spectrum sensors. As for the work presented in (Aramini\net al., 2022), it describes an enhanced behavioral fingerprinting model consisting of a fully decentralized scenario, where\nit is possible to exploit the features derived from the analysis\nof packet payloads (for instance, different types of devices\nand their traffic characteristics) and message content as well.\nStill, there exist challenges in terms of the performance of\nML-based fingerprinting solutions able to detect a forged or\ncorrupted smart thing in the network. The causes are related\nto scalability, security, and privacy issues and also to the fact\nthat an object can model the behavior of another object concerning its single point of view (i.e., the ML algorithm used\nis thought to evaluate only the services and messages from\nthe interaction of the two things) (Sánchez et al., 2021).\nHence, a new perspective that can comprehend the whole\nbehavior of an object is demanding. Moreover, classical ML\ntechniques require centralized data collection and processing that may not be feasible in IoT application scenarios\ndue to the high scalability of modern IoT networks, growing data privacy concerns, and heterogeneity of devices. To\nface these issues and allow a collaborative ML approach,\nFederated Learning (Khan et al., 2021; Nguyen et al., 2021;\nYang et al., 2019) solutions have emerged with the aim of\ndistributing ML algorithm execution without the need for\ndata sharing. For instance, (Rey et al., 2022) shows a framework that uses FL to detect malware affecting IoT devices\nusing multi-layer perceptron and autoencoder neural net\n## 1 3\n\n\nwork architectures. Whereas the authors of (Preuveneers\net al., 2018) studied FL to design an intrusion detection\nsystem. This work also includes Blockchain technology to\nmitigate the problems faced in adversarial FL, however it\ndoes not focus specifically on IoT devices. Also the authors\nof (Nguyen et al., 2019) used FL, their aim is to build a\ndistributed system for detecting compromised IoT devices\nthrough an anomaly detection-based approach. It consists of\na simple fingerprint of the device based on network packets\nable to monitor changes caused by network attacks. All the\nabove works exploit FL for a different goal concerning ours.\nTo the best of our knowledge, no previous works have used\nFL for behavioral fingerprinting computation.\nTill now we described how the problem of scalability and\nperformances of behavioral fingerprinting computation can\nbe faced through FL. 
But other challenges arise in this new\nIoT scenario, for instance, the privacy of data exchanged by\nthings.\nTo face the risk of privacy leakage of sensitive information in the IoT caused by the centralized servers’ architecture\nand the weakness and heterogeneity of devices and security\nprotocols, researchers have begun to exploit the potentiality\nof Homomorphic Encryption (Peralta et al., 2019; Shrestha\n& Kim, 2019). For instance, the work presented in (Peralta\net al., 2019) shows a possible application of HE to perform\ncomputations in the cloud maintaining data privacy, and it\nalso reviews a number of challenges in this context, such as\ncomputational cost and lack of interoperability, which will\nrequire further research efforts. However, recently, research\nadvances have made it possible to implement practical homomorphiccryptosystems,atleastinMobileenvironments(Ren\net al., 2021; Shafagh et al., 2017). In particular, the encryption primitive used is the hash function and the operation\nwe exploit is XOR. Homomorphic Hashing, first introduced\nby Bellare, Goldreich, and Goldwasser (Bellare et al., 1994)\nhas been used for disparate application scenarios (Kim &\nHeo, 2012; Lewi et al., 2019; Yao et al., 2018). In particular, (Kim & Heo, 2012) proposes a device authentication\nprotocol for smart grid systems based on the properties of\nthis function to decrease the amount of computation on\na smart meter. Whereas, the approach presented in (Yao\net al., 2018) proposes a homomorphic hash and Blockchainbased authenticated key exchange in the context of social\nnetworks. Facebook researchers design a scheme based on\nHomomorphic Hashing to secure update propagation in the\ncontext database replication, ensuring consistency (Lewi\net al., 2019).\nIn our approach, we leverage the properties of Homomorphic Hashing, in particular, related to the XOR operation,\nto allow the aggregator node, during the safe starting phase\nof our framework design, to identify groups of objects able\nto compute the device fingerprint of a target object, without\nrevealing the identity of the target object itself. To the best\n\n\n-----\n\nof our knowledge, the way we design this algorithm is novel\nand has never been used before.\nA novel research direction to monitor the behavior of\nobjects in IoT networks in a distributed way and provide\nsome forms of trust or authentication is Blockchain (Ali et al.,\n2021; Chen et al., 2022; Dedeoglu et al., 2019; Hammi et al.,\n2018; Nofer et al., 2017; Pietro et al., 2018). In particular, the\nauthors of (Pietro et al., 2018) present a framework based on\nthe concept of Islands of Trust, that are portions of the IoT\nnetwork where trust is managed by both a full local PKI and\na Certification Authority. Service Consumers generate transactions forming an Obligation Chain first locally accepted\nby Service Providers and, then, shared with the rest of the\nnetwork. Also the work presented in (Hammi et al., 2018)\nexploits a similar concept of secure virtual zones (called bubbles) obtained through Blockchain technology, where objects\ncan identify and trust each other. Both the work presented in\n(Corradini et al., 2022; Dedeoglu et al., 2019) try to overcome Blockchain limitations proposing a light architecture\nfor improving the end-to-end trust making this technology\nfeasible to limited IoT devices. 
The proposal illustrated in\n(Dedeoglu et al., 2019) leverages some gateway nodes calculating the trust for sensor observations based on some\nparameters, such as: nodes reputation, data received from\nneighboring nodes, and the observation confidence. to compute the trustworthiness of a node, if the neighboring sensor\nnodes are associated with different gateway nodes, then, the\ngateway nodes are in charge of computing and sharing the\nevidence with their neighbors’ gateway nodes. This architecture is not fully distributed and secure delegation is not\nperformed; indeed, more powerful nodes are used as gateways. Whereas the work presented in (Corradini et al., 2022)\ndescribes a framework based on a two-tier Blockchain able\nto provide security and autonomy of smart objects in the\nIoT by implementing a trust-based protection strategy. This\nwork leverages the idea of communities of objects and relies\non a first-tier Blockchain to record transactions evaluating\nthe trust of an object in another one of the same community\nor of a different community. After a certain time interval,\nthese transactions are aggregated and stored in the secondtier Blockchain to be globally available. In our approach the\nuse of Blockchain technology is limited to keeping trace of:\n_(i) the identity of the device in charge to act as an aggrega-_\ntor for a target node; (ii) the evaluation of the behavior of\naggregator after the aggregation task to enable the aforementioned FL approach; and (iii) the identity of objects for the\nanomaly detection task. Hence, differently from the abovecited approaches, the core of the strategy is not performed\nthrough Blockchain.\nAnother functionality provided by this paper is the possibility for the less capable devices to benefit and participate in\nour FL approach through secure delegation. 
This algorithm has been mentioned in the H2O framework (Ferretti et al., 2021), without a detailed implementation being developed for it. Thanks to this paradigm, the training and inference phases of our model can be carried out through a privacy-preserving collaborative delegation approach in which powerful devices cooperate and provide support to less powerful ones to implement the solution without revealing the features of the model.

In the following, we summarize the comparison with the most important works introduced above based on the different functionalities provided by our approach, namely:

- _Anomaly Detection_: the capability to identify action sequences that deviate significantly from the expected behavior.
- _Reputation Model_: a functionality that allows a node in the network to compute a reliability score of another node based on trust values and according to its neighbors' opinion, even if they have not been in contact before.
- _Privacy_: the implementation of measures and strategies to protect the identity of the node during the computation of behavioral fingerprint models.
- _Secure Delegation_: a mechanism allowing devices to delegate tasks to more capable peers, while preserving the privacy of the involved nodes' identity.

With the letter 'x' we denote that the corresponding property is provided by the cited paper (Table 1).

**Table 1 Comparison of our approach with related ones**

| Approach | Approach Type | Anomaly Detection | Reputation Model | Privacy | Secure Delegation |
|---|---|---|---|---|---|
| (Bezawada et al., 2018; Celdrán et al., 2022; Oser et al., 2018; Radhakrishnan et al., 2014) | Fingerprint | x | - | - | - |
| (Aramini et al., 2022) | Fingerprint | x | - | - | x |
| (Preuveneers et al., 2018; Rey et al., 2022) | FL, Blockchain | x | - | - | - |
| (Nguyen et al., 2019) | Fingerprint, FL, Blockchain | x | - | - | - |
| (Kim & Heo, 2012) | HE | x | - | x | - |
| (Yao et al., 2018) | HE, Blockchain | x | - | x | - |
| (Dedeoglu et al., 2019; Hammi et al., 2018; Pietro et al., 2018) | Blockchain | x | x | - | - |
| (Ferretti et al., 2021) | Fingerprint, Consensus | x | x | - | - |
| Our approach | FL, Fingerprinting, Consensus, Delegation, HE | x | x | x | x |

### 3 Description of Our Approach

This section is devoted to the description of our proposal. In particular, in the next subsections, we provide a general overview of our approach along with its underlying model; we illustrate our Secure Multi-party Computation strategy to form groups of co-workers for an FL task; after that, we detail our FL-based behavioral fingerprinting solution; finally, we sketch the adaptation of an existing reputation model to our scenario.

#### 3.1 General Overview

This section details the architectural design of our FL-based approach. In particular, we describe the system actors and how they interact with each other during the model training and evaluation processes. Table 2 reports all the abbreviations and symbols used throughout this paper.

**Table 2 Summary of the main symbols and abbreviations used in our paper**

| Symbol | Description |
|---|---|
| FL | Federated Learning |
| SMC | Secure Multi-party Computation |
| HE | Homomorphic Encryption |
| N | The set of IoT nodes of the network |
| N_l | The set of basic devices, a subset of N |
| N_m | The set of devices with medium computation power, a subset of N |
| N_p | The set of powerful devices, a subset of N |
| \|N\| | Cardinality of the set of IoT nodes |
| n_i | An IoT device of N |
| c_i | A worker device of N |
| id_{c_i} | The id of a worker device c_i |
| b | A target node |
| a_b | An aggregator node for b |
| id_{a_b} | The id of an aggregator a_b |
| Γ_b | List of workers training a model on b |
| Γ_n | The set of neighbor nodes of n |
| H | Homomorphic hash function |
| η, ξ | Nonces |
| t | The size of a sequence of input symbols of the deep learning model |
| d_i | Delegate node of c_i for a task |
| th_w | Threshold for mispredicted symbols |
| T_{c_i,b} | Trust score assigned by a node c_i towards a target node b |
| FP_{w_b} | Behavioral fingerprinting function of b during an observation window w_b |
| R_b^ω | Reputation of b after each time period ω |
| τ | Tolerance value |
| m | A generic Machine Learning evaluation metric |
| φ_ban | Ban interval |

As typically done in the literature, our model for the considered IoT scenario is based on a directed graph G = ⟨N, E⟩, where N is the set of nodes and E is the set of edges representing relationships between pairs of nodes. In particular, a link is built if two nodes got in touch in the past by exchanging one or more messages. Observe that the direction of the link identifies the node that starts the communication during the message exchange. The group of peers a node n_i has been interacting with is the set of neighbors of n_i and can be defined as Γ_{n_i} = {n_j ∈ N : (n_i, n_j) ∈ E}.

Moreover, in our model, N is partitioned into three subsets according to the different object capabilities, thus resulting in N = N_p ∪ N_m ∪ N_l. The subset of powerful devices N_p includes all the devices with sufficient capabilities in terms of memory and computational strength to perform the more demanding tasks of our approach (e.g., the training of ML/DL models). The second set, N_m, is composed of devices with medium computational and memory capabilities, due to their battery constraints or power stability. The last set, N_l, is composed of less capable nodes with basic functionalities. Since they have limited computational power, they can rely on delegation to more powerful nodes to participate in our framework.

As stated in the Introduction, the proposal described in this paper focuses on the computation of behavioral fingerprinting models via FL. To do so, our strategy assumes the existence of an initial phase, called the safe starting phase, in which several actors can train ML/DL models to learn the behavior of target nodes in an environment free from possible attacks on these targets (i.e., no attacks are performed on any involved target node capable of altering its behavior). During this phase, IoT nodes can play one of the following roles:

- _Worker_. It is in charge of training a local behavioral fingerprinting model of a target node. Since training such a model is the most demanding task in our solution in terms of computational and memory capability, these nodes belong to the N_p set.
- _Aggregator_. They are in charge of aggregating the local contributions of the different workers of an FL task to compute a global model for a target. This task is less computationally demanding than the previous one, hence it can be taken over by nodes belonging to N_p ∪ N_m (see Section 5.2 for details on the performance).
- _Target_.
They are the monitored nodes for which the_\n\n behavioral fingerprinting has to be computed. There are\nno requirements in terms of computational power for\nthem, hence they can belong to any subset of nodes\ndefined above (N _p ∪_ _Nm ∪_ _Nl_ ).\n\nDuring this phase, less and medium-capable nodes belonging\nto Nm _Nl can participate in the scheme leveraging a secure_\n∪\ndelegation approach. In particular, they can entrust nodes in\n_N_ _p to carry out actions on their behalf exchanging data in a_\nprivacy-preserving way. The details of this task are described\nin Section 3.3.\nSubsequently, in the fully operational phase, also referred\nto as inference phase, learned models are used by all the\n\n\nactors to infer possible anomalies on the monitored targets.\nThis phase is less impacting than the training one in terms of\ncomputational requirements, hence all the objects belonging\nto N _p_ _Nm can actively participate in this phase. It is worth_\n∪\nnoting that, also during this phase, less capable nodes belonging to Nl can entrust nodes in N _p_ _Nm for the inference_\n∪\nof behavioral fingerprinting models, through the aforementioned secure delegation strategy.\nThe last actor of our approach is the Blockchain. This technology provides a shared ledger to record trusted information\naccessible to all the nodes over the network. In particular, we\nleverage smart contracts running on the Blockchain to automatically execute predefined actions when certain conditions\nare met. Since smart contracts are stored on the Blockchain,\ntheir code and execution history are visible to all participants\nin the network enhancing transparency in transactions. In\nparticular, we leverage this paradigm to keep track of several\naspects, namely:\n\nThe information necessary to discover the identity of\n\n aggregators for target nodes. In our approach, neither\nthe workers know each other nor the aggregator knows\nthe identity of the target. For these reasons, we design\nour framework to include Blockchain technology, thus\nremoving the need of a trusted central authority or counterpart to keep information private.\nThe trust scores assigned by workers to estimate the reli\n ability of an aggregator. As a matter of fact, the use\nof Blockchain for this task enhances trust and prevents\nmanipulation of scores. Through smart contracts’, code is\nexecuted automatically to compute these complex measures starting from trust scores.\nThe identity of corrupted objects resulting from the mon\n itoring activity of nodes owing behavioral fingerprint\nmodel towards target peers. Once our anomaly detection framework has detected a change in the behavior of\na node, it is important to publish this information in an\nimmutable and trusted ledger accessible by every node\nof the network.\n\nFigure 1 shows the general architecture of our solution\nillustrating the different actors of the model. In particular,\n_c1, c2, c3 are three worker nodes, b is the target node, and_\n_ab is the aggregator for b. The right part of this figure shows_\nthe Blockchain exploited during a number of steps of our\napproach. It is worth noting that, the interactions between the\naggregator and the workers take place only during the safe\n_starting phase to train the behavioral fingerprinting model_\nof the target. In the subsequent phase, nodes communicate\nwith each other and can leverage both trained models and\nthe information stored in the Blockchain to evaluate the\nbehavior of a contact. 
It is worth observing that, in our scenario, an anomaly in the behavior of a node can be caused\n\n## 1 3\n\n\n-----\n\n**Fig. 1 The general architecture**\nof our solution\n\nby either a hardware malfunction, an environmental change,\nor an ongoing cyber attack. For the estimation of a change\nin the observed node behavior, a true positive will be signaled if the number of unexpected actions as predicted by our\nmodels exceeds a certain threshold. This happens also in the\ncase of some external causes (like environmental changes).\nMoreover, our strategy leverages a mechanism to estimate\ntrust scores on the basis of the detected behavioral anomalies\nand compute nodes’ reputations. If the reputation of a node,\ncomputed by aggregating all the trust contributions towards\nit, goes under a reference threshold, it will be isolated by\nthe other peers and, therefore, it is technically banned from\nthe system (for, at least, a time φban). At this point, system\nadministrators can decide to restore the node or retrain its\nbehavioral fingerprinting models, especially if the external\ncause is known and under control.\nIn the next sections, we will describe our approach in\ndetail.\n\n#### 3.2 A Secure Multi-Party Computation Strategy to Identify Federated Learning Co-Workers\n\nThis section is devoted to the definition of a privacypreserving strategy to identify the correct aggregator for a\nspecific target and, hence, define groups of workers that can\ncollaborate on an FL task. As said above, in our approach,\neach FL task is focused on the construction of a behavioral\nfingerprinting model for a target node of the network.\nIn practice, given a target node b, the above reasoning\ninvolves two actions that must be carried out to configure the\nFL task: (i) the identification of the aggregator for a target\nnode, and (ii) the creation of the group of workers for the\n\n## 1 3\n\n\nsubsequent training task. It is worth noting that these tasks\nare performed by keeping the identities of the involved actors\nprivate. To do so, we develop a privacy-preserving strategy\nfor group formation and identity exchange based on a Secure\nMulti-party Computation (SMC) strategy.\nIt is important to underlying that, as stated above, the\nactions above are performed during a safe starting phase,\nin which no attacks occur against the target b. We assume\nthat such a phase is admissible and, typically, it can coincide\nwith the system setup period or any subsequent maintenance\naction involving b.\nGiven a node ci _N_ _p aiming at learning the behavioral_\n∈\nfingerprints of b. Let idci be the identifier of ci, and let η be\na private nonce generated by b. Finally, let _() be a homo-_\n_H_\nmorphic hash function preserving the XOR operation (Lewi\net al., 2019). Our solution would enforce the following steps.\nFirst, ci contacts b to exchange a message containing\ninformation about idci and a nonce generated by b, say η.\nA suitable payload is generated by b crafting the identifier of\n_ci and η, through a bitwise XOR operation. The result of the_\nXOR operation is transformed by b using the homomorphic\nhash function, thus obtaining the final payload H(idci ⊕ _η)._\nAfter receiving the first contact from ci, b proceeds by\nidentifying its referring aggregator. In our scenario, any node\nof N _p_ _Nm can play the role of the aggregator, provided_\n∪\nthat it is associated with a sufficient trust score. 
The details concerning the trust mechanism are reported in Section 3.4. In any case, the eligible aggregators, along with their trust scores, are stored in the underlying Blockchain. Once b has identified its aggregator a_b, it will create a new transaction in the Blockchain to publish this information. However, our solution requires that the association between b and a_b can only be disclosed by b to the nodes it wishes to involve in the subsequent FL task. This confers to b the capability of filtering out unwanted workers from the learning task of its behavioral fingerprinting model. To do so, b computes a secret by applying again the homomorphic hash function to a payload composed of the bitwise XOR between the public identifier of the selected aggregator id_{a_b} and its private nonce η. Consequently, the public transaction on the Blockchain generated by b does not store the plain identifier of its aggregator, but the secret H(id_{a_b} ⊕ η).

At this point, when c_i wants to gather the identity of the aggregator selected by b, it will retrieve the transaction generated by b from the Blockchain, containing H(id_{a_b} ⊕ η), and it will carry out the following computation. First, it performs a bitwise XOR operation over: (i) the hash received from the target, namely H(id_{c_i} ⊕ η); (ii) H(id_{a_b} ⊕ η); and (iii) the hash of its own identity H(id_{c_i}). For the properties of homomorphic hashing with respect to the XOR operation, we have the following equation:

$$
\begin{aligned}
H(id_{c_i} \oplus \eta) \oplus H(id_{a_b} \oplus \eta) \oplus H(id_{c_i})
&= H(id_{a_b} \oplus \eta) \oplus H(\eta \oplus id_{c_i} \oplus id_{c_i}) \\
&= H(id_{a_b} \oplus \eta) \oplus H(\eta) \\
&= H(id_{a_b} \oplus \eta \oplus \eta) \\
&= H(id_{a_b})
\end{aligned}
\tag{1}
$$

**Algorithm 1 Discovering the aggregator identity**

Data: c_i ∈ N_p, b ∈ N, a_b ∈ N_p ∪ N_m — worker node, target node, aggregator node for b;
η, H — nonce of b, homomorphic hash function for XOR;
L = {id_x : x ∈ N} — list of the aggregator identifiers in the Blockchain;
H(id_{a_b} ⊕ η) — secret for the aggregator of the target node in the Blockchain.
Result: id_{a_b}

c_i contacts b;
c_i receives H(id_{c_i} ⊕ η) from b;
c_i computes H(id_{c_i});
c_i computes Ĥ(id_{a_b}) = H(id_{c_i} ⊕ η) ⊕ H(id_{a_b} ⊕ η) ⊕ H(id_{c_i});
foreach id_x ∈ L do
    c_i computes H(id_x);
    if H(id_x) == Ĥ(id_{a_b}) then id_{a_b} = id_x;
end

Now, c_i can retrieve from the Blockchain the list of available aggregators. For each identifier in such a list, c_i can apply H() to it and compare the result with the value from the previous computation. The search for the correct aggregator is completed when a match is found. Algorithm 1 summarizes the steps above for the privacy-preserving discovery of id_{a_b}. Observe that the computational complexity of such an algorithm is O(|L|), where |L| is the number of possible aggregators in the system.

After this step, c_i is equipped with the identity of the aggregator a_b for the target b; hence, c_i is ready to contact a_b to notify its intention to train a model for b.

The steps carried out by c_i are repeated by any other node c_j of N_p interested in a model for b. Our solution does not enforce any restriction on the number of FL tasks an aggregator can be involved in. Indeed, as will be shown in Section 5, the computational complexity required for the aggregation is not very high and, therefore, can be easily handled by any node of N_p ∪ N_m.
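To make the algebra above concrete, the following toy sketch implements Algorithm 1 end to end, together with the same-nonce check that the aggregator uses for group identification, described in the next paragraphs. The GF(2)-linear hash `H` below is only a stand-in, chosen because it satisfies H(a ⊕ b) = H(a) ⊕ H(b) by construction; it is trivially invertible and therefore NOT a secure replacement for the homomorphic hashing of (Lewi et al., 2019). All identifiers and sizes are illustrative:

```python
import random

random.seed(42)
BITS = 32
ROWS = [random.getrandbits(64) for _ in range(BITS)]  # public matrix over GF(2)

def H(x):
    """XOR-homomorphic (linear) toy hash: H(a ^ b) == H(a) ^ H(b)."""
    out = 0
    for i in range(BITS):
        if (x >> i) & 1:
            out ^= ROWS[i]
    return out

# --- safe starting phase --------------------------------------------------
id_worker, id_aggregator = 0xC1C1C1, 0xAB00AB   # illustrative identifiers
eta = random.getrandbits(BITS)                  # target's private nonce

secret_for_worker = H(id_worker ^ eta)          # sent by target b to worker
blockchain_entry  = H(id_aggregator ^ eta)      # published by b on-chain
aggregator_list   = [0x111111, id_aggregator, 0x333333]

# --- worker recovers the aggregator identity (Algorithm 1) -----------------
h_ab = secret_for_worker ^ blockchain_entry ^ H(id_worker)   # equals H(id_ab)
found = next(x for x in aggregator_list if H(x) == h_ab)
assert found == id_aggregator

# --- aggregator groups workers sharing the same nonce ----------------------
def same_group(id_i, secret_i, id_j, secret_j):
    """a_b checks whether two workers registered for the same target."""
    h_eta = H(id_i) ^ secret_i            # = H(eta) for worker i
    return h_eta ^ secret_j == H(id_j)    # holds iff both nonces coincide

id_other = 0xC2C2C2
assert same_group(id_worker, secret_for_worker,
                  id_other, H(id_other ^ eta))
```

The final scan over `aggregator_list` matches the O(|L|) complexity noted above: the worker simply hashes each published identifier until it matches the value recovered from the XOR chain.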
However, a_b must identify and synchronize all the nodes related to a specific FL task (i.e., a task dedicated to a given target b). Again, our solution enforces that a_b must not know the identity of b; therefore, the identification of the groups of workers can be performed as shown in Algorithm 2. In particular, given a list of nodes Γ_{a_b} = ⟨c_1, c_2, ..., c_n⟩ that contacted the aggregator a_b, the identification of the groups of workers is done through an iterative algorithm. For each worker c_i ∈ Γ_{a_b}, the aggregator computes the hash of its identity H(id_{c_i}) and performs a bitwise XOR operation with the secret previously received from c_i (i.e., H(id_{c_i} ⊕ η)). Due, once again, to the homomorphic property of the hash function for the bitwise XOR, this results in the following:

$$\Gamma_{c_i} = H(id_{c_i}) \oplus H(id_{c_i} \oplus \eta) = H(id_{c_i} \oplus id_{c_i} \oplus \eta) = H(\eta)$$

Now, for each other node c_j ∈ Γ_{a_b} \ {c_i}, the aggregator performs a XOR operation between Γ_{c_i} and the secret previously received from c_j, say H(id_{c_j} ⊕ η′), thus obtaining:

$$\Gamma_{c_i} \oplus H(id_{c_j} \oplus \eta') = H(\eta) \oplus H(id_{c_j} \oplus \eta') = H(\eta \oplus id_{c_j} \oplus \eta')$$

Now, if η = η′ holds, then the previous computation will be equal to H(id_{c_j}). Since we assumed that different targets always have different nonces (no collision between generated nonces), this result means that c_i and c_j share the same target and, hence, belong to the same working group Γ_b. Observe that a_b can directly compute H(id_{c_j}) for c_j to verify the equality between the result of the computation above and the identifier of c_j. The computational complexity of the group identification algorithm is O(|Γ_{a_b}|), where |Γ_{a_b}| is the number of nodes that contacted a_b for an aggregation task.

The sequence diagram in Fig. 2 summarizes all the steps performed during the safe starting phase of our approach.

**Fig. 2 The sequence diagram of all the FL setup steps performed during the safe starting phase of our solution**

#### 3.3 Distributed Behavioral Fingerprinting via Federated Learning

This section is devoted to the description of the Federated Learning strategy for the computation of behavioral fingerprinting models. Practically speaking, FL is a distributed collaborative machine learning approach that allows algorithm training across multiple decentralized devices holding local data samples without sharing the actual datasets (Konečný et al., 2015).
Recently, this paradigm has been\ninvestigated for building intelligent and privacy-enhanced\nIoT applications (Nguyen et al., 2021; Sánchez et al., 2021).\nAlthough few works leverage this strategy for anomaly\ndetection in IoT, they are focused on building classical device\nfingerprints based on basic parameters, like usage of CPU,\n\n**Algorithm 2 Training groups identification**\n\n**Data: ci ∈** _N_ _p, b ∈_ _N_, ab ∈ _N_ _p ∪_ _Nm_ ; /* node, target\nnode, aggregator node for b */\n_η, H, �ab_ ; /* nonce of b, homomorphic hash\nfunction, set of nodes that contacted\n_ab */_\n_��ab = {H(idc j ⊕_ _η[′]) : c j ∈_ _�ab_ }; /* The set of\nsecrets sent by the nodes of �ab to ab\n*/\n**Result: ab ←** _�b;_ /* List of nodes that will\ntrain a model on b */\n_ci ∈_ _�b;_\n_ci −→_ _H(idci ⊕_ _η) to ab;_\n_ab computes H(idci );_\n_ab computes_\n_H(idci ) ⊕_ _H(idci ⊕_ _η) = H(idci ⊕_ _idci ⊕_ _η) = H(η);_\n**foreach c j ∈** _�ab do_\n\n_ab computes H(η) ⊕_ _H(idc j ⊕_ _η[′]) = H(η ⊕_ _idc j ⊕_ _η[′]);_\n_ab computes H(idc j );_\n**if H(η ⊕** _idc j ⊕_ _η[′]) = H(idc j ) then_\n\n_c j ∈_ _�b_\n**end**\n**end**\n\n\nmemory, and so on (Sánchez et al., 2022, 2021). The novelty\nof our contribution concerns the fact that we aim to construct\na global device behavioral profile taking into account all the\ninteractions over the network, even across different services,\na node may provide.\nConsider, for instance, the example shown in Fig. 3 about\na smart thermostat. This device can detect multiple metrics,\nsuch as the temperature and humidity of the room in which\nit is located; it can connect to other smart devices via Bluetooth or directly to the Internet allowing the owner to monitor\nthe home situation, remotely. Moreover, it can control the\nhome heating system according to the detected temperature.\nFinally, it could also communicate with a central home alarm\nsystem in the case in which a fire or anomaly temperatures\nhave been detected. Hence, this device holds interfaces with\nthe actors it interacts with, providing different services to\neach of them. This means that the communications and the\nmessages it exchanges can be very different according to the\nservice it is providing.\nClassical decentralized behavioral fingerprinting solutions (Aramini et al., 2022; Bezawada et al., 2018; Ferretti\net al., 2021) consider only a single interaction sequence to\nbuild a profile of a target node and they neglect a comprehensive point of view coming from the messages exchanged\nbetween the target and its other neighbors. Hence in the\nexample shown above, the home heating system will build\nan ML model of the thermostat, which will differ from the\none trained by the home alarm system or any other smart\ndevice.\nOur strategy leverages FL to build behavioral fingerprinting models combining the perspectives of different workers\n(neighbors of a target node) in a global profile. Ultimately,\n\n\n## 1 3\n\n\n-----\n\n**Fig. 3 Smart Thermostat**\ninteractions in a domotic\nenvironment\n\nthis would depict the behavior of the target device in a more\ngeneral way.\nNevertheless, the global model is fed with the single interaction sequences, for which we leverage an adaptation of\nthe behavioral fingerprinting solution described in (Aramini\net al., 2022). Observe that, according to our fully distributed\narchitecture, a worker has always access to payload data as\nit is the intended recipient of the communication with the\ntarget. 
Therefore, we can follow the solution described in\n(Aramini et al., 2022), thus including payload-based features in our strategy. These additional features allow also\nfor the protection against cyber-physical attacks, in which an\nattacker tries to jeopardize sensing data to alter the behavior\nof the cyber-physical environment. In addition to payloadbased features, to characterize the behavior of an object this\napproach considers also classical network parameters (i.e.,\nsource port type, TCP flag, encapsulated protocol types, the\ninterval between arrival times of consecutive packets, and\npacket length) altogether with features derived from the payload.Thenitproceedsbymappingthesequenceofexchanged\npackets in a sequence of symbols and leverages a Gated\nRecurrent Unit (GRU) neural network composed of 2 layers of 512 and 256 neurons, respectively, a fully connected\nlayer with size 128, and an output classification layer. The\nchoice of a GRU as the reference model, instead of more\ncomplex architectures (such as LSTM), is due to the need of\nsolving the trade-off between the solution accuracy and the\ncomputational complexity of training behavioral fingerprinting models for IoT nodes. The objective of the deep learning\nmodel is to classify the next symbol given a sequence of input\nsymbols of size t [1].\n\n1 Observe that, the value of t can be fixed based on the dynamicity\nof the object-to-object interactions. In our experiment (see Section 5)\n\n\nIn the remainder of this section, we illustrate how we\napply FL in our approach. In the previous sections, we\nfocused on the description of the setup tasks crucial for\nthe privacy-preserving execution of our scheme, namely: (i)\nthe identification of the aggregator device for a target node,\nand (ii) the creation of groups of workers for FL training\ntask. At this point, since all the roles have been assigned,\nthe aggregator first initializes a global model with random\nlearning parameters. Secondly, each worker gets in contact with the aggregator to receive the current model and,\nafter this step, it computes its local model update. To do so,\neach node leverages its own dataset gathered from the direct\ninteraction sequence with the target node. At each training\nepoch, once the local contribution is computed, the worker\ncan forward it to the aggregator that is in charge of combining all the local model updates and, hence, it constructs an\nenhanced global model with better performance, still ensuring protection against privacy leakages. The last two steps are\nperformed iteratively until the global training is complete.\nFigure 4 sketches the steps described above focusing on\nthe communication between one of the involved workers and\nthe aggregator.\n\n**3.3.1 Leveraging Secure Delegation**\n\nIt is worth observing that, because of the high heterogeneity\nof devices in an IoT network, not all the nodes are equipped\nwith sufficient computational and memory capability to execute the training phase of our approach. Hence, we resort to\na secure delegation mechanism according to which less powerful devices in Nl _Nm can delegate such tasks to powerful_\n∪\n\nfollowing the results described in (Aramini et al., 2022), we set this\nvalue to 10.\n\n## 1 3\n\n\n-----\n\n**Fig. 4 Detailed view of the interaction between a worker and the aggregator during the training of a FL model**\n\n\ndevices in N _p. 
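Before detailing the delegation mechanism, the following PyTorch sketch shows the kind of next-symbol classifier described above: two GRU layers of 512 and 256 units, a fully connected layer of size 128, and a classification output over the symbol vocabulary, fed with windows of t = 10 symbols. The vocabulary size, embedding width, and all names are illustrative assumptions, not the authors' released code:

```python
import torch
import torch.nn as nn

class FingerprintGRU(nn.Module):
    def __init__(self, n_symbols, emb=64):
        super().__init__()
        self.embed = nn.Embedding(n_symbols, emb)
        self.gru1 = nn.GRU(emb, 512, batch_first=True)
        self.gru2 = nn.GRU(512, 256, batch_first=True)
        self.fc = nn.Linear(256, 128)
        self.out = nn.Linear(128, n_symbols)   # classify the next symbol

    def forward(self, seq):                    # seq: (batch, t) symbol ids
        x = self.embed(seq)
        x, _ = self.gru1(x)
        x, _ = self.gru2(x)
        x = torch.relu(self.fc(x[:, -1]))      # last time step only
        return self.out(x)                     # logits over the next symbol

model = FingerprintGRU(n_symbols=40)
batch = torch.randint(0, 40, (8, 10))          # 8 windows of t = 10 symbols
logits = model(batch)
print(logits.shape)                            # torch.Size([8, 40])
# Training would minimize nn.CrossEntropyLoss()(logits, next_symbols).
```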
In the recent literature, some theoretical mod-_\nels and ontologies have been designed for the identification of\nreliable IoT devices for secure delegation, tackling the issue\nof incomplete task requests owned by resource-constrained\nIoT devices (Khalil et al., 2021). Of course, any existing\nsecure delegation strategy could be adopted in our approach.\nHowever, for the sake of completeness, we describe a naive\napproach in which both the training and the subsequent inference phases can benefit from delegation.\nIn particular, in the following, we describe the two scenarios above, separately. We start with the training phase and\nwe consider the situation in which a less capable device, say\n_ci_, is involved as a worker in the construction of a behavioral\nfingerprinting model for a target b. We assume that, due to\nthe lightweight nature of the operations described in Section\n3.2, any node can perform the setup steps for the configuration of the FL task (see the experiments on the performance\nof IoT nodes on these tasks in Section 5.2). In practice, ci can\nexecute both Algorithms 1 and 2 to identify the aggregator\nfor b and become a member of the working group to build its\nbehavioral fingerprinting model. Secure delegating is, hence,\nneeded in the subsequent steps involving the training of the\nlocal ML model.\nAccording to our strategy, given a cryptographic salted\nhash function _(v, s) (Rana et al., 2022), in which v is the_\n_H∫_\nvalue to be hashed and s is the salt, the secure delegation of\nthe training phase requires the following steps:\n\ncollection of interaction packets with the target b;\n\n feature extraction and mapping with the corresponding\n\n symbols (as described before);\npre-processing of the symbol sequence to guarantee pri\n vacy;\nupload of the training set in a shared data bucket linked\n\n in the Blockchain;\nidentification of a trusted delegated node in the network;\n\n interaction with the delegated node to start the training.\n\n \n## 1 3\n\n\nFirst, ci collects a sequence of interaction packets during\nits communication with b. Adopting the approach described\nin (Aramini et al., 2022), it, then, extracts both payload-based\nand network-based features from such a sequence. It, then,\nmaps each unique combination of these features to a corresponding symbol. At this point, a sequence of interaction\npackets is replaced by a sequence of symbols.\nNow, without losing information, to protect the privacy\nof the communications between the worker ci and b, our\napproach imposes that each symbol of such a sequence can be\nconverted into its hash representation using the salted secure\nhash function mentioned above. In this way, only the source\nnode ci can know the mapping between the original symbol sequence and the hashed one. This facility is enabled at\nthe FL task level, i.e. once a node ci expresses its need for\na secure delegation, the whole FL task will be adjusted to\nwork with a converted set of symbols. To do so, ci communicates its need to use secure delegation to the aggregator\n_ab. The latter will, then, generate a salt s that will be sent to_\nall the workers involved in the FL task having b as a target.\nAt this point any packet sequence ⟨ _pkt1, pkt2, · · ·, pktm⟩_\nwill be converted, first into a sequence of symbols according to the values of the considered features of each packet,\nnamely ⟨sy1, sy2, · · ·, sym⟩. 
**3.3.1 Leveraging Secure Delegation**

It is worth observing that, because of the high heterogeneity of devices in an IoT network, not all nodes are equipped with sufficient computational and memory capability to execute the training phase of our approach. Hence, we resort to a secure delegation mechanism according to which less powerful devices in N_l ∪ N_m can delegate such tasks to powerful devices in N_p. In the recent literature, some theoretical models and ontologies have been designed for the identification of reliable IoT devices for secure delegation, tackling the issue of incomplete task requests owned by resource-constrained IoT devices (Khalil et al., 2021). Of course, any existing secure delegation strategy could be adopted in our approach. However, for the sake of completeness, we describe a naive approach in which both the training and the subsequent inference phases can benefit from delegation.

In particular, in the following, we describe the two scenarios above separately. We start with the training phase and consider the situation in which a less capable device, say c_i, is involved as a worker in the construction of a behavioral fingerprinting model for a target b. We assume that, due to the lightweight nature of the operations described in Section 3.2, any node can perform the setup steps for the configuration of the FL task (see the experiments on the performance of IoT nodes on these tasks in Section 5.2). In practice, c_i can execute both Algorithms 1 and 2 to identify the aggregator for b and become a member of the working group building its behavioral fingerprinting model. Secure delegation is, hence, needed in the subsequent steps involving the training of the local ML model.

According to our strategy, given a cryptographic salted hash function H_s(v, s) (Rana et al., 2022), in which v is the value to be hashed and s is the salt, the secure delegation of the training phase requires the following steps:

- collection of interaction packets with the target b;
- feature extraction and mapping to the corresponding symbols (as described before);
- pre-processing of the symbol sequence to guarantee privacy;
- upload of the training set to a shared data bucket linked in the Blockchain;
- identification of a trusted delegated node in the network;
- interaction with the delegated node to start the training.

First, c_i collects a sequence of interaction packets during its communication with b. Adopting the approach described in (Aramini et al., 2022), it then extracts both payload-based and network-based features from such a sequence and maps each unique combination of these features to a corresponding symbol. At this point, a sequence of interaction packets is replaced by a sequence of symbols.

Now, without losing information, to protect the privacy of the communications between the worker c_i and b, our approach imposes that each symbol of such a sequence be converted into its hash representation using the salted secure hash function mentioned above. In this way, only the source node c_i can know the mapping between the original symbol sequence and the hashed one. This facility is enabled at the FL task level, i.e., once a node c_i expresses its need for a secure delegation, the whole FL task is adjusted to work with a converted set of symbols. To do so, c_i communicates its need to use secure delegation to the aggregator a_b. The latter then generates a salt s that is sent to all the workers involved in the FL task having b as a target. At this point, any packet sequence ⟨pkt_1, pkt_2, ..., pkt_m⟩ is converted, first, into a sequence of symbols according to the values of the considered features of each packet, namely ⟨sy_1, sy_2, ..., sy_m⟩. Then, each node applies the secure salted hash function H_s to obtain the hashed symbol sequence ⟨H_s(sy_1, s), H_s(sy_2, s), ..., H_s(sy_m, s)⟩. Observe that, while the first transformation can be done by any node in the network (and, hence, knowing a sequence of symbols, it is possible to derive information about the original packet sequence), due to the properties of the adopted cryptographic salted hash function, it is not possible to invert the hashed symbol sequence into its original packet sequence. As a consequence, only the nodes involved in the FL task, which know the salt s, can obtain the hashed symbols from a sequence of packets and, hence, exploit the trained model.
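For illustration, the symbol-hashing step could be implemented as in the following sketch; SHA-256 via Python's hashlib stands in for the salted cryptographic hash H_s, whose concrete choice the approach leaves open, and the salt value is only an example.

```python
import hashlib

def hash_symbol(symbol: str, salt: bytes) -> str:
    """H_s(sy, s): salted hash of a symbol; without the salt s the
    hashed sequence cannot be traced back to the packet sequence."""
    return hashlib.sha256(salt + symbol.encode()).hexdigest()

# The aggregator a_b distributes the salt s to all workers of the task;
# each worker then converts its symbol sequence before sharing data.
s = b"salt-from-aggregator"             # illustrative value
symbols = ["sy1", "sy2", "sy3"]         # derived from the packet sequence
hashed_sequence = [hash_symbol(sy, s) for sy in symbols]
```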
As for the identification of a trusted delegated node, our approach can leverage any existing state-of-the-art trust model for IoT. In Section 3.4, we provide an overview of a possible trust scheme and extend it to include support for the identification of aggregators. The only requirement is that c_i can estimate the reliability of its peers so as to identify the correct delegate d_i for its task.

At this point, c_i can share its privacy-preserving training set with d_i to start the training phase. To do so, we leverage IPFS as a global file system in which nodes can upload their data. Moreover, the links to IPFS folders are shared through transactions on the Blockchain. Of course, our privacy-preserving strategy does not require additional security mechanisms on IPFS to protect the training set. Indeed, as stated above, any node in the network could use these data to train a model; however, only the nodes involved in the specific FL task know the salt s and, hence, can perform the mapping between the hashed symbol sequence and the real packet one. With that said, d_i can carry out the training task for c_i by receiving from it the initialized global model of the FL task. At each epoch, d_i returns the local model updates to c_i and receives the updated global model for the following training epoch.

After the training phase, c_i receives the final version of the trained model from a_b. However, if the delegation embraces also the model inference, the delegated node retains the trained model to support c_i in this phase as well. In particular, the secure delegation for the inference phase works as follows. First, c_i collects the packet sequence from its direct interaction with b. Then, it converts this sequence into the corresponding symbol sequence and, hence, applies H_s, using the same salt s obtained from a_b during the training phase, to build the hashed symbol sequence. This last can then be used by d_i as input to the trained behavioral fingerprinting model.

**3.3.2 Exploiting behavioral fingerprints for Anomaly Detection**

The steps described above focus on the creation of deep learning models that, given an input symbol sequence, are capable of classifying its next symbol. The advantage brought about by our solution is that, to estimate the behavior of a node, it considers not only a single point-to-point interaction between two peers, but a community-oriented general perspective of the target node. However, although the performance of such a classifier is extremely high, as will be shown in Section 5, using a single prediction to identify a change in the behavior of a node is not adequate and could lead to false predictions. To avoid this issue, as suggested by the related literature (Aramini et al., 2022; Nguyen et al., 2019), we adopt a window-based strategy. Specifically, given an observation window, say w_b, our approach exploits the aforementioned classifier to identify mispredicted symbols. As for the estimation of a change in the observed node behavior, a true positive is signaled if the number of mispredicted symbols exceeds a threshold th_w. Such a threshold should be suitably tuned to dampen the, even low, false prediction rate of the underlying classifier. Practically speaking, if the overall confidence of the classifier is 0.80, to dampen the prediction errors, th_w should be fixed to a value greater than 20% of the window size. Of course, the choice of the correct value for th_w, although its lower bound can be established by the reasoning above, strongly depends on the dynamics of the IoT scenario under analysis. Indeed, a greater th_w implies a slower detection of behavior changes for the target nodes (Aramini et al., 2022).
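A minimal sketch of this window-based check follows; `model_predict` stands for the trained next-symbol classifier and is a placeholder, and the default threshold is only an example consistent with the 20% lower bound discussed above.

```python
def detect_behavior_change(window, model_predict, th_w: float = 0.25) -> bool:
    """Signal a behavior change when the fraction of mispredicted
    symbols inside the observation window exceeds the threshold th_w."""
    mispredictions = sum(
        1 for i in range(1, len(window))
        if model_predict(window[:i]) != window[i]   # model's next-symbol guess
    )
    return mispredictions / (len(window) - 1) > th_w
```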
#### 3.4 The Underlying Trust Model

In this section, we sketch the underlying trust model exploited by our solution. Indeed, in the previous sections, we stated that an IoT node can select suitable aggregators and/or delegated nodes by leveraging the information stored in the Blockchain about node reliability. Behavioral fingerprinting can be a key factor in the construction of enhanced reputation models. Indeed, it can be used to estimate anomalous actions that can be grounded on security attacks or device malfunctions. The definition of a model to estimate trust scores and compute nodes' reputations is an orthogonal study with respect to our approach; therefore, to build our solution, we can leverage existing proposals to provide forms of trust in an IoT network (Corradini et al., 2022; Dedeoglu et al., 2019; Pietro et al., 2018).

In particular, in our proposal, we adopt the approach of (Corradini et al., 2022) to estimate trust and reputation scores. In the following, we briefly sketch the adaptation of such an approach to our application scenario. Specifically, in our context, a trust score can be assigned by a node c_i towards a target node b, for which it holds a behavioral fingerprinting model, as follows:

$$T_{c_i,b} = 1 - FP_{w_b}(c_i, b)$$

Here, FP_{w_b} is a function that exploits the behavioral fingerprinting model of b to estimate changes in its behavior during an observation window w_b. This function can naively record the number of mispredictions registered during w_b and compute the ratio between such a number and the total length of the packet sequence exchanged by c_i and b during w_b. As done in (Corradini et al., 2022), such trust scores can be published by the monitoring node c_i in the Blockchain. Therefore, given a fixed time period ω > w_b, let TS_b^ω be the set of trust transactions published by any node holding a fingerprinting model towards b. Moreover, let T_b^ω be the average trust score in TS_b^ω. The reputation after each time period ω can be computed as follows:

$$R_b^{\omega} = \begin{cases} \alpha \cdot R_b^{\omega-1} + (1-\alpha) \cdot T_b^{\omega} & \text{if } |TS_b^{\omega}| \neq 0 \\ R_b^{\omega-1} & \text{otherwise} \end{cases}$$

In this equation, again as stated in (Corradini et al., 2022), α is a parameter introduced to tune the importance of past behavior observations with respect to new ones.
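The reputation update defined above can be sketched numerically as follows; the values of α, the previous reputation, and the published trust scores are illustrative.

```python
def update_reputation(prev_rep: float, trust_scores: list[float],
                      alpha: float = 0.7) -> float:
    """R_b^w: keep the old reputation if no trust transactions were
    published in the period, otherwise blend it with the period's
    average trust score T_b^w."""
    if not trust_scores:                          # |TS_b^w| == 0
        return prev_rep
    avg = sum(trust_scores) / len(trust_scores)   # T_b^w
    return alpha * prev_rep + (1 - alpha) * avg

# Example: three workers published trust scores for b in this period.
print(update_reputation(0.9, [0.8, 0.75, 0.85]))  # ~0.87
```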
As an additional trust contribution, we design a specific trust score for aggregators. An aggregator can also be evaluated based on its honesty in constructing global models during FL tasks. To do so, we introduce an additional check that the involved workers can perform during the training epochs. Given a normalized performance metric m, at each epoch e, a worker c_i can compare the value of m for the local model, say m_l, and for the global one returned by the aggregator a_b for this epoch, namely m_g. In practice, such an additional trust score can be formulated as follows:

$$T_{c_i,a_b} = |m_l - m_g| \cdot (1 - \tau)$$

Here, τ is a tolerance value introduced to absorb the expected variations in the values of the chosen metric between the global and local models. Finally, as for the metric m, it can be any evaluation metric typically adopted for machine learning models, such as the accuracy, the prevalence, the f-measure, and so forth.

### 4 Security Model

This section is devoted to the security model underlying our solution. In particular, we introduce both the attack model and the security analysis proving that our approach is robust to possible attacks.

#### 4.1 Attack Model

We start this section with a preliminary assumption according to which our approach is applied to a scenario already in a stationary situation, or fully operational phase, with enough nodes available to carry out all the steps required by our scheme. For this reason, we do not consider the initial start-up stage, which can be characterized by an IoT network not yet active or complete. Moreover, as stated in Section 3, we assume the existence of a safe starting phase in which the nodes are configured and the behavioral fingerprinting models can be trained.

In the following, we list the assumptions useful for analyzing the security properties of our model.

**A.1** There exists an initial safe phase in which behavioral fingerprinting models are built in the absence of attacks on target nodes.
**A.2** An attacker cannot control the majority of the workers training a behavioral fingerprinting model associated with a target.
**A.3** An attacker has no additional knowledge derived from any direct physical access to IoT objects.
**A.4** The exploited Blockchain technology is compliant with the standard security requirements commonly adopted for Blockchain applications.
**A.5** The nonces and identifiers of nodes are generated starting from different key spaces. Moreover, no pair of identifiers or nonces can collide.

As stated above, our model ensures a list of security properties (SP, in the following), as follows:

**SP.1** Resistance to attacks on Federated Learning.
**SP.2** Resistance to attacks on the SMC strategy to identify FL co-workers.
**SP.3** Resistance to attacks on the Blockchain and the Smart Contract technology.
**SP.4** Resistance to attacks on the Reputation Model.
**SP.5** Resistance to attacks on the IoT network.

#### 4.2 Security Analysis

This section presents the analysis of the security properties listed above to prove that our approach can ensure them. In the following, we provide a detailed description of such an analysis for each of the properties listed above.

**4.2.1 SP.1 - Resistance to attacks on Federated Learning**

Our approach leverages Federated Learning during the safe starting phase in which the behavioral fingerprinting models have to be trained for target nodes. By Assumption A.1, during this stage model computation is performed in the absence of attacks against target nodes. However, both the workers and the aggregator nodes can be forged or attacked. As for the first case, the large threat surface of the Federated Learning scenario makes this new type of distributed learning system vulnerable to many known attacks targeting worker nodes (Jere et al., 2020). In general, these security attacks focus on poisoning the model or preventing its convergence. In our approach, we can consider the protection against these attacks as an orthogonal task. Indeed, in the current scientific literature, there exist several countermeasures that FL aggregators can adopt to identify misbehaving workers and, hence, discard their contributions. Examples of such strategies are, for instance, robust aggregation functions (AGRs), such as Krum, Trimmed Mean, and so forth (Blanco-Justicia et al., 2021). These represent lightweight heuristics that can be easily adopted in our scenario to provide robustness against common attacks.
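As an illustration of such robust aggregation rules, the sketch below implements a coordinate-wise trimmed mean over the workers' updates; the trimming amount k is an illustrative parameter, and Krum or the other AGRs cited above would follow the same drop-in pattern.

```python
import torch

def trimmed_mean(updates: list[torch.Tensor], k: int = 1) -> torch.Tensor:
    """Robust aggregation: per coordinate, discard the k smallest and
    k largest worker values before averaging, so that a few poisoned
    contributions cannot drag the global model arbitrarily far.
    Requires len(updates) > 2 * k."""
    stacked = torch.stack(updates)            # shape: (n_workers, ...)
    ordered, _ = torch.sort(stacked, dim=0)   # sort worker values per coordinate
    return ordered[k:len(updates) - k].mean(dim=0)
```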
Considering the second case, in which the aggregator nodes are corrupted, our approach natively supports a countermeasure to possible attacks targeting them. Indeed, in Section 3.4, we include a facility in the underlying trust model to evaluate their honesty. The trust score, used to assess the quality of an aggregator's behavior, is computed by analyzing the performance of the partial local models and the global one generated by the aggregator during each epoch. If this value goes below a reference reliability threshold, the aggregator cannot be contacted by other nodes in the future. To avoid the permanent removal of a node from the system, we could hypothesize a ban interval, say φ_ban, after which the default reputation value is restored. Of course, for critical scenarios, φ_ban can also be infinite. Therefore, no advantage is obtained by the attacker if, after a malicious behavior, the node is forbidden to interact with the network for a possibly long period.

**4.2.2 SP.2 - Resistance to attacks on the SMC strategy to identify FL co-workers**

In our scenario, during the phase related to the formation of the groups of workers for FL tasks (see Section 3.2), a malicious node can try to contact a victim node, say b, to discover its secret nonce η. Holding this value, the attacker can infer the identities of the workers for the victim b. To do this, it performs a cryptographic attack exploiting the properties of HE. Indeed, it queries b multiple times, trying to guess η and analyzing the result. In particular, it sends to b a value that is not its identifier but a guessed value for η, say η′. If it succeeds in guessing η (i.e., η′ = η), b returns H(η′ ⊕ η) = H(0). At this point, the attacker can violate the SMC scheme and break our privacy-preserving algorithm. This attack can then be used to implement active eavesdropping, as a malicious node can sense the messages exchanged between two nodes and try to oust the intended target node to take some advantage.

This attack cannot happen thanks to Assumption A.5; indeed, the nonces and the identifiers of the nodes are chosen from different key spaces. Therefore, an attacker cannot guess the nonce of the victim by forging a suitable identifier as shown above.
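To make the attack mechanics concrete, the sketch below reproduces the guessing interaction with an ordinary hash in place of the homomorphic one (an illustrative simplification, assuming equal-length byte strings): the attacker learns that its guess η′ was right exactly when the reply equals the digest of the all-zero string, which is what drawing identifiers and nonces from disjoint key spaces rules out.

```python
import hashlib

def reply(query: bytes, secret_nonce: bytes) -> bytes:
    """The queried node b answers with H(query XOR nonce)."""
    xored = bytes(x ^ y for x, y in zip(query, secret_nonce))
    return hashlib.sha256(xored).digest()

def guess_succeeded(answer: bytes, length: int) -> bool:
    """The attacker's test: the answer collapses to H(0...0) only
    when the guessed value equals b's secret nonce."""
    return answer == hashlib.sha256(bytes(length)).digest()
```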
**4.2.3 SP.3 - Resistance to attacks on the Blockchain and the Smart Contract technology**

This category of attacks tries to exploit known vulnerabilities of the Blockchain and the Smart Contract technology. This new paradigm has been widely used in a variety of applications in recent years, but it still presents open issues in terms of security (Idrees et al., 2021; Kushwaha et al., 2022; Singh et al., 2021).

The approach presented in this paper does not focus on facing security challenges on Blockchain; instead, it leverages this technology to equip the network with a secure public ledger able to support some functionalities. In particular, we exploit Blockchain and Smart Contracts to keep track of: (i) the information necessary to discover the identity of aggregators for target nodes; (ii) the trust scores assigned by workers to estimate the reliability of an aggregator; and (iii) the identity of corrupted objects resulting from the monitoring activity of workers towards target nodes.

Therefore, also because our proposal does not aim at extending existing Blockchain solutions, we do not consider vulnerabilities and possible direct attacks on it. In other words, by Assumption A.4, we presuppose that the underlying Blockchain solution guarantees the standard security requirements already adopted for common Blockchain applications (Singh et al., 2021); thus, it can be considered secure.

**4.2.4 SP.4 - Resistance to attacks on the Reputation Model**

Our strategy also includes a contribution to the computation of a trust score to evaluate the trustworthiness of IoT nodes. Although in our approach we described a simple adaptation of an existing trust model (Corradini et al., 2022) to our scenario, this task can be considered orthogonal to our strategy. Therefore, for our security analysis related to the trust model, we can rely on the analysis conducted in (Corradini et al., 2022).

Nevertheless, just to give a few examples of attacks targeting the trust model of our approach, we consider in the following how our schema proves to be robust against two of the most popular attacks on reputation systems, namely the Whitewashing and the Slandering (or Bad-mouthing) attack.

The former occurs when a malicious node tries to exit and rejoin the network to delude the system and clean its reliability. Our strategy is based on a community-oriented general perspective of the trustworthiness of a target node. Indeed, to assess the reliability of a node, we adopt a window-based strategy leveraging our behavioral fingerprinting models. Specifically, trust scores are computed based on the rate of mispredicted symbols inside an observation window. At this point, if the reputation of the node, computed by aggregating all the trust contributions towards it, goes below a reference threshold, it is isolated by the other peers and, therefore, as explained above, it is technically banned from the system (for, at least, a time φ_ban). Moreover, as an additional security mechanism, if a device is banned multiple times, φ_ban can be incremented at every ban until the object removal is permanent.

Observe that, in IoT, one of the main issues is related to the difficulty of mapping a unique identifier to an object. Therefore, in some cases, an attacker could still perform a Whitewashing attack by exiting the system and re-introducing his/her device with a different (forged) identifier. To face this situation, we can adopt a pessimistic-attitude approach, which imposes that newly introduced devices start in a banned state (no other node will interact with them) for a time φ_ban, and only after this period can they be part of the network. In this way, attempting a whitewashing by forging a new identifier for a device would result again in the node being banned for φ_ban time, and no advantage is obtained.

As for Slandering or Bad-mouthing attacks, they occur when an intruder tries to distort innocent nodes' reputation by attesting a negative opinion of them. In our approach, a Slandering or Bad-mouthing attack can happen if a worker lies about the result of the behavioral fingerprinting model of a monitored node, computing a false negative trust score for that node.

If this threat is performed by a single node, only its local contribution to the trust score is impacted. Hence, the global trust score will not be compromised, because it will be balanced by the honest contributions of the other nodes testing the behavioral fingerprinting model for the victim.

Moreover, these attacks can also be performed in a distributed fashion, through some colluding nodes trying to poison the trust score of a victim with multiple negative trust contributions. However, by Assumption A.2, an attacker cannot control the majority of workers holding a behavioral fingerprinting model for a target. It is worth noting that this assumption is commonly accepted in distributed domain scenarios, in which the majority of users or nodes in a network or a system can be considered honest at any time (Cramer et al., 1997; Rottondi et al., 2016; Zwierko et al., 2007). As an additional consideration, our approach preserves the privacy of the identity of the nodes forming the group of workers for an object thanks to HE. Hence, the components of the group do not know each other; moreover, an attacker cannot obtain this information from additional knowledge derived from any direct physical access to IoT objects, by Assumption A.3. For all these reasons, our approach can be considered robust against Slandering or Bad-mouthing attacks.
**4.2.5 SP.5 - Resistance to attacks on the IoT network**

As for attacks undermining network and node availability, we consider the two most popular ones, namely DoS and Sleep Deprivation attacks.

During a Denial of Service (DoS), an attacker introduces a large amount of (dummy) transactions in the network to overflow it and affect its availability. In our approach, this attack could also result in the impossibility for nodes to run the FL algorithm and check peers' behavior. For this reason, any existing solution aiming at preventing DoS attacks in IoT could be exploited in our approach, such as the ones presented in (Abughazaleh et al., 2020; Baig et al., 2020; Hussain et al., 2020). It is worth explaining, however, that our approach does not add any advantage to an adversary performing such a category of attacks.

A form of DoS attack specific to the IoT environment is known as the Sleep Deprivation Attack (SDA, hereafter), whose objective is to undermine the power of the node to consume its battery life and power it off (excluding the victim from the network). As for this attack, our approach natively supports a countermeasure. Indeed, the alteration in the behavior of an attacked node can be detected by our behavioral fingerprinting models. Therefore, our approach can prevent SDA, because once a change in the behavior of the attacked node is detected, the other nodes can safely discard all the requests coming from it.

### 5 Experiments

This section deals with the analysis of our experimental campaign useful for validating our approach. In particular, in the next subsections, after the description of our dataset, we report in detail the performance evaluation of our solution to build a global behavioral fingerprinting model using FL, the results of our solution for anomaly detection, and, finally, the tests to assess the performance of the overall approach in terms of execution times.

#### 5.1 The Dataset

To validate our proposal, we started from a dataset publicly available online concerning IoT traffic collected by a centralized network hub. The dataset is available at https://iotanalytics.unsw.edu.au/attack-data.html and was originally produced by the authors of (Hamza et al., 2019). It contains about 65 GB of data describing daily IoT traffic (i.e., traffic generated by smart devices, such as light sensors, motion sensors, and so forth). The original dataset contains both data generated in the absence of cyber attacks and traffic generated when some attack is deployed on the IoT nodes. Interestingly, this same dataset has been adopted in (Aramini et al., 2022) to test the performance of the original behavioral fingerprinting model extended in this proposal. The authors of (Aramini et al., 2022) also enhanced this dataset to simulate the collection of traffic from the IoT nodes directly (no central hub collector), thus granting that payload data is accessible from monitoring nodes. Because in our scenario we are also focusing on a fully distributed context, we adopt the extended version of the above dataset generated in (Aramini et al., 2022). Some statistics about our reference data are reported in Table 3.

**Table 3 Statistics of the dataset considered in our study**

| Communication Type | Min # of packets | Max # of packets |
|---|---|---|
| Benign | 12,793 | 97,256 |
| Benign with payload | 4,670 | 39,000 |
| Malign | 6,971 | 89,148 |
| Malign with payload | 2,196 | 8,694 |

#### 5.2 Performance Analysis of our Global Behavioral Fingerprinting Model

To assess the performance of our approach to build a global behavioral fingerprinting model using FL, we performed a comparison analysis between our solution and the baseline approach proposed in (Aramini et al., 2022). Indeed, the approach of (Aramini et al., 2022) started from the results reported in (Nguyen et al., 2019) and demonstrated that, by exploiting additional features related to the payload, it is possible to improve the solution performance. Specifically, the authors of (Aramini et al., 2022) proposed a fully distributed behavioral fingerprinting model which, however, is focused on just a point-to-point vision of a node towards a target peer.
Our approach, instead, extends this idea by considering that in IoT a node can participate in multiple services, thus showing different behavioral patterns according to them. Therefore, we aim to build a global model considering all such patterns to represent the complete behavior of a target node, and we leverage Federated Learning for this objective.

With that said, we start our comparison by analyzing the performance of our model and the model of (Aramini et al., 2022) for 12 nodes monitoring 3 different targets. As for our approach, we extracted from the original dataset groups of nodes having communications with the same targets; in this way, we could build our Federated Learning scenario. In particular, after analyzing all the communications available in the dataset, we were able to set the number of workers to 4. Hence, for each target, we obtained a global model built according to our strategy and 4 point-to-point models built according to the strategy of (Aramini et al., 2022). As for the training data, we used the communication sequences but kept 20% of them for the subsequent testing. Indeed, once the models had been built, to compare the obtained performance, we used the test set of each involved node independently. Of course, the point-to-point (P2P) models are trained and tested on the data of the same communication (direct testing), whereas our global model (GM) is trained on global data and then tested on the individual test sets of the involved nodes; thus, we can expect a slight reduction in performance. However, we argue that such a reduction is negligible. The results of this experiment, in terms of prediction accuracy, are reported in Table 4, in which c1, c2, c3, and c4, for each target node, act both as individual nodes building P2P models of the target behavior and as the workers of the Federated Learning task building the global model GM.

**Table 4 Comparison of the performance of our approach (GM) and the solution of (Aramini et al., 2022) (P2P) with direct testing in terms of prediction accuracy**

| Target | Model | c1 | c2 | c3 | c4 |
|---|---|---|---|---|---|
| Target 1 | P2P | **0.78** | 0.75 | **0.86** | **0.83** |
| Target 1 | GM | 0.77 | **0.76** | 0.82 | **0.83** |
| Target 2 | P2P | 0.81 | **0.82** | **0.85** | **0.83** |
| Target 2 | GM | **0.82** | 0.80 | 0.75 | **0.83** |
| Target 3 | P2P | 0.82 | **0.89** | 0.74 | **0.84** |
| Target 3 | GM | **0.86** | **0.89** | **0.79** | **0.84** |

By analyzing this table, we can see that, as expected, the point-to-point models sometimes achieve slightly better performance when tested against a test set derived from the same communication from which the training set was extracted. However, our hypothesis is also confirmed, as the performance reduction of our approach is negligible (less than 1%, on average).

However, the characteristic of our global model is precisely its capability of being generally valid for any communication towards a target node (including communications related to different services). To test this aspect, we proceeded with a similar experiment as above, but performed a cross-testing and assessed the performance of each point-to-point model (P2P_c1, P2P_c2, P2P_c3, and P2P_c4) and our global one on every test set available from the different involved nodes. We report the results of this experiment in Table 5.

**Table 5 Comparison of the performance of our approach and the solution of (Aramini et al., 2022) with cross testing**

| Model | Test-set c1 | Test-set c2 | Test-set c3 | Test-set c4 |
|---|---|---|---|---|
| P2P_c1 | **0.82** | < 0.01 | < 0.01 | < 0.01 |
| P2P_c2 | < 0.01 | **0.89** | < 0.01 | < 0.01 |
| P2P_c3 | < 0.01 | < 0.01 | **0.74** | < 0.01 |
| P2P_c4 | < 0.01 | < 0.01 | < 0.01 | **0.84** |
| GM | **0.86** | **0.89** | **0.79** | **0.84** |

In practice, in our testbed, each client owns a dataset referring to its individual communications with the shared target node.
From these datasets, for each client, we extracted a test set, namely Test-set c1, Test-set c2, Test-set c3, and Test-set c4, respectively. At this point, differently from the previous experiment, the cross-testing consisted in applying all the P2P models and our global one to all the available test sets from the clients. Of course, when a P2P model, say P2P_c1, is applied to the test set belonging to the client that built this model, c1 in this case, the experiment implies a direct testing, thus returning the optimal performance for that specific model. With this experiment, we aim at demonstrating that, because the communications of different clients with the same target node may concern different services, local P2P models are not a general solution to monitor the behavior of a node.

As a matter of fact, by inspecting Table 5, we can clearly see that the point-to-point models return satisfactory accuracy results only when applied to the test set generated by the same communication as the original training set (direct testing). The last row of this table, instead, shows the performance of our global model, which is very satisfactory across every considered test set. This confirms our intuition that classical behavioral fingerprinting approaches, such as (Aramini et al., 2022) and (Nguyen et al., 2019), reach very satisfactory performance assessing the behavior of a node concerning only a single target communication type (i.e., communications generated for a specific service or action). Our approach, on the other hand, allows for the construction of consistent and complete behavioral fingerprints of an IoT node. In practice, the models built by our approach are more stable and can be used to characterize the behavior of a target node in general, and not just for a specific single service/action it may offer/perform.

#### 5.3 Windows-Based Anomaly Detection with Behavioral Fingerprint

As described in Section 3.3.2, our approach exploits behavioral fingerprinting models to detect anomalies on target nodes by leveraging a window-based mechanism. In particular, once again, our solution is based on the strategy originally described in (Nguyen et al., 2019) and (Aramini et al., 2022). The proposed strategy works by computing the misprediction rate of the next symbol inside an observation window. As seen in Section 3.4, the misprediction rate is defined as the ratio between the number of symbols inside the window not predicted by our behavioral fingerprinting model as plausible ones in the analyzed sequence and the overall number of symbols in the observation window. Clearly, the choice of the right size for such a window plays a key role. Intuitively, larger windows imply a more stable anomaly detection capability, as any noise, even that caused by the prediction errors introduced by our model, would be smoothed out (smaller oscillations in the misprediction curve).
Of course, the larger the window, the slower the detection of possible anomalies, since more symbols (and, hence, more packets) would be required to detect them. A possible strategy for identifying the correct size is to use the difference between the maximum and minimum peaks of the misprediction curve. Indeed, a lower difference implies better stability. At this point, to find the optimal solution, we can rely on the Kneedle algorithm (Satopaa et al., 2011). Specifically, it seeks the elbow/knee in the misprediction curve, which corresponds to the point where the curve shows the most visible change from high slope to low slope. In Fig. 5, we show the application of this algorithm in our context.

**Fig. 5 Application of the Kneedle algorithm to identify the best window size**

As shown in this figure, in our scenario, a possible optimal configuration for the window is 100 symbols.
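Assuming the stability values (max-min spread of the misprediction curve) have been computed for a range of candidate window sizes, the knee point can be located with the third-party `kneed` package, an open-source implementation of the Kneedle algorithm; its use here, and the sample values below, are illustrative choices rather than the paper's exact procedure.

```python
from kneed import KneeLocator  # pip install kneed

window_sizes = [20, 40, 60, 80, 100, 120, 140]
# Max-min spread of the misprediction curve per window size (illustrative).
instability = [0.41, 0.30, 0.22, 0.16, 0.12, 0.11, 0.10]

knee = KneeLocator(window_sizes, instability,
                   curve="convex", direction="decreasing").knee
print(knee)  # e.g., 100 symbols in our scenario
```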
Indeed, our approach is designed for an\nIoT scenario, typically characterized by many heterogeneous\ndevices.\nWe start by considering our privacy-preserving schema for\nthe identification of the correct aggregator of a node (Algo\n\n**Fig. 6 Performance of the window-based anomaly detection strategy using both P2P and GM models to monitor a common target**\n\n\n## 1 3\n\n\n-----\n\n**Table 6 Average execution times of Algorithm 1 on different device**\ntypes\n\nDevice Type Average MPC Time\n\nDesktop PC 49.6 ms\n\nRaspberry Pi4 185.3 ms\n\n1 core ARM1176 (QEMU) 774 ms\n\nrithm 1), and for the creation of groups of workers for the FL\ntasks (Algorithm 2). Both cases share a similar strategy and\nare based on the computation of bitwise XOR operations on\nhashed value through homomorphic hashing. Therefore, we\nfocus here on Algorithm 1, which is based on Equation 1,\nand, hence, test the feasibility of this computation on different types of devices. For this experiment, we considered the\nsame Federated Learning scenario analyzed in the previous\nexperiment and derived from the original dataset. Moreover,\nwe considered 3 types of device, namely: (i) a desktop personal computer equippedwithaRyzen75800xOcta-core3.8\nGHz base, 4.7 GHz boost processor, and 32GB of RAM, (ii)\na Raspberry Pi4 with a Quad-core Cortex-A72 processor and\n8GB of RAM, and (iii) a single-core ARM1176 CPU with\n512MB of RAM, emulated with the QEMU virtualization\nenvironment[2]. We executed Algorithm 1 on each considered\ndevice type and reported the results in Table 6.\nBy inspecting this table, we can conclude that our privacypreserving scheme is feasible for all the considered device\ntypes. The computation is, in general, carried out in less than\n1 second with a maximum value of 774 milliseconds for the\nless capable considered device type.\nAfter that, we focused on the computational requirements\nfor the aggregator in our solution. Aggregators coordinate\nFederated Learning tasks and, during each training epoch,\naggregate the gradient updates produced by the workers to\nbuild the global model.\nTo evaluate the execution times of the aggregation task,\nwe considered, again, the 3 types of device and the Federated\nLearning task mentioned above. Hence, we measured the\ntime required, on average, to aggregate the gradient updates\nof the local models (i.e., of the local GRU deep learning\nmodels described in Section 3.3) during the epochs of such\na Federated Learning task. The result of this experiment is\nreported in Table 7.\nThis result confirms again that both our secure multi-party\ncomputationandtheaggregationtaskcanbeexecutedbyvery\nheterogeneous devices including those with limited computational capability (such as a node equipped with a single\ncore ARM1176 and 512MB of RAM).\nAs a final evaluation of execution times, we focused on\nthe performance of the inference of a trained instance of\n\n[2 https://www.qemu.org/](https://www.qemu.org/)\n\n## 1 3\n\n\n**Table 7 Average aggregation time for different device types**\n\nDevice Type Average Aggregation Time\n\nDesktop PC 118ms\n\nRaspberry Pi4 241ms\n\n1 core ARM1176 (QEMU) 755ms\n\nour behavioral fingerprinting model. In particular, we analyzed the impact of our secure delegation strategy in such a\ntask to validate its feasibility. Therefore, we executed model\ninferences with and without the secure delegation strategy\nand computed the execution times for batches of consecutive\nsymbols of variable sizes. The obtained results are reported\nin Fig. 
**Fig. 7 Inference time with and without our secure delegation strategy**

This figure shows that the performance reduction introduced by our secure delegation strategy is about 16.6% on average. Although such a difference is not negligible, the very low general inference times of our model make the inclusion of the delegation strategy still feasible across all the possible scenarios.

### 6 Discussion and Conclusion

In recent years, IoT devices have grown in number and complexity to empower new applications with enhanced possibilities in monitoring, decision-making, and automation contexts. Clearly, in this scenario, privacy and security aspects become a major concern.

This paper provides a contribution to this setting by designing a novel distributed framework for the computation of global behavioral fingerprints of objects. Indeed, classical behavioral fingerprints are based on Machine Learning solutions to model object interactions and assess the correctness of their actions. Still, scalability, privacy, and intrinsic limitations of the adopted Machine Learning algorithms represent the main aspects to be improved to make this paradigm entirely suitable for the IoT environment. Indeed, in classical distributed fingerprinting approaches, an object models the behavior of a target contact by exploiting only the information coming from the direct interaction with it, which represents a very limited view of the target because it does not consider services and messages exchanged with other neighbors. However, building global models with information coming from several interactions of nodes with the target may lead to critical privacy concerns.

To face this issue, we assumed a comprehensive perspective, analyzing the hidden patterns of the behavior of a node in the interactions with all its peers over a network. To do so, we designed a solution based on Federated Learning that benefits from a distributed computation of behavioral fingerprints involving different working nodes. Thanks to this novel ML strategy, besides enriching the fingerprinting model with information coming from different interactions of multiple nodes, our approach also addresses several aspects related to the security and privacy of the data exchanged among the involved actors. Moreover, it guarantees the scalability of the proposed solution and very satisfactory accuracy of the anomaly detection schema, making our approach suitable to the constantly changing attack surface that characterizes the modern IoT. Furthermore, our solution considers the intrinsic heterogeneity of the entities involved in the considered scenario, allowing less capable nodes to participate in the framework by relying on a secure delegation strategy for both the training and the inference of FL models in a privacy-preserving way. Finally, through the properties of Homomorphic Encryption and the Blockchain technology, our approach guarantees the privacy of both the target object and the different contributors, as well as the robustness of the solution in the presence of security attacks. All these features lead to a secure, fully privacy-preserving solution whose robustness and correctness have been evaluated in this paper through a detailed security analysis.
Moreover, an extensive experimental campaign showed that the performance of our model is very satisfactory: we can distinguish between normal and anomalous behavior across every considered test set, reaching a 0.85 accuracy value on average. Furthermore, the very low general inference times of our model make the inclusion of the delegation strategy still feasible across all the possible scenarios, with a performance reduction of only 16.6%, on average.

While this work has provided valuable insights into the potential of our solution for anomaly detection in IoT, several limitations should be acknowledged. Firstly, our framework needs a sufficient total number of heterogeneous nodes to perform its operations properly. Moreover, even if secure delegation can be applied, an adequate number of powerful nodes with sufficient computational capability, memory, and stability should still be present to train local ML models. Furthermore, the effectiveness of our approach, which is based on FL, heavily relies on frequent communications between the aggregator and the workers in the training phase. In an IoT scenario, this might lead to longer training times and potentially hinder convergence. However, a number of recent studies have already tackled the issue of training distributed machine learning models for resource-constrained IoT devices (Imteaj et al., 2021). Our work can leverage one of the existing studies on the application of FL to IoT, since this part is orthogonal to our work.

We plan to expand the research described in this proposal with further investigations in the near future. For instance, we are planning to study a solution to build, still in a collaborative and distributed way, the behavioral fingerprinting of objects in the network while also taking into account an optimized orchestration of their workload. In particular, thanks to secure delegation, this solution should allow a better distribution of the workload generated by FL tasks among the nodes of the network, according to power consumption minimization, Service Level Agreement (SLA, for short) requirements, and the reliability of the nodes.

**Acknowledgements** This work was supported in part by the project SERICS (PE00000014) under the NRRP MUR program funded by the EU-NGEU, and by the Italian Ministry of University and Research through the PRIN Project "HOMEY: a Human-centric IOE-based framework for supporting the transition towards industry 5.0" (code 2022NX7WKE).

**Funding** Open access funding provided by Università degli Studi di Pavia within the CRUI-CARE Agreement.

**Availability of data and materials** The dataset used in this paper is publicly available in the repository https://iotanalytics.unsw.edu.au/attack-data.html and was originally produced by the authors of (Hamza et al., 2019). In this paper, we also adopted the algorithms proposed in Aramini et al. (2022) to generate payload data.
#### Declarations

**Conflict of interest/Competing interests** The authors declare that they have no conflict of interest or competing interests that are relevant to the content of this article.

**Open Access** This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

### References

Abughazaleh, N., Bin, R., & Btish, M. (2020). DoS attacks in IoT systems and proposed solutions. International Journal of Computer Applications, 176(33), 16–19.
Adat, V., & Gupta, B. B. (2018). Security in Internet of Things: issues, challenges, taxonomy, and architecture. Telecommunication Systems, 67(3), 423–441.
Al-Garadi, M. A., Mohamed, A., Al-Ali, A. K., Du, X., Ali, I., & Guizani, M. (2020). A survey of machine and deep learning methods for Internet of Things (IoT) security. IEEE Communications Surveys & Tutorials, 22(3), 1646–1685.
Ali, M., Karimipour, H., & Tariq, M. (2021). Integration of blockchain and federated learning for Internet of Things: Recent advances and future challenges. Computers & Security, 108, 102355.
Al-Sarawi, S., Anbar, M., Abdullah, R., & Al Hawari, A. B. (2020). In 2020 Fourth World Conference on Smart Trends in Systems, Security and Sustainability (WorldS4) (IEEE), pp. 449–453.
Aramini, A., Arazzi, M., Facchinetti, T., Ngankem, L. S., & Nocera, A. (2022). In 2022 IEEE 18th International Conference on Factory Communication Systems (WFCS) (IEEE), pp. 1–8.
Baig, Z. A., Sanguanpong, S., Firdous, S. N., Nguyen, T. G., & So-In, C. (2020). Averaged dependence estimators for DoS attack detection in IoT networks. Future Generation Computer Systems, 102, 198–209.
Bellare, M., Goldreich, O., & Goldwasser, S. (1994). In Annual International Cryptology Conference (Springer), pp. 216–233.
Bezawada, B., Bachani, M., Peterson, J., Shirazi, H., Ray, I., & Ray, I. (2018). In Proceedings of the 2018 Workshop on Attacks and Solutions in Hardware Security, pp. 41–50.
Blanco-Justicia, A., Domingo-Ferrer, J., Martínez, S., Sánchez, D., Flanagan, A., & Tan, K. E. (2021). Achieving security and privacy in federated learning systems: Survey, research challenges and future directions. Engineering Applications of Artificial Intelligence, 106, 104468.
Buccafurri, F., Lax, G., Nicolazzo, S., & Nocera, A. (2016). A privacy-preserving localization service for assisted living facilities. IEEE Transactions on Services Computing, 13(1), 16–29.
Cauteruccio, F., Fortino, G., Guerrieri, A., Liotta, A., Mocanu, D. C., Perra, C., Terracina, G., & Vega, M. T. (2019). Short-long term anomaly detection in wireless sensor networks based on machine learning and multi-parameterized edit distance. Information Fusion, 52, 13–30.
Celdrán, A. H., Sánchez, P. M. S., Castillo, M. A., Bovet, G., Pérez, G. M., & Stiller, B. (2022). Intelligent and behavioral-based detection of malware in IoT spectrum sensors. International Journal of Information Security, pp. 1–21.
Chen, Y., Lu, Y., Bulysheva, L., & Kataev, M. Y. (2022). Applications of blockchain in Industry 4.0: A review. Information Systems Frontiers, pp. 1–15.
Christidis, K., & Devetsikiotis, M. (2016). Blockchains and smart contracts for the Internet of Things. IEEE Access, 4, 2292–2303.
Corradini, E., Nicolazzo, S., Nocera, A., Ursino, D., & Virgili, L. (2022). A two-tier Blockchain framework to increase protection and autonomy of smart objects in the IoT. Computer Communications, 181, 338–356.
Cramer, R., Gennaro, R., & Schoenmakers, B. (1997). A secure and optimally efficient multi-authority election scheme. European Transactions on Telecommunications, 8(5), 481–490.
Dedeoglu, V., Jurdak, R., Putra, G. D., Dorri, A., & Kanhere, S. S. (2019). In Proceedings of the 16th EAI International Conference on Mobile and Ubiquitous Systems: Computing, Networking and Services, pp. 190–199.
Ferretti, M., Nicolazzo, S., & Nocera, A. (2021). H2O: Secure Interactions in IoT via Behavioral Fingerprinting. Future Internet, 13(5), 117.
Gentry, C. (2009). A fully homomorphic encryption scheme. Stanford University.
Hamad, S. A., Zhang, W. E., Sheng, Q. Z., & Nepal, S. (2019). In 2019 18th IEEE International Conference on Trust, Security and Privacy in Computing and Communications/13th IEEE International Conference on Big Data Science and Engineering (TrustCom/BigDataSE) (IEEE), pp. 103–111.
Hammi, M. T., Hammi, B., Bellot, P., & Serhrouchni, A. (2018). Bubbles of trust: A decentralized blockchain-based authentication system for IoT. Computers & Security, 78, 126–142.
Hamza, A., Gharakheili, H. H., Benson, T. A., & Sivaraman, V. (2019). In Proceedings of the 2019 ACM Symposium on SDN Research, pp. 36–48.
Hassija, V., Chamola, V., Saxena, V., Jain, D., Goyal, P., & Sikdar, B. (2019). A survey on IoT security: application areas, security threats, and solution architectures. IEEE Access, 7, 82721–82743.
Hussain, F., Abbas, S. G., Husnain, M., Fayyaz, U. U., Shahzad, F., & Shah, G. A. (2020). In 2020 IEEE 23rd International Multitopic Conference (INMIC) (IEEE), pp. 1–6.
Idrees, S. M., Nowostawski, M., Jameel, R., & Mourya, A. K. (2021). Security aspects of blockchain technology intended for industrial applications. Electronics, 10(8), 951.
Imteaj, A., Thakker, U., Wang, S., Li, J., & Amini, M. H. (2021). A survey on federated learning for resource-constrained IoT devices. IEEE Internet of Things Journal, 9(1), 1–24.
Jere, M. S., Farnan, T., & Koushanfar, F. (2020). A taxonomy of attacks on federated learning. IEEE Security & Privacy, 19(2), 20–28.
Khalil, U., Ahmad, A., Abdel-Aty, A. H., Elhoseny, M., El-Soud, M. W. A., & Zeshan, F. (2021). Identification of trusted IoT devices for secure delegation. Computers & Electrical Engineering, 90, 106988.
Khan, L. U., Saad, W., Han, Z., Hossain, E., & Hong, C. S. (2021). Federated learning for Internet of Things: Recent advances, taxonomy, and open challenges. IEEE Communications Surveys & Tutorials.
Khan, M. A., & Salah, K. (2018). IoT security: Review, blockchain solutions, and open challenges. Future Generation Computer Systems, 82, 395–411.
Kim, Y. S., & Heo, J. (2012). Device authentication protocol for smart grid systems using homomorphic hash. Journal of Communications and Networks, 14(6), 606–613.
Kim, M., Song, Y., Wang, S., Xia, Y., & Jiang, X. (2018). Secure logistic regression based on homomorphic encryption: Design and evaluation. JMIR Medical Informatics, 6(2), e8805.
Kohno, T., Broido, A., & Claffy, K. C. (2005). Remote physical device fingerprinting. IEEE Transactions on Dependable and Secure Computing, 2(2), 93–108.
Konečný, J., McMahan, B., & Ramage, D. (2015). Federated optimization: Distributed optimization beyond the datacenter. arXiv preprint arXiv:1511.03575.
Kozlov, D., Veijalainen, J., & Ali, Y. (2012). In BODYNETS, pp. 256–262.
Kushwaha, S. S., Joshi, S., Singh, D., Kaur, M., & Lee, H. N. (2022). Systematic review of security vulnerabilities in Ethereum blockchain smart contract. IEEE Access.
Lewi, K., Kim, W., Maykov, I., & Weis, S. (2019). Securing update propagation with homomorphic hashing. Cryptology ePrint Archive.
Li, S., Xu, L. D., & Zhao, S. (2015). The Internet of Things: a survey. Information Systems Frontiers, 17, 243–259.
Miettinen, M., Marchal, S., Hafeez, I., Asokan, N., Sadeghi, A. R., & Tarkoma, S. (2017). In 2017 IEEE 37th International Conference on Distributed Computing Systems (ICDCS) (IEEE), pp. 2177–2184.
Nguyen, T. D., Marchal, S., Miettinen, M., Fereidooni, H., Asokan, N., & Sadeghi, A. R. (2019). In 2019 IEEE 39th International Conference on Distributed Computing Systems (ICDCS) (IEEE), pp. 756–767.
Nguyen, D. C., Ding, M., Pathirana, P. N., Seneviratne, A., Li, J., & Poor, H. V. (2021). Federated learning for Internet of Things: A comprehensive survey. IEEE Communications Surveys & Tutorials, 23(3), 1622–1658.
Nofer, M., Gomber, P., Hinz, O., & Schiereck, D. (2017). Blockchain. Business & Information Systems Engineering, 59(3), 183–187.
Oser, P., Kargl, F., & Lüders, S. (2018). In International Conference on Security, Privacy and Anonymity in Computation, Communication and Storage (Springer), pp. 417–427.
Peralta, G., Cid-Fuentes, R. G., Bilbao, J., & Crespo, P. M. (2019). Homomorphic encryption and network coding in IoT architectures: Advantages and future challenges. Electronics, 8(8), 827.
Pietro, R. D., Salleras, X., Signorini, M., & Waisbard, E. (2018). In Proc. of the ACM International Symposium on Access Control Models and Technologies (SACMAT'18) (Indianapolis, IN, USA), pp. 77–83. ACM.
Preuveneers, D., Rimmer, V., Tsingenopoulos, I., Spooren, J., Joosen, W., & Ilie-Zudor, E. (2018). Chained anomaly detection models for federated learning: An intrusion detection case study. Applied Sciences, 8(12), 2663.
Radhakrishnan, S. V., Uluagac, A. S., & Beyah, R. (2014). GTID: A technique for physical device and device type fingerprinting. IEEE Transactions on Dependable and Secure Computing, 12(5), 519–532.
Rana, M., Mamun, Q., & Islam, R. (2022). Lightweight cryptography in IoT networks: A survey. Future Generation Computer Systems, 129, 77–89.
Ren, W., Tong, X., Du, J., Wang, N., Li, S. C., Min, G., Zhao, Z., & Bashir, A. K. (2021). Privacy-preserving using homomorphic encryption in mobile IoT systems. Computer Communications, 165, 105–111.
Rey, V., Sánchez, P. M. S., Celdrán, A. H., & Bovet, G. (2022). Federated learning for malware detection in IoT devices. Computer Networks, 204, 108693.
Rottondi, C., Panzeri, A., Yagne, C. T., & Verticale, G. (2016). Detection and mitigation of the eclipse attack in chord overlays. International Journal of Computational Science and Engineering, 13(2), 111–121.
Sánchez, P. M. S., Celdrán, A. H., Rubio, J. R. B., Bovet, G., & Pérez, G. M. (2021). Robust federated learning for execution time-based device model identification under label-flipping attack. arXiv preprint arXiv:2111.14434.
Sánchez, P. M. S., Celdrán, A. H., Schenk, T., Iten, A. L. B., Bovet, G., Pérez, G. M., & Stiller, B. (2022). Studying the robustness of anti-adversarial federated learning models detecting cyberattacks in IoT spectrum sensors. arXiv preprint arXiv:2202.00137.
Sánchez, P. M. S., Valero, J. M. J., Celdrán, A. H., Bovet, G., Pérez, M. G., & Pérez, G. M. (2021). A survey on device behavior fingerprinting: Data sources, techniques, application scenarios, and datasets. IEEE Communications Surveys & Tutorials, 23(2), 1048–1077.
Satopaa, V., Albrecht, J., Irwin, D., & Raghavan, B. (2011). In 2011 31st International Conference on Distributed Computing Systems Workshops (IEEE), pp. 166–171.
Shafagh, H., Hithnawi, A., Burkhalter, L., Fischli, P., & Duquennoy, S. (2017). In Proceedings of the 15th ACM Conference on Embedded Network Sensor Systems, pp. 1–14.
Shrestha, R., & Kim, S. (2019). In Advances in Computers, vol. 115 (Elsevier), pp. 293–331.
Sicari, S., Cappiello, C., De Pellegrini, F., Miorandi, D., & Coen-Porisini, A. (2016). A security- and quality-aware system architecture for Internet of Things. Information Systems Frontiers, 18, 665–677.
Singh, S., Hosen, A. S., & Yoon, B. (2021). Blockchain security attacks, challenges, and solutions for the future distributed IoT network. IEEE Access, 9, 13938–13959.
Tweneboah-Koduah, S., Skouby, K. E., & Tadayoni, R. (2017). Cyber security threats to IoT applications and service domains. Wireless Personal Communications, 95, 169–185.
Yang, Q., Liu, Y., Cheng, Y., Kang, Y., Chen, T., & Yu, H. (2019). Federated learning. Synthesis Lectures on Artificial Intelligence and Machine Learning, 13(3), 1–207.
Yao, H., Wang, C., Hai, B., & Zhu, S. (2018). In 2018 Sixth International Conference on Advanced Cloud and Big Data (CBD) (IEEE), pp. 243–248.
Zwierko, A., & Kotulski, Z. (2007). A light-weight e-voting system with distributed trust. Electronic Notes in Theoretical Computer Science, 168, 109–126.

**Publisher's Note** Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Marco Arazzi** is currently a Ph.D. Student in Computer Engineering at the University of Pavia. From March to July 2023, he worked as a Visiting Researcher in the Cyber Security group of the Delft University of Technology (TU Delft). His research interests include Data Science, Machine Learning, Social Network Analysis, the Internet of Things, Privacy, and Security. He is the author of 10 scientific papers in these research fields.

**Serena Nicolazzo** is currently a Type-A Temporary Research Fellow (RTDA) at the University of Milan. She got a PhD in Information Engineering at the University Mediterranea of Reggio Calabria in 2017. Her research interests include Data Science, Security, Privacy, and Social Network Analysis.
She is involved in several TPCs and editorial boards of prestigious International Conferences and Journal in\nthe context of Data Science and Cybersecurity and she is the author of\nabout 40 scientific papers. She was a Visiting Researcher at Middlesex\nUniversity of London and is actively collaborating with the Polytechnic University of Marche, the University of Pavia, and the University\nCollege of London.\n\n**Antonino Nocera is an Associate Professor at the University of Pavia.**\nHe received his PhD in Information Engineering at the Mediterranea\nUniversity of Reggio Calabria in 2013. His research interests span\nseveral research contexts including Artificial Intelligence, Data Science, Security, Privacy, Social Network Analysis, Recommender Systems, Internet of Things, Cloud Computing, and Blockchain. In these\nresearch fields, he published about 90 scientific papers. He is involved\nin several TPCs of prestigious International Conferences in the context of Data Science and Cybersecurity and is an Associate Editor\nof Information Sciences (Elsevier) and of the IEEE Transactions on\nInformation Forensics and Security.\n\n## 1 3\n\n\n-----\n\n### Authors and Affiliations\n\n**Marco Arazzi[1]** **· Serena Nicolazzo[2]** **· Antonino Nocera[1]**\n\nMarco Arazzi\[email protected]\n\nSerena Nicolazzo\[email protected]\n\n## 1 3\n\n\n1 Department of Electrical, Computer and Biomedical\nEngineering, University of Pavia, Via A. Ferrata, 5, Pavia\n27100, PV, Italy\n\n2 Department of Computer Science, University of Milan, Via\nCeloria, 18, Milan 20133, MI, Italy\n\n\n-----\n\n"
| 27,632
|
{
"disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.1007/s10796-023-10443-0?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1007/s10796-023-10443-0, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "CCBY",
"status": "HYBRID",
"url": "https://link.springer.com/content/pdf/10.1007/s10796-023-10443-0.pdf"
}
| 2023
|
[
"JournalArticle"
] | true
| 2023-11-14T00:00:00
|
[
{
"paperId": "82489b553d9286fdf2d1e708686d712ad7dd5204",
"title": "Intelligent and behavioral-based detection of malware in IoT spectrum sensors"
},
{
"paperId": "5c84d1bc2a78c663759283a560266420e254fdf5",
"title": "Applications of Blockchain in Industry 4.0: a Review"
},
{
"paperId": "a9e43d93fee7d9ed3a2410ccd13c6b876c33196f",
"title": "Lightweight cryptography in IoT networks: A survey"
},
{
"paperId": "d6a6e51c084ea15e0718032c4cebfc73542918ae",
"title": "A two-tier Blockchain framework to increase protection and autonomy of smart objects in the IoT"
},
{
"paperId": "887f6f6a4ebe631f2332e7f2c7be847fa07b1c43",
"title": "Integration of blockchain and federated learning for Internet of Things: Recent advances and future challenges"
},
{
"paperId": "274ddadb0afd1808b3b95e60d0242c6e2e0b2555",
"title": "H2O: Secure Interactions in IoT via Behavioral Fingerprinting"
},
{
"paperId": "cb00dfedc325d4d70e9f9a31b9fab210bb58e030",
"title": "Security Aspects of Blockchain Technology Intended for Industrial Applications"
},
{
"paperId": "77ead771269258a191c1e8c6dee6985d58943ec2",
"title": "Federated Learning for Malware Detection in IoT Devices"
},
{
"paperId": "019909a50a696237237e96c15dd311051a298add",
"title": "Identification of trusted IoT devices for secure delegation"
},
{
"paperId": "ead5a6ec9ffcda5ff9c8a974ee6fe75919e10979",
"title": "Privacy-preserving using homomorphic encryption in Mobile IoT systems"
},
{
"paperId": "8864f2cc06815840fd611570eb1b557b2395beba",
"title": "Achieving Security and Privacy in Federated Learning Systems: Survey, Research Challenges and Future Directions"
},
{
"paperId": "890d19fd3b2913883624106f4ca5740c4885cf4a",
"title": "Federated Learning"
},
{
"paperId": "73f8dc79cbea204a50d52bc664e963c728adb894",
"title": "Short-long term anomaly detection in wireless sensor networks based on machine learning and multi-parameterized edit distance"
},
{
"paperId": "be4468f37638b83194b406845a06aafee506cfac",
"title": "Homomorphic Encryption and Network Coding in IoT Architectures: Advantages and Future Challenges"
},
{
"paperId": "fbe6553641ce3c6a1914ef5d9f8797fa762c8bad",
"title": "Chained Anomaly Detection Models for Federated Learning: An Intrusion Detection Case Study"
},
{
"paperId": "fb74ec25eea00aaea3cf1519dda5de586c6b762c",
"title": "Bubbles of Trust: A decentralized blockchain-based authentication system for IoT"
},
{
"paperId": "425eb6b97480afc0f918c583980896559ba881f3",
"title": "Secure Logistic Regression Based on Homomorphic Encryption: Design and Evaluation"
},
{
"paperId": "81f6442e50890b990598e637a44b2d8d10329710",
"title": "IoT security: Review, blockchain solutions, and open challenges"
},
{
"paperId": "4f26966bd345e1b1ecd2b81fdd408b6ec3e5447e",
"title": "Security in Internet of Things: issues, challenges, taxonomy, and architecture"
},
{
"paperId": "aa3339404876852c74ef0b1838c01c57e57a3de7",
"title": "Cyber Security Threats to IoT Applications and Service Domains"
},
{
"paperId": "9a32ebe626b5da34e9ef3607799034a8a1a4556b",
"title": "A security-and quality-aware system architecture for Internet of Things"
},
{
"paperId": "d080fb7c7cd1bf65be60ec9cd47dd44fca0abb66",
"title": "The internet of things: a survey"
},
{
"paperId": "b521c33f3d72b2c848307fc9976b320e81fcb798",
"title": "A Light-Weight e-Voting System with Distributed Trust"
},
{
"paperId": "62d72177feb3f8ab4d092eb6398304a4fd96b774",
"title": "A secure and optimally efficient multi-authority election scheme"
},
{
"paperId": "0bcf1135181f8e9b50a4f75c14f2693190fc4212",
"title": "Averaged dependence estimators for DoS attack detection in IoT networks"
}
] | 27,632
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Medicine",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/0095c12ce6e60a744b8f1882aa6f3e06fdc73f7c
|
[
"Computer Science"
] | 0.908683
|
Cloud-assisted secure eHealth systems for tamper-proofing EHR via blockchain
|
0095c12ce6e60a744b8f1882aa6f3e06fdc73f7c
|
Information Sciences
|
[
{
"authorId": "2072594686",
"name": "Sheng Cao"
},
{
"authorId": "1749876",
"name": "Gexiang Zhang"
},
{
"authorId": "1391187639",
"name": "Pengfei Liu"
},
{
"authorId": "9117563",
"name": "Xiaosong Zhang"
},
{
"authorId": "2614610",
"name": "Ferrante Neri"
}
] |
{
"alternate_issns": [
"0020-0263"
],
"alternate_names": [
"Information Scientist",
"Inf Sci"
],
"alternate_urls": null,
"id": "e46002a1-d7a6-4681-aae9-36bc3a6a1f93",
"issn": "0020-0255",
"name": "Information Sciences",
"type": "journal",
"url": "http://www.sciencedirect.com/science/journal/00200255"
}
| null |
,
# Cloud-Assisted Secure eHealth Systems for Tamper-Proofing EHR via Blockchain
Sheng Cao[a,b], Gexiang Zhang[c,d], Pengfei Liu[e], Xiaosong Zhang[e,b], Ferrante Neri[f,∗]
_a School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu, 611731, Sichuan Province, China_
_b Center for Cyber Security, University of Electronic Science and Technology of China, Chengdu, 611731, Sichuan Province, China_
_c School of Electrical Engineering, Southwest Jiaotong University, Chengdu 610031, Sichuan, China_
_d Robotics Research Center, Xihua University, Chengdu 610039, Sichuan, China_
_e School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, 611731, Sichuan Province, China_
_f Institute of Artificial Intelligence, School of Computer Science and Informatics, De Montfort University, Leicester, UK_

_∗F. Neri is the corresponding author. Email address: [email protected] (Ferrante Neri)_
**Abstract**
Cloud-assisted electronic health (eHealth) systems provide medical institutions and patients with an efficient way to manage their electronic health records (EHRs). However, they also raise critical security concerns. Once a medical institution generates and outsources a patient's EHRs to a cloud server, the patient no longer physically owns the EHRs, while the medical institution can still access them as needed for diagnosing. This makes EHR integrity protection a formidable task, especially when a medical malpractice occurs and the medical institution may collude with the cloud server to tamper with the outsourced EHRs to hide the malpractice. Traditional cryptographic primitives for data integrity protection cannot be directly adopted, because they cannot ensure security in the case of collusion between the cloud server and the medical institution.

In this paper, a secure cloud-assisted eHealth system (TP-EHR) is proposed to protect outsourced EHRs from illegal modification by using blockchain technology (blockchain-based currencies, e.g., Ethereum). The key idea is that EHRs can only be outsourced by authenticated participants and that each operation on outsourced EHRs is integrated into the public blockchain as a transaction. Since blockchain-based currencies provide a tamper-proofing way to conduct transactions without a central authority, the EHRs cannot be modified after the corresponding transaction is recorded in the blockchain. Therefore, given outsourced EHRs, any participant can check their integrity by checking the corresponding transaction. Security analysis and performance evaluation demonstrate that TP-EHR provides a strong security guarantee with high efficiency.
_Keywords: Blockchain, eHealth systems, electronic health record_
**1. Introduction**
Modern technologies are steadily becoming an integral part of the health system [27, 18, 2]. Among these technologies, electronic health (eHealth) systems, i.e., information systems which store and process patient data to enhance the efficiency of the health system, have become an important emerging technology over the past twenty years [29]. Compared with traditional paper-based systems, eHealth systems provide a more efficient, less error-prone, and more flexible service for both patients and medical institutions [12, 17, 13]. The wide deployment of eHealth systems has had a deep impact on human society [11]. As modern eHealth systems are data intensive, applying cloud computing technologies to eHealth systems has shown great potential and a long list of unprecedented advantages in managing electronic health records (EHRs) in practice [32, 15]. Such a mechanism is known as a cloud-assisted eHealth system. In reality, cloud-assisted eHealth systems not only enable medical institutions to efficiently and flexibly manage EHRs with the aid of cloud storage services [35, 10], but also make a great contribution to judgement and dispute resolution in medical malpractice [39].
While cloud-assisted eHealth systems make these advantages more appealing than ever, critical privacy and security concerns in EHR outsourcing have been raised; see [37]. From the perspective of EHR owners, including patients and medical institutions, the content of EHRs should not be leaked, for privacy reasons, since EHRs are among the most sensitive and personal data they hold [20]. However, existing cloud service providers do not accept liability for the privacy protection of EHRs against adversaries in their Service Level Agreements (SLAs) and only promise to protect privacy as much as possible [3]. Furthermore, unlike in the traditional EHR management paradigm, where medical institutions or patients store their EHRs locally, neither medical institutions nor patients physically own their EHRs once they are outsourced to cloud servers. As such, the correctness and integrity of outsourced EHRs are put at risk in practice [34]. We stress that correctness and integrity mean not only that the contents of EHRs are not modified, but also that the time when the EHRs were generated and outsourced is not tampered with. Traditional cryptographic primitives that have been widely applied in cloud storage systems for data confidentiality and integrity protection, such as public-key encryption [7], digital signatures [8], and message authentication codes [6], cannot be directly adopted in cloud-assisted eHealth systems, for the following reasons. (Related efforts include an encrypted cloud system [38], a system for secure and fair payment [41], a hierarchical multi-authority attribute-based encryption scheme for the security and privacy of social networks [25], and a prototype of a secure EHR cloud system [9].)
First, different from traditional cloud storage systems, the owner of the EHRs is not always the party who creates them. Specifically, a patient's EHRs are generated and outsourced by a delegated doctor, and the EHRs are not signed by the patient before outsourcing, in order to reduce the communication and computation costs on the patient. Second, as the EHRs are outsourced by the doctor without the patient's participation, traditional encryption algorithms cannot be straightforwardly utilized. In particular, the EHRs may be large and cannot be encrypted by public-key encryption schemes for efficiency reasons, and it is also challenging for the doctor and the patient to agree on the key of a symmetric-key encryption algorithm. Moreover, ensuring the integrity and correctness of outsourced EHRs is more challenging than ever when the doctor outsources the EHRs on behalf of the patient, since the doctor is only trusted by the patient during the treatment period and may be malicious afterwards [35]. Typically, a doctor may forge, modify, or delete outsourced EHRs to cover up his mistakes in a medical malpractice.
To ensure the confidentiality of outsourced EHRs, an existing scheme [39] employs a smartphone-based key agreement scheme to establish a secure channel between the patient and the doctor. However, it requires the patient to be equipped with a powerful smartphone for diagnosing, which is not always practical.

To ensure the correctness and integrity of outsourced EHRs, existing schemes [35, 39] utilize an authentication mechanism to authenticate the doctor. However, these schemes rest on the strong assumption that the cloud server does not collude with the doctor to tamper with the outsourced EHRs. If the doctor incentivizes the cloud server to modify the outsourced EHRs, such misbehavior is hard to detect. Actually, compromising the cloud server is feasible for a malicious doctor, since the cloud server in existing schemes is a rational entity [3, 4, 40] and will thereby deviate from the prescribed scheme if such a strategy increases its profit.

To resist collusion between a misbehaving doctor and an irresponsible cloud server, a trivial solution is to introduce a trusted server to authenticate the doctor: if and only if the doctor is authenticated by the trusted server is she/he permitted to outsource EHRs. Nonetheless, the security of such a mechanism relies on the security and reliability of the trusted server and is confronted with the single-point-of-failure problem. In fact, it is very challenging to resist collusion between the doctor and the cloud server without introducing any trusted entity.
In this paper, we propose a secure cloud-assisted eHealth system, called TP-EHR, that ensures the confidentiality of outsourced EHRs and protects them from illegal modification without introducing any trusted entity. The TP-EHR system is presented in its implementation as well as its computational model. The security of TP-EHR is ensured even if the doctor colludes with the cloud server. The key idea is to utilize the blockchain technique (i.e., blockchain-based currencies) [26, 36, 30, 31], which provides a tamper-proofing and distributed way to conduct transactions without a central authority; see [42, 22]. In TP-EHR, the EHRs generated by a doctor are integrated into a transaction on the blockchain. The cloud server accepts the EHRs generated by the doctor if and only if the corresponding transaction is recorded in the blockchain.

TP-EHR employs a password-based key agreement mechanism to establish secure channels between patients and doctors, which is friendly to patients and requires no additional investment in patients' devices. Compared with existing schemes, TP-EHR resists password guessing attacks and thereby provides a stronger security guarantee.
Specifically, the contributions of this paper are as follows:

• We analyze existing cloud-assisted eHealth systems and point out that existing schemes cannot ensure the correctness and integrity of outsourced EHRs when a malicious doctor colludes with the cloud server to modify outsourced EHRs that were generated by the doctor himself.

• We propose a secure cloud-assisted eHealth system called TP-EHR that ensures the confidentiality, correctness, and integrity of outsourced EHRs without introducing any trusted entity, where the EHRs generated by one doctor in a treatment period are integrated into a transaction of a blockchain-based currency. TP-EHR employs a user-friendly password-based key agreement to establish secure channels between patients and doctors, which thwarts password guessing attacks without requiring additional investment in patients' devices.

• We present a security analysis to demonstrate that TP-EHR guarantees the confidentiality, correctness, and integrity of outsourced EHRs. Even if a malicious doctor colludes with the cloud server, the doctor and cloud server cannot fork the blockchain without a large fraction of the network's computational power and thus cannot break the security of TP-EHR. We also conduct a comprehensive performance analysis and show that TP-EHR is efficient and practical.
The remainder of the article is organised as follows. Section 2 provides a brief literature review on cloud-assisted eHealth. Section 3 describes the concept of cloud-assisted eHealth systems and formalises the design problem addressed in this paper. Section 4 introduces the notation as well as generalities about blockchain approaches. Section 5 describes the proposed TP-EHR. Section 6 analyses the security of the TP-EHR system. Section 7 analyses the performance and costs (communication overhead, computation, and monetary costs) of the proposed TP-EHR. Section 8 concludes this work.
**2. Related work**
Cloud-assisted eHealth systems provide users, including individuals and medical institutions, with an efficient and flexible way to manage their EHRs. Since EHRs are among the most personal and sensitive information for patients, cloud-assisted eHealth systems also suffer from challenging privacy and security threats to outsourced EHRs.

To protect patients' privacy against internal adversaries (i.e., misbehaving cloud service providers) and external adversaries, EHRs are encrypted before outsourcing. Lee et al. [21] proposed a cryptographic key management solution for the protection of patients' EHRs. However, this scheme employs a trusted server to process all patients' secret keys. As a consequence, the trusted server is able to retrieve the patients' EHRs, and the privacy of patients is not well protected. Sun et al. [29] proposed a secure EHR system to protect patients' privacy without introducing any trusted entity. Then Guo et al. [17] proposed a secure authentication scheme for eHealth systems. However, the system model of this scheme is not consistent with current cloud-assisted systems. Specifically, in these schemes, patients' EHRs are outsourced by the patients themselves, and the doctor needs to send the EHRs to the patients before outsourcing. Therefore, the patients in these schemes bear a heavy burden in terms of communication and computation costs.

The integrity of outsourced data has also attracted attention in the recent literature [32, 39]. These schemes mainly focus on ensuring that the outsourced data are not lost, where the data owners generate and outsource the data to the cloud server. Nonetheless, these schemes cannot be directly adopted in eHealth systems, since in eHealth systems the patients' EHRs are generated by delegated doctors, and requiring the patients to process and outsource their EHRs after the doctors generate them would cause heavy communication and computation costs for the patients [33]. Furthermore, the doctor is only trusted during the treatment period [35]; if a malicious doctor incentivizes the cloud server to tamper with outsourced EHRs generated by himself, it is hard to detect such misbehavior. Moreover, existing schemes do not consider the timeliness of EHRs. We stress that it is also important to know when EHRs were generated in eHealth systems, since the correctness and fairness of conclusions drawn from EHRs in judgements and dispute resolutions of medical malpractice are based on the correctness and timeliness of the EHRs.

The idea of using blockchain in eHealth systems has been stated at the conceptual level in the prototypes described in [5, 14]. A more comprehensive survey on eHealth security can be found in [16].
**3. Problem statement**
_3.1. Cloud-assisted eHealth systems_
The system model is shown in Fig. 1. There are five different entities in TP-EHR: patients, the hospital, doctors, the cloud storage server, and an auditor.

Fig. 1: System model

The procedure by which a patient consults doctors in TP-EHR is as follows. First, the patient registers with the hospital and provides it with auxiliary information so that the hospital can generate diagnosing information for the patient; a treatment key is shared between the hospital and the patient, and the diagnosing information includes the doctor information, with the diagnosing time and place, and some other necessary knowledge. Then the patient delegates to the doctor(s) and is diagnosed and treated at the appointed time. After the diagnosing and treating, the doctor(s) generate EHRs for the patient and encrypt the EHRs using the treatment key. The doctor(s) outsource the ciphertexts of the EHRs to the cloud storage server. Finally, the cloud storage server authenticates the doctor(s) by verifying the validity of the patient's delegation.

As described above, in a cloud-assisted eHealth system the data (i.e., the EHRs) are not generated, uploaded, and encrypted by the data owners (i.e., the patients) themselves, which differs from traditional cloud storage services and introduces challenging problems of security and efficiency; see [28]. Specifically, since patients usually consult doctors without heavy luggage, it is impractical to require patients to be well equipped in the system. Therefore, after generating and encrypting the EHRs, the doctor outsources the ciphertexts to the cloud storage server without requiring the patient's signature on the EHRs. An auditor can verify the correctness and integrity of outsourced EHRs as needed.
_Definition 1: TP-EHR consists of four algorithms, Setup, Appointment, Store, and Audit_ (a minimal interface sketch follows the definitions).

**Setup.** In this algorithm, the system parameters and the secret parameters are generated, and the system is initialized.

**Appointment.** In this algorithm, patients make appointments with the hospital. The hospital agrees on a treatment key with each patient, and each patient receives appointment information from the hospital.

**Store.** In this algorithm, each patient first delegates to the doctor(s). Then the doctor(s) generate and encrypt EHRs for the patient. The cloud server authenticates the doctor(s); if the authentication succeeds, the doctor(s) outsource the ciphertexts of the EHRs to the cloud server.

**Audit.** This algorithm enables an auditor to check the integrity and correctness of the outsourced EHRs.
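To fix ideas, the following minimal Python skeleton sketches these four interfaces. It is our own illustration: the paper defines the algorithms abstractly and prescribes no API, so all class, method, and field names here are hypothetical.

```python
# A minimal sketch of the four TP-EHR algorithms as a Python interface.
# All names are illustrative; the paper specifies the algorithms
# abstractly and does not prescribe any concrete API.

from dataclasses import dataclass


@dataclass
class AppointmentInfo:
    doctor_ids: list   # identities of the appointed doctors
    time_period: str   # valid treatment period
    aux: bytes         # auxiliary information


class TPEHR:
    def setup(self, security_parameter: int) -> dict:
        """Generate the system parameters and secret parameters."""
        raise NotImplementedError

    def appointment(self, patient_id: str, password: str) -> tuple:
        """Agree a treatment key with the hospital; return
        (treatment_key, AppointmentInfo)."""
        raise NotImplementedError

    def store(self, treatment_key: bytes, warrant: bytes, ehr: bytes) -> str:
        """Encrypt the EHR, anchor it in a blockchain transaction, and
        outsource it; return the transaction identifier."""
        raise NotImplementedError

    def audit(self, patient_id: str, ciphertext: bytes, warrant: bytes) -> bool:
        """Check integrity, correctness, and timeliness of an outsourced EHR."""
        raise NotImplementedError
```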
_3.2. Adversary model_
In the adversary model, we consider threats from two different angles: external adversaries and internal adversaries.

**External adversaries.** External adversaries aim to impersonate a patient to make an appointment with the hospital. Specifically, in existing eHealth systems, an external adversary typically attempts to request multiple tokens for diagnosing and to resell the tokens for profit. Most existing eHealth systems utilize a password-based authentication mechanism to authenticate users. As such, external adversaries may perform password guessing attacks to break the security of the eHealth systems.

**Internal adversaries.**

• Rational cloud server. The cloud server is a rational entity; the same assumption can be found in [39, 3, 4]. By rational, we mean that the cloud server only deviates from the prescribed scheme if such a strategy brings it benefits.

• Semi-trusted doctors. Doctors are semi-trusted entities, following existing works [39]. By semi-trusted, we mean that a doctor is honest during the diagnosing period (i.e., the delegation time). However, after the diagnosing period, the doctor may perform the following attacks:

1. The adversarial doctor may outsource forged EHRs. After the diagnosing period, the doctor may outsource forged EHRs to the storage server to conceal his mistake in a medical malpractice.

2. The adversarial doctor may violate the confidentiality of outsourced EHRs. The doctor may collude with the malicious cloud server to obtain EHRs generated by other doctors and learn their contents.
In TP-EHR, we assume doctors are computer-aided, which means that doctors are equipped with computers and thus have adequate computation, communication, and storage resources to generate and outsource EHRs. Furthermore, for brevity, the semi-trusted hospital is treated as a semi-trusted doctor in the adversary model.

Within the adversary model, TP-EHR should satisfy the following security requirements:

• Confidentiality. The contents of outsourced EHRs cannot be recovered by any adversary.

• Resistance against EHR forgery. The outsourced EHRs cannot be replaced by forged ones generated by any adversary.

• Resistance against EHR modification. Once EHRs are outsourced and accepted by the cloud server, no adversary can substitute other EHRs for them or delete them. Furthermore, the time when the EHRs were generated should be securely recorded and cannot be modified.

• Resistance against impersonation attacks performed by external adversaries. External adversaries cannot impersonate a patient to make an appointment with the hospital by guessing the patient's password.
_3.3. Design goals_
In this paper, we target the security of EHRs in cloud-assisted eHealth systems, where there exist three challenges:

1. How to resist collusion between malicious doctors and a misbehaving cloud server. Existing cloud-assisted eHealth systems rest on the strong assumption that the cloud server does not collude with doctors to modify outsourced EHRs. However, a misbehaving doctor may incentivize the cloud server to modify outsourced EHRs generated by himself, to conceal his mistakes in a medical malpractice. To ensure the security of outsourced EHRs, they should not be modifiable, forgeable, or deletable by the misbehaving doctor, even if he colludes with the cloud server and the target EHRs were generated by the doctor himself.

2. How to securely time-stamp the EHRs before outsourcing. In reality, a patient's EHRs may be generated by multiple doctors, which is known as group consultation. As such, it is very challenging to determine when each part of the EHRs from each doctor was generated. Therefore, how to securely time-stamp the outsourced EHRs should be carefully considered.

3. How to securely authenticate the patient. Existing cloud-assisted eHealth systems employ a password-based authentication mechanism to authenticate patients, since such a mechanism is friendly to patients and easy to deploy. However, it is vulnerable to password guessing attacks, due to the low entropy and inherent vulnerability of human-memorisable passwords. To enhance security, the cloud-assisted eHealth system should resist password guessing attacks.

To enable secure outsourced EHRs for cloud-assisted eHealth systems under the aforementioned model, TP-EHR should achieve the following objectives.

• Functionality: TP-EHR should provide a reliable and privacy-preserving way to manage patients' EHRs, and should be applicable to existing cloud-assisted eHealth systems without changing their architecture.

• Security: The security requirements under the adversary model presented in Section 3.2 should be achieved.

• Efficiency: TP-EHR should be as efficient as possible in terms of computational complexity, communication complexity, and storage costs.
**4. Preliminaries**
_4.1. Notations, conventions, and basic theory_
For two bit-strings x and y, we denote by x||y their concatenation. We use E() to denote symmetric encryption.

_Bilinear maps._

Let G and G_T be two multiplicative groups with the same order p. A bilinear map e : G × G → G_T has the following properties:

• Bilinearity: e(g^a, q^b) = e(g, q)^{ab} for all g, q ∈ G and a, b ∈ Z_p^∗.

• Non-degeneracy: for g, q ∈ G with g ≠ q, e(g, q) ≠ 1.

• Computability: there exists an efficient algorithm to compute e.
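As a toy illustration of how such a pairing supports the warrant check e(w_P, g) = e(H(wa_P), Q_P) used later in the Store algorithm, the sketch below uses the BN128 pairing from the py_ecc package. This is our assumption, not the paper's implementation (which uses the MIRACL C library): BN128 is an asymmetric pairing, so the warrant lives in G1 and the public key in G2, and the hash-to-group map is a simplistic stand-in rather than a secure hash-to-curve construction.

```python
# Toy BLS-style warrant check e(w, g) = e(H(wa), Q) over BN128, using
# the py_ecc package (pip install py_ecc). Illustrative only.
import hashlib

from py_ecc.bn128 import G1, G2, curve_order, multiply, pairing


def hash_to_g1(message: bytes):
    # Simplistic stand-in for H : {0,1}* -> G: hash to a scalar and
    # multiply the generator. NOT a secure hash-to-curve construction.
    scalar = int.from_bytes(hashlib.sha256(message).digest(), "big") % curve_order
    return multiply(G1, scalar)


# Patient key pair: secret alpha, public Q = g^alpha (here in G2).
alpha = 123456789
Q = multiply(G2, alpha)

# Warrant: w = alpha * H(wa), i.e. a BLS signature on the warrant data.
wa = b"ID_P || ID_D1 || TimePeriod_P || Aux_P"
w = multiply(hash_to_g1(wa), alpha)

# Verification via bilinearity: e(w, g2) == e(H(wa), g2^alpha).
assert pairing(G2, w) == pairing(Q, hash_to_g1(wa))
print("warrant verified")
```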
_4.2. Blockchain_
Fig. 2: A simplified Ethereum blockchain
A blockchain is a linear collection of data elements called blocks: all blocks are linked to form a chain and secured by cryptographic primitives. The blockchain is maintained by multiple nodes, and new blocks are continuously added to the blockchain without requiring nodes to trust each other. The security of the blockchain is guaranteed if and only if a considerable majority of nodes is honest [26].

Typically, each block contains a hash pointer to its previous block, a timestamp, and transaction data. A block can be chained to the blockchain only if the validity of its transaction data is verified by a majority of nodes [36, 30].

Blockchain techniques can be generally classified into two types: private blockchains and public blockchains.

In a private blockchain (including consortium blockchains), the nodes that maintain the blockchain are employed and authorized by a blockchain owner or by a consortium comprising multiple participants who jointly own the blockchain but do not trust each other. The key difference between private blockchains and existing techniques (e.g., distributed backup systems) is that once a block is chained to the blockchain, as long as the majority of nodes is inaccessible to adversaries, this block cannot be removed or modified, even if the adversary is the blockchain manager himself.

In a public blockchain, anyone can become a node to maintain the blockchain and can join or leave the system without permission from a centralized or distributed authority. The most prominent manifestation of public blockchains is blockchain-based cryptocurrencies, such as Bitcoin and Ethereum [26, 36]. In such systems, the public blockchain serves as an open, distributed, and tamper-proofing ledger that accounts for the ownership of value tokens. Transferring value tokens between two users can be considered a state transition system, where the public ledger recorded in the blockchain reflects the ownership status of existing value tokens, and a state transition function takes a state and a transaction as input, outputs the resulting new state, and updates the ledger.
A simplified Ethereum blockchain is shown in Fig. 2, where Bl_Hash denotes the hash value of the current block, Pre_Bl_Hash denotes the hash value of the previous block, Nonce denotes the solution of the proof-of-work (PoW) puzzle, Time denotes the timestamp of the block, and Tx denotes a transaction; all transactions are authenticated by using a Merkle hash tree with MerkleRoot as its root value.
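To illustrate the tamper-evidence that this hash-pointer chaining provides, the following minimal Python sketch (our own toy model reusing the field names of Fig. 2; real Ethereum blocks contain many more fields) shows how modifying one transaction invalidates every later block:

```python
# Toy hash-chained blocks mirroring the Fig. 2 fields (illustrative only;
# real Ethereum blocks use RLP encoding and a Merkle-Patricia trie).
import hashlib
import json
import time


def block_hash(block: dict) -> str:
    # Hash every field except the block's own hash.
    body = {k: v for k, v in block.items() if k != "Bl_Hash"}
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()


def make_block(prev_hash: str, tx: str, nonce: int = 0) -> dict:
    block = {"Pre_Bl_Hash": prev_hash, "Nonce": nonce,
             "Time": int(time.time()), "Tx": tx}
    block["Bl_Hash"] = block_hash(block)
    return block


genesis = make_block("0" * 64, "genesis")
b1 = make_block(genesis["Bl_Hash"], "tx: pay 1 token")
b2 = make_block(b1["Bl_Hash"], "tx: pay 2 tokens")
chain = [genesis, b1, b2]

# Tampering with an old transaction breaks the hash pointers.
chain[1]["Tx"] = "tx: pay 100 tokens"
ok = all(block_hash(b) == b["Bl_Hash"] and
         (i == 0 or b["Pre_Bl_Hash"] == chain[i - 1]["Bl_Hash"])
         for i, b in enumerate(chain))
print("chain valid:", ok)  # -> chain valid: False
```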
In Ethereum, the state is made up of objects called "accounts". In general, there are two types of accounts in Ethereum: externally owned accounts and contract accounts. Externally owned accounts are controlled by private keys and can conduct transactions. Contract accounts are controlled by their contract code. When a payer transfers Ether from her/his (externally owned) account to the payee's (externally owned) account, if the transaction is recorded in the blockchain, the balances of the two accounts are updated. Here, the data value of the transaction can be set to any binary data the payer chooses. Once a transaction is recorded in the blockchain, it cannot be removed or modified, due to the security of Ethereum.
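For instance, assuming the web3.py client (v6 naming) against a local node with unlocked accounts, a payment transaction carrying an arbitrary data payload could be issued as follows; the paper does not prescribe a particular client, and the endpoint and accounts below are placeholders.

```python
# Sketch: attach arbitrary binary data to an Ethereum payment, assuming
# web3.py (v6) and a locally unlocked account. The RPC endpoint and the
# account indices are placeholders.
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("http://127.0.0.1:8545"))
payer, payee = w3.eth.accounts[0], w3.eth.accounts[1]

payload = b"Bhash_t || h(ID_P) || h(C || wa_P || w_P)"  # TP-EHR's data value

tx_hash = w3.eth.send_transaction({
    "from": payer,
    "to": payee,
    "value": w3.to_wei(0.001, "ether"),  # e.g. the service charge
    "data": payload,
})
receipt = w3.eth.wait_for_transaction_receipt(tx_hash)
print("recorded in block", receipt.blockNumber)
```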
**5. The proposed TP-EHR**
_5.1. Overview_
Since patients usually consult doctors without heavy luggage, TP-EHR should be as efficient as possible on the patient side and should enable patients to preprocess the costly computation.

In TP-EHR, each patient first makes an appointment with the hospital and obtains a treatment key for diagnosing. With the treatment key, a secure channel is established between the patient and the hospital as well as the designated doctors. This process is based on a password-based authentication mechanism with resistance against password guessing attacks, which ensures that the security of TP-EHR cannot be broken by external adversaries guessing the target patient's password.

Before the treatment time, the patient generates warrants to delegate to doctors. A warrant indicates the identities of the delegated doctors as well as their diagnosing period and other auxiliary information. During the treatment time, doctors generate EHRs for the patient. We consider two different cases in TP-EHR. One is the single-doctor case, where the patient is treated by only one doctor and the EHRs are thereby generated by that doctor. The other is the multi-doctor case, where the patient is treated by a group of doctors, the EHRs are successively generated by these doctors, and each doctor generates EHRs based on the ones generated by the previous doctor.

For the single-doctor case, the doctor should be responsible for the EHRs generated by herself/himself. The EHRs generated within one treatment period for the patient are integrated into a transaction in Ethereum. For the multi-doctor case, each doctor should be responsible not only for the EHRs generated by herself/himself, but also for the EHRs generated by the previous doctor. As such, each part of the EHRs generated by one doctor is integrated into a transaction of Ethereum to ensure the security of TP-EHR.

We further consider the timeliness of EHRs, since it is often more important to know when EHRs were generated than what they contain. Since the EHRs generated by a doctor correspond to a transaction in Ethereum, anyone can learn the time when the EHRs were generated by extracting the timestamp of the Ethereum block that includes the transaction.

The security of TP-EHR is mainly built on the security of the Ethereum blockchain: an adversary without a large fraction of the network's computational power cannot fork Ethereum and thus cannot break the security of TP-EHR. Even if the cloud server colludes with a malicious doctor to tamper with outsourced EHRs, it cannot modify the corresponding transaction recorded in the Ethereum blockchain.
_5.2. Construction of TP-EHR_
A patient P with identity ID_P, a cloud storage server S, a set of doctors D_1, D_2, ..., D_χ with identities ID_{D_1}, ID_{D_2}, ..., ID_{D_χ}, a hospital H with identity ID_H, and an auditor A are involved in TP-EHR.

**Setup.** With the security parameter ℓ, the system parameters SP = {p, g, G, G_T, e, H, h, H_1} are determined, where G and G_T are multiplicative groups with the same prime order p, g is a generator of G, e : G × G → G_T, h, H : {0,1}* → G, and H_1 : {0,1}* → Z_p. Each doctor creates a special-purpose externally owned account in Ethereum and publishes it to the others. The cloud storage server also creates an externally owned account in Ethereum and sends it to all doctors and the auditor.

For the patient P with identifier ID_P, H assigns a human-memorisable password pw_P to her/him. The patient P also has a secret key α_P and the corresponding public key Q_P = g^{α_P}.

**Appointment.** In this algorithm, P obtains the appointment information, protected under a treatment key tk_P, as follows (a toy sketch of the key agreement is given after the list):
• P randomly chooses x ∈ Z_p, computes X = g^x and X* = X · h(ID_P)^{pw_P}, and sends X* to H.

• H randomly chooses y ∈ Z_p, computes Y = g^y and Y* = Y · h(ID_H)^{pw_H}, where pw_H is the password that H stores for P, and sends Y* to P.

• P computes K_P = (Y*/h(ID_H)^{pw_P})^x and sets tk_P = H_1(ID_P, ID_H, X*, Y*, pw_P, K_P).

• H computes K_H = (X*/h(ID_P)^{pw_H})^y and sets tk_P = H_1(ID_P, ID_H, X*, Y*, pw_H, K_H). The two sides derive the same treatment key if and only if pw_P = pw_H.

• P makes an appointment with H.

• H appoints a set of doctors {D_i} (i ∈ I) for P, where I denotes the set of appointed doctors' indexes.

• H sends the appointment information (protected under tk_P) to P, and sends tk_P to {D_i} (i ∈ I).

• P decrypts and parses the appointment information to get ID_{D_i} for i ∈ I, the valid period TimePeriod_P, and some auxiliary information Aux_P.
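The following toy Python sketch (our illustration only) runs this exchange in a small multiplicative group modulo a prime, with SHA-256 standing in for h and H_1; the actual scheme operates in a pairing-friendly group at the chosen security level, so this is for intuition, not deployment.

```python
# Toy version of the Appointment key agreement (a SPAKE-style PAKE).
# Illustrative parameters only: the real scheme uses a pairing-friendly
# group, not this small Mersenne prime.
import hashlib
import secrets

p = 2 ** 127 - 1          # toy prime modulus, NOT for production
g = 3                     # toy generator


def h(ident: str) -> int:
    # Stand-in for h : {0,1}* -> G
    return pow(g, int.from_bytes(hashlib.sha256(ident.encode()).digest(), "big") % p, p)


def H1(*parts) -> bytes:
    # Stand-in for H_1, used here to derive the treatment key
    return hashlib.sha256("|".join(map(str, parts)).encode()).digest()


ID_P, ID_H, pw = "patient-42", "hospital", "correct horse battery"
pw_int = int.from_bytes(hashlib.sha256(pw.encode()).digest(), "big")

# Patient side: X* = g^x * h(ID_P)^pw
x = secrets.randbelow(p - 1) + 1
X_star = pow(g, x, p) * pow(h(ID_P), pw_int, p) % p

# Hospital side: Y* = g^y * h(ID_H)^pw (using its stored copy of pw)
y = secrets.randbelow(p - 1) + 1
Y_star = pow(g, y, p) * pow(h(ID_H), pw_int, p) % p

# Each side unblinds the other's message and derives K = g^{xy}
K_P = pow(Y_star * pow(pow(h(ID_H), pw_int, p), -1, p) % p, x, p)
K_H = pow(X_star * pow(pow(h(ID_P), pw_int, p), -1, p) % p, y, p)

tk_P = H1(ID_P, ID_H, X_star, Y_star, pw, K_P)
tk_H = H1(ID_P, ID_H, X_star, Y_star, pw, K_H)
assert tk_P == tk_H  # keys agree because the passwords match
print("shared treatment key:", tk_P.hex())
```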
**Store.** There are two cases in this algorithm: one involves a single doctor, denoted by D_1 without loss of generality; the other involves multiple doctors.

_Case 1: There is only a single doctor D_1_ (a sketch of the client-side computations is given after Fig. 3).

• P computes a warrant w_P to delegate D_1 to generate EHRs as:

  wa_P = ID_P || ID_{D_1} || TimePeriod_P || Aux_P,
  w_P = α_P · H(wa_P).

• D_1 generates an EHR M for P.

• D_1 encrypts M as C = E(tk_P, M || wa_P || w_P), where E() is a secure symmetric encryption algorithm, e.g., AES.

• Based on the current time t, D_1 extracts the hash value of the latest block attached to the blockchain. This hash value is denoted by Bhash_t.

• D_1 creates a transaction Tx_{D_1}, shown in Fig. 3, in which she/he transfers the service charge to the cloud storage S's account and sets the data value of the transaction to

  Bhash_t || h(ID_P) || h(C || wa_P || w_P).

• D_1 sends (Bhash_t, C, wa_P, w_P) to S.

• S verifies that the service charge has been received, checks the validity of TimePeriod_P and Bhash_t, and then checks the following equation:

  e(w_P, g) = e(H(wa_P), Q_P).

If the check passes, S accepts (C, wa_P, w_P).
Fig. 3: Transaction on the Ethereum blockchain of single doctor case
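Below is a minimal Python sketch of D_1's client-side computations in Case 1, with SHA-256 standing in for h and AES-GCM (from the `cryptography` package) standing in for the abstract E(); both substitutions are our assumptions, as is every placeholder value.

```python
# Sketch of D_1's Store computations: C = E(tk, M || wa || w) and the
# transaction data value Bhash_t || h(ID_P) || h(C || wa || w).
# AES-GCM and SHA-256 are stand-ins for the paper's abstract E() and h.
import hashlib
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography


def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()


tk_P = os.urandom(32)                       # treatment key from Appointment
ID_P = b"patient-42"
wa_P = b"ID_P||ID_D1||TimePeriod_P||Aux_P"  # warrant data (placeholder)
w_P = b"<warrant signature over wa_P>"      # alpha_P * H(wa_P) in the scheme
M = b"EHR: diagnosis and prescription ..."

# C = E(tk_P, M || wa_P || w_P); the AES-GCM nonce is prepended.
nonce = os.urandom(12)
C = nonce + AESGCM(tk_P).encrypt(nonce, M + wa_P + w_P, None)

# Data value for the Ethereum transaction.
Bhash_t = bytes.fromhex("00" * 32)          # hash of the latest block (placeholder)
data_value = Bhash_t + h(ID_P) + h(C + wa_P + w_P)
print("data value:", data_value.hex())
```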
_Case 2: There is a set of doctors {D_i} (i ∈ I). Multiple doctors generate the EHRs for P in turn. Here, we assume that D_1 is the first doctor to generate EHRs and D_{|I|} is the last one_ (a toy sketch of this chaining is given after Fig. 6).

• For each doctor D_i (i ∈ I), P computes a warrant w_{P,i} to delegate D_i to generate EHRs as:

  wa_{P,i} = ID_P || ID_{D_i} || TimePeriod_{P,i} || Aux_{P,i},
  w_{P,i} = α_P · H(wa_{P,i}).

• P sends the warrant (wa_{P,i}, w_{P,i}) to D_i, for i = 1, 2, ..., |I|.

For the first doctor D_1, she/he performs as follows:

• D_1 generates an EHR M_1 for P.

• D_1 encrypts M_1 as C_1 = E(tk_P, M_1 || wa_{P,1} || w_{P,1}).

• Based on the current time t_1, D_1 extracts the hash value of the latest block attached to the blockchain, denoted by Bhash_{t_1}.

• D_1 creates a transaction Tx_{D_1}, shown in Fig. 4, in which she/he transfers 0 service charge to the next doctor D_2's account and sets the data value to

  Bhash_{t_1} || h(ID_P) || h(C_1 || wa_{P,1} || w_{P,1}).

• D_1 sends (Bhash_{t_1}, C_1, wa_{P,1}, w_{P,1}) to D_2.
Fig. 4: Transaction on the Ethereum blockchain of multi-doctor case: the transaction is conducted by the first doctor
For each doctor D_i, i = 2, 3, ..., |I| − 1, she/he performs as follows:

• D_i verifies the validity of (Bhash_{t_{i−1}}, C_{i−1}, wa_{P,i−1}, w_{P,i−1}) received from D_{i−1} by checking the following equation:

  e(w_{P,i−1}, g) = e(H(wa_{P,i−1}), Q_P).

• D_i decrypts C_{i−1} to obtain {M_1, ..., M_{i−1}}.

• D_i generates an EHR M_i for P.

• D_i encrypts M_i as C_i = E(tk_P, M_1 || ··· || M_i || wa_{P,i} || w_{P,i}).

• Based on the current time t_i, D_i extracts the hash value of the latest block attached to the blockchain, denoted by Bhash_{t_i}.

• D_i creates a transaction Tx_{D_i}, shown in Fig. 5, in which she/he transfers 0 service charge to the next doctor D_{i+1}'s account and sets the data value to

  Bhash_{t_i} || h(ID_P) || h(C_i || wa_{P,i} || w_{P,i}).

• D_i sends (Bhash_{t_i}, C_i, wa_{P,i}, w_{P,i}) to D_{i+1}.
Fig. 5: Transaction on the Ethereum blockchain of multi-doctor case: the transaction is conducted by the ith doctor
For the last doctor D_{|I|}, she/he performs as follows:

• D_{|I|} verifies the validity of (Bhash_{t_{|I|−1}}, C_{|I|−1}, wa_{P,|I|−1}, w_{P,|I|−1}) received from D_{|I|−1} by checking the following equation:

  e(w_{P,|I|−1}, g) = e(H(wa_{P,|I|−1}), Q_P).

• D_{|I|} decrypts C_{|I|−1} to obtain {M_1, ..., M_{|I|−1}}.

• D_{|I|} generates an EHR M_{|I|} for P.

• D_{|I|} encrypts M_{|I|} as C_{|I|} = E(tk_P, M_1 || ··· || M_{|I|} || wa_{P,|I|} || w_{P,|I|}).

• Based on the current time t_{|I|}, D_{|I|} extracts the hash value of the latest block attached to the blockchain, denoted by Bhash_{t_{|I|}}.

• D_{|I|} creates a transaction Tx_{D_{|I|}}, shown in Fig. 6, in which she/he transfers the service charge to the cloud storage S's account and sets the data value to

  Bhash_{t_{|I|}} || h(ID_P) || h(C_{|I|} || wa_{P,|I|} || w_{P,|I|}).

• D_{|I|} sends (Bhash_{t_{|I|}}, C_{|I|}, wa_{P,|I|}, w_{P,|I|}) to S.

• S verifies that the service charge has been received, then checks the validity of TimePeriod_P and Bhash_{t_{|I|}}. Finally, S checks the equation

  e(w_{P,|I|}, g) = e(H(wa_{P,|I|}), Q_P).

If the check passes, S accepts (C_{|I|}, wa_{P,|I|}, w_{P,|I|}).
Fig. 6: Transaction on the Ethereum blockchain of multi-doctor case: the transaction is conducted by the last doctor
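The chaining across doctors in Case 2 can be pictured with the toy Python sketch below (our illustration; hashing stands in for the warrant, encryption, and transaction machinery), showing how each doctor's data value commits to all EHRs accumulated so far:

```python
# Toy model of the multi-doctor chaining: doctor i re-encrypts all EHRs
# M_1..M_i and anchors h(C_i || wa_i || w_i) in its own transaction, so
# each data value commits to everything produced so far.
import hashlib


def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()


def fake_encrypt(key: bytes, plaintext: bytes) -> bytes:
    # Stand-in for E(tk_P, .); a real deployment uses AES (see Case 1).
    return h(key + plaintext) + plaintext


tk_P = b"treatment-key"
ehrs = []                        # M_1 .. M_i accumulated across doctors
data_values = []                 # one per doctor/transaction
for i, M_i in enumerate([b"EHR by D1", b"EHR by D2", b"EHR by D3"], start=1):
    ehrs.append(M_i)
    wa_i = f"ID_P||ID_D{i}||TimePeriod||Aux".encode()
    w_i = b"<warrant signature>"           # alpha_P * H(wa_i) in the scheme
    C_i = fake_encrypt(tk_P, b"".join(ehrs) + wa_i + w_i)
    Bhash = bytes.fromhex("11" * 32)       # latest block hash (placeholder)
    data_values.append(Bhash + h(b"ID_P") + h(C_i + wa_i + w_i))

for i, dv in enumerate(data_values, start=1):
    print(f"Tx(D{i}) data value: {dv.hex()[:32]}...")
```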
**Audit.** Given the EHR (ID_P, C, wa_P, w_P), the auditor A is able to check its correctness and timeliness as follows (a sketch of these checks is given after the list):

• Parse the EHR to obtain (C, wa_P, w_P).

• Extract the corresponding transaction from the Ethereum blockchain and acquire the corresponding account information.

• Verify that the number of created transactions matches the number of recorded EHRs; if the verification fails, reject.

• Verify the validity of w_P; if the verification fails, reject.

• Verify the timeliness of the EHR by checking the transaction time, which is derived from the block; if the verification fails, reject.

• Compute Bhash_t || h(ID_P) || h(C || wa_P || w_P) and check whether it equals the transaction's data value.

If all the above verifications pass, the correctness and timeliness of the EHR are guaranteed.
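Here is a sketch of the auditor's last two checks in Python (SHA-256 again standing in for h; fetching the transaction's data and timestamp from the chain is abstracted into inputs, and the pairing-based warrant check is omitted):

```python
# Sketch of the auditor's final checks: recompute the data value and test
# the timeliness window. The transaction's data and time are assumed to
# have been fetched from the blockchain already.
import hashlib


def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()


def audit(ID_P: bytes, C: bytes, wa_P: bytes, w_P: bytes,
          tx_data: bytes, tx_time: int, period: tuple) -> bool:
    Bhash_t = tx_data[:32]   # leading field of the recorded data value
    expected = Bhash_t + h(ID_P) + h(C + wa_P + w_P)
    if tx_data != expected:
        return False         # contents were tampered with
    start, end = period
    if not (start <= tx_time <= end):
        return False         # timeliness check fails
    return True              # (warrant check e(w,g)=e(H(wa),Q) omitted here)


# Example: a consistent record passes, a modified ciphertext does not.
ID_P, C, wa, w = b"patient-42", b"ciphertext", b"warrant-data", b"warrant-sig"
tx = bytes.fromhex("00" * 32) + h(ID_P) + h(C + wa + w)
print(audit(ID_P, C, wa, w, tx, 1_700_000_000,
            (1_699_999_000, 1_700_001_000)))   # True
print(audit(ID_P, b"tampered", wa, w, tx, 1_700_000_000,
            (1_699_999_000, 1_700_001_000)))   # False
```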
**6. Security analysis**
_6.1. TP-EHR is secure against EHR forgery attacks_
For the single-doctor case, TP-EHR resists forgery attacks performed by any adversary. If an adversary forges EHRs, he cannot modify the corresponding transaction in the blockchain. As such, when the auditor checks the correctness of the outsourced EHRs, the forged EHRs cannot pass the check, since the blockchain ensures that recorded transactions cannot be modified.

For the multi-doctor case, we show that TP-EHR resists EHR forgery attacks in two respects.

First, the malicious storage server cannot forge outsourced EHRs. In TP-EHR, the EHRs are encrypted before outsourcing. Due to the security of the encryption algorithm, the malicious storage server cannot forge valid EHRs without valid encryption/decryption keys.

Second, a semi-trusted doctor cannot forge outsourced EHRs, even if the EHRs were generated by the doctor himself. A semi-trusted doctor attempting to forge EHRs can mount two attacks:

1. The doctor generates EHRs and outsources them to the cloud storage but convinces the latter that the EHRs were generated by other doctors.

2. The doctor colludes with the cloud server to replace existing EHRs with new ones.

Regarding attack 1: although TP-EHR does not require patients to authenticate their EHRs before outsourcing, patients are required to authenticate their doctors before diagnosing; a patient generates a warrant to delegate a doctor, and the warrant includes the patient identity, the doctor identity, the valid period, and other auxiliary information. The warrant is constructed on a secure signature scheme [8] and is thereby existentially unforgeable. Therefore, it is computationally infeasible for the doctor to perform attack 1.

Regarding attack 2: each part of the EHRs generated by one doctor is integrated into a transaction in Ethereum. When a doctor attempts to replace existing EHRs with new ones generated by him, the only thing he can do is fork the Ethereum blockchain and make the branch containing the transaction of the newly generated EHRs be accepted by a majority of nodes. Therefore, security against EHR forgery at this point rests on the security of the underlying blockchain.
_6.2. TP-EHR is secure against EHR modification attacks_
As discussed above, the outsourced EHRs in TP-EHR cannot be forged by adversaries. We further need to consider EHR modification attacks, in which adversaries substitute other EHRs for target ones so as to cover up the target EHRs.

In TP-EHR, for the single-doctor case, we utilize two security mechanisms to resist EHR modification attacks. The first is the secure delegation mechanism, where the patient first generates a warrant to delegate the doctor. The warrant is built on a secure signature algorithm [8] and cannot be forged. The second is the blockchain-based tamper-resistance mechanism, where the EHRs generated by the doctor as well as the corresponding warrant are integrated into a transaction in the Ethereum blockchain. As such, if the adversary cannot break the security of the Ethereum blockchain, he cannot break the security of TP-EHR by performing EHR modification attacks.

The security of the multi-doctor case extends from the single-doctor one, where TP-EHR additionally needs to ensure that the chronological order in which multiple doctors generate EHRs cannot be modified. This essentially corresponds to the timeliness of outsourced EHRs, which we elaborate on in Section 6.4.
_6.3. TP-EHR is secure against impersonation attacks_
In TP-EHR, we consider impersonation attacks performed by external adversaries, in which an adversary extracts the treatment key by performing password guessing attacks. Once the adversary succeeds, he can impersonate the victim to make an appointment. As a consequence, the adversary can further perform various attacks, such as Distributed Denial of Service (DDoS), to corrupt TP-EHR.

TP-EHR resists impersonation attacks due to the security of the treatment key request, which is based on a password-based authentication mechanism that resists password guessing attacks. Specifically, in the Appointment algorithm, if the password used by the patient is the same as the one stored in the hospital, the patient and the hospital jointly execute a key agreement protocol and a treatment key is shared between them; otherwise, the patient obtains a random key and cannot perform the subsequent algorithms [1, 24]. As such, TP-EHR resists password guessing attacks and is thereby secure against impersonation attacks.
_6.4. TP-EHR guarantees the timeliness of EHRs_
The timeliness of EHRs in TP-EHR is twofold.

First, the timeliness of EHRs is reflected in the corresponding transaction time in the blockchain. Specifically, in TP-EHR, each EHR is related to one transaction in the blockchain. When the transaction is recorded into the blockchain, anyone is able to efficiently extract the time when the EHR was generated from the transaction time.

Second, for a patient, the timeliness of her/his EHRs from multiple doctors is also reflected in their chronological order: with the aid of the blockchain, these EHRs essentially form an authentication structure, shown in Fig. 7, where a gray line denotes that an EHR is recorded into the corresponding block, and a dashed orange line denotes that the prior EHR is integrated into the later one. In fact, if a patient's EHRs are generated by multiple doctors, the outsourced EHRs reflect the identities of the doctors in chronological order.
Fig. 7: Block-aided authenticated EHR chain
According to the above analysis, the timeliness of outsourced EHRs in TP-EHR rests on the following observation about the underlying blockchain: the outsourced EHRs are as hard to fork as the blockchain itself, so an adversary without a large fraction of the network's mining hashrate cannot fork Ethereum and thus cannot break the timeliness of outsourced EHRs in TP-EHR.
_6.5. On the necessity of blockchain in TP-EHR_
Without the blockchain, TP-EHR would be vulnerable to undetectable EHR forgery and removal attacks.

In particular, if a doctor attempts to forge EHRs that have been outsourced to the cloud server, to conceal his mistake in a medical malpractice, he can incentivize the cloud server to collude with him, and the cloud server can arbitrarily forge and remove existing EHRs from its storage. Note that in reality, for efficiency reasons, patients are not required to sign the EHRs generated by doctors. In fact, the EHR forgery and removal attacks can be performed without detection even if the EHRs are signed by the patients.

To ensure the security of outsourced EHRs, most existing schemes assume that the cloud server does not collude with the doctors to forge or remove existing EHRs [29, 17, 23]. Under this assumption, these schemes rely on an authentication mechanism between the cloud server and the doctors, which protects outsourced EHRs from illegal modification.

In TP-EHR, each EHR generated by a doctor is integrated into a transaction of the underlying blockchain. As long as the blockchain remains tamper-resistant to adversaries, the correctness and integrity of outsourced EHRs in TP-EHR are ensured. This does not introduce any additional security mechanisms, strong assumptions, or trusted entities. Therefore, the blockchain technology plays a key role in ensuring the security of TP-EHR.
**7. Performance evaluation**
We evaluate the performance of TP-EHR in terms of communication and computational overhead. We conduct the experiments on a computer running Windows 10, with an Intel Core i5 CPU and 8 GB of DDR3 RAM. We use the C language and the MIRACL library to implement TP-EHR. The security level is set to 80 bits.
_7.1. Communication overhead_
In the experiment, we assume the appointment information is a 512-bit string. We do not show the communication between the doctors and the cloud server, since it depends on the size of the patients' EHRs.

For the hospital, the communication costs include two parts: one is to interact with patients for appointments, and the other is to designate doctors.

For the patient, her/his communication costs include two parts: one is to make an appointment with the hospital, and the other is to delegate doctors.

Fig. 8 and Fig. 9 show the communication overhead on the patient, hospital, and doctor, which indicates that entities in TP-EHR do not bear heavy communication costs.

TP-EHR is built on the Ethereum blockchain. To date, a light client protocol for Ethereum has been released, which enables users to conduct transactions without maintaining and storing the Ethereum blockchain. Therefore, TP-EHR is efficient in terms of communication overhead.
Fig. 8: Communication overhead on the patient and hospital
Fig. 9: Communication overhead on the doctor
_7.2. Computational overhead_
In the experiment, we do not analyze the computational costs of generating EHRs for doctors, since these costs are subject to multiple factors, such as the type and size of the EHRs.

We first estimate the computational costs in terms of basic cryptographic operations; the notation is shown in Table 1. We then show the computation costs on the hospital, patient, doctor, and cloud server in Table 2, where n denotes the number of doctors (a small tally script based on Table 2 follows the tables).
Table 1: Notation of operations

Exp_G: exponentiation in G
Mul_G: group operation in G
Hash_Zp: hashing a value into Z_p
Pair_GT: computing the pairing e(χ, ς), where χ, ς ∈ G
CTrans: conducting a transaction in Ethereum
Enc: encrypting a message with a symmetric encryption algorithm
Hash_G: hashing a value into G

Table 2: Computation costs

Hospital: 2 Exp_G + 3 Mul_G + Hash_Zp + Hash_G
Patient: (2 + n) · Exp_G + 3 Mul_G + (n + 1) · Hash_G + Hash_Zp
Doctor (single-doctor case): Enc + CTrans + Hash_Zp
Doctor (multi-doctor case): 2 Pair_GT + Hash_G + Enc + CTrans + Hash_Zp
Cloud server: 2 Pair_GT + Hash_G
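As a small worked example of Table 2, the snippet below tallies the symbolic operation counts for a given number of doctors n (our own bookkeeping of the published formulas; actual per-operation timings would come from a benchmark such as Fig. 10):

```python
# Tally the symbolic operation counts of Table 2 for n doctors.
# Purely illustrative bookkeeping of the published formulas.
from collections import Counter


def patient_cost(n: int) -> Counter:
    return Counter({"Exp_G": 2 + n, "Mul_G": 3, "Hash_G": n + 1, "Hash_Zp": 1})


def doctor_cost(multi: bool) -> Counter:
    ops = Counter({"Enc": 1, "CTrans": 1, "Hash_Zp": 1})
    if multi:  # the multi-doctor case adds the pairing-based warrant check
        ops += Counter({"Pair_GT": 2, "Hash_G": 1})
    return ops


print("patient, n=3:", dict(patient_cost(3)))
print("doctor, multi-doctor case:", dict(doctor_cost(True)))
```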
We also show the computation delay on the hospital, patient, doctor, and cloud server in Fig. 10.

As the experimental results show, in TP-EHR it takes less than 1 second for a doctor equipped with a computer to outsource EHRs to the cloud storage.

In practice, a transaction in Ethereum takes about 3 minutes on average to be confirmed. Consequently, for a patient in the multi-doctor case, the time interval between doctors D_i and D_{i+1} generating EHRs need only be about 3 minutes. This requirement is practical, since in reality the time interval between two successive doctors generating EHRs for the same patient is much larger than 3 minutes.
Fig. 10: Computational delay
_7.3. Monetary costs to store EHRs_
In TP-EHR, the main monetary costs of storing EHRs are caused by conducting transactions in Ethereum. Specifically, in the Store algorithm of TP-EHR, once a doctor generates an EHR for the patient, she/he needs to create a new transaction to record the EHR and protect it from illegal modification. Recording a transaction in Ethereum requires a transaction fee from the doctor. At the time of writing (Oct. 2018), conducting a transaction in Ethereum costs about 8 US cents on average, which is acceptable in practice.
_7.4. On the scalability of TP-EHR_
TP-EHR is constructed on the Ethereum blockchain. Since the Ethereum blockchain is publicly verifiable and resistant to modification, the security of TP-EHR is ensured. Recently, many new blockchains based on different mechanisms have been proposed, for example Ouroboros [19], a proof-of-stake-based blockchain. Public verifiability and resistance against modification are two fundamental properties of secure blockchain systems; therefore, TP-EHR can be constructed on any existing secure blockchain system. We also stress that the security of blockchain systems is related to the number of miners (stakeholders in proof-of-stake-based blockchains), and one of the most important reasons TP-EHR is built on Ethereum is that Ethereum is a widely used blockchain system.
**8. Conclusion and future work**
In this paper, we have proposed TP-EHR, a blockchain-based secure eHealth system that ensures the confidentiality of outsourced EHRs and prevents outsourced EHRs from being illegally modified. The security of TP-EHR is guaranteed even if a malicious doctor who generates and outsources EHRs colludes with the cloud server to tamper with them. TP-EHR is constructed on the Ethereum blockchain, where the EHRs generated by one doctor in a treatment are integrated into a transaction of Ethereum, such that the correctness and integrity of the EHRs rest on the security of Ethereum and the time when the EHRs were generated can be efficiently extracted. The security analysis demonstrates that TP-EHR is secure against various attacks on actual cloud-assisted eHealth systems. We have also conducted a comprehensive performance analysis, which shows that TP-EHR is practical and efficient in terms of communication and computation overhead.

For future work, we will investigate how to utilize blockchain techniques to further enhance cloud-assisted eHealth systems in terms of security, performance, and functionality.
**Acknowledgements**
This work is supported by the National Key Research and Development Program of China
(2016QY061205), Science and technology service industry project of Sichuan provincial science and Technology Department (2017GFW0002, 2017GFW0088), Achievements transformation project of Sichuan provincial science and Technology Department (2017CC0006), Provincial
Academy of science and technology project of Sichuan provincial science and Technology Department (2017JZ0015), the National Nature Science Foundation of China (61672437, 61702428)
and the Sichuan Science and Technology Program (2018GZ0185, 2018GZ0085, 2017GZ0159).
**References**
[1] M. Abdalla and D. Pointcheval, “Simple password-based encrypted key exchange protocols,” in Proc. CT-RSA.
Springer, 2005, pp. 191–208.
[2] U. R. Acharya, S. Bhat, J. E. W. Koh, S. V. Bhandary, and H. Adeli, “A novel algorithm to detect glaucoma risk
using texton and local configuration pattern features extracted from fundus images,” Comp. in Bio. and Med.,
vol. 88, pp. 72–83, 2017.
[3] F. Armknecht, J. Bohli, G. O. Karame, Z. Liu, and C. A. Reuter, “Outsourced proofs of retrievability,” in Proc.
_of CCS._ ACM, 2014, pp. 831–843.
[4] F. Armknecht, J. Bohli, G. O. Karame, and F. Youssef, “Transparent data deduplication in the cloud,” in Proc.
_of CCS._ ACM, 2014, pp. 831–843.
[5] A. Azaria, A. Ekblaw, T. Vieira, and A. Lippman, “Medrec: Using blockchain for medical data access and
permission management,” in 2016 IEEE 2nd International Conference on Open and Big Data (OBD), Aug
2016, pp. 25–30.
[6] M. Bellare, J. Kilian, and P. Rogaway, “The security of the cipher block chaining message authentication code,”
_Journal of Computer and System Sciences, vol. 61, no. 3, pp. 362–399, 2000._
[7] D. Boneh and M. Franklin, “Identity-based encryption from the weil pairing,” in Proc. CRYPTO. Springer,
2001, pp. 213–229.
[8] D. Boneh, B. Lynn, and H. Shacham, “Short signatures from the weil pairing,” in Proc. of ASIACRYPT.
Springer, 2001, pp. 514–532.
[9] Z. Cai, H. Yan, P. Li, Z. Huang, and C. zhi Gao, “Towards secure and flexible ehr sharing in mobile health cloud
under static assumptions,” Cluster Computing, vol. 20, no. 3, pp. 2415–2422, 2017.
[10] S. Caíno-Lores, A. G. Fernández, F. G. Carballeira, and J. Carretero, “Efficient design assessment in the railway
electric infrastructure domain using cloud computing,” Integrated Computer-Aided Engineering, vol. 24, no. 1,
pp. 57–72, 2017.
[11] F. Caraffini, F. Neri, and L. Picinali, “An analysis on separability for memetic computing automatic design,”
_Information Sciences, vol. 265, pp. 1–22, 2014._
[12] V. Casola, A. Castiglione, K. R. Choo, and C. Esposito, “Healthcare-related data in the cloud: Challenges and
opportunities,” IEEE Cloud Computing, vol. 3, no. 6, pp. 10–14, 2016.
[13] Y. Cheng, X. Fu, X. Du, B. Luo, and M. Guizani, “A lightweight live memory forensic approach based on
hardware virtualization,” Information Sciences, vol. 379, pp. 23 – 41, 2017.
[14] A. C. Ekblaw, “Medrec: blockchain for medical data access, permission management and trend analysis,” Master’s thesis, Massachusetts Institute of Technology, 2017.
-----
[15] L. Fan, W. Buchanan, C. Thummler, O. Lo, A. Khedim, O. Uthmani, A. Lawson, and D. Bell, “Dacar platform
for ehealth services cloud,” in Proc. CLOUD. IEEE, 2011, pp. 219–226.
[16] J. L. Fernández-Alemá, I. C. Señor, P. A. O. Lozoya, and A. Toval, “Security and privacy in electronic health
records: A systematic literature review,” Journal of biomedical informatics, vol. 46, no. 3, pp. 541–562, 2013.
[17] L. Guo, C. Zhang, J. Sun, and Y. Fang, “Paas: A privacy-preserving attribute-based authentication system for
ehealth networks,” in Proc. of ICDCS. IEEE, 2012, pp. 224–233.
[18] X. Hei, X. Du, S. Lin, and I. Lee, “Pipac: Patient infusion pattern based access control scheme for wireless
insulin pump system,” in 2013 Proceedings IEEE INFOCOM, 2013, pp. 3030–3038.
[19] A. Kiayias, A. Russell, B. David, and R. Oliynykov, “Ouroboros: A provably secure proof-of-stake blockchain
protocol,” in Proc. CRYPTO. Springer, 2017, pp. 357–388.
[20] M. Korytkowski, “Novel visual information indexing in relational databases,” Integrated Computer-Aided Engi_neering, vol. 24, no. 2, pp. 119–128, 2017._
[21] W. Lee and C. Lee, “A cryptographic key management solution for hipaa privacy/security regulations,” IEEE
_Transactions on Information Technology in Biomedicine, vol. 12, no. 1, pp. 34–41, 2008._
[22] J. Li, J. Wu, and L. Chen, “Block-secure: Blockchain based scheme for secure p2p cloud storage,” Information
_Sciences, vol. 465, pp. 219 – 231, 2018._
[23] H. Lin, J. Shao, C. Zhang, and Y. Fang, “Cam: Cloud-assisted privacy preserving mobile health monitoring,”
_IEEE Transactions on Information Forensics and Security, vol. 8, no. 6, pp. 985–997, 2013._
[24] J. Liu, N. Asokan, and B. Pinkas, “Secure deduplication of encrypted data without additional independent
servers,” in Proc. CCS. ACM, 2015, pp. 874–885.
[25] E. Luo, Q. Liu, and G. Wang, “Hierarchical multi-authority and attribute-based encryption friend discovery
scheme in mobile social networks,” IEEE Communications Letters, vol. 20, no. 9, pp. 1772–1775, 2016.
[26] S. Nakamoto, “Bitcoin: A peer-to-peer electronic cash system.” https://bitcoin.org/bitcoin.pdf.
[27] K. Schwab, The Fourth Industrial Revolution. Crown Business, 2017.
[28] M. Smolik and V. Skala, “Large scattered data interpolation with radial basis functions and space subdivision,”
_Integrated Computer-Aided Engineering, vol. 25, no. 1, pp. 49–62, 2018._
[29] J. Sun, X. Zhu, C. Zhang, and Y. Fang, “Hpcc: Cryptography based secure ehr system for patient privacy and
emergency healthcare,” in Proc. of ICDCS. IEEE, 2011, pp. 373–382.
[30] F. Tschorsch and B. Scheuermann, “Bitcoin and beyond: A technical survey on decentralized digital currencies,”
_IEEE Communications Survey & Tutorials, vol. 18, no. 3, pp. 2084–2123, 2016._
[31] S. Underwood, “Blockchain beyond bitcoin,” Commun. ACM, vol. 59, no. 11, pp. 15–17, Oct. 2016.
[32] C. Wang, S. S. Chow, Q. Wang, K. Ren, and W. Lou, “Privacy-preserving public auditing for secure cloud
storage,” IEEE Transactions on Computers, vol. 62, no. 2, pp. 362–375, 2013.
[33] K. Wang, Y. Shao, L. Shu, C. Zhu, and Y. Zhang, “Mobile big data fault-tolerant processing for ehealth networks,” IEEE Network, vol. 30, no. 1, pp. 36–42, 2016.
[34] Q. Wang, C. Wang, J. Li, K. Ren, and W. Lou, “Enabling public verifiability and data dynamics for storage
security in cloud computing,” in Proc. of ESORICS. Springer, 2009, pp. 355–370.
[35] Y. Wang, Q. Wu, B. Qin, W. Shi, R. H. Deng, and J. Hu, “Identity-based data outsourcing with comprehensive
auditing in clouds,” IEEE Transactions on Information Forensics and Security, vol. 12, no. 4, pp. 940–952, 2017.
[36] G. Wood, “Ethereum: A secure decentralised generalised transaction ledger,” Ethereum Project Yellow Paper,
vol. 151, 2014.
[37] Q. Xia, E. B. Sifah, K. O. Asamoah, J. Gao, X. Du, and M. Guizani, “Medshare: Trust-less medical data sharing
among cloud service providers via blockchain,” IEEE Access, vol. 5, pp. 14 757–14 767, 2017.
[38] L. Yang, Z. Han, Z. Huang, and J. Ma, “A remotely keyed file encryption scheme under mobile cloud computing,” Journal of Network and Computer Applications, vol. 106, pp. 90–99, 2018.
[39] Y. Zhang, C. Xu, H. Li, K. Yang, J. Zhou, and X. Lin, “HealthDep: An efficient and secure deduplication scheme for cloud-assisted ehealth systems,” IEEE Transactions on Industrial Informatics, to appear, doi:
10.1109/TII.2018.2832251.
[40] Y. Zhang, C. Xu, X. Liang, H. Li, Y. Mu, and X. Zhang, “Efficient public verification of data integrity for
cloud storage systems from indistinguishability obfuscation,” IEEE Transactions on Information Forensics and
_Security, vol. 12, no. 3, pp. 676–688, 2017._
-----
[41] Y. Zhang, R. Deng, X. Liu, and D. Zheng, “Outsourcing service fair payment based on blockchain and its
applications in cloud computing,” IEEE Transactions on Services Computing, 2018, to appear.
[42] Y. Zhang, R. H. Deng, X. Liu, and D. Zheng, “Blockchain based efficient and robust fair payment for outsourcing
services in cloud computing,” Information Sciences, vol. 462, pp. 262 – 277, 2018.
-----
| 15,366
|
{
"disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.1016/J.INS.2019.02.038?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1016/J.INS.2019.02.038, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "CCBYNCND",
"status": "GREEN",
"url": "https://nottingham-repository.worktribe.com/preview/4236514/TP-EHR-elsarticle.pdf"
}
| 2,019
|
[
"JournalArticle"
] | true
| 2019-02-14T00:00:00
|
[
{
"paperId": "c60e320dbf11b032ddc5702be9968118757b87e8",
"title": "A study on rotation invariance in differential evolution"
},
{
"paperId": "009636e875cdb02b4aa3acb654dd107363310b9c",
"title": "Identity-based key-exposure resilient cloud storage public auditing scheme from lattices"
},
{
"paperId": "83efae6f1113b801109a6e6ae19fa8a8d639628a",
"title": "Anonymous Communication via Anonymous Identity-Based Encryption and Its Application in IoT"
},
{
"paperId": "d15d79ed3e9144da66721350f1fb6720ce5822ae",
"title": "Block-secure: Blockchain based scheme for secure P2P cloud storage"
},
{
"paperId": "db8856896009135f0ad9866cca3ac7f1cc4f2af8",
"title": "Blockchain based efficient and robust fair payment for outsourcing services in cloud computing"
},
{
"paperId": "091c6a805f1f8ab3a3f5de6520897427aa238bd3",
"title": "Outsourcing Service Fair Payment Based on Blockchain and Its Applications in Cloud Computing"
},
{
"paperId": "9d58ae4e232c3c1f8a2947ef79978ba7dc4225a1",
"title": "M-SSE: An Effective Searchable Symmetric Encryption With Enhanced Security for Mobile Devices"
},
{
"paperId": "8979d673054463975685ed4902d61094814ee2bd",
"title": "Secure data uploading scheme for a smart home system"
},
{
"paperId": "5a054a3c6b636d1148a75b7993198a34f3e5ad95",
"title": "A trust-based collaborative filtering algorithm for E-commerce recommendation system"
},
{
"paperId": "c525cd0c9ef9a696bc8889bf321707f06de967fc",
"title": "A Homomorphic Network Coding Signature Scheme for Multiple Sources and its Application in IoT"
},
{
"paperId": "48d9e082953dd4b6744a947e7cae9c6a4e684d2a",
"title": "Centralized Duplicate Removal Video Storage System with Privacy Preservation in IoT"
},
{
"paperId": "5b4428ef1f954d6ca100f580cfe6c18966ac39bb",
"title": "HealthDep: An Efficient and Secure Deduplication Scheme for Cloud-Assisted eHealth Systems"
},
{
"paperId": "58b4dd141e68b8711c617f82244d290ac1667814",
"title": "A remotely keyed file encryption scheme under mobile cloud computing"
},
{
"paperId": "5255885ac0e5be99f6c6458d29fc77edeb661d4c",
"title": "Public audit for operation behavior logs with error locating in cloud storage"
},
{
"paperId": "a3d0a1ef3ea6b94ccfc32c057761e44ff948e44a",
"title": "Diagnosis of attention deficit hyperactivity disorder using imaging and signal processing techniques"
},
{
"paperId": "8b17915ae9c050b5664a535a18a97a93f0fef5a7",
"title": "A novel algorithm to detect glaucoma risk using texton and local configuration pattern features extracted from fundus images"
},
{
"paperId": "44dacdec625e31df66736a385e7001ef33756c5f",
"title": "Ouroboros: A Provably Secure Proof-of-Stake Blockchain Protocol"
},
{
"paperId": "d5ba6148389ffc112800df9af0a2864c0c50e78d",
"title": "Spiking Neural P Systems with Communication on Request"
},
{
"paperId": "49af9119c09b97af977595b011afd8a3f588412d",
"title": "MeDShare: Trust-Less Medical Data Sharing Among Cloud Service Providers via Blockchain"
},
{
"paperId": "14ba3a4b68a85d0ef20f0bb749f1fadc68415dfa",
"title": "Parametric and adaptive encryption of feature-based computer-aided design models for cloud-based collaboration"
},
{
"paperId": "3ab91629168d778473d42e95f6f56e7729153493",
"title": "Novel visual information indexing in relational databases"
},
{
"paperId": "9f27f72719d517833ac26bf189f502d1c16a38d6",
"title": "Identity-Based Data Outsourcing With Comprehensive Auditing in Clouds"
},
{
"paperId": "289c3801319f2813c45d0e6e6b1d927b4686c40c",
"title": "Efficient Public Verification of Data Integrity for Cloud Storage Systems from Indistinguishability Obfuscation"
},
{
"paperId": "1b6412f2e7811395a1292f842a949de6f32e4938",
"title": "Towards secure and flexible EHR sharing in mobile health cloud under static assumptions"
},
{
"paperId": "59b746d543757827efac1c911c1b0e634dd22a61",
"title": "A lightweight live memory forensic approach based on hardware virtualization"
},
{
"paperId": "2f7620c2e21fda47d8549c79046f512c35079faa",
"title": "Mining association rules on Big Data through MapReduce genetic programming"
},
{
"paperId": "c9a7cbc8f90471fa7ac3dfcccbd0643defeb543f",
"title": "Large scattered data interpolation with radial basis functions and space subdivision"
},
{
"paperId": "16eb7fc4052d156ea157f6718b15f863bc89c9d2",
"title": "Efficient design assessment in the railway electric infrastructure domain using cloud computing"
},
{
"paperId": "54c9f9c886490803aedf9b86552c2f32fb3f8085",
"title": "Healthcare-Related Data in the Cloud: Challenges and Opportunities"
},
{
"paperId": "efe573cbfa7f4de4fd31eda183fefa8a7aa80888",
"title": "Blockchain beyond bitcoin"
},
{
"paperId": "d90d90c25a626bf61982a2fc9d8676f326dc96b4",
"title": "Cryptographic Public Verification of Data Integrity for Cloud Storage Systems"
},
{
"paperId": "bd8a307efcffbf57d2e5c3c23577de44d883d865",
"title": "MedRec: Using Blockchain for Medical Data Access and Permission Management"
},
{
"paperId": "67ce62fd545772361f27977a73f250adcec4e079",
"title": "Linear Algebra for Computational Sciences and Engineering"
},
{
"paperId": "93a5ad5172818a341221ccb9db09f7c8f72b3534",
"title": "Hierarchical Multi-Authority and Attribute-Based Encryption Friend Discovery Scheme in Mobile Social Networks"
},
{
"paperId": "8db5d1d7169a1f5391cb184332b95835ae668cf4",
"title": "Bitcoin and Beyond: A Technical Survey on Decentralized Digital Currencies"
},
{
"paperId": "53f6601aaf7f61429b644de2d2d17aa25f662cb0",
"title": "Mobile big data fault-tolerant processing for ehealth networks"
},
{
"paperId": "a48627888bbaa822623a83bdb8bab9b42c338cae",
"title": "SCLPV: Secure Certificateless Public Verification for Cloud-Based Cyber-Physical-Social Systems Against Malicious Auditors"
},
{
"paperId": "5a50a537aa435c7b59ec6c8d3b2eb1fb79eedfeb",
"title": "Computer-Aided Diagnosis of Parkinson’s Disease Using Enhanced Probabilistic Neural Network"
},
{
"paperId": "4a8484bade6f0411a7b6c7e2e83c411b6a12edc8",
"title": "Secure Deduplication of Encrypted Data without Additional Independent Servers"
},
{
"paperId": "e34fe9866ca4bf387bb24db58e9e33f66ce52531",
"title": "Transparent Data Deduplication in the Cloud"
},
{
"paperId": "404eaf897a7c4d7d53ea5d7316ea7873475e90d8",
"title": "Outsourced Proofs of Retrievability"
},
{
"paperId": "fa06a399656562d1a9ee6138401d415c5c8a6d9f",
"title": "Securely Outsourcing Attribute-Based Encryption with Checkability"
},
{
"paperId": "4c93d5b556f711f0905904ac846dc5a5c763ea9b",
"title": "An analysis on separability for Memetic Computing automatic design"
},
{
"paperId": "df063e4d77580acb74d5d6f49de8f0b91452f5fd",
"title": "Security and privacy in electronic health records: A systematic literature review"
},
{
"paperId": "6a261e385180a793354300225cfd876a8db1c16f",
"title": "CAM: Cloud-Assisted Privacy Preserving Mobile Health Monitoring"
},
{
"paperId": "4567482fbb9af350973a708e8e8843850cdfec54",
"title": "PIPAC: Patient infusion pattern based access control scheme for wireless insulin pump system"
},
{
"paperId": "9fe91ae98e6119dd6e0efad3c5b5f2f654bf0107",
"title": "Privacy-Preserving Public Auditing for Secure Cloud Storage"
},
{
"paperId": "51dc0fd8ecc59ab9559dfe32e93e4cc2a2b474b9",
"title": "PAAS: A Privacy-Preserving Attribute-Based Authentication System for eHealth Networks"
},
{
"paperId": "63c9d674dec11c08fbd7e2246cf38b98e0ce7891",
"title": "DACAR Platform for eHealth Services Cloud"
},
{
"paperId": "613b4c81c8bfa52dfe99f0a944128746522e020f",
"title": "HCPP: Cryptography Based Secure EHR System for Patient Privacy and Emergency Healthcare"
},
{
"paperId": "b3d6e545cf259c63b81526624e2a0a47844818aa",
"title": "Enabling Public Verifiability and Data Dynamics for Storage Security in Cloud Computing"
},
{
"paperId": "4a6734b94e193dd1ea0a4d6af281afe14b91a051",
"title": "Transactions papers a routing-driven Elliptic Curve Cryptography based key management scheme for Heterogeneous Sensor Networks"
},
{
"paperId": "75288c260964c2de2ffd3361e7878eaeb44ac7ba",
"title": "Simple Password-Based Encrypted Key Exchange Protocols"
},
{
"paperId": "3c0c82f42172bc1da4acc36b656d12351bf53dae",
"title": "Short Signatures from the Weil Pairing"
},
{
"paperId": "a7a9f305ebc4d5e9d128b0505fed18e62a393f46",
"title": "Identity-Based Encryption from the Weil Pairing"
},
{
"paperId": "7a7a794baaa5567a04e15f760da3d2d3b2b25ef8",
"title": "The Security of the Cipher Block Chaining Message Authentication Code"
},
{
"paperId": "1177ca99331067347a41ddec45707971fa998079",
"title": "MedRec : blockchain for medical data access, permission management and trend analysis"
},
{
"paperId": "553615e4a07799763f9ff122b8de419e39f2d65e",
"title": "SECURE DEDUPLICATION WITH EFFICIENT AND RELIABLE CONVERGENT KEY MANAGEMENT IN CLOUD STORAGE"
},
{
"paperId": "3c50bb6cc3f5417c3325a36ee190e24f0dc87257",
"title": "ETHEREUM: A SECURE DECENTRALISED GENERALISED TRANSACTION LEDGER"
},
{
"paperId": "4e9ec92a90c5d571d2f1d496f8df01f0a8f38596",
"title": "Bitcoin: A Peer-to-Peer Electronic Cash System"
},
{
"paperId": "54841fbde70e3017b362d135a86b17dd9fd185dd",
"title": "A Cryptographic Key Management Solution for HIPAA Privacy/Security Regulations"
},
{
"paperId": "433cb7fd7e9bbb143818afd558a816d50bccf36b",
"title": "An effective key management scheme for heterogeneous sensor networks"
}
] | 15,366
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Engineering",
"source": "s2-fos-model"
},
{
"category": "Environmental Science",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/0098a7d94d5f61a071f083217238feb64947560a
|
[
"Computer Science"
] | 0.93847
|
Advances on Smart Object Management
|
0098a7d94d5f61a071f083217238feb64947560a
|
Journal on spesial topics in mobile networks and applications
|
[
{
"authorId": "1716566",
"name": "K. Pentikousis"
},
{
"authorId": "145196223",
"name": "Ramón Agüero"
},
{
"authorId": "1397447682",
"name": "A. Timm-Giel"
},
{
"authorId": "1782400",
"name": "S. Sargento"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"J spes top mob netw appl",
"Mobile Networks and Applications",
"Mob Netw Appl"
],
"alternate_urls": null,
"id": "6fff9d04-a29e-4e37-87e8-27e06da9055c",
"issn": "1383-469X",
"name": "Journal on spesial topics in mobile networks and applications",
"type": "journal",
"url": "https://link.springer.com/journal/11036"
}
| null |
DOI 10.1007/s11036-014-0493-z
# Advances on Smart Object Management
Kostas Pentikousis & Ramón Agüero &
Andreas Timm-Giel & Susana Sargento
Published online: 9 February 2014
© Springer Science+Business Media New York 2014
1 Special issue introduction
The first part of this issue features four papers that discuss
advanced management techniques for Smart Objects. The so-called Internet of Things (IoT) is one of the cornerstones of the
Future Internet. One illustrative example of the relevance of
IoT in future network development is its growing adoption
within the smart city paradigm, as a means to provide enhanced citizen services. In this sense, basic IoT technology is
no longer at the purely academic research level, but is starting
to be integrated into the fabric of our daily activities.
One of the elements required to support the successful deployment of this type of architecture is having the appropriate management mechanisms in place. The call for papers for
this special issue was a result of a dedicated workshop on the
management of smart objects. The workshop was collocated
with the 4th International Conference on Mobile Networks and
Management (MONAMI 2012), which was organized in close
collaboration with the Technical University of Hamburg in
[September 2012 (see www.mon-ami.org/2012). The four](http://www.mon-ami.org/2012)
papers which were accepted for publication in this special
K. Pentikousis (*)
European Center for Information and Communication Technologies
(EICT), EUREF Campus Haus 13, Torgauer Straße 12-15,
10829 Berlin, Germany
e-mail: [email protected]
R. Agüero
University of Cantabria, Santander, Spain
e-mail: [email protected]
A. Timm-Giel
Hamburg University of Technology, Hamburg, Germany
e-mail: [email protected]
S. Sargento
University of Aveiro, Aveiro, Portugal
e-mail: [email protected]
issue deal with management architecture alternatives, service
development frameworks, security challenges, and the role that
contextual information plays in the Internet of Things. All in all,
they provide a comprehensive outlook on some of the problems
that need to be addressed for this type of deployment. It is
worth highlighting that three of the works exemplify
the need for real deployments: they rely on implementations over
existing technologies to assess the feasibility of their
proposed architectures, frameworks, and techniques.
In the first paper, JaeSeung Song et al. review some of the
architectural choices for M2M networks. They start from the
challenges that need to be addressed and discuss how the
various standardization bodies (for instance, ETSI and
3GPP) are tackling them. They then present their approach
to some of the technical functionalities required to control
and manage M2M networks. Finally, the authors describe a
realization of a subset of the aforementioned techniques over
a real testbed, using the service model proposed by the
SENSEI European project. The authors use three performance indicators to assess the quality of the techniques,
namely stability, scalability, and robustness. The
results, which also form part of the CAMPUS 21
project, show that the proposed scheme can run on relatively
low-memory devices, making it very attractive for real-world IoT deployments.
One key success factor for IoT is the possibility of enabling
fast service creation that is open to the general public, so
that users do not need to be experts to create
their own services. To address these challenges,
Sylvain Cherrier et al. propose, in the second paper of this
special issue, a framework to deploy services over
IoT based on the composition of behaviours. They go beyond
their previous proposal, D-Lite, which was based on Finite
State Transducers, and propose a simpler way of modeling the
interactions between IoT components. The BeC3 architecture
also allows the exploitation of available modules and
components. The paper presents an implementation that assesses the possibilities brought forward by their
proposal.
If there is one particular characteristic of IoT that sets it
apart, it is the requirement to deliver a massive amount of
contextual information. At the time of this writing, the Big
Data paradigm is taking hold of the strategies for future
development at all global industrial players, and it is not
unrealistic to assume that it will become commonplace in
the short term. One of the potential benefits of massive IoT
deployments is precisely the amount
of information they can provide. Clearly,
there must be some tradeoff between the amount of data to
be acquired, processed, and delivered and the cost of doing so. The
third paper proposes a four-layer scheme to alleviate this
problem and to allow applications executed over the
IoT substrate to benefit from the potential information at hand
without a strong impact on the operational lifetime of devices or the required communication overhead. Stefan
Forsström and Theo Kanter use a proof-of-concept prototype
to assess the feasibility of their proposal, which is based on
limiting the information exchange according to its relevance.
Finally, an aspect that must be carefully addressed in order
to promote real deployment of IoT is security. Security is sometimes
taken for granted and does
not receive enough attention early in the system
design process. In the fourth paper of this special issue,
Swaminathan Sankararaman et al. propose a method to systematically place jammers within a particular network deployment. This would provide security without the
burden of adding complex and expensive ciphering schemes. The
paper provides a thorough mathematical formulation of the
problem, while the proposed solution is assessed through
simulation.
2 Guest editor biographies
Kostas Pentikousis is the Head of IT Infrastructure at EICT
GmbH, a public-private partnership which acts as a trusted
third party and an international platform for interdisciplinary
collaborative research. Prior to joining EICT, he was a senior
research engineer at Huawei Technologies in Berlin, Germany
and a standards delegate to IETF. From 2005 to 2009 he was a
senior research scientist with VTT Technical Research Centre
of Finland. He earned his Bachelor’s degree in informatics
(1996) from Aristotle University of Thessaloniki, Greece, and
his Master’s (2000) and doctoral degrees (2004) in computer
science from the State University of New York at Stony
Brook. Dr. Pentikousis conducts research in Internet protocols
and network architecture, with contributions ranging from
system design and implementation to performance evaluation and standardization.
Ramón Agüero received a degree in Telecommunications
Engineering from the University of Cantabria in 2001 and the
PhD in 2008. He is currently an Associate Professor at the
Communications Engineering Department at that university.
He has participated in several collaborative research projects
and his research focuses on future network architectures,
especially regarding the (wireless) access part of the network.
He is also interested in multi-hop (mesh) networks and
device-to-device communications. He has published more
than 100 technical papers in these areas, and he is a regular
TPC member and reviewer for various related conferences and
journals.
[Andreas Timm-Giel (www.tuhh.de/comnets) is full](http://www.tuhh.de/comnets)
professor and head of the Institute of Communication
Networks (ComNets) at Hamburg University of Technology
(TUHH). Furthermore, he is coordinating TUHH’s research
center on Mobile Sensor and Data Networks (SOMSED) and
is deputy head of the School of Electrical Engineering, Computer Science and Mathematics. From 2002 to 2009 he was
with the Communication Networks group of Bremen University as senior researcher and lecturer. He was leading several
industrial, national and EC funded research projects and from
2006 he was additionally directing the interdisciplinary activity “Adaptive Communications” of TZI (Center of Computing
and Communication Technologies). Before, he was with
MediaMobil GmbH and its Joint Venture M2SAT Ltd. for
three years, acting as Technical Project Leader and Manager
Network Operations. He received his PhD (Dr.-Ing) and Master Degree (Dipl.-Ing) from Bremen University in 1999 and
1994 respectively. Here he led a group on Mobile and Satellite Communications and was involved in several EU funded
projects for more than 5 years. His research interests are
mobile and wireless communications, sensor networks and
the Future Internet. Prof. Timm-Giel is author or coauthor of
more than 100 peer-reviewed publications in journals and at
international conferences. He is a frequent reviewer and TPC
member for international conferences and journals and is
Member of IEEE and VDE/ITG. He is speaker of the ITG
group 5.2.1 “System Architectures and Traffic Engineering”
and member of the editorial board of the Elsevier’s International Journal of Electronics and Communications.
[Susana Sargento (http://www.av.it.pt/ssargento) received](http://www.av.it.pt/ssargento)
her Ph.D. in 2003 in Electrical Engineering. She joined the
Department of Computer Science of the University of Porto in
September 2002, and is in the Universidade de Aveiro and the
Instituto de Telecomunicações since February 2004, where
she is leading the Network Architectures and Protocols
[(NAP) group (http://nap.av.it.pt). She is also a Guest Faculty](http://nap.av.it.pt)
of the Department of Electrical and Computer Engineering
from Carnegie Mellon University, USA, since August 2008,
where she performed Faculty Exchange in 2010/2011. She has
been involved in several national and European projects,
taking leaderships of several activities in the projects, such
-----
as the QoS and ad-hoc networks integration activity in the FP6
IST-Daidalos Project. She has recently been involved in several FP7 projects (4WARD, Euro-NF, C-Cast, WIP, Daidalos,
C-Mobile), national projects, and Carnegie Mellon Portugal
research projects (DRIVE-IN with Carnegie Mellon
University). She has been TPC chair and organizer of several
conferences, such as MONAMI’11, NGI’09, IEEE ISCC’07,
NTMS’12, IEEE FEDNET (with IEEE NOMS’12), IEEE
IoT-SoS in IEEE WoWMoM 2013 and ISCC 2014. She
has also been a reviewer of numerous international conferences and journals, such as IEEE Wireless Communications,
IEEE Networks, IEEE Communications. Her main research
interests are in the areas of Next Generation and Future
Networks, more specifically QoS, mobility, self- and cognitive networks. She regularly acts as an Expert for European
Research Programmes.
-----
| 2,478
|
{
"disclaimer": "Notice: The following paper fields have been elided by the publisher: {'references', 'abstract'}. Paper or abstract available at https://api.unpaywall.org/v2/10.1007/s11036-014-0493-z?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1007/s11036-014-0493-z, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": null,
"status": "BRONZE",
"url": "https://link.springer.com/content/pdf/10.1007/s11036-014-0493-z.pdf"
}
| 2,014
|
[
"JournalArticle",
"Review"
] | true
| 2014-02-01T00:00:00
|
[] | 2,478
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Mathematics",
"source": "external"
},
{
"category": "Mathematics",
"source": "s2-fos-model"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/009d04dd5b51c4dca6af817be440c667888dbfd7
|
[
"Computer Science",
"Mathematics"
] | 0.847571
|
Nash equilibrium seeking over directed graphs
|
009d04dd5b51c4dca6af817be440c667888dbfd7
|
Autonomous Intelligent Systems
|
[
{
"authorId": "39486934",
"name": "Yutao Tang"
},
{
"authorId": "145526137",
"name": "Peng Yi"
},
{
"authorId": "2108082483",
"name": "Yanqiong Zhang"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"AIS",
"Artif Intell Simul",
"Auton Intell Syst",
"Autonomous and Intelligent Systems",
"Artificial Intelligence and Simulation"
],
"alternate_urls": [
"http://www.wikicfp.com/cfp/program?id=123"
],
"id": "a90270c0-16c8-480a-af81-7c7beb97d433",
"issn": "2730-616X",
"name": "Autonomous Intelligent Systems",
"type": "journal",
"url": "https://www.springer.com/journal/43684"
}
|
In this paper, we aim to develop distributed continuous-time algorithms over directed graphs to seek the Nash equilibrium in a noncooperative game. Motivated by the recent consensus-based designs, we present a distributed algorithm with a proportional gain for weight-balanced directed graphs. By further embedding a distributed estimator of the left eigenvector associated with zero eigenvalue of the graph Laplacian, we extend it to the case with arbitrary strongly connected directed graphs having possible unbalanced weights. In both cases, the Nash equilibrium is proven to be exactly reached with an exponential convergence rate. An example is given to illustrate the validity of the theoretical results.
|
_Autonomous Intelligent Systems_
## SHORT PAPER | Open Access
# Nash equilibrium seeking over directed graphs
#### Yutao Tang[1], Peng Yi[2,3*], Yanqiong Zhang[4] and Dawei Liu[5]
**Abstract**
In this paper, we aim to develop distributed continuous-time algorithms over directed graphs to seek the Nash
equilibrium in a noncooperative game. Motivated by the recent consensus-based designs, we present a distributed
algorithm with a proportional gain for weight-balanced directed graphs. By further embedding a distributed
estimator of the left eigenvector associated with zero eigenvalue of the graph Laplacian, we extend it to the case
with arbitrary strongly connected directed graphs having possible unbalanced weights. In both cases, the Nash
equilibrium is proven to be exactly reached with an exponential convergence rate. An example is given to illustrate
the validity of the theoretical results.
**Keywords:** Nash equilibrium, Directed graph, Exponential convergence, Proportional control, Distributed computation
**1 Introduction**
Nash equilibrium seeking in noncooperative games has attracted much attention due to its broad applications in
multi-robot systems, smart grids, and sensor networks
[1–3]. In such problems, each decision-maker/player has
an individual payoff function depending upon all players’
decisions and aims at reaching an equilibrium from which
no player has incentive to deviate. Information that one
player knows about others and the information sharing
structure among these players play a crucial role in resolving these problems. In a classical full-information setting,
each player has access information including its own objective function and the decisions taken by the other players in the game [4–6]. As the decisions of all other agents
can be not directly available due to the privacy concerns
or communication cost, distributed designs only relying
on each player’s local information are of particular interest, and sustained efforts have been made to generalize the
[*Correspondence: [email protected]](mailto:[email protected])
2Department of Control Science and Engineering, Tongji University, Shanghai,
200092, China
3Shanghai Institute of Intelligent Science and Technology, Tongji University,
Shanghai, 200092, China
Full list of author information is available at the end of the article
classical algorithms to this case via networked information
sharing.
In multi-agent coordination literature, the information
structure (or the information sharing topology) among
agents is often described by graphs [7]. Following this
terminology, the Nash equilibrium seeking problem in
the classical full-information setting involves a complete
graph where any two players can directly communicate
with each other [4, 5, 8–10]. A similar scenario is the
case when this full-decision information is obtained via
broadcasts from a global coordinator [11]. By contrast, distributed rules via local communication and computation
do not require this impractical assumption on the information structure.
To overcome the difficulty brought by the lack of full information, a typical approach is to leverage a consensus-based mechanism to share information via network diffusion [12–15]. To be specific, each player maintains a local
estimate vector of all players’ decisions and updates this
vector by an auxiliary consensus process with its neighbors. After that, the player can implement a best-response
or gradient-play rule with the estimate of the joint decision. For example, the authors conducted an asynchronous
gossip-based algorithm for finding a Nash equilibrium in
-----
[16], in which the two awake players set their estimates to their average and then take a gradient step. Similar results have been delivered for general connected graphs by extending classical gradient-play dynamics [17, 18]. Along this line, considerable progress has been made with different kinds of discrete-time or continuous-time Nash equilibrium seeking algorithms, with or without coupled decision constraints, even for nontrivial dynamic players [19–26]. However, all these results, except a few for special aggregative games, heavily rely on the assumption that the underlying communication graph is undirected, which narrows down the applications of these Nash equilibrium seeking algorithms.
Based on the aforementioned observations, this paper is
devoted to the solvability of the Nash equilibrium seeking
problem for general noncooperative games over directed
graphs. Moreover, we aim to obtain an exponential convergence rate. Note that the symmetry of information sharing
structure plays a crucial role in both analysis and synthesis
of existing Nash equilibrium seeking algorithms. However,
the information structure will lose such symmetry over directed graphs, which certainly makes the considered problem more challenging.
To solve this problem, we start from the recent work [17]. In [17], the authors presented an augmented gradient-play dynamics and showed that the dynamics converges to consensus on the Nash equilibrium exponentially fast under undirected and connected graphs. We first develop a modified version of the gradient-play algorithm for weight-balanced digraphs by adding a proportional gain, and then extend it to the case of an arbitrary strongly connected digraph by further embedding a distributed estimator of the left eigenvector associated with the zero eigenvalue of the graph Laplacian. Under assumptions on the cost functions similar to those in [17], we show that the two developed algorithms indeed recover the exponential convergence rate in both cases. Moreover, by adding such a freely chosen proportional gain parameter, we provide an alternative way to remove the extra graph coupling condition, other than the singular perturbation analysis in [17]. To the best of our knowledge, this is the first exponentially convergent continuous-time result solving the Nash equilibrium seeking problem over general directed graphs.
The remainder of this paper is organized as follows:
Some preliminaries are presented in Sect. 2. The problem
formulation is given in Sect. 3. Then, the main designs are
detailed in Sect. 4. Following that, an example is given to
illustrate the effectiveness of our algorithms in Sect. 5. Finally, concluding remarks are given in Sect. 6.
**2 Preliminaries**
In this section, we present some preliminaries of convex
analysis [27] and graph theory [7] for the following analysis.
**2.1 Convex analysis**
Let $\mathbb{R}^n$ be the $n$-dimensional Euclidean space and $\mathbb{R}^{n\times m}$ the set of all $n\times m$ matrices. $\mathbf{1}_n$ (or $\mathbf{0}_n$) represents the $n$-dimensional all-one (or all-zero) column vector, and $\mathbf{1}_{n\times m}$ (or $\mathbf{0}_{n\times m}$) the all-one (or all-zero) matrix. We may omit the subscript when it is self-evident. $\operatorname{diag}(b_1,\dots,b_n)$ represents the $n\times n$ diagonal matrix with diagonal elements $b_i$, $i=1,\dots,n$; $\operatorname{col}(a_1,\dots,a_n)=[a_1^\top,\dots,a_n^\top]^\top$ for column vectors $a_i$, $i=1,\dots,n$. For a vector $x$ and a matrix $A$, $\|x\|$ denotes the Euclidean norm and $\|A\|$ the spectral norm.

A function $f:\mathbb{R}^m\to\mathbb{R}$ is said to be convex if, for any $0\le a\le 1$ and $\zeta_1,\zeta_2\in\mathbb{R}^m$, $f(a\zeta_1+(1-a)\zeta_2)\le af(\zeta_1)+(1-a)f(\zeta_2)$. It is said to be strictly convex if this inequality is strict whenever $\zeta_1\ne\zeta_2$. A vector-valued function $\Phi:\mathbb{R}^m\to\mathbb{R}^m$ is said to be $\omega$-strongly monotone if, for any $\zeta_1,\zeta_2\in\mathbb{R}^m$, $(\zeta_1-\zeta_2)^\top[\Phi(\zeta_1)-\Phi(\zeta_2)]\ge\omega\|\zeta_1-\zeta_2\|^2$; it is said to be $\vartheta$-Lipschitz if, for any $\zeta_1,\zeta_2\in\mathbb{R}^m$, $\|\Phi(\zeta_1)-\Phi(\zeta_2)\|\le\vartheta\|\zeta_1-\zeta_2\|$. Apparently, the gradient of an $\omega$-strongly convex function is $\omega$-strongly monotone.
**2.2 Graph theory**
A weighted directed graph (digraph) is described by $\mathcal{G}=(\mathcal{N},\mathcal{E},\mathcal{A})$ with node set $\mathcal{N}=\{1,\dots,N\}$ and edge set $\mathcal{E}$, where $(i,j)\in\mathcal{E}$ denotes an edge from node $i$ to node $j$. The weighted adjacency matrix $\mathcal{A}=[a_{ij}]\in\mathbb{R}^{N\times N}$ is defined by $a_{ii}=0$ and $a_{ij}\ge 0$; here $a_{ij}>0$ iff there is an edge $(j,i)$ in the digraph. The neighbor set of node $i$ is defined as $\mathcal{N}_i=\{j\mid(j,i)\in\mathcal{E}\}$. A directed path is an alternating sequence $i_1e_1i_2e_2\dots e_{k-1}i_k$ of nodes $i_l$ and edges $e_m=(i_m,i_{m+1})\in\mathcal{E}$. If there is a directed path between any two nodes, the digraph is said to be strongly connected. The in-degree and out-degree of node $i$ are defined by $d_i^{\mathrm{in}}=\sum_{j=1}^N a_{ij}$ and $d_i^{\mathrm{out}}=\sum_{j=1}^N a_{ji}$. A digraph is weight-balanced if $d_i^{\mathrm{in}}=d_i^{\mathrm{out}}$ holds for every $i=1,\dots,N$. The Laplacian matrix of $\mathcal{G}$ is defined as $L\triangleq D^{\mathrm{in}}-\mathcal{A}$ with $D^{\mathrm{in}}=\operatorname{diag}(d_1^{\mathrm{in}},\dots,d_N^{\mathrm{in}})$. Note that $L\mathbf{1}_N=\mathbf{0}_N$ for any digraph. When it is weight-balanced, we additionally have $\mathbf{1}_N^\top L=\mathbf{0}_N^\top$, and the matrix $\operatorname{Sym}(L)\triangleq\frac{L+L^\top}{2}$ is positive semidefinite.

Consider the group of vectors $\{\mathbf{1},a_2,\dots,a_N\}$ with $a_i$ the $i$th standard basis vector of $\mathbb{R}^N$, i.e., all entries of $a_i$ are zero except the $i$th, which is one. These vectors are linearly independent. Applying the Gram-Schmidt process to them yields a group of orthonormal vectors $\{\hat a_1,\dots,\hat a_N\}$. Let $M_1=\hat a_1\in\mathbb{R}^N$ and $M_2=[\hat a_2\ \dots\ \hat a_N]\in\mathbb{R}^{N\times(N-1)}$. It can be verified that $M_1=\frac{1}{\sqrt N}\mathbf{1}_N$, $M_1^\top M_1=1$, $M_2^\top M_2=I_{N-1}$, $M_2^\top M_1=\mathbf{0}_{N-1}$, and $M_1M_1^\top+M_2M_2^\top=I_N$. Then, for a weight-balanced and strongly connected digraph, we can order the eigenvalues of $\operatorname{Sym}(L)$ as $0=\lambda_1<\lambda_2\le\dots\le\lambda_N$ and further have $\lambda_2 I_{N-1}\le M_2^\top\operatorname{Sym}(L)M_2\le\lambda_N I_{N-1}$.
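These graph-theoretic quantities are straightforward to check numerically. The sketch below is a minimal example, not from the paper: it builds the Laplacian of a directed cycle (a weight-balanced digraph), verifies $L\mathbf{1}=\mathbf{0}$ and the weight-balance condition, and computes $\lambda_2$ of $\operatorname{Sym}(L)$ through the projection $M_2$ obtained from a QR factorization, which reproduces the Gram-Schmidt construction above.

```python
import numpy as np

# Directed cycle on 4 nodes: A[i, j] > 0 means an edge from node j to
# node i, so the in-degree is a row sum and the out-degree a column sum.
N = 4
A = np.roll(np.eye(N), 1, axis=1)       # edge (i+1 mod N) -> i, weight 1
L = np.diag(A.sum(axis=1)) - A          # Laplacian L = D_in - A

print("L @ 1 =", L @ np.ones(N))        # zero vector for any digraph
print("weight-balanced:", np.allclose(A.sum(axis=1), A.sum(axis=0)))

# Orthonormalize {1, e_2, ..., e_N}; QR gives M1 proportional to 1/sqrt(N)
# and M2 as the remaining orthonormal columns.
B = np.column_stack([np.ones(N), np.eye(N)[:, 1:]])
Q, _ = np.linalg.qr(B)
M2 = Q[:, 1:]

SymL = (L + L.T) / 2
lam = np.linalg.eigvalsh(M2.T @ SymL @ M2)
print("lambda_2 of Sym(L):", lam.min())  # equals 1 - cos(2*pi/N) for the cycle
```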
**3 Problem formulation**
In this paper, we consider a multi-agent system consisting of $N$ agents labeled as $\mathcal{N}=\{1,\dots,N\}$. They play an $N$-player noncooperative game defined as follows: agent $i$ is endowed with a continuously differentiable cost function $J_i(z_i,\mathbf{z}_{-i})$, where $z_i\in\mathbb{R}$ denotes the decision (or action) profile of agent $i$ and $\mathbf{z}_{-i}\in\mathbb{R}^{N-1}$ denotes the decision profile of this multi-agent system except for agent $i$. In this game, each player seeks to minimize its own cost function $J_i$ by selecting a proper decision $z_i$. Here we adopt a one-dimensional decision variable for ease of presentation; multi-dimensional extensions can be made without any technical obstacles.
The equilibrium point of this noncooperative game can
be defined as in [5].
**Definition 1** Consider the game $\mathcal{G}=\{\mathcal{N},J_i,\mathbb{R}\}$. A decision profile $z^*=\operatorname{col}(z_1^*,\dots,z_N^*)$ is said to be a Nash equilibrium (NE) of the game $\mathcal{G}$ if $J_i(z_i^*,\mathbf{z}_{-i}^*)\le J_i(z_i,\mathbf{z}_{-i}^*)$ for any $i\in\mathcal{N}$ and any $z_i\in\mathbb{R}$.

At a Nash equilibrium, no player can unilaterally decrease its cost by changing its decision on its own, and thus all agents tend to stay at this state. Denote $F(z)\triangleq\operatorname{col}(\nabla_1J_1(z_1,\mathbf{z}_{-1}),\dots,\nabla_NJ_N(z_N,\mathbf{z}_{-N}))\in\mathbb{R}^N$ with $\nabla_iJ_i(z_i,\mathbf{z}_{-i})\triangleq\frac{\partial}{\partial z_i}J_i(z_i,\mathbf{z}_{-i})\in\mathbb{R}$. Here $F$ is called the pseudogradient associated with $J_1,\dots,J_N$.
To ensure the well-posedness of our problem, the following assumptions are made throughout the paper:

**Assumption 1** For each $i\in\mathcal{N}$, the function $J_i(z_i,\mathbf{z}_{-i})$ is twice continuously differentiable, strictly convex, and radially unbounded in $z_i\in\mathbb{R}$ for any fixed $\mathbf{z}_{-i}\in\mathbb{R}^{N-1}$.

**Assumption 2** The pseudogradient $F$ is $l$-strongly monotone and $\bar l$-Lipschitz for two constants $l,\bar l>0$.

These assumptions have been used in [17] and [21]. Under these assumptions, our game $\mathcal{G}$ admits a unique Nash equilibrium $z^*$, which can be characterized by the equation $F(z^*)=\mathbf{0}$ according to Propositions 1.4.2 and 2.2.7 in [28].
In a full-information scenario, where agents have access to all the other agents' decisions, the typical gradient-play rule
$$\dot z_i = -\frac{\partial J_i}{\partial z_i}(z_i, \mathbf{z}_{-i}), \quad i \in \mathcal{N},$$
can be used to compute this Nash equilibrium $z^*$. In this paper, we are more interested in distributed designs and assume that each agent only knows the decisions of a subset of all agents during the phase of computation.
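For intuition, here is a minimal numerical sketch of the full-information gradient-play rule via forward Euler. The 3-player quadratic game below (cost coefficients `c`, coupling weight 0.1) is a hypothetical example, not one from the paper; its pseudogradient is strongly monotone, so the iteration converges to the unique Nash equilibrium.

```python
import numpy as np

# Hypothetical game: J_i(z) = 0.5*z_i**2 - c_i*z_i + 0.1*z_i*sum_{j!=i} z_j.
c = np.array([1.0, 2.0, 3.0])

def pseudogradient(z):
    # dJ_i/dz_i = z_i - c_i + 0.1 * sum_{j != i} z_j
    return z - c + 0.1 * (z.sum() - z)

z, h = np.zeros(3), 0.01          # initial decisions and Euler step size
for _ in range(5000):
    z = z - h * pseudogradient(z)

# The limit solves F(z*) = 0, i.e. (0.9*I + 0.1*ones) @ z* = c.
print("gradient-play limit:", z)
print("closed form:        ", np.linalg.solve(0.9 * np.eye(3) + 0.1 * np.ones((3, 3)), c))
```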
For this purpose, a weighted digraph $\mathcal{G}=(\mathcal{N},\mathcal{E},\mathcal{A})$ is used to describe the information-sharing relationships among the agents, with node set $\mathcal{N}$ and weight matrix $\mathcal{A}\in\mathbb{R}^{N\times N}$. If agent $i$ can get the information of agent $j$, then there is a directed edge from agent $j$ to agent $i$ in the graph with weight $a_{ij}>0$. Note that agent $i$ may not have full information of $\mathbf{z}_{-i}$ except in the case of a complete communication graph. Thus, we have a noncooperative game with partial information, which makes the classical gradient-play rule unimplementable.
To tackle this issue, a consensus-based rule has been developed in [17], in which each agent is required to estimate all other agents' decisions and implement an augmented gradient-play dynamics:
$$\dot{\mathbf{z}}^i = -\sum_{j=1}^{N} a_{ij}\,(\mathbf{z}^i - \mathbf{z}^j) - R_i \nabla_i J_i(\mathbf{z}^i), \qquad (1)$$
where $R_i = \operatorname{col}(\mathbf{0}_{i-1}, 1, \mathbf{0}_{N-i})$ and $\mathbf{z}^i = \operatorname{col}(z_1^i, \dots, z_N^i)$. Here $\mathbf{z}^i \in \mathbb{R}^N$ represents agent $i$'s estimate of all agents' decisions, with $z_i^i = z_i$ and $\mathbf{z}_{-i}^i = \operatorname{col}(z_1^i, \dots, z_{i-1}^i, z_{i+1}^i, \dots, z_N^i)$, and $\nabla_i J_i(\mathbf{z}^i) = \frac{\partial J_i}{\partial z_i^i}(z_i^i, \mathbf{z}_{-i}^i)$ is the partial gradient of agent $i$'s cost function evaluated at the local estimate $\mathbf{z}^i$.

For convenience, we define an extended pseudogradient as $\mathbf{F}(\mathbf{z}) = \operatorname{col}(\nabla_1 J_1(\mathbf{z}^1), \dots, \nabla_N J_N(\mathbf{z}^N)) \in \mathbb{R}^N$ for this game $\mathcal{G}$. The following assumption on this extended pseudogradient $\mathbf{F}$ was made in [17]:

**Assumption 3** The extended pseudogradient $\mathbf{F}$ is $l_F$-Lipschitz with $l_F > 0$.
Let $\bar l = \max\{\bar l, l_F\}$. According to Theorem 2 in [17], along the trajectory of system (1), $\mathbf{z}^i(t)$ will exponentially converge to $z^*$ as $t$ goes to $+\infty$ if the graph $\mathcal{G}$ is undirected and satisfies a strong coupling condition of the form $\lambda_2 > \frac{\bar l^2}{l} + \bar l$. Note that this coupling condition might be violated in applications for a given game and undirected graph (since the scalars $\lambda_2$ and $\frac{\bar l^2}{l} + \bar l$ are both fixed). Although the authors in [17] further relaxed this connectivity condition by a singular perturbation technique, the derived results are still limited to undirected graphs.
In this paper, we assume that the information sharing
graph is directed and satisfies the following condition:
**Assumption 4 Digraph G is strongly connected.**
The main goal of this paper is to exploit the basic idea of algorithm (1) and develop effective distributed variants to solve this problem for digraphs under Assumption 4, including undirected connected graphs as a special case. Since the information flow might be asymmetric in this case, the resulting equilibrium seeking problem is more challenging than in the undirected case.
**4 Main result**
In this section, we first solve our Nash equilibrium seeking
problem for the weight-balanced digraphs and then extend
the derived results to general strongly connected ones with
unbalanced weights.
-----
**4.1 Weight-balanced graph**
To begin with, we make the following extra assumption:
**Assumption 5 Digraph G is weight-balanced.**
Motivated by algorithm (1), we propose a modified version of gradient-play rules for game G as follows:
$$\dot{\mathbf{z}}^i = -\alpha \sum_{j=1}^{N} a_{ij}\,(\mathbf{z}^i - \mathbf{z}^j) - R_i \nabla_i J_i(\mathbf{z}^i), \qquad (2)$$
where $R_i$, $\mathbf{z}^i$ are defined as above and $\alpha > 0$ is a constant to be specified later. Putting it into a compact form, we have
$$\dot{\mathbf{z}} = -\alpha \mathbf{L}\mathbf{z} - R\mathbf{F}(\mathbf{z}), \qquad (3)$$
where $\mathbf{z} = \operatorname{col}(\mathbf{z}^1, \dots, \mathbf{z}^N)$, $R = \operatorname{diag}(R_1, \dots, R_N)$, $\mathbf{L} = L \otimes I_N$, and $\mathbf{F}(\mathbf{z})$ is the extended pseudogradient.

Different from algorithm (1) and its singularly perturbed extension presented in [17], we add an extra parameter $\alpha$ to increase the gain of the proportional term $\mathbf{L}\mathbf{z}$. With this gain large enough, the effectiveness of algorithm (3) is shown as follows:

**Theorem 1** *Suppose Assumptions 1-5 hold. Let* $\alpha > \frac{1}{\lambda_2}\big(\frac{\bar l^2}{l} + \bar l\big)$. *Then, for any* $i \in \mathcal{N}$, *along the trajectory of system (3),* $\mathbf{z}^i(t)$ *exponentially converges to* $z^*$ *as* $t$ *goes to* $+\infty$.

*Proof* We first show that, at the equilibrium of system (3), $z_i$ indeed reaches the Nash equilibrium of game $\mathcal{G}$. In fact, letting the right-hand side of (2) be zero, we have $\alpha\mathbf{L}\mathbf{z}^* + R\mathbf{F}(\mathbf{z}^*) = \mathbf{0}$. Premultiplying both sides by $\mathbf{1}_N^\top \otimes I_N$ gives
$$\mathbf{0} = \alpha\big(\mathbf{1}_N^\top \otimes I_N\big)(L \otimes I_N)\mathbf{z}^* + \big(\mathbf{1}_N^\top \otimes I_N\big)R\,\mathbf{F}(\mathbf{z}^*).$$
Using $\mathbf{1}_N^\top L = \mathbf{0}$ gives $\mathbf{0} = (\mathbf{1}_N^\top \otimes I_N)R\,\mathbf{F}(\mathbf{z}^*)$. By the notation of $R$ and $\mathbf{F}$, we have $\mathbf{F}(\mathbf{z}^*) = \mathbf{0}$. This further implies that $\mathbf{L}\mathbf{z}^* = \mathbf{0}$. Recalling the property of $L$ under Assumption 4, one can determine some $\theta \in \mathbb{R}^N$ such that $\mathbf{z}^* = \mathbf{1} \otimes \theta$. This means $\mathbf{F}(\mathbf{1}\otimes\theta) = \mathbf{0}$ and thus $\nabla_i J_i(\theta_i, \theta_{-i}) = 0$, or equivalently, $F(\theta) = \mathbf{0}$. That is, $\theta$ is the unique Nash equilibrium $z^*$ of $\mathcal{G}$ and $\mathbf{z}^* = \mathbf{1} \otimes z^*$.

Next, we show the exponential stability of system (3) at its equilibrium $\mathbf{z}^* = \mathbf{1} \otimes z^*$. For this purpose, we denote $\tilde{\mathbf{z}} = \mathbf{z} - \mathbf{z}^*$ and perform the coordinate transformation $\bar{\mathbf{z}}_1 = (M_1^\top \otimes I_N)\tilde{\mathbf{z}}$, $\bar{\mathbf{z}}_2 = (M_2^\top \otimes I_N)\tilde{\mathbf{z}}$. It follows that
$$\dot{\bar{\mathbf{z}}}_1 = -\big(M_1^\top \otimes I_N\big)R\Delta, \qquad \dot{\bar{\mathbf{z}}}_2 = -\alpha\big(\big(M_2^\top L M_2\big) \otimes I_N\big)\bar{\mathbf{z}}_2 - \big(M_2^\top \otimes I_N\big)R\Delta,$$
where $\Delta \triangleq \mathbf{F}(\mathbf{z}) - \mathbf{F}(\mathbf{z}^*)$.

Let $V(\bar{\mathbf{z}}_1, \bar{\mathbf{z}}_2) = \frac{1}{2}(\|\bar{\mathbf{z}}_1\|^2 + \|\bar{\mathbf{z}}_2\|^2)$. Then its time derivative along the trajectory of system (3) satisfies
$$\dot V = -\tilde{\mathbf{z}}^\top R\Delta - \alpha\bar{\mathbf{z}}_2^\top\big(\big(M_2^\top \operatorname{Sym}(L) M_2\big) \otimes I_N\big)\bar{\mathbf{z}}_2 \le -\alpha\lambda_2\|\bar{\mathbf{z}}_2\|^2 - \tilde{\mathbf{z}}^\top R\Delta. \qquad (4)$$
Since $\tilde{\mathbf{z}} = (M_1 \otimes I_N)\bar{\mathbf{z}}_1 + (M_2 \otimes I_N)\bar{\mathbf{z}}_2 \triangleq \tilde{\mathbf{z}}_1 + \tilde{\mathbf{z}}_2$, we split $\tilde{\mathbf{z}}$ into two parts to estimate the above cross term and obtain
$$-\tilde{\mathbf{z}}^\top R\Delta = -\tilde{\mathbf{z}}_1^\top R\big[\mathbf{F}(\tilde{\mathbf{z}}_1 + \tilde{\mathbf{z}}_2 + \mathbf{z}^*) - \mathbf{F}(\tilde{\mathbf{z}}_1 + \mathbf{z}^*)\big] - \tilde{\mathbf{z}}_2^\top R\big[\mathbf{F}(\tilde{\mathbf{z}}_1 + \tilde{\mathbf{z}}_2 + \mathbf{z}^*) - \mathbf{F}(\tilde{\mathbf{z}}_1 + \mathbf{z}^*)\big] - \tilde{\mathbf{z}}_1^\top R\big[\mathbf{F}(\tilde{\mathbf{z}}_1 + \mathbf{z}^*) - \mathbf{F}(\mathbf{z}^*)\big] - \tilde{\mathbf{z}}_2^\top R\big[\mathbf{F}(\tilde{\mathbf{z}}_1 + \mathbf{z}^*) - \mathbf{F}(\mathbf{z}^*)\big].$$
As $\mathbf{F}(\mathbf{1}_N \otimes y) = F(y)$ for any $y \in \mathbb{R}^N$, it follows from the strong monotonicity of $F$ that
$$\tilde{\mathbf{z}}_1^\top R\big[\mathbf{F}(\tilde{\mathbf{z}}_1 + \mathbf{z}^*) - \mathbf{F}(\mathbf{z}^*)\big] = \frac{\bar{\mathbf{z}}_1^\top}{\sqrt N}\Big[F\Big(z^* + \frac{\bar{\mathbf{z}}_1}{\sqrt N}\Big) - F(z^*)\Big] \ge \frac{l}{N}\|\bar{\mathbf{z}}_1\|^2,$$
where we use the identity $(\mathbf{1}^\top \otimes I_N)R = I_N$ and $\tilde{\mathbf{z}}_1^\top R = \frac{\bar{\mathbf{z}}_1^\top}{\sqrt N}$. Note that $\|R\| = \|M_2\| = 1$ by definition, which implies $\|R^\top\tilde{\mathbf{z}}_2\| \le \|\tilde{\mathbf{z}}_2\| = \|\bar{\mathbf{z}}_2\|$. Then, under Assumptions 2 and 3, we have
$$-\tilde{\mathbf{z}}^\top R\Delta \le \frac{2\bar l}{\sqrt N}\|\bar{\mathbf{z}}_1\|\|\bar{\mathbf{z}}_2\| + \bar l\|\bar{\mathbf{z}}_2\|^2 - \frac{l}{N}\|\bar{\mathbf{z}}_1\|^2. \qquad (5)$$
Bringing inequalities (4) and (5) together gives
$$\dot V \le -\frac{l}{N}\|\bar{\mathbf{z}}_1\|^2 - (\alpha\lambda_2 - \bar l)\|\bar{\mathbf{z}}_2\|^2 + \frac{2\bar l}{\sqrt N}\|\bar{\mathbf{z}}_1\|\|\bar{\mathbf{z}}_2\| = -\begin{bmatrix}\|\bar{\mathbf{z}}_1\| & \|\bar{\mathbf{z}}_2\|\end{bmatrix} A_\alpha \begin{bmatrix}\|\bar{\mathbf{z}}_1\| \\ \|\bar{\mathbf{z}}_2\|\end{bmatrix} \qquad (6)$$
with $A_\alpha = \begin{bmatrix} \frac{l}{N} & -\frac{\bar l}{\sqrt N} \\ -\frac{\bar l}{\sqrt N} & \alpha\lambda_2 - \bar l \end{bmatrix}$. When $\alpha > \frac{1}{\lambda_2}(\frac{\bar l^2}{l} + \bar l)$, the matrix $A_\alpha$ is positive definite. Thus, there exists a constant $\nu > 0$ such that $\dot V \le -\nu V$. Recalling Theorem 4.10 in [29], one can conclude the exponential convergence of $\mathbf{z}(t)$ to $\mathbf{z}^*$, which implies that $\mathbf{z}^i(t)$ converges to $z^*$ as $t$ goes to $+\infty$. The proof is thus complete. □
*Remark 1* Algorithm (3) is a modified version of the gradient-play dynamics (1) with an adjustable proportional control gain $\alpha$. The criterion for choosing $\alpha$ presents a natural trade-off between the control effort and the graph algebraic connectivity. By choosing a large enough $\alpha$, this theorem ensures the exponential convergence of all local estimates to the Nash equilibrium $z^*$ over weight-balanced digraphs, and it also provides an alternative way to remove the restrictive graph coupling condition presented in [17].
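As a concrete illustration of algorithm (3), the following sketch runs a forward-Euler discretization over a directed cycle (which is weight-balanced). The quadratic game is the same hypothetical 3-player example used earlier; `alpha` and the step size are illustrative choices, not values prescribed by Theorem 1.

```python
import numpy as np

# Hypothetical game: grad_i J_i(z) = z_i - c_i + 0.1 * sum_{j != i} z_j.
N = 3
c = np.array([1.0, 2.0, 3.0])
A = np.roll(np.eye(N), 1, axis=1)        # directed cycle, weight-balanced
L = np.diag(A.sum(axis=1)) - A

def grad_i(i, zi_est):
    return zi_est[i] - c[i] + 0.1 * (zi_est.sum() - zi_est[i])

alpha, h = 5.0, 0.005
Z = np.zeros((N, N))                     # row i: agent i's estimate z^i
for _ in range(40000):
    G = np.zeros((N, N))
    for i in range(N):
        G[i, i] = grad_i(i, Z[i])        # the vector R_i * grad_i J_i(z^i)
    Z = Z - h * (alpha * (L @ Z) + G)    # stacked form of algorithm (3)
print("each agent's estimate of z*:\n", Z)
```

Each row of `Z` (one agent's local estimate of all decisions) should settle near the same Nash equilibrium found by the full-information sketch above.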
**4.2 Weight-unbalanced graph**
In this subsection, we aim to extend the preceding design
to general strongly connected digraphs. In the following,
we first modify (3) to ensure its equilibrium as the Nash
equilibrium of game G, and then implement it in a distributed manner by adding a graph imbalance compensator.
At first, we assume that a left eigenvector of the Laplacian L associated with the trivial eigenvalue is known and
denoted by ξ = col(ξ1,..., _ξN_ ), i.e., ξ [⊤]L = 0. Without loss of
generality, we assume ξ [⊤]1 = 1. Then, ξ is componentwise
positive by Theorem 4.16 in Chap. 6 of [30]. Here we use
this vector ξ to correct the graph imbalance in system (2)
as follows:
Note that the aforementioned vector ξ is usually unknown to us for general digraphs. To implement our algorithm, we embed a distributed estimation rule of ξ into
system (7) as follows:
_i_
**_ξ˙_** = –
_N_
� _aij�ξ_ _[i]_ – ξ _[j][�],_ (9)
_j=1_
where ξ _[i]_ = col(ξ1[i][,...,] _[ξ]N[ i]_ [) with][ ξ]i[ i][(0) = 1 and][ ξ]j[ i] [= 0 for any]
_j ̸= i ∈_ _N ._
Here the diffusion dynamics of ξ _[i]_ is proposed to estimate
the eigenvector ξ by col(ξ1[1][,...,] _[ξ]N[ N]_ [). The following lemma]
shows the effectiveness of (9).
**Lemma 2 Suppose Assumption 4 holds. Then, along the**
_trajectory of system (9), ξi[i][(][t][) > 0][ for any t][ ≥]_ [0][ and expo-]
_nentially converges to ξi as t goes to +∞._
_Proof Note that the matrix –L is essentially nonnegative_
in the sense that κI – L is nonnegative for all sufficiently
large constant κ > 0. Under Assumption 4, matrix –L is
also irreducible. By Theorem 3.12 in Chap. 6 of [30], the
matrix exponential exp(–Lt) is componentwise positive for
any t ≥ 0. As the evolution of ξ i = col(ξi[1][,...,] _[ξ]i[ N]_ [) is gov-]
erned by **_ξ[˙] i = –Lξ i with initial condition ξ i(0) = col(0,1,_** **0).**
Thus, ξ i(t) = exp(–Lt)ξ i(0) > 0 for any t. By further using
Theorems 1 and 3 in [tially converges to the value12], we have that ξi[∗] [=] �Njξ=1i _[ξ][j][ for any] ξi[i][(][t][) exponen-][ i][ ∈]_ _[N][ as]_
_t goes to +∞. Since ξ = col(ξ1,...,_ _ξN_ ) is a left eigenvector
of L associated with eigenvalue 0, one can easily verify that
_ξ_ [∗⊤]L = 0. Under Assumption 4, 0 is a simple eigenvalue of
_L. Then, there must be a constant c ̸= 0 such that ξ = cξ_ [∗].
Note that ξ [⊤]1 = ξ [∗⊤]1 = 1. One can conclude that c = 1 and
thus complete the proof.
The whole algorithm to seek the Nash equilibrium is presented as follows:
**z˙[i]** = –αξi
_N_
� � �
_aij_ **z[i]** – z[j][�] – Ri∇iJi **z[i][�].** (7)
_j=1_
Similar ideas can be found in [14] and [31]. We put this
system into a compact form
**z˙ = –αL�z – RF(z),** (8)
where � = diag(ξ1,..., _ξN_ ) and L� = �L ⊗ _IN_ . It can be easily verified that �L is the associated Laplacian of a new digraph G[′], which has the same connectivity topology as digraph G but with scaled weights, i.e., a[′]ij [=][ ξ][i][a][ij][ for any][ i][,] _[j][ ∈]_
_N . As this new digraph G[′]_ is naturally weight-balanced, we
denote λ[′]2 [as the minimal positive eigenvalue of][ Sym][(][�][L][).]
Here is an immediate consequence of Theorem 1.
**Lemma 1 Suppose Assumptions 1–4 hold and let α >**
1
_λ[′]2_ [(][ l]l[2] [+] _[l][).][ Then][,][ for any i][ ∈]_ _[N][,][ along the trajectory of system]_
(8), z[i](t) exponentially converges to z[∗] _as t goes to +∞._
**Theorem 2 Suppose Assumptions 1–4 hold and let α >**
1
_λ[′]2_ [(][ l]l[2] [+] _[l][).][ Then][,][ for any i][ ∈]_ _[N][,][ along the trajectory of system]_
(10), z[i](t) exponentially converges to z[∗] _as t goes to +∞._
_N_
**z˙[i]** = –αξi[i] � _aij�z[i]_ – z[j][�] – Ri∇iJi�z[i][�],
_j=1_
(10)
_N_
**_ξ˙_** _i = –_ � _aij�ξ_ _[i]_ – ξ _[j][�]_
_j=1_
with ξi[i][(0) = 1 and][ ξ]j[ i] [= 0 for any][ j][ ̸][=][ i][ ∈] _[N][ .]_
Bringing Lemmas 1 and 2 together, we provide the second main theorem of this paper.
-----
_Proof_ First, we put the algorithm into a compact form:

ż = −αL_{Ξ′} z − RF(z),
ξ̇ = −Lξ, (11)

where L_{Ξ′} = Ξ′L ⊗ I_N and Ξ′ = diag(ξ^1_1, ..., ξ^N_N). From this, one can further find that the composite system consists of two subsystems in a cascaded form:

ż = −αL_Ξ z − RF(z) − α[(Ξ̃L) ⊗ I_N]z,
ξ̇ = −Lξ,

where L_Ξ is defined as in (8) and Ξ̃ = Ξ′ − Ξ. Note that the term α[(Ξ̃L) ⊗ I_N]z can be upper bounded by γ_p exp(−β_p t)‖z‖ for some positive constants γ_p and β_p according to Lemma 2. By Lemma 1, the unperturbed z-subsystem is globally exponentially stable at its equilibrium **z*** = 1_N ⊗ z*, and the term α[(Ξ̃L) ⊗ I_N]z can be viewed as a vanishing perturbation of it. Recalling Corollary 9.1 in [29], the whole algorithm (11) is globally exponentially stable at its equilibrium. This implies that along the trajectory of system (11), z^i(t) exponentially converges to z* as t goes to +∞. The proof is thus complete.
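As a quick numerical illustration of Lemma 2, the following sketch (our own addition; the 4-node digraph, the horizon t = 50, and the use of NumPy/SciPy are assumptions, not material from the paper) computes ξ^i_i(t) = [exp(−Lt)]_{ii} and compares it with the normalised left eigenvector of the Laplacian:

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical strongly connected, weight-unbalanced digraph on 4 nodes;
# A[i, j] = a_ij is the weight agent i places on information from agent j.
A = np.array([[0., 1., 0., 1.],
              [1., 0., 0., 0.],
              [0., 1., 0., 0.],
              [0., 0., 1., 0.]])
L = np.diag(A.sum(axis=1)) - A        # graph Laplacian of the digraph

# System (9) gives xi^i(t) = exp(-L t) e_i, so agent i's own component
# xi_i^i(t) is the (i, i) entry of the matrix exponential.
est = np.diag(expm(-50.0 * L))        # t = 50 is effectively "t -> infinity"

# Reference value: the left eigenvector of L for eigenvalue 0, normalised
# so that its entries sum to one (xi^T 1 = 1, as in the proof of Lemma 2).
vals, vecs = np.linalg.eig(L.T)
xi = np.real(vecs[:, np.argmin(np.abs(vals))])
xi = xi / xi.sum()
print(np.allclose(est, xi, atol=1e-9), np.round(est, 4))
```

For this digraph the limit is (1, 2, 1, 1)/5, and the diagonal entries of exp(−Lt) are strictly positive for every t, matching both claims of the lemma.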
_Remark 2_ In contrast to the algorithm (2) with proportional gains in Theorem 1, this new rule (10) further includes a distributed left-eigenvector estimator to compensate for the imbalance of the graph Laplacian. Compared with the equilibrium seeking results in [15, 17, 18] for undirected graphs, the proportional control and the graph imbalance compensator together enable us to solve this problem over strongly connected digraphs, which include undirected graphs as a special case.
**5 Simulation**
In this section, we present an example to verify the effectiveness of our designs.
Consider an eight-player noncooperative game. Each player has a pay-off function of the form J_i(x_i, x_{−i}) = c_i x_i − x_i f(x) with x = col(x_1, ..., x_8) and f(x) = D − Σ_{i=1}^{8} x_i for a constant D > 0. Suppose the communication topology among the agents is depicted by the digraph in Fig. 1 with all weights equal to one. The Nash equilibrium of this game can be analytically determined as z* = col(z*_1, ..., z*_8) with z*_i = 46 − 4i.
Since the communication graph is directed and weight-unbalanced, the gradient-play algorithm developed in [17] might fail to solve the problem. At the same time, Assumptions 1–4 can be easily confirmed. Then we can resort to Theorem 2 and use algorithm (10) to seek the Nash equilibrium in this eight-player noncooperative game.
For simulations, let c_i = 4i and D = 270. We sequentially choose α = 2 and α = 10 for algorithm (10). Since the right-hand side of our algorithm is Lipschitz, we conduct the simulation via the forward Euler method with a small step size [32]. The simulation results are shown in Figs. 2–4. From Fig. 2, one can find that the estimate ξ(t) converges quickly to the left eigenvector of the graph Laplacian, ξ = col(4, 4, 3, 2, 2, 1, 1, 1)/18. At the same time, col(z_1(t), ..., z_8(t)) approaches the Nash equilibrium z* of this game for both proportional parameters. Moreover, a larger proportional gain α is observed to yield a faster rate of convergence. We also show the profile of η_i(t) ≜ t²(z_i(t) − z*_i) in Fig. 5 to confirm the exponential convergence rate when α = 10. These results verify the effectiveness of our designs in resolving the Nash equilibrium seeking problem over general strongly connected digraphs.

**Figure 1** Digraph G in our example
**Figure 2** Profile of ξ^i_i(t) in our example
**Figure 3** Profile of z_i(t) in our example with α = 2
**Figure 4** Profile of z_i(t) in our example with α = 10
**Figure 5** Profile of η_i(t) in our example with α = 10
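For concreteness, here is a minimal forward-Euler sketch of algorithm (10) on this game. Since Fig. 1 is not reproduced above, the adjacency matrix below is a hypothetical stand-in (a directed ring with one extra edge, strongly connected and weight-unbalanced), so the estimated ξ differs from col(4, 4, 3, 2, 2, 1, 1, 1)/18; we also take R_i to be the map placing the scalar partial gradient in the i-th coordinate, the usual convention in consensus-based Nash equilibrium seeking. The gain condition in Theorem 2 is only sufficient, so α = 10 may need to be raised for other digraphs:

```python
import numpy as np

# Hypothetical stand-in digraph (Fig. 1 is not reproduced here):
# a directed ring plus one extra edge, strongly connected and unbalanced.
# A[i, j] = a_ij is the weight agent i places on information from agent j.
N = 8
A = np.zeros((N, N))
A[np.arange(N), (np.arange(N) + 1) % N] = 1.0
A[0, 4] = 1.0
deg = A.sum(axis=1)

c, D, alpha = 4.0 * np.arange(1, N + 1), 270.0, 10.0   # c_i = 4i, D = 270

Z = np.zeros((N, N))     # row i: agent i's estimate z^i of the action profile
Xi = np.eye(N)           # row i: agent i's left-eigenvector estimate xi^i

h = 1e-3                                        # forward Euler step size
for _ in range(60_000):                         # horizon T = 60
    consZ = deg[:, None] * Z - A @ Z            # sum_j a_ij (z^i - z^j)
    consXi = deg[:, None] * Xi - A @ Xi
    grad = c - D + Z.sum(axis=1) + np.diag(Z)   # dJ_i/dx_i evaluated at z^i
    dZ = -alpha * np.diag(Xi)[:, None] * consZ
    dZ[np.arange(N), np.arange(N)] -= grad      # R_i: gradient enters slot i
    Z, Xi = Z + h * dZ, Xi - h * consXi
print(np.round(np.diag(Z), 2))   # should settle near 46 - 4i: 42, 38, ..., 14
```

Note that ∇_iJ_i(x) = c_i − D + Σ_j x_j + x_i, whose zero set recovers z*_i = 46 − 4i exactly as computed analytically above.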
**6 Conclusion**
The Nash equilibrium seeking problem over directed graphs has been discussed with consensus-based distributed rules. By selecting proper proportional gains and embedding a distributed graph imbalance compensator, the expected Nash equilibrium is shown to be reached exponentially fast over general strongly connected digraphs. In the future, we may use the adaptive high-gain techniques in [21, 33] to extend the results to fully distributed versions. Another interesting direction is to incorporate high-order agent dynamics and nonsmooth cost functions.
**Funding**
This work was partially supported by the National Natural Science Foundation
of China under Grants 61973043, 62003239, and 61703368, Shanghai Sailing
Program under Grant 20YF1453000, Shanghai Municipal Science and
Technology Major Project No. 2021SHZDZX0100, and Shanghai Municipal
Commission of Science and Technology Project No. 19511132101.
**Availability of data and materials**
Not applicable.
**Code availability**
Not applicable.
**Declarations**
**Competing interests**
Peng Yi is an editorial board member for Autonomous Intelligent Systems and was not involved in the editorial review of, or the decision to publish, this article.
All authors declare that there are no competing interests.
**Authors’ contributions**
All authors contributed to the study conception and design. Material
preparation and analysis were performed by YT, PY, YZ, and DL. The first draft of
the manuscript was written by YT and all authors commented on previous
versions of the manuscript. All authors read and approved the final manuscript.
**Author details**
1School of Artificial Intelligence, Beijing University of Posts and
Telecommunications, Beijing, 100876, China. [2]Department of Control Science
and Engineering, Tongji University, Shanghai, 200092, China. [3]Shanghai
Institute of Intelligent Science and Technology, Tongji University, Shanghai,
200092, China. [4]School of Automation, Hangzhou Dianzi University,
Hangzhou, 310018, China. [5]China Research and Development Academy of
Machinery Equipment, Beijing, 100086, China.
**Publisher’s Note**
Springer Nature remains neutral with regard to jurisdictional claims in
published maps and institutional affiliations.
Received: 8 December 2021 Accepted: 5 April 2022
**References**
1. D. Fudenberg, J. Tirole, Game Theory (MIT Press, Cambridge, 1991)
2. T. Başar, G. Zaccour, Handbook of Dynamic Game Theory (Springer, New
York, 2018)
3. M. Maschler, S. Zamir, E. Solan, Game Theory (Cambridge University Press,
Cambridge, 2020)
4. S. Li, T. Başar, Distributed algorithms for the computation of
noncooperative equilibria. Automatica 23(4), 523–533 (1987)
5. T. Basar, G.J. Olsder, Dynamic Noncooperative Game Theory (2nd) (SIAM,
Philadelphia, 1999)
6. M.S. Stankovic, K.H. Johansson, D.M. Stipanovic, Distributed seeking of
Nash equilibria with applications to mobile sensor networks. IEEE Trans.
Autom. Control 57(4), 904–919 (2011)
7. M. Mesbahi, M. Egerstedt, Graph Theoretic Methods in Multiagent Networks
(Princeton University Press, Princeton, 2010)
8. J.S. Shamma, G. Arslan, Dynamic fictitious play, dynamic gradient play, and
distributed convergence to Nash equilibria. IEEE Trans. Autom. Control
50(3), 312–327 (2005)
9. P. Frihauf, M. Krstic, T. Basar, Nash equilibrium seeking in noncooperative
games. IEEE Trans. Autom. Control 57(5), 1192–1207 (2011)
10. G. Scutari, F. Facchinei, J.-S. Pang, D.P. Palomar, Real and complex
monotone communication games. IEEE Trans. Inf. Theory 60(7),
4197–4231 (2014)
11. S. Grammatico, Dynamic control of agents playing aggregative games
with coupling constraints. IEEE Trans. Autom. Control 62(9), 4537–4548
(2017)
12. R. Olfati-Saber, J.A. Fax, R.M. Murray, Consensus and cooperation in
networked multi-agent systems. Proc. IEEE 95(1), 215–233 (2007)
13. B. Swenson, S. Kar, J. Xavier, Empirical centroid fictitious play: an approach
for distributed learning in multi-agent games. IEEE Trans. Signal Process.
63(15), 3888–3901 (2015)
14. Y. Lou, Y. Hong, L. Xie, G. Shi, K.H. Johansson, Nash equilibrium
computation in subnetwork zero-sum games with switching
communications. IEEE Trans. Autom. Control 61(10), 2920–2935 (2016)
15. J. Koshal, A. Nedić, U.V. Shanbhag, Distributed algorithms for aggregative
games on graphs. Oper. Res. 64(3), 680–704 (2016)
16. F. Salehisadaghiani, L. Pavel, Distributed Nash equilibrium seeking: a
gossip-based algorithm. Automatica 72, 209–216 (2016)
17. D. Gadjov, L. Pavel, A passivity-based approach to Nash equilibrium
seeking over networks. IEEE Trans. Autom. Control 64(3), 1077–1092 (2019)
18. M. Ye, G. Hu, Distributed Nash equilibrium seeking in multiagent games
under switching communication topologies. IEEE Trans. Cybern. 48(11),
3208–3217 (2017)
19. S. Liang, P. Yi, Y. Hong, Distributed Nash equilibrium seeking for
aggregative games with coupled constraints. Automatica 85, 179–185
(2017)
20. X. Zeng, J. Chen, S. Liang, Y. Hong, Generalized Nash equilibrium seeking
strategy for distributed nonsmooth multi-cluster game. Automatica 103,
20–26 (2019)
21. C. De Persis, S. Grammatico, Distributed averaging integral Nash
equilibrium seeking on networks. Automatica 110, 108548 (2019)
22. P. Yi, L. Pavel, Distributed generalized Nash equilibria computation of
monotone games via double-layer preconditioned proximal-point
algorithms. IEEE Trans. Control Netw. Syst. 6(1), 299–311 (2018)
23. A. Romano, L. Pavel, Dynamic NE seeking for multi-integrator networked
agents with disturbance rejection. IEEE Trans. Control Netw. Syst. 7(1),
129–139 (2020)
24. Y. Zhang, S. Liang, X. Wang, H. Ji, Distributed Nash equilibrium seeking for
aggregative games with nonlinear dynamics under external disturbances.
IEEE Trans. Cybern., 1–10 (2019)
25. Z. Deng, S. Liang, Distributed algorithms for aggregative games of multiple
heterogeneous Euler–Lagrange systems. Automatica 99, 246–252 (2019)
26. T. Tatarenko, W. Shi, A. Nedić, Geometric convergence of gradient play
algorithms for distributed Nash equilibrium seeking. IEEE Trans. Autom.
Control 66(11), 5342–5353 (2020)
27. A. Ruszczynski, Nonlinear Optimization (Princeton University Press,
Princeton, 2006)
28. F. Facchinei, J.-S. Pang, Finite-Dimensional Variational Inequalities and
_Complementarity Problems (Springer, New York, 2003)_
29. H.K. Khalil, Nonlinear Systems, 3rd edn. (Prentice Hall, New Jersey, 2002)
30. A. Berman, R.J. Plemmons, Nonnegative Matrices in the Mathematical
_Sciences (SIAM, Philadelphia, 1994)_
31. C.N. Hadjicostis, A.D. Domínguez-García, T. Charalambous, Distributed
averaging and balancing in network systems: with applications to
coordination and control. Found. Trends Syst. Control. 5(2–3), 99–292
(2018)
32. R.J. LeVeque, Finite Difference Methods for Ordinary and Partial Differential
_Equations (SIAM, Philadelphia, 2007)_
33. Y. Tang, X. Wang, Optimal output consensus for nonlinear multiagent
systems with both static and dynamic uncertainties. IEEE Trans. Autom.
Control 66(4), 1733–1740 (2020)
# Safety Metric Temporal Logic Is Fully Decidable
Jo¨el Ouaknine and James Worrell
Oxford University Computing Laboratory, UK
_{joel, jbw}@comlab.ox.ac.uk_
**Abstract. Metric Temporal Logic (MTL) is a widely-studied real-time**
extension of Linear Temporal Logic. In this paper we consider a fragment of MTL, called Safety MTL, capable of expressing properties such
as invariance and time-bounded response. Our main result is that the
satisfiability problem for Safety MTL is decidable. This is the first positive decidability result for MTL over timed ω-words that does not involve
restricting the precision of the timing constraints, or the granularity of
the semantics; the proof heavily uses the techniques of infinite-state verification. Combining this result with some of our previous work, we conclude that Safety MTL is fully decidable in that its satisfiability, model
checking, and refinement problems are all decidable.
## 1 Introduction
Timed automata and real-time temporal logics provide the foundation for several well-known and mature tools for verifying timed and hybrid systems [21].
Despite this success in practice, certain aspects of the real-time theory are notably less well-behaved than in the untimed case. In particular, timed automata
are not determinisable, and their language inclusion problem is undecidable [4].
In similar fashion, the model-checking problems for (linear-time) real-time logics
such as Metric Temporal Logic and Timed Propositional Temporal Logic are also
undecidable [5, 6, 17].
For this reason, much interest has focused on fully decidable real-time speci
fication formalisms. We explain this term in the present context as follows. We
represent a computation of a real-time system as a timed word : a sequence of
instantaneous events, together with their associated timestamps. A specification
denotes a timed language: a set of allowable timed words. Then a formalism (a
logic or class of automata) is fully decidable if it defines a class of timed languages that is closed under finite unions and intersections and has a decidable
language-inclusion problem[1]. Note that language emptiness and universality are
special cases of language inclusion.
In this paper we are concerned in particular with Metric Temporal Logic
(MTL), one of the most widely known real-time logics. MTL is a variant of
1 This phrase was coined in [12] with a slightly more general meaning: a specification
formalism closed under finite unions, finite intersections and complementation, and
for which language emptiness is decidable. However, since the main use of complementation in this context is in deciding language inclusion, we feel that our definition
is in the same spirit.
Linear Temporal Logic in which the temporal operators are replaced by time-constrained versions. For example, the formula □_{[0,5]}ϕ expresses that ϕ holds for the next 5 time units. Until recently, the only positive decidability results for MTL involved placing syntactic restrictions on the precision of the timing constraints, or restricting the granularity of the semantics. For example, [5, 12, 19] ban punctual timing constraints, such as ♦_{=1}ϕ (ϕ is true in exactly one time unit). Semantic restrictions include adopting an integer-time model, as in [6, 11], or a bounded-variation dense-time model, as in [22]. These restrictions guarantee that a formula has a finite tableau: in fact they yield decision procedures for model checking and satisfiability that use exponential space in the size of the formula. However, both the satisfiability and model checking problems are undecidable in the unrestricted logic, cf. [5, 17].

The main contribution of this paper is to identify a new fully decidable fragment of MTL, called Safety MTL. Safety MTL consists of those MTL formulas which, when expressed in negation normal form, are such that the interval I is bounded in every instance of the constrained until operator U_I and the constrained eventually operator ♦_I. For example, the time-bounded response formula □(a → ♦_{=1}b) (every a-event is followed after one time unit by a b-event) is in Safety MTL, but not □(a → ♦_{(1,∞)}b). Because we place no limit on the precision of the timing constraints or the granularity of the semantics, the tableau of a Safety MTL formula may have infinitely many states. However, using techniques from infinite-state verification, we show that the restriction to safety properties facilitates an effective analysis.
In [16] we already gave a procedure for model checking Alur-Dill timed automata against Safety MTL formulas. As a special case we obtained the decidability of the validity problem for Safety MTL ('Is a given formula satisfied by every timed word?'). The two main contributions of the present paper complement this result, and show that Safety MTL is fully decidable. We show the decidability of the satisfiability problem ('Is a given Safety MTL formula satisfied by some timed word?') and, more generally, we claim decidability of the refinement problem ('Given two Safety MTL formulas ϕ1 and ϕ2, does every timed word that satisfies ϕ1 also satisfy ϕ2?'). Note that Safety MTL is not closed under negation, so neither of these results follows trivially from the decidability of validity.
Closely related to MTL are timed alternating automata, introduced in [15, 16]. Both cited works show that the language-emptiness problem for one-clock timed alternating automata over finite timed words is decidable. This result is the foundation of the above-mentioned model-checking procedure for Safety MTL. The procedure involves translating the negation of a Safety MTL formula ϕ into a one-clock timed alternating automaton over finite words that accepts all the bad prefixes of ϕ. (Every infinite timed word that fails to satisfy a Safety MTL formula ϕ has a finite bad prefix, that is, a finite prefix none of whose extensions satisfies ϕ.) In contrast, the results in the present paper involve considering timed alternating automata over infinite timed words.
Our main technical contribution is to show the decidability of language
emptiness over infinite timed words for a class of timed alternating automata
rich enough to capture Safety MTL formulas. A key restriction is that we only
consider automata in which every state is accepting. We have recently shown
that language emptiness is undecidable for one-clock alternating automata with
Büchi or even weak parity acceptance conditions [17]. Thus the restriction to safety properties is crucial.

As in [16], we make use of the notion of a well-structured transition system (WSTS) [9] to give our decision procedure. However, whereas the algorithm in
[16] involved reduction to a reachability problem on a WSTS, here we reduce
to a fair nontermination problem on a WSTS. The fairness requirement is connected to the assumption that timed words are non-Zeno. Indeed, we remark
that our results provide a rare example of a decidable nontermination problem
on an infinite-state system with a nontrivial fairness condition. For comparison,
undecidability results for nontermination under various different fairness conditions for Lossy Channel Systems, Timed Networks, and Timed Petri Nets can
be found in [2, 3].
**Related Work.** An important distinction among real-time models is whether one records the state of the system of interest at every instant in time, leading to an interval semantics [5, 12, 19], or whether one only sees a countable sequence of instantaneous events, leading to a point-based or trace semantics [4, 6, 7, 10, 11, 22]. In the interval semantics the temporal operators of MTL quantify over the whole time domain, whereas in the point-based semantics they quantify over a countable set of positions in a timed word. For this reason the interval semantics is more natural for reasoning about states, whereas the point-based semantics is more natural for reasoning about events. In this paper we adopt the latter.
MTL and Safety MTL do not differ in terms of their decidability in the interval
semantics: Alur, Feder, and Henzinger [5] showed that the satisfiability problem
for MTL is undecidable, and it is easy to see that their proof directly carries over
to Safety MTL. We pointed out in [16] that the same proof does not apply in the
point-based semantics, and we recently gave a different argument to show that
MTL is undecidable in this setting. However, our proof crucially uses a 'liveness formula' of the form □♦p, and it does not apply to Safety MTL. The results in this paper confirm that by excising such formulas we obtain a fully decidable logic in the point-based setting.
## 2 Metric Temporal Logic
In this section we define the syntax and semantics of Metric Temporal Logic
(MTL). As discussed above, we adopt a point-based semantics over timed words.
A time sequence τ = τ0τ1τ2 . . . is an infinite nondecreasing sequence of time
values τi ∈ R≥0. Here it is helpful to adopt the convention that τ−1 = 0. If
_{τi : i ∈_ N} is bounded then we say that τ is Zeno, otherwise we say that
_τ is non-Zeno. A timed word over finite alphabet Σ is a pair ρ = (σ, τ_ ), where
_σ = σ0σ1 . . . is an infinite word over Σ and τ is a time sequence. We also represent_
-----
414 J. Ouaknine and J. Worrell
a timed word as a sequence of timed events by writing ρ = (σ_0, τ_0)(σ_1, τ_1)···. Finally, we write TΣ^ω for the set of non-Zeno timed words over Σ.
**Definition 1.** Given an alphabet Σ of atomic events, the formulas of MTL are built up from Σ by monotone Boolean connectives and time-constrained versions of the next operator ⃝, until operator U, and the dual until operator Ũ as follows:

ϕ ::= ⊤ | ⊥ | ϕ_1 ∧ ϕ_2 | ϕ_1 ∨ ϕ_2 | a | ⃝_I ϕ | ϕ_1 U_I ϕ_2 | ϕ_1 Ũ_I ϕ_2

where a ∈ Σ, and I ⊆ ℝ≥0 is an open, closed, or half-open interval with endpoints in ℕ ∪ {∞}.

Safety MTL is the fragment of MTL obtained by requiring that the interval I in each 'until' operator U_I have finite length. (Note that no restriction is placed on the dual until operators Ũ_I or next operators ⃝_I.)

Additional temporal operators are defined using the usual conventions. We have the constrained eventually operator ♦_I ϕ ≡ ⊤ U_I ϕ, and the constrained always operator □_I ϕ ≡ ⊥ Ũ_I ϕ. We use pseudo-arithmetic expressions to denote intervals. For example, the expression '= 1' denotes the interval [1, 1]. In case I = [0, ∞) we simply omit the annotation I on temporal operators. Finally, given a ∈ Σ, we write ¬a for ⋁_{b∈Σ\{a}} b.
**Definition 2.** Given a timed word ρ = (σ, τ) and an MTL formula ϕ, the satisfaction relation (ρ, i) |= ϕ (read ρ satisfies ϕ at position i) is defined as follows:

– (ρ, i) |= a iff σ_i = a
– (ρ, i) |= ϕ_1 ∧ ϕ_2 iff (ρ, i) |= ϕ_1 and (ρ, i) |= ϕ_2
– (ρ, i) |= ϕ_1 ∨ ϕ_2 iff (ρ, i) |= ϕ_1 or (ρ, i) |= ϕ_2
– (ρ, i) |= ⃝_I ϕ iff τ_{i+1} − τ_i ∈ I and (ρ, i+1) |= ϕ
– (ρ, i) |= ϕ_1 U_I ϕ_2 iff there exists j ⩾ i such that (ρ, j) |= ϕ_2, τ_j − τ_i ∈ I, and (ρ, k) |= ϕ_1 for all k with i ⩽ k < j
– (ρ, i) |= ϕ_1 Ũ_I ϕ_2 iff for all j ⩾ i such that τ_j − τ_i ∈ I, either (ρ, j) |= ϕ_2 or there exists k with i ⩽ k < j and (ρ, k) |= ϕ_1.

We say that ρ satisfies ϕ, denoted ρ |= ϕ, if (ρ, 0) |= ϕ. The language of ϕ is the set L(ϕ) = {ρ ∈ TΣ^ω : ρ |= ϕ} of non-Zeno words that satisfy ϕ.
_Example 1._ Consider an alphabet Σ = {req_i, aq_i, rel_i : i = X, Y} denoting the actions of two processes X and Y that request, acquire, and release a lock. The following formulas are all in Safety MTL.

– □(aq_X → □_{<3} ¬aq_Y) says that Y cannot acquire the lock less than 3 seconds after X acquires the lock.
– □(aq_X → rel_X Ũ_{<3} ¬aq_Y) says that Y cannot acquire the lock less than 3 seconds after X acquires the lock, unless X first releases it.
– □(req_X → ♦_{<2}(aq_X ∧ ♦_{=1} rel_X)) says that whenever X requests the lock, it acquires the lock within 2 seconds and releases it exactly one second later.
## 3 Timed Alternating Automata
In this paper, following [15, 16], a timed alternating automaton is an alternating automaton augmented with a single clock variable.²
We use x to denote the single clock variable of an automaton. A clock constraint is a term of the form x ▷◁ c, where c ∈ ℕ and ▷◁ ∈ {<, ⩽, ⩾, >}. Given a set S of locations, Φ(S) denotes the set of formulas generated from S and the set of clock constraints by positive Boolean connectives and variable binding. Thus Φ(S) is generated by the grammar

ϕ ::= s | x ▷◁ c | ⊤ | ⊥ | ϕ_1 ∧ ϕ_2 | ϕ_1 ∨ ϕ_2 | x.ϕ,

where s ∈ S and x.ϕ binds x to 0 in ϕ.

In the definition of a timed alternating automaton, below, the transition function δ maps each location s ∈ S and event a ∈ Σ to an expression in Φ(S). Thus alternating automata allow two modes of branching: existential branching, represented by disjunction, and universal branching, represented by conjunction. Variable binding corresponds to the automaton resetting x to 0. For example, δ(s, a) = (x < 1) ∧ s ∧ x.t means that when the automaton is in location s with clock value less than 1, it can make a simultaneous a-labelled transition to locations s and t, resetting the clock as it enters t.
**Definition 3.** A timed alternating automaton is a tuple A = (Σ, S, s_0, δ), where

– Σ is a finite alphabet
– S is a finite set of locations
– s_0 ∈ S is the initial location
– δ : S × Σ → Φ(S) is the transition function.

We consider all locations of A to be accepting.
The following example illustrates how a timed alternating automaton accepts a
language of timed words.
_Example 2._ We define an automaton A over the alphabet Σ = {a, b} that accepts all those timed words in which every a-event is followed one time unit later by a b-event. A has three locations {s, t, u}, with s the initial location. The transition function δ is given by the following table:

|   | a           | b                             |
|---|-------------|-------------------------------|
| s | s ∧ x.t     | s                             |
| t | t ∧ (x ⩽ 1) | (t ∧ (x < 1)) ∨ (u ∧ (x = 1)) |

A run of A starts in location s. Every time an a-event occurs, the automaton makes a simultaneous transition to both s and t, thus opening up a new thread of computation. The automaton resets a fresh copy of clock x when it moves from location s to t, and in location t it only performs transitions as long as the clock does not exceed one. Therefore if location t is entered at some point in a non-Zeno run, it must eventually be exited. Inspecting the transition table, we see that the only way for this to happen is if a b-event occurs exactly one time unit after the a-event that spawned the t-state.

² Virtually all decision problems, and in particular language emptiness, are undecidable for alternating automata with more than one clock.
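The thread-spawning behaviour just described is easy to replay mechanically. The sketch below (ours) steps a configuration of A through the timed word used in Figure 1; the transition table has no row for u, so we assume u is an accepting sink, and we use exact rationals so that the punctual test x = 1 is meaningful:

```python
from fractions import Fraction as F

# A configuration is a set of (location, clock) pairs.
def step(config, delay, event):
    new = set()
    for loc, x in config:
        x = x + delay
        if loc == 's':
            new |= {('s', x), ('t', F(0))} if event == 'a' else {('s', x)}
        elif loc == 't':
            if event == 'a':
                if not x <= 1:          # delta(t, a) = t AND (x <= 1)
                    return None         # thread blocks: the word is rejected
                new.add(('t', x))
            else:                       # delta(t, b) = (t, x<1) OR (u, x=1)
                if x < 1:
                    new.add(('t', x))
                elif x == 1:
                    new.add(('u', x))
                else:
                    return None
        else:
            new.add((loc, x))           # assumed sink behaviour for u
    return new

word = [('a', F(3, 10)), ('b', F(1, 2)), ('a', F(4, 5)),
        ('b', F(13, 10)), ('b', F(9, 5))]
config, now = {('s', F(0))}, F(0)
for ev, t in word:
    config = step(config, t - now, ev)
    now = t
    print(f'{float(t):.1f} {ev}:', sorted((l, float(x)) for l, x in config))
```

Running it reproduces the successive configurations ∆_1, ..., ∆_5 shown in Figure 1 below.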
Next we proceed to the formal definition of a run.
Define a tree to be a directed acyclic graph (V, E) with a distinguished root node such that every node is reachable by a unique finite path from the root. It is clear that every tree admits a stratification, level : V → ℕ, such that v E v′ implies level(v′) = level(v) + 1 and the root has level 0.

Let A = (Σ, S, s_0, δ) be an automaton. A state of A is a pair (s, ν), where s ∈ S is a location and ν ∈ ℝ≥0 is the clock value. Write Q = S × ℝ≥0 for the set of all states. A finite set of states is a configuration. Given a clock value ν, we define a satisfaction relation |=_ν between configurations and formulas in Φ(S) according to the intuition that state (s, ν) can make an a-transition to configuration C if C |=_ν δ(s, a). The definition of C |=_ν ϕ is given by induction on ϕ ∈ Φ(S) as follows: C |=_ν t if (t, ν) ∈ C, C |=_ν x ▷◁ c if ν ▷◁ c, C |=_ν x.ϕ if C |=_0 ϕ, and we handle the Boolean connectives in Φ(S) in the obvious way.
**Definition 4.** A run ∆ of A on a timed word (σ, τ) consists of a tree (V, E) and a labelling function l : V → Q such that if l(v) = (s, ν) for some level-n node v ∈ V, then {l(v′) | v E v′} |=_{ν′} δ(s, σ_n), where ν′ = ν + (τ_n − τ_{n−1}).

The language of A, denoted L(A), consists of all non-Zeno words over which A has a run whose root is labelled (s_0, 0).

Figure 1 depicts part of a run of the automaton A from Example 2 on the timed word ⟨(a, 0.3), (b, 0.5), (a, 0.8), (b, 1.3), (b, 1.8), ...⟩.
**Fig. 1.** Consecutive levels ∆_0, ..., ∆_5 in a run of A
One typically applies the acceptance condition in an alternating automaton to all paths in the run tree [20]. In the present context, since every location is accepting, the tree structure plays no role in the definition of acceptance; in this respect, a run could be viewed simply as a sequence of configurations. This motivates the following definition.

**Definition 5.** Given a run ∆ = ((V, E), l) of A, for each n ∈ ℕ the configuration ∆_n = {l(v) | v ∈ V, level(v) = n} consists of the states at level n in ∆ (cf. the dashed boxes in Figure 1).
The reader may wonder why we mention trees at all in Definition 4. The reason
is quite subtle: the tree structure is convenient for expressing a certain fairness
property (cf. Lemma 2) that allows a Zeno run to be transformed into a non-Zeno
run by inserting extra time delays.
Definition 4 only allows runs that start in a single state. More generally, we
allow runs that start in an arbitrary configuration C = {(si, νi)}i∈I . Such a run
is a forest consisting of |I| different run trees, where the i-th run starts at (si, νi).
**3.1** **Translating Safety MTL into Timed Automata**
Given a Safety MTL formula ϕ, one can define a timed alternating automaton A_ϕ such that L(A_ϕ) = L(ϕ). Since space is restricted, and since we have already given a similar translation in [16], we refer the reader to [18] for details. However, we draw the reader's attention to two important points. First, it is the restriction to time-bounded until operators combined with the adoption of a non-Zeno semantics that allows us to translate a Safety MTL formula into an automaton in which every location is accepting; this is illustrated in Example 2, where location t, corresponding to the response formula ♦_{=1}b, is accepting. Secondly, we point out that each automaton A_ϕ is local according to the definition below. This last observation is important because it is the class of local automata for which Section 5 shows decidability of language emptiness.
**Definition 6.** An automaton A = (Σ, S, s_0, δ) is local if for each s ∈ S and a ∈ Σ, each location t ≠ s appearing in δ(s, a) lies within the scope of a reset quantifier x.(−), i.e., the automaton resets the clock whenever it changes location.

We call such automata local because the static and dynamic scope of any reset quantification agree, i.e., the scope does not 'extend' across transitions to different locations. An investigation of the different expressiveness of local and non-local temporal logics is carried out in [8].
## 4 The Region Automaton
Throughout this section let A = (Σ, S, s0, δ) be a timed alternating automaton,
and let cmax be the maximum constant appearing in a clock constraint in A.
**4.1** **Abstract Configurations**
We partition the set ℝ≥0 of nonnegative real numbers into the set REG = {r_0, r_1, ..., r_{2cmax+1}} of regions, where r_{2i} = {i} for i ⩽ cmax, r_{2i+1} = (i, i+1) for i < cmax, and r_{2cmax+1} = (cmax, ∞). The successor of each region is given by succ(r_i) = r_{i+1} for i < 2cmax+1 and succ(r_{2cmax+1}) = r_{2cmax+1}. Henceforth let rmax denote r_{2cmax+1} and write reg(u) to denote the region containing u ∈ ℝ≥0. The fractional part of a nonnegative real x ∈ ℝ≥0 is frac(x) = x − ⌊x⌋.

We use the regions to define a discrete representation of configurations that abstracts away from precise clock values, recording only their values to the nearest integer and the relative order of their fractional parts, cf. [4].
**Definition 7.** An abstract configuration is a finite word over the alphabet Λ = ℘(S × REG) of nonempty finite subsets of S × REG.

Define an abstraction function H : ℘(Q) → Λ*, yielding an abstract configuration H(C) for each configuration C as follows. First, lift the function reg to configurations by reg(C) = {(s, reg(ν)) : (s, ν) ∈ C}. Now given a configuration C, partition C into a sequence of subsets C_1, ..., C_n, such that for all (s, ν) ∈ C_i and (t, ν′) ∈ C_j, frac(ν) ⩽ frac(ν′) iff i ⩽ j (so (s, ν) and (t, ν′) are in the same block C_i iff ν and ν′ have the same fractional part). Then define H(C) = ⟨reg(C_1), ..., reg(C_n)⟩ ∈ Λ*.

_Example 3._ Consider the automaton A from Example 2. The maximum clock constant appearing in A is 1, and the corresponding regions are r_0 = {0}, r_1 = (0, 1), r_2 = {1} and r_3 = (1, ∞). Given a concrete configuration C = {(s, 1), (t, 0.4), (s, 1.4), (t, 0.8)}, the corresponding abstract configuration H(C) is ⟨{(s, r_2)}, {(t, r_1), (s, r_3)}, {(t, r_1)}⟩.
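The abstraction H is straightforward to mechanise. In the sketch below (our own addition), a region r_k is encoded simply by its index k, and the blocks are grouped and ordered by fractional part exactly as in the definition; running it reproduces Example 3:

```python
from fractions import Fraction as F
from math import floor

CMAX = 1   # maximum clock constant of the automaton in Example 3

def reg(v):
    """Index of the region of v: r_{2i} = {i}, r_{2i+1} = (i, i+1),
    and r_{2*CMAX+1} = (CMAX, inf)."""
    if v > CMAX:
        return 2 * CMAX + 1
    i = floor(v)
    return 2 * i if v == i else 2 * i + 1

def H(C):
    """Abstract a configuration: group states by fractional part,
    ordered by increasing fractional part."""
    blocks = {}
    for s, v in C:
        blocks.setdefault(v - floor(v), set()).add((s, reg(v)))
    return [blocks[f] for f in sorted(blocks)]

C = {('s', F(1)), ('t', F(2, 5)), ('s', F(7, 5)), ('t', F(4, 5))}
for letter in H(C):
    print(sorted(letter))
# -> [('s', 2)], then [('s', 3), ('t', 1)], then [('t', 1)]
```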
The image of the function H, which is a proper subset of Λ*, is the set of well-formed words according to the following definition.

**Definition 8.** Say that an abstract configuration w ∈ Λ* is well-formed if it is empty or if both of the following hold.

– The only letter of w containing a pair (s, r) with r a singular region is the first letter w_0.
– Whenever w_0 contains a singular region, the only nonsingular region that also appears in w_0 is rmax.

Write W ⊆ Λ* for the set of well-formed words.
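Definition 8 can likewise be checked letter by letter. A sketch (ours), again encoding region r_k by its index so that r_0, r_2, ..., r_{2cmax} are the singular regions:

```python
def is_well_formed(w, cmax=1):
    """Check Definition 8 on an abstract configuration w, given as a list
    of letters; each letter is a set of (location, region-index) pairs."""
    singular = lambda r: r % 2 == 0 and r <= 2 * cmax
    rmax = 2 * cmax + 1
    if not w:
        return True
    # only the first letter may contain a singular region
    if any(singular(r) for letter in w[1:] for _, r in letter):
        return False
    # if w[0] has a singular region, its only nonsingular region may be rmax
    if any(singular(r) for _, r in w[0]):
        return all(singular(r) or r == rmax for _, r in w[0])
    return True

print(is_well_formed([{('s', 2)}, {('t', 1), ('s', 3)}, {('t', 1)}]))  # True
print(is_well_formed([{('s', 1)}, {('t', 2)}]))                        # False
```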
We model the progression of time by introducing the notion of the time successor
of an abstract configuration. We first illustrate the idea informally with concrete
configurations.
_Example 4._ Consider a configuration C = {(s, 1.2), (t, 2.5), (s, 0.8)}. Intuitively, the time successor of C is C′ = {(s, 1.4), (t, 2.7), (s, 1)}, where time has advanced 0.2 units and the clock value in C with largest fractional part has moved to a new region. On the other hand, a time successor of C = {(s, 1), (t, 0.5)} is obtained after any time evolution δ, with 0 < δ < 0.5, so that the clock value with zero fractional part moves to a new region, while all other clock values remain in the same region. (Different values of δ lead to different configurations, but the underlying abstract configuration is the same.)
The definition below formally introduces the time successor of an abstract configuration. The two clauses correspond to the two different cases in Example 4.
The first clause models the case where a clock with zero fractional part advances
to the next region, while the second clause models the case where the clock with
maximum fractional part advances to the next region.
**Definition 9.** Let w = w_0···w_n ∈ W be an abstract configuration. We say that w is transient if w_0 contains a pair (s, r) with r singular.

– If w = w_0···w_n is transient, then its time successor is w_0′w_1···w_n, where w_0′ = {(s, succ(r)) : (s, r) ∈ w_0}.
– If w = w_0···w_n is not transient, then its time successor is w_n′w_0···w_{n−1}, where w_n′ = {(s, succ(r)) : (s, r) ∈ w_n}.
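Both clauses of Definition 9 translate directly into code. The following sketch (ours) uses the same index encoding of regions as above; the sample input is the abstract configuration ⟨{(s, r_1)}, {(t, r_1)}⟩, whose time successor ⟨{(t, r_2)}, {(s, r_1)}⟩ reappears in Example 6 below:

```python
def time_successor(w, cmax=1):
    """Definition 9 on an abstract configuration w, given as a list of
    sets of (location, region-index) pairs."""
    rmax = 2 * cmax + 1
    succ = lambda r: min(r + 1, rmax)          # succ(rmax) = rmax
    advance = lambda letter: {(s, succ(r)) for s, r in letter}
    transient = any(r % 2 == 0 and r <= 2 * cmax for _, r in w[0])
    if transient:                    # a clock with zero fractional part moves on
        return [advance(w[0])] + w[1:]
    return [advance(w[-1])] + w[:-1]  # the maximal fractional part wraps around

w = [{('s', 1)}, {('t', 1)}]
print(time_successor(w))             # -> [{('t', 2)}, {('s', 1)}]
```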
**4.2** **Definition of R(A)**
The region automaton R(A) is a nondeterministic infinite-state untimed automaton (with ε-transitions) that mimics A. The states of R(A) are abstract configurations, representing levels in a run of A, and the transition relation contains those pairs of states representing consecutive levels in a run. We partition the transitions into two classes: conservative and progressive. Intuitively, a transition is progressive if it cycles the fractional order of the clock values in a configuration. This notion will play a role in our analysis of non-Zenoness in Section 5.

The definition of R(A) is as follows:

– **Alphabet.** The alphabet of R(A) is Σ.
– **States.** The set of states of R(A) is the set W ⊆ Λ* of well-formed words over alphabet Λ = ℘(S × REG). The initial state is ⟨{(s_0, r_0)}⟩.
– **ε-transitions.** If w ∈ W has time successor w′ ≠ w, then we include a transition w →_ε w′ (excluding self-loops here is a technical convenience). This transition is classified as conservative if w is transient, otherwise it is progressive.
– **Labelled transitions.** Σ-labelled transitions in R(A) represent instantaneous transitions of A. Given a ∈ Σ, we include a transition w →_a w′ in R(A) if there exist A-configurations C and C′ with H(C) = w, H(C′) = w′, C = {(s_i, ν_i)}_{i∈I} and

C′ = ⋃_{i∈I} M_i, where M_i |=_{ν_i} δ(s_i, a) for each i ∈ I.

We say that this transition is progressive if C′ = ∅ or

max{frac(ν) : (s, ν) ∈ C′} < max{frac(ν) : (s, ν) ∈ C}, (1)

otherwise we say that the transition is conservative. Note that (1) says that the clocks in C with maximal fractional part get reset in the course of the transition.

The above definition of the Σ-labelled transition relation (as a quotient) is meant to be succinct and intuitive. However, it is straightforward to compute the successors of each state w ∈ W directly from the transition function δ of A. For example, if δ(s, a) = s ∧ x.t then we include a transition ⟨{(s, r_1)}⟩ →_a ⟨{(t, r_0)}, {(s, r_1)}⟩ in R(A).

Given a ∈ Σ, write w ⇒_a w′ if w′ can be reached from w by a sequence of ε-transitions, followed by a single a-transition. The following is a variant of [16, Definition 15].
**Lemma 1.** Let ∆ be a run of A on a timed word (σ, τ), and recall that ∆_n ⊆ Q is the set of states labelling the n-th level of ∆. Then R(A) has a run

[∆] : H(∆_0) ⇒_{σ_0} H(∆_1) ⇒_{σ_1} H(∆_2) ⇒_{σ_2} ···

on the untimed word σ ∈ Σ^ω.

Conversely, if R(A) has an infinite run r on σ ∈ Σ^ω, then there is a time sequence τ and a run ∆ of A on (σ, τ) such that [∆] = r.

Lemma 1 is a first step towards reducing the language-emptiness problem for A to the language-emptiness problem for R(A). What is lacking is a characterisation of non-Zeno runs of A in terms of R(A). Also, since R(A) has infinitely many states, its own language-emptiness problem is nontrivial. We deal with both these issues in Section 5.
## 5 A Decision Procedure for Satisfiability
Let A be a local timed alternating automaton. We give a procedure for determining whether A has nonempty language. The key ideas are as follows. We define the notion of a progressive run of the region automaton R(A), such that R(A) has a progressive run iff A has a non-Zeno run. We then use a backward reachability analysis to determine the set of states of R(A) from which there is a progressive run. The effectiveness of this analysis depends on a well-quasi-order on the states of R(A).
**5.1** **Background on Well-Quasi-Orders**
Recall that a quasi-order on a set Q is a reflexive and transitive relation ≼ ⊆ Q × Q. Given such an order we say that L ⊆ Q is a lower set if x ∈ Q, y ∈ L and x ≼ y implies x ∈ L. The notion of an upper set is similarly defined. We define the upward closure of S ⊆ Q, denoted ↑S, to be {x | ∃y ∈ S : y ≼ x}. This is the smallest upper set that contains S. A basis of an upper set U is a subset U_b ⊆ U such that U = ↑U_b. A cobasis of a lower set L is a basis of the upper set Q \ L.
**Definition 10.** A well-quasi-order (wqo) is a quasi-order (Q, ≼) such that for any infinite sequence q_0, q_1, q_2, ... in Q, there exist indices i < j such that q_i ≼ q_j.

_Example 5._ Let ⩽ be a quasi-order on a finite alphabet Λ. Define the induced monotone domination order ≼ on Λ*, the set of finite words over Λ, by a_1...a_m ≼ b_1...b_n if there exists a strictly increasing function f : {1, ..., m} → {1, ..., n} such that a_i ⩽ b_{f(i)} for all i ∈ {1, ..., m}. Higman's Lemma states that if ⩽ is a wqo on Λ, then the induced monotone domination order ≼ is a wqo on Λ*.
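The monotone domination order is decidable by a greedy left-to-right embedding test, which is correct for any letter order ⩽. In the sketch below (ours), letters are finite sets compared by inclusion, a natural choice for the alphabet Λ used later, although the paper leaves the letter order implicit at this point:

```python
def dominated(v, w, leq):
    """Is v ≼ w in the monotone domination order induced by leq on letters?
    Greedy scan: match each letter of v against the earliest usable letter
    of w; if an embedding exists, the greedy one does too."""
    i = 0
    for b in w:
        if i < len(v) and leq(v[i], b):
            i += 1
    return i == len(v)

incl = lambda a, b: a <= b          # subset order on set-valued letters
print(dominated([{('s', 1)}], [{('s', 1)}, {('t', 1)}], incl))   # True
print(dominated([{('s', 2)}], [{('s', 1)}, {('t', 1)}], incl))   # False
```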
**Proposition 1.** [9, Lemma 2.4] Let (Q, ≼) be a wqo. Then

1. each lower set L ⊆ Q has a finite cobasis;
2. each infinite decreasing sequence L_0 ⊇ L_1 ⊇ L_2 ⊇ ··· of lower sets eventually stabilises, i.e., there exists k ∈ ℕ such that L_n = L_k for all n ⩾ k.
**5.2** **Progressive Runs**
**Definition 11.** Overloading terminology, we say that a run r : w → w′ → w″ → ··· of R(A) is progressive if it contains infinitely many progressive transitions.

The above definition is motivated by the notion of a progressive run of an (ordinary) timed automaton [4, Definition 4.11]. However our definition is more primitive. In particular, Lemma 2, which for us is a property of progressive runs, is the actual analog of Alur and Dill's definition of a progressive run.

**Lemma 2.** Suppose ∆ is a run of A over (σ, τ) such that the corresponding run [∆] of R(A) is progressive. Then there exists an infinite sequence of integers n_0 < n_1 < ··· such that τ_{n_0} < τ_{n_1} < ··· and every path in ∆ running from a level-n_i node to a level-n_{i+1} node contains a node (s, ν) in which ν = 0 or ν > cmax.
We use Lemma 2 in the proof of Theorem 1 below, which closely follows [4, Lemma 4.13].

**Theorem 1.** A has a non-Zeno run iff R(A) has a progressive run.

_Proof (sketch)._ It is straightforward that if ∆ is a non-Zeno run of A, then [∆] is a progressive run of R(A). The interesting direction is the converse.

Suppose that R(A) has a progressive run r on a word σ ∈ Σ^ω. Then by Lemma 1 there is a time sequence τ and a run ∆ of A over (σ, τ) such that [∆] = r. If τ is non-Zeno then there is nothing to prove. We therefore suppose that τ is Zeno, and show how to modify ∆ by inserting extra time delays to obtain a non-Zeno run ∆′.

Since τ is Zeno there exists N ∈ ℕ such that τ_j − τ_i < 1/4 for all i, j ⩾ N. Let n_0 < n_1 < ··· be the sequence of integers in Lemma 2 where, without loss of generality, N < n_0. Define a new time sequence τ′ by inserting extra delays in τ as follows: τ′_{i+1} − τ′_i = τ_{i+1} − τ_i if i ∉ {n_1, n_2, ...}, and τ′_{i+1} − τ′_i = 1/2 if i ∈ {n_1, n_2, ...}.

Clearly τ′ is non-Zeno. We claim that a run ∆′ over the timed word (σ, τ′) can be constructed by appropriately modifying the clock values of the states occurring in ∆ to account for the extra delay. What needs to be checked here is that the modified clock values remain in the same region.

Consider a path π through ∆, and let π[m, n] denote the segment of π from level m to level n in ∆. If the clock x does not get reset in the segment π[n_0, n_i] for some i, then, by Lemma 2, it is continuously greater than cmax along the segment π[n_1, n_i]: so the extra delay in ∆′ is harmless on this part of π. Now if x gets reset in the segment π[n_i, n_{i+1}] for some i, it can thereafter never exceed 1/4 along π. Thus, by Lemma 2, it must get reset at least once in every segment π[n_j, n_{j+1}] for j ⩾ i. In this case the extra delay in ∆′ is again harmless. ⊓⊔
**5.3** **Fixed-Point Characterisation**
Let PR ⊆ W denote the set of states of R(A) from which a progressive run can originate. In order to compute PR we first characterise it as a fixed point.
**Definition 12.** Let I ⊆ W be a set of states of R(A). Define Pred⁺(I) to consist of those w ∈ W such that there is a (possibly empty) sequence of conservative transitions w → w′ → w″ → ··· → w⁽ⁿ⁾, followed by a single progressive transition w⁽ⁿ⁾ → w⁽ⁿ⁺¹⁾, such that w⁽ⁿ⁺¹⁾ ∈ I.

It is straightforward that PR is the greatest fixed point of Pred⁺(−) : 2^W → 2^W with respect to the set-inclusion order³. Given this characterisation, one idea to compute PR is via the following decreasing chain of approximations:

W ⊇ Pred⁺(W) ⊇ (Pred⁺)²(W) ⊇ ··· . (2)

But it turns out that we have to refine this idea a little to get an effective procedure. We start by observing the existence of a well-quasi-order on W.
**Definition 13.** Define the quasi-order ≼ on W ⊆ Λ* to be the monotone domination order over Λ (cf. Example 5).

We might hope to use Proposition 1 to show that the chain (2) stabilises after finitely many steps. However Pred⁺ does not map lower sets to lower sets in general. This reflects a failure of the progressive-transition relation to be downwards compatible with ≼ in the sense of [9]. (This is not surprising: the possibility of w ∈ W performing a progressive transition depends on its first and last letters.)

_Example 6._ Consider the automaton A in Example 2, with associated regions including r_0 = {0}, r_1 = (0, 1) and r_2 = {1}. Then, in R(A), w = ⟨{(s, r_1)}, {(t, r_1)}⟩ makes a progressive ε-transition to w′ = ⟨{(t, r_2)}, {(s, r_1)}⟩. However, ⟨{(s, r_1)}⟩, which is a subword of w, does not belong to Pred⁺(↓w′). Indeed, any state reachable from ⟨{(s, r_1)}⟩ by a sequence of conservative transitions followed by a single progressive transition must contain the letter {(s, r_2)}.
Although Pred⁺ fails to enjoy one-step compatibility with ≼, it satisfies a kind of infinitary compatibility. More precisely, even though Pred⁺ does not map lower sets to lower sets, its greatest fixed point is a lower set.

**Proposition 2.** PR is a lower set.

_Proof._ We exploit the correspondence between non-Zeno runs of A and progressive runs of R(A), as given in Theorem 1.

Suppose w′ ∈ PR and w ≼ w′. Then there exist A-configurations C, C′ such that C ⊆ C′, H(C) = w and H(C′) = w′. Since w′ ∈ PR, by Theorem 1 A has a run ∆′ on some non-Zeno word ρ such that ∆′_0 = C′. Now let ∆ be the subgraph of ∆′ consisting of all nodes reachable from those level-0 nodes of ∆′ labelled by elements of C ⊆ C′. Then ∆ is also a run of A on ρ, so w ∈ PR by Theorem 1 again. ⊓⊔
³ It is not possible for w to belong to the greatest fixed point of Pred⁺ merely by virtue of being able to perform an infinite consecutive sequence of ε-transitions that includes infinitely many progressive ε-transitions. The reason is that once all the clock values in a configuration have advanced beyond the maximum clock constant cmax, the configuration is no longer capable of performing ε-transitions (cf. Section 4.2).
In anticipation of applying Proposition 2, we make the following definition.

**Definition 14.** Define Ψ : 2^W → 2^W by Ψ(I) = W \ ↑(W \ Pred⁺(I)).

By construction, Ψ maps lower sets to lower sets. Also, being a monotone self-map of (2^W, ⊆), it has a greatest fixed point, denoted gfp(Ψ).

**Proposition 3.** PR is the greatest fixed point of Ψ.

_Proof._ Since PR is both a fixed point of Pred⁺ and a lower set we have:

Ψ(PR) = W \ ↑(W \ Pred⁺(PR)) = W \ ↑(W \ PR) = W \ (W \ PR) = PR.

That is, PR is a fixed point of Ψ. It follows that PR ⊆ gfp(Ψ).

The reverse inclusion, gfp(Ψ) ⊆ PR, follows easily from the fact that Ψ(I) ⊆ Pred⁺(I) for all I ⊆ W. ⊓⊔
Next we assert that Ψ is computable.
**Proposition 4.** Given a finite cobasis of a lower set L ⊆ W, there is a procedure to compute a finite cobasis of Ψ(L).

Proposition 4 is nontrivial since the definition of Ψ involves Pred⁺, which refers to multi-step reachability (by conservative transitions), not just single-step reachability. We refer the reader to [18] for a detailed proof. The proof exploits the fact that conservative transitions on local automata have a very restricted ability to transform a configuration: for instance, the only way they can change the order of the fractional values of the clocks is by resetting some clocks to 0.
**5.4** **Main Results**
**Theorem 2. The satisfiability problem for Safety MTL is decidable.**
_Proof._ Since every Safety MTL formula can be translated into a local automaton, it suffices to show that language emptiness is decidable for local automata.

Given a local automaton A, let Ψ be as in Definition 14. Since Ψ is monotone and maps lower sets to lower sets, W ⊇ Ψ(W) ⊇ Ψ²(W) ⊇ ··· is a decreasing sequence of lower sets in (W, ≼). By Proposition 1 this sequence stabilises after some finite number of iterations. By construction, the stabilising value is the greatest fixed point of Ψ, which by Proposition 3 is the set PR. Furthermore, using Proposition 4 we can compute a finite cobasis of each successive iterate Ψⁿ(W) until we eventually obtain a cobasis for PR. We can then decide whether the initial state of R(A) is in PR which, by Theorem 1, holds iff A has nonempty language. ⊓⊔
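The overall shape of this procedure can be summarised in a few lines. The sketch below (our own addition) is only a skeleton: psi stands for the computable cobasis-level operator promised by Proposition 4, lower sets are represented by finite cobases, and we naively test stabilisation by cobasis equality (a real implementation would compare upward closures via the domination order):

```python
def in_lower_set(w, cobasis, dominated):
    # L = W minus the upward closure of the cobasis: w lies in L iff
    # no cobasis element embeds into w under the domination order.
    return not any(dominated(u, w) for u in cobasis)

def nonempty_language(initial, psi, dominated):
    """Iterate W ⊇ Ψ(W) ⊇ Ψ²(W) ⊇ ... on cobases until the chain
    stabilises (guaranteed by the wqo, Proposition 1), then test whether
    the initial state of R(A) lies in PR."""
    cobasis = []                    # the empty cobasis represents all of W
    while True:
        nxt = psi(cobasis)          # one application of Proposition 4 (assumed)
        if nxt == cobasis:          # naive stabilisation test (see above)
            break
        cobasis = nxt
    return in_lower_set(initial, cobasis, dominated)
```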
We leave the complexity of the satisfiability problem for future work. The argument used to derive the nonprimitive recursive lower bound for MTL satisfiability
over finite timed words [16] does not apply here.
By combining the techniques used to prove Theorem 2 with the techniques
used in [16] to show that the model-checking problem is decidable for Safety
MTL, one can show the decidability of the refinement problem: ‘Given two Safety
MTL formulas ϕ1 and ϕ2, does every word satisfying ϕ1 also satisfy ϕ2?’
**Theorem 3. The refinement problem for Safety MTL is decidable.**
## 6 Conclusion
It is folklore that extending linear temporal logic in any way that enables expressing the punctual specification 'in one time unit ϕ will hold' yields an undecidable logic over a dense-time semantics. Together with [17], this paper reveals that there is an unexpected factor affecting the truth or falsity of this belief. While [17] shows that Metric Temporal Logic is undecidable over timed ω-words, the proof depends on being able to express liveness properties, such as □♦p. On the other hand, this paper shows that the safety fragment of MTL remains fully decidable in the presence of punctual timing constraints. This fragment
is not closed under complement, and the decision procedures for satisfiability
and model checking are quite different. The algorithm for satisfiability solves a
nontermination problem on a well-structured transition system by iterated backward reachability, while the algorithm for model checking, given in a previous
paper [16], used forward reachability.
**Acknowledgement. The authors would like to thank the anonymous referees**
for providing many helpful suggestions to improve the presentation of the paper.
## References
1. P. A. Abdulla, J. Deneux, J. Ouaknine and J. Worrell. Decidability and complexity
results for timed automata via channel systems. In Proceedings of ICALP 05, LNCS
3580, 2005.
2. P. A. Abdulla and B. Jonsson. Undecidable verification problems with unreliable
channels. Information and Computation, 130:71–90, 1996.
3. P. A. Abdulla, B. Jonsson. Model checking of systems with many identical timed
processes. Theoretical Computer Science, 290(1):241–264, 2003.
4. R. Alur and D. Dill. A theory of timed automata. Theoretical Computer Science,
126:183–235, 1994.
5. R. Alur, T. Feder and T. A. Henzinger. The benefits of relaxing punctuality. Journal
_of the ACM, 43:116–146, 1996._
6. R. Alur and T. A. Henzinger. Real-time logics: complexity and expressiveness.
_Information and Computation, 104:35–77, 1993._
7. R. Alur and T. A. Henzinger. A really temporal logic. Journal of the ACM, 41:181–
204, 1994.
8. P. Bouyer, F. Chevalier and N. Markey. On the expressiveness of TPTL and MTL.
_Research report LSV-2005-05, Lab. Spécification et Vérification_, May 2005.
9. A. Finkel and P. Schnoebelen. Well-structured transition systems everywhere!
_Theoretical Computer Science_, 256(1-2):63–92, 2001.
10. T. A. Henzinger. It’s about time: Real-time logics reviewed. In Proceedings of
_CONCUR 98, LNCS 1466, 1998._
11. T. A. Henzinger, Z. Manna and A. Pnueli. What good are digital clocks? In
_Proceedings of ICALP 92_, LNCS 623, 1992.
12. T. A. Henzinger, J.-F. Raskin, and P.-Y. Schobbens. The regular real-time
languages. In _Proceedings of ICALP 98_, LNCS 1443, 1998.
13. G. Higman. Ordering by divisibility in abstract algebras. _Proceedings of the London
Mathematical Society_, 2:326–336, 1952.
14. R. Koymans. Specifying real-time properties with metric temporal logic. Real-time
_Systems, 2(4):255–299, 1990._
15. S. Lasota and I. Walukiewicz. Alternating timed automata. In _Proceedings of
FOSSACS 05_, LNCS 3441, 2005.
16. J. Ouaknine and J. Worrell. On the decidability of Metric Temporal Logic. In
_Proceedings of LICS 05, IEEE Computer Society Press, 2005._
17. J. Ouaknine and J. Worrell. Metric temporal logic and faulty Turing machines.
Proceedings of FOSSACS 06, LNCS, 2006.
18. J. Ouaknine and J. Worrell. Safety MTL is fully decidable. Oxford University
Programming Research Group Research Report RR-06-02.
19. J.-F. Raskin and P.-Y. Schobbens. State-clock logic: a decidable real-time logic. In
_Proceedings of HART 97, LNCS 1201, 1997._
20. M. Vardi. Alternating automata: Unifying truth and validity checking for temporal
logics. In Proceedings of CADE 97, LNCS 1249, 1997.
21. F. Wang. Formal Verification of Timed Systems: A Survey and Perspective.
_Proceedings of the IEEE_, 92(8):1283–1307, 2004.
22. T. Wilke. Specifying timed state sequences in powerful decidable logics and timed
automata. Formal Techniques in Real-Time and Fault-Tolerant Systems, LNCS
863, 1994.
-----
| 13,634
|
{
"disclaimer": "Notice: The following paper fields have been elided by the publisher: {'references', 'abstract'}. Paper or abstract available at https://api.unpaywall.org/v2/10.1007/11691372_27?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1007/11691372_27, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": null,
"status": "BRONZE",
"url": "https://link.springer.com/content/pdf/10.1007/11691372_27.pdf"
}
| 2,006
|
[
"JournalArticle"
] | true
| 2006-03-25T00:00:00
|
[] | 13,634
|
en
|
[
{
"category": "Medicine",
"source": "external"
},
{
"category": "Medicine",
"source": "s2-fos-model"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/009f5229d00856f877f08ccd69ef9ebf23f92a3f
|
[
"Medicine"
] | 0.819188
|
Application of a Machine Learning Technology in the Definition of Metabolically Healthy and Unhealthy Status: A Retrospective Study of 2567 Subjects Suffering from Obesity with or without Metabolic Syndrome
|
009f5229d00856f877f08ccd69ef9ebf23f92a3f
|
Nutrients
|
[
{
"authorId": "80341737",
"name": "D. Masi"
},
{
"authorId": "1409472120",
"name": "R. Risi"
},
{
"authorId": "2150802474",
"name": "Filippo Biagi"
},
{
"authorId": "2151253226",
"name": "Daniel Vasquez Barahona"
},
{
"authorId": "46865925",
"name": "Mikiko Watanabe"
},
{
"authorId": "101722551",
"name": "R. Zilich"
},
{
"authorId": "49979923",
"name": "G. Gabrielli"
},
{
"authorId": "2150806098",
"name": "Pierluigi Santin"
},
{
"authorId": "4594827",
"name": "S. Mariani"
},
{
"authorId": "4194183",
"name": "C. Lubrano"
},
{
"authorId": "3624061",
"name": "L. Gnessi"
}
] |
{
"alternate_issns": null,
"alternate_names": null,
"alternate_urls": [
"http://nbn-resolving.de/urn/resolver.pl?urn=urn:nbn:ch:bel-169249",
"http://www.mdpi.com/journal/nutrients/",
"https://www.mdpi.com/journal/nutrients"
],
"id": "3416dd37-f45d-40ed-b04e-875fcff8fa2f",
"issn": "2072-6643",
"name": "Nutrients",
"type": "journal",
"url": "http://www.e-helvetica.nb.admin.ch/directAccess?callnumber=bel-169249"
}
|
The key factors playing a role in the pathogenesis of metabolic alterations observed in many patients with obesity have not been fully characterized. Their identification is crucial, and it would represent a fundamental step towards better management of this urgent public health issue. This aim could be accomplished by exploiting the potential of machine learning (ML) technology. In a single-centre study (n = 2567), we used an ML analysis to cluster patients with metabolically healthy (MHO) or metabolically unhealthy (MUO) obesity, based on several clinical and biochemical variables. The first model provided by ML was able to predict the presence/absence of MHO with an accuracy of 66.67% and 72.15%, respectively, and included the following parameters: HOMA-IR, upper body fat/lower body fat, glycosylated haemoglobin, red blood cells, age, alanine aminotransferase, uric acid, white blood cells, insulin-like growth factor 1 (IGF-1) and gamma-glutamyl transferase. For each of these parameters, ML provided threshold values identifying either MUO or MHO. A second model including IGF-1 zSDS, a surrogate marker of IGF-1 normalized by age and sex, was even more accurate with a 71.84% and 72.3% precision, respectively. Our results demonstrated high IGF-1 levels in MHO patients, thus highlighting a possible role of IGF-1 as a novel metabolic health parameter to effectively predict the development of MUO using ML technology.
|
# nutrients
_Article_
## Application of a Machine Learning Technology in the Definition of Metabolically Healthy and Unhealthy Status: A Retrospective Study of 2567 Subjects Suffering from Obesity with or without Metabolic Syndrome
**Davide Masi** **[1,]*[,†], Renata Risi** **[1,2,†]** **, Filippo Biagi** **[1], Daniel Vasquez Barahona** **[1], Mikiko Watanabe** **[1]** **,**
**Rita Zilich** **[3], Gabriele Gabrielli** **[4], Pierluigi Santin** **[5], Stefania Mariani** **[1]** **, Carla Lubrano** **[1]** **and Lucio Gnessi** **[1]**
**Citation:** Masi, D.; Risi, R.; Biagi, F.; Vasquez Barahona, D.; Watanabe, M.; Zilich, R.; Gabrielli, G.; Santin, P.; Mariani, S.; Lubrano, C.; et al. Application of a Machine Learning Technology in the Definition of Metabolically Healthy and Unhealthy Status: A Retrospective Study of 2567 Subjects Suffering from Obesity with or without Metabolic Syndrome. _Nutrients_ **2022**, _14_, 373. [https://doi.org/10.3390/nu14020373](https://doi.org/10.3390/nu14020373)
Academic Editor:
Riccardo Caccialanza
Received: 25 November 2021
Accepted: 13 January 2022
Published: 15 January 2022
**Publisher’s Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license ([https://creativecommons.org/licenses/by/4.0/](https://creativecommons.org/licenses/by/4.0/)).
1 Department of Experimental Medicine, Section of Medical Pathophysiology, Food Science and Endocrinology,
Sapienza University of Rome, 00161 Rome, Italy; [email protected] (R.R.);
[email protected] (F.B.); [email protected] (D.V.B.);
[email protected] (M.W.); [email protected] (S.M.); [email protected] (C.L.);
[email protected] (L.G.)
2 MRC Metabolic Diseases Unit, MRC Institute of Metabolic Science, University of Cambridge,
Cambridge CB2 1TN, UK
3 Mix-x Partner, 20153 Milano, Italy; [email protected]
4 Rulex Inc., 16122 Genova, Italy; [email protected]
5 Deimos Engineering, 33100 Udine, Italy; [email protected]
***** Correspondence: [email protected]; Tel.: +39-06-499-707-16
† These authors contributed equally to this work.
**Abstract: The key factors playing a role in the pathogenesis of metabolic alterations observed in**
many patients with obesity have not been fully characterized. Their identification is crucial, and it
would represent a fundamental step towards better management of this urgent public health issue.
This aim could be accomplished by exploiting the potential of machine learning (ML) technology. In a
single-centre study (n = 2567), we used an ML analysis to cluster patients with metabolically healthy
(MHO) or metabolically unhealthy (MUO) obesity, based on several clinical and biochemical variables.
The first model provided by ML was able to predict the presence/absence of MHO with an accuracy
of 66.67% and 72.15%, respectively, and included the following parameters: HOMA-IR, upper body
fat/lower body fat, glycosylated haemoglobin, red blood cells, age, alanine aminotransferase, uric
acid, white blood cells, insulin-like growth factor 1 (IGF-1) and gamma-glutamyl transferase. For
each of these parameters, ML provided threshold values identifying either MUO or MHO. A second
model including IGF-1 zSDS, a surrogate marker of IGF-1 normalized by age and sex, was even more
accurate with a 71.84% and 72.3% precision, respectively. Our results demonstrated high IGF-1 levels
in MHO patients, thus highlighting a possible role of IGF-1 as a novel metabolic health parameter to
effectively predict the development of MUO using ML technology.
**Keywords: metabolic syndrome; insulin-like growth factor 1; artificial intelligence**
**1. Introduction**
Artificial intelligence (AI) is becoming increasingly present in the swiftly evolving
medical field, and it is expected to generate impactful advancements in the management of
a variety of diseases. The potential medical applications of AI are endless and include the
possibility of focusing on primary or secondary prevention, personalisation of treatment,
evaluation of risk factors and likelihood of developing specific disorders. Machine learning
(ML) is a form of AI which creates algorithms, learning from and acting on data [1]. Unlike
traditional analytical approaches, ML can probe information even with only a small amount
of prior knowledge and learning from data given as input [2]. The advantage of ML is
the possibility to analyse an increasing amount of qualitative and quantitative data in
an integrated system [3]. ML has already been successfully exploited to design the best
model to yield good metabolic control in type 2 diabetes mellitus (T2DM) [2] and to predict
the risk of obesity in early childhood and young people [4,5]. In certain diseases such as
obesity, marked by a wide variety of phenotypes and heterogenous manifestations, ML has
the potential to optimally characterise individuals, and can provide valuable information
to design a personalised management plan. With the help of ML technology, a recent
study has succeeded in subclassifying obese phenotypes into different metabolic clusters,
reflecting underlying pathophysiology [6].
Obesity is defined as an abnormal fat accumulation, with a detrimental effect on
health that has been historically diagnosed as a body mass index (BMI) equal to or greater
than 30 kg/m² [7,8]. The current diagnostic criteria, however, have poorly characterized
the obese population, as they do not take into account body fat distribution, which is
largely responsible for the cardiometabolic risk associated with obesity. The pattern of fat
deposition presents with a great interindividual variability and results in different clinical
presentations. As an example, visceral fat has been associated with a growing burden
of noncommunicable diseases, such as metabolic syndrome, diabetes and cardiovascular
disease [9]. The metabolic syndrome refers to the co-occurrence of several known cardiovascular risk factors, including altered glucose metabolism, obesity, atherogenic dyslipidaemia
and hypertension. There has been recent controversy about its definition, although the most
widely used criteria for the diagnosis are those established by the National Cholesterol Education Program Adult Treatment Panel III (NCEP ATP III) and the International Diabetes
Federation (IDF) [9]. Given the frequent association between metabolic syndrome and
obesity, clinical scientists distinguish a metabolically healthy obesity (MHO), characterized
by the absence of the parameters defining metabolic syndrome except for waist circumference, from a metabolically unhealthy obesity (MUO), characterized by a significantly
higher risk of complications and mortality [10]. The factors involved in the pathogenesis of
metabolic impairment in obesity have yet to be fully elucidated. As far as cardiovascular
risk is concerned, the prognostic significance of obesity phenotypes is still under debate; a
few studies have characterised their transition trajectories considering that alterations in
the physical activity level and morbidity disabilities may precede the onset of metabolic
abnormalities [11]. Findings from epidemiological studies have shown that the prevalence
of MHO ranges from less than 10% to almost 50% in obese individuals according to different definitions of metabolic health and the population studied [12–14]. Indeed,
poor metabolic health may increase mortality regardless of obesity status [15,16]. The
characterization of metabolic status would make it possible to identify obese patients who are at
higher risk of complications, since moderate weight loss can be sufficient to transition from
MUO to MHO and might also lower the risk of adverse outcomes. Applying the concept of
metabolic health in management strategies may help set attainable goals
and ultimately protect from cardio-metabolic diseases and early death [17].
One of the key predictive factors for metabolic disruption in obesity is insulin-like
growth factor 1 (IGF-1), a mitogenic hormone involved in several processes like growth,
angiogenesis and differentiation. In individuals with obesity, lower IGF-1 serum levels
and a blunted response to growth hormone-stimulating dynamic tests are associated
with greater metabolic impairment [18–25]. However, the usefulness of IGF-1 serum
measurement is limited by a poor standardization of its normal values, as they vary
significantly with gender, age and body fat [26]. In order to overcome this limit, the IGF-1 z
standard deviation score (IGF-1 zSDS) has been previously adopted as a surrogate marker
of IGF-1 normalized by age, gender and BMI [27].
Taking these considerations into account, the aim of the study was to define a model
predicting the diagnosis of MHO in the cohort of patients that have accessed the High
Specialization Centre for the Care of Obesity, Sapienza University of Rome, between 2010
and 2019 through ML technology.
In particular, we aimed to:
(1) Describe the cohort of patients at the time of their first access to our obesity specialisation centre with a rigorous collection of anthropometric, clinical and metabolic data.
(2) Apply AI with a logic ML approach in the obese subgroup of patients to identify new
parameters possibly involved mechanistically in the pathogenesis of the metabolic
syndrome (either clinical, biochemical or instrumental), which could help distinguish MUO from MHO patients and define the best model capable of predicting the
development of MUO, with a special focus on IGF-1 zSDS.
**2. Materials and Methods**
_2.1. Study Design_
This was an observational retrospective study. Data were derived from a database
including medical records of all patients attending the High Specialization Centre for the
Care of Obesity, Sapienza University of Rome, between 2001 and 2019. The study was
approved by the Medical Ethical Committee of Sapienza University of Rome (ref. CE5475)
and was conducted in accordance with the Declaration of Helsinki (1964) and subsequent
amendments. All patients undergoing clinical examination provided written consent
upon admission to our specialisation centre. Inclusion of patients in the ML analysis was
regulated by the following criteria:
− Inclusion criteria: age ≥18 years old and body mass index ≥30 kg/m².
− Exclusion criteria: (1) pregnancy or breastfeeding; (2) patients with type 1 diabetes
mellitus and severe chronic liver or kidney dysfunction; (3) tobacco habit and alcohol
abuse; (4) current medication with drugs that could lead to weight gain.
_2.2. Subjects and Measurements_
All clinical, anthropometric, biochemical and hormonal parameters that are routinely
part of the diagnostic path that patients undertake when hospitalized in our centre were
included in the database. All patients had extensive blood tests performed, such as complete
blood count and a comprehensive metabolic panel, including but not limited to renal and
liver function testing, serum electrolytes and additional analyses as needed.
2.2.1. Anthropometric Measurements
Anthropometric parameters were obtained between 8 and 10 a.m. in fasting subjects wearing light clothing and no shoes. Body weight was obtained with the use of a
balance-beam scale (Seca GmbH & Co., Hamburg, Germany). Height was rounded to the
nearest 0.5 cm. Waist circumference was measured at the level of the iliac crest and hip
circumference at the level of the symphysis-greater trochanter to the closest centimetre.
Subsequently, the following indirect anthropometric indices were derived: body mass
index (BMI) calculated as weight divided by squared height in metres (kg/m²); waist hip
ratio (WHR) calculated as waist circumference (cm) divided by hip circumference (cm).
Arterial blood pressure was measured at the right arm, with the patients in the sitting
position after five minutes of rest. The average of three different measurements with a
mercury sphygmomanometer was used for the analysis.
2.2.2. Routine Laboratory Assessments
Blood samples were collected between 8 and 9 a.m. by venepuncture from fasting
patients. Samples were then transferred to the local laboratory and handled according to
the local standards of practice.
The following assays were measured: complete blood count (CBC), fasting blood glucose (FBG), insulin, total cholesterol (TC), triglyceride (TG), high-density lipoprotein cholesterol (HDL-C), low-density lipoprotein cholesterol (LDL-C), glycosylated haemoglobin
(HbA1c), aspartate aminotransferase (AST), alanine aminotransferase (ALT), alkaline phosphatase (ALP), gamma-glutamyl transferase (γ GT), serum albumin, serum creatinine,
direct and indirect serum bilirubin, C-reactive protein (CRP), erythrocyte sedimentation
rate (ESR), serum sodium, serum potassium, serum calcium, serum phosphorus and
25-hydroxyvitamin D.
To predict insulin resistance, a homeostatic model assessment of insulin resistance
(HOMA-IR) was calculated according to the following formula: HOMA-IR = (insulin (mU/L)
× fasting blood glucose (mmol/L))/22.5.
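As a worked example, the formula can be implemented directly. The sketch below is ours, not the paper's; in particular, the ≈18.0 mg/dL-per-mmol/L conversion factor for glucose is a standard assumption that the text does not state.

```python
def homa_ir(insulin_mU_per_L: float, glucose_mmol_per_L: float) -> float:
    """HOMA-IR as defined above: (insulin [mU/L] x glucose [mmol/L]) / 22.5."""
    return insulin_mU_per_L * glucose_mmol_per_L / 22.5

def homa_ir_from_mg_dl(insulin_mU_per_L: float, glucose_mg_per_dL: float) -> float:
    """Convenience wrapper for glucose reported in mg/dL; dividing by ~18.0
    converts to mmol/L (our assumption, not part of the paper's formula)."""
    return homa_ir(insulin_mU_per_L, glucose_mg_per_dL / 18.0)

# Example: fasting insulin 12 mU/L, fasting glucose 5.0 mmol/L (90 mg/dL)
print(round(homa_ir(12, 5.0), 2))            # 2.67
print(round(homa_ir_from_mg_dl(12, 90), 2))  # 2.67
```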
2.2.3. Hormonal Assessments
In accordance with the European Society of Endocrinology Clinical Guideline on the
Endocrine Work-up in Obesity [28], patients were tested for secondary forms of obesity,
such as hypothyroidism or hypercortisolism, as appropriate.
TSH measurements were based on a chemiluminescent immunoassay (CLIA) using
ADVIA Centaur (Siemens Medical Solutions Diagnostics, Tokyo, Japan), whereas serum
cortisol was measured by an immunoradiometric assay (Abbott Diagnostics, Chicago,
IL, USA).
Moreover, insulin-like growth factor 1 (IGF-1) was measured in all patients presenting
with signs and symptoms of adult-onset growth hormone deficiency [29]. Specifically,
IGF-1 was assayed by an immunoradiometric assay, after ethanol extraction (Diagnostic
System Laboratories Inc., Webster, TX, USA). The normal ranges in <23, 23–30, 30–50,
50–100-year-old patients were 195–630, 180–420, 100–415, 70–250 ng/mL, respectively. Since
IGF-1 serum levels strictly depend on age and gender, we calculated the SDS of IGF-1 levels
according to age (zSDS) to analyse the relationships between IGF-1 levels and the other
parameters. In order to obtain a z-score, we calculated the mean and S.D. of IGF-1 levels in
young (<30 years), adults (30–50 years), middle-aged (50–65 years), and elderly (>65 years)
women and men, as previously described [27]. zSDS is defined by the following formula:
IGF-1 zSDS = (IGF-1 − mean)/S.D.
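A minimal sketch of this stratified z-score follows. The age bands reproduce those listed above; the per-stratum means and S.D.s are illustrative placeholders, since the real reference values were computed from the study population itself.

```python
# Age bands used above: <30, 30-50, 50-65, >65 years.
AGE_BANDS = [(0, 30), (30, 50), (50, 65), (65, 200)]

REFERENCE = {  # (sex, band index) -> (mean ng/mL, S.D. ng/mL); placeholders
    ("F", 0): (220.0, 60.0), ("F", 1): (180.0, 55.0),
    ("F", 2): (150.0, 50.0), ("F", 3): (120.0, 45.0),
    ("M", 0): (230.0, 65.0), ("M", 1): (190.0, 60.0),
    ("M", 2): (160.0, 55.0), ("M", 3): (125.0, 50.0),
}

def igf1_zsds(igf1: float, age: float, sex: str) -> float:
    """IGF-1 zSDS = (IGF-1 - mean) / S.D., with mean and S.D. taken from
    the subject's own age-and-sex stratum."""
    band = next(i for i, (lo, hi) in enumerate(AGE_BANDS) if lo <= age < hi)
    mean, sd = REFERENCE[(sex, band)]
    return (igf1 - mean) / sd

print(round(igf1_zsds(150.0, 47, "F"), 2))  # (150 - 180) / 55 = -0.55
```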
2.2.4. Dual-Energy X-ray Absorptiometry
Human body composition parameters were measured with dual-energy X-ray absorptiometry (DXA) (Hologic A Inc., Bedford, MA, USA, QDR 4500W). All scans were
administered by trained research technicians using standardized procedures recommended
by GE-Healthcare. The instrument was calibrated daily. Whole body as well as regional
body composition were assessed. Delimiters for regional analysis were determined by
standard software (Hologic Inc., Marlborough, MA, USA, S/N 47168 VER. 11.2). Regions
of the head, trunk, arms and legs were distinguished with the use of specific anatomic
landmarks.
Therefore, for each patient, the following parameters were measured: whole-body fat
mass (FM, kg and %), truncal fat mass (TFM, kg and %), appendicular fat mass (AFM), lean
mass (kg). Appendicular lean mass (ALM, kg) was determined by summing lean mass
measurements of the arms and legs. Fat distribution was assessed by upper body/lower
body fat index, calculated as the ratio between upper body fat (head, arms and trunk fat,
kg) and lower body fat (leg fat, kg) [30].
_2.3. Characteristics of the Logic Machine Learning (LML)_
ML is a subdomain of AI that “learns” inherent statistical patterns in data to make
predictions about unseen data [31]. The power of this technology involves the analysis of a
plethora of variables, with subsequent identification of models that stratify patients at risk,
thus guiding the appropriate therapeutic strategy [3].
A specific type of ML approach is the “rule generation method”, which constructs
models that are described by a set of intelligible rules, thus allowing to derive important
insights about the variables included in the analysis and their relationships with the target
attribute. In particular, Rulex® (Innovation Lab, Rulex Analytics, Genova, Italy), which
was chosen for this analysis, is a logic machine learning (LML) original proprietary “clear
box-explainable” AI algorithm. This type of algorithm, unlike “black box” AI, does not pose
the problem of transparency and can be used with the objective of understanding a given
phenomenon by producing sets of intelligible rules expressed in the form “if premise . . .,
_then consequence . . . ”, where “premise” refers to the combination of conditions (conditional_
clauses) on the input variables, and “consequence” contains information about the target
function (yes or no/presence or absence of disease) [2,32]. Therefore, the Rulex® data
analysis process can be summarized in the following steps: (1) ML technology creates
a model from known variables and is able to establish a ranking with the most relevant
variables that explain the starting premise; (2) the model makes it explicit if there are
threshold values of the most important variables previously identified; (3) the model, if
used in a prediction, starting from variables of a new patient, makes it explicit why the
response is yes or no.
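To illustrate the form of such output, the hypothetical Python sketch below mimics an intelligible rule set and its evaluation. The conditions loosely echo threshold values reported later in the Discussion, but they are not the actual (proprietary) Rulex® rule set.

```python
from typing import Optional

# Each rule: a premise (conjunction of interval conditions on input
# variables) and a consequence (the predicted class). Values are invented.
RULES = [
    ({"HOMA-IR": (3.48, None), "upper/legs fat": (2.01, None)}, "MUO"),
    ({"HOMA-IR": (None, 2.48), "HbA1c": (None, 5.25)}, "MHO"),
]

def apply_rules(patient: dict, rules=RULES) -> Optional[str]:
    """Return the consequence of the first rule whose premise holds."""
    for premise, consequence in rules:
        if all((lo is None or patient[var] > lo) and
               (hi is None or patient[var] < hi)
               for var, (lo, hi) in premise.items()):
            return consequence
    return None  # no rule fired

print(apply_rules({"HOMA-IR": 4.1, "upper/legs fat": 2.2, "HbA1c": 5.9}))  # MUO
```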
In our study, the premises were the following two: (1) “the patient is metabolically
healthy” and (2) “the patient is metabolically unhealthy”. Specifically, patients were considered as metabolically healthy obese if they did not show any of the features of metabolic
syndrome described by the ATP III criteria on top of increased waist circumference (≥94 cm
for men and ≥80 cm for women) [33], whereas they were considered as metabolically
unhealthy when two or more of the features of metabolic syndrome were present. Patients
taking antidiabetic, antilipidemic and antihypertensive drugs were considered to have
diabetes, dyslipidaemia and hypertension, respectively.
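A minimal sketch of this target labelling follows. The component thresholds for triglycerides, HDL, blood pressure and fasting glucose are our assumptions based on the standard ATP III values, since the text spells out only the waist cutoffs; drug treatment counts towards the corresponding component, as stated above.

```python
def metabolic_label(p: dict) -> str:
    """Label a patient with abdominal obesity as MHO (no ATP III component
    besides waist circumference) or MUO (two or more components present).
    Component thresholds are the usual ATP III values (our assumption)."""
    components = 0
    components += p["tg_mg_dl"] >= 150 or p["on_lipid_drugs"]
    components += p["hdl_mg_dl"] < (40 if p["sex"] == "M" else 50)
    components += p["sbp"] >= 130 or p["dbp"] >= 85 or p["on_bp_drugs"]
    components += p["fbg_mg_dl"] >= 110 or p["on_antidiabetic_drugs"]
    if components == 0:
        return "MHO"
    if components >= 2:
        return "MUO"
    return "intermediate"  # exactly one component: not assigned by the text

patient = {"sex": "F", "tg_mg_dl": 160, "hdl_mg_dl": 55, "sbp": 135,
           "dbp": 80, "fbg_mg_dl": 95, "on_lipid_drugs": False,
           "on_bp_drugs": False, "on_antidiabetic_drugs": False}
print(metabolic_label(patient))  # MUO (raised triglycerides and blood pressure)
```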
Sample size for the ML analysis was determined using the Vapnik–Chervonenkis dimension,
according to which at least 500 patients per class were required.
Rulex® ML selected the most relevant variables to predict the development of MUO,
starting from all those included in the database (anthropometric data, biochemical and
hormonal assays, body composition by DXA) apart from blood pressure, lipid profile and
glycaemic parameters that are included in the definition of metabolic syndrome itself. Two
different predictive models were created with the highest accuracy, the first including IGF-1
among the variables selected and the second with IGF-1 zSDS instead of IGF-1. Given the
collinearity of these two variables, it was not possible to include them together in the same
model.
**3. Results**
_3.1. Population_
Our centre registered a total of 4541 hospitalizations from 2001 to 2019. Among them,
3529 patients accessing the centre in this period were diagnosed with obesity. Of these,
2824 individuals underwent only one hospitalization, while 705 underwent more than one in different
years. Only 2567 met the inclusion criteria and were included in the ML analysis. Baseline
characteristics and age distribution of the study population are summarized in Table 1,
broken down by metabolic status. Specifically, metabolic syndrome, diagnosed according
to the ATPIII criteria [33], was significantly more prevalent among male subjects compared
to their female counterparts (Table 1). Patients with MUO had significantly higher blood
pressure, HOMA-IR, uric acid, TG, total cholesterol, LDL-cholesterol and upper/legs fat
ratio. Intriguingly, patients with MHO had higher IGF-1 values than their counterparts
with MUO (Table 1).
The calculated IGF-1 SDS was −0.86 ± 1.98 in our population, and its distribution
in the overall study population, as well as in the metabolically healthy and unhealthy
obese subgroups, is summarized in Figure 1A,B, respectively. It is noteworthy that it was
significantly lower in the group of patients with MUO compared to the metabolically
healthy counterparts (−0.6 ± 0.8 vs. −0.2 ± 0.6, p < 0.0001, Table 1).
**Table 1.** Baseline characteristics of study population included in the ML analysis, broken down by presence/absence of metabolic impairment.

| | MHO (n = 695) | MUO (n = 1872) | Overall (n = 2567) |
|---|---|---|---|
| Age (yrs) | 45.9 ± 13.5 | 47.6 ± 13.5 ** | 47.1 ± 13.4 |
| Gender (%F) | 82.3% | 74.6% * | 76.7% |
| Obesity duration (yrs) | 25.5 ± 15.4 | 26.4 ± 15.1 | 26.1 ± 15.2 |
| BMI (kg/m²) | 38.0 ± 6.1 | 39.8 ± 6.8 *** | 39.3 ± 6.6 |
| WC (cm) | 116.6 ± 15.3 | 121.9 ± 15.4 ** | 120.5 ± 15.4 |
| HC (cm) | 121.5 ± 14.5 | 122.4 ± 14.9 | 122.2 ± 14.7 |
| WHR | 0.95 ± 0.12 | 0.99 ± 0.09 | 1.0 ± 0.1 |
| SBP (mmHg) | 126.4 ± 10.9 | 131.9 ± 16.3 * | 130.4 ± 15.2 |
| DBP (mmHg) | 79.3 ± 10.8 | 83.1 ± 11.1 ** | 82.1 ± 11.0 |
| IGF-1 (ng/mL) | 165.2 ± 77.2 | 154.4 ± 74.5 * | 157.3 ± 76.1 |
| IGF-1 zSDS | −0.96 ± 2.3 | −1.1 ± 1.96 | −1.1 ± 2.1 |
| AST (U/L) | 19.5 ± 7.5 | 22.1 ± 12.1 *** | 21.4 ± 8.7 |
| ALT (U/L) | 23.7 ± 16.4 | 30.3 ± 22.1 *** | 28.5 ± 21.3 |
| γGT (U/L) | 23.4 ± 24.4 | 28.9 ± 16.5 * | 27.4 ± 19.4 |
| Uric acid (mg/dL) | 4.9 ± 1.3 | 5.5 ± 1.5 *** | 5.3 ± 1.4 |
| HOMA-IR | 3.5 ± 3.2 | 5.7 ± 5.4 *** | 5.1 ± 4.5 |
| HbA1c (%) | 5.7 ± 1.1 | 6.2 ± 1.1 | 6.1 ± 1.1 |
| Vitamin D (ng/mL) | 21.9 ± 10.2 | 20.5 ± 10.3 ** | 20.9 ± 10.3 |
| Folate (ng/mL) | 7.9 ± 23.2 | 8.8 ± 35.3 | 8.6 ± 28.4 |
| TG (mg/dL) | 91.6 ± 27.2 | 150 ± 80.1 *** | 134.2 ± 62.7 |
| TC (mg/dL) | 144 ± 33.3 | 195.1 ± 41 *** | 181.3 ± 37.2 |
| HDLC (mg/dL) | 59.6 ± 11.3 | 45.2 ± 10.6 ** | 49.1 ± 10.9 |
| LDLC (mg/dL) | 116.5 ± 30.7 | 120.1 ± 30.2 ** | 119.1 ± 30.5 |
| Creatinine (mg/dL) | 0.7 ± 0.16 | 0.8 ± 0.23 | 0.8 ± 0.19 |
| Ca (mg/dL) | 9.32 ± 0.44 | 9.34 ± 0.44 | 9.3 ± 0.44 |
| Ph (mg/dL) | 3.5 ± 0.5 | 3.5 ± 0.6 | 3.5 ± 0.6 |
| Na (mmol/L) | 141.5 ± 2.6 | 140.9 ± 2.5 | 141.1 ± 2.5 |
| K (mmol/L) | 4.2 ± 0.3 | 4.2 ± 0.4 | 4.2 ± 0.4 |
| Albumin (g/dL) | 4.3 ± 0.4 | 4.3 ± 0.4 | 4.3 ± 0.4 |
| CRP (µg/L) | 0.5 ± 0.5 | 0.7 ± 0.6 ** | 0.6 ± 0.6 |
| ESR (mm/h) | 26.1 ± 16.4 | 27.9 ± 17.2 * | 27.4 ± 16.8 |
| Body fat (%) | 41.6 ± 6.3 | 40.7 ± 6.7 ** | 40.9 ± 6.5 |
| Lean mass (%) | 58.4 ± 6.4 | 59.3 ± 6.7 ** | 59.1 ± 6.6 |
| Trunk fat (%) | 39.1 ± 6.5 | 39.4 ± 6.5 | 39.3 ± 6.5 |
| Upper/legs fat | 1.62 ± 0.3 | 1.97 ± 0.36 *** | 1.9 ± 0.32 |

Abbreviation: MHO, metabolically healthy obese; MUO, metabolically unhealthy obese; yrs, years; BMI, body mass index; WC, waist circumference; HC, hip circumference; WHR, waist to hip ratio; SBP, systolic blood pressure; DBP, diastolic blood pressure; IGF-1, insulin-like growth factor 1; IGF-1 zSDS, insulin-like growth factor z standard deviation score; AST, aspartate aminotransferase; ALT, alanine aminotransferase; γGT, gamma-glutamyl transferase; HOMA-IR, model assessment-estimated insulin resistance; HbA1c, haemoglobin A1C; TG, triglycerides; TC, total cholesterol; HDLC, high-density lipoprotein cholesterol; LDLC, low-density lipoprotein cholesterol; Ca, calcium; Ph, phosphate; Na, sodium; K, potassium; CRP, C-reactive protein; ESR, erythrocyte sedimentation rate. * p < 0.05. ** p < 0.01. *** p < 0.001.
**Figure 1.** (**A**) Distribution of IGF-1 zSDS in the overall study population. (**B**) Distribution of IGF-1 zSDS in the MUO and MHO subgroups. Abbreviations: IGF-1 zSDS, insulin-like growth factor 1 z standard deviation score; MUO, metabolically unhealthy obese group; MHO, metabolically healthy obese group. Variables are expressed as percentile of total population.
_3.2. Logic Machine Learning_

We considered in the ML analysis all variables in the database, except for those included in the definition of metabolic syndrome itself, in order to identify the best model for predicting the presence/absence of MHO. The machine learning system considered all the variables in the database together and not one after the other. Six modelling cycles were performed (learning set = 70% and test set = 30%) to analyse the various facets of this phenomenon.
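The resampling scheme just described can be mimicked with standard tooling; the sketch below uses scikit-learn as a stand-in (our choice for illustration; the paper's models were built with the proprietary Rulex® system).

```python
# Six modelling cycles, each on a fresh 70% learning / 30% test split.
from sklearn.model_selection import train_test_split

def modelling_cycles(X, y, n_cycles=6, test_size=0.30):
    for seed in range(n_cycles):
        X_learn, X_test, y_learn, y_test = train_test_split(
            X, y, test_size=test_size, stratify=y, random_state=seed)
        # fit one model on (X_learn, y_learn), evaluate on (X_test, y_test)
        yield X_learn, X_test, y_learn, y_test
```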
In the model including IGF-1, the most important variables defining the outcome, starting from the most influencing to the least, were: HOMA-IR, upper/legs fat, HbA1c, RBC, age, ALT, uric acid, WBC, IGF-1, γGT. The model was predictive of the presence/absence of metabolically healthy obesity with a precision of 66.67% and 72.15%, respectively (Figure 2A). In a second model we included IGF-1 zSDS as a variable in place of IGF-1. In this model, the variables defining the outcome were: HOMA-IR, HbA1c, age, upper/legs fat, RBC, ALT, WBC, γGT, uric acid, neutrophils, AST, IGF-1 zSDS. In particular, in this model IGF-1 zSDS values >0.03 and <0.52 predicted the presence/absence of MHO, respectively. Overall, the model increased its precision, reaching the value of 71.84% for the presence of MHO and 72.3% for its absence (Figure 2B).
**Figure 2.** (**A**) Model no. 1 with the most relevant variables and threshold values that predict the development of MUO. (**B**) Model no. 2 with the most relevant variables and threshold values that predict the development of MUO. Abbreviations: yrs, years; HOMA-IR, model assessment of insulin resistance; HbA1c, haemoglobin A1C; RBC, red blood cell; ALT, alanine aminotransferase; WBC, white blood cell; γGT, gamma-glutamyl transferase; AST, aspartate aminotransferase; IGF-1 zSDS, insulin-like growth factor 1 z standard deviation score; MUO, metabolically unhealthy obese group; MHO, metabolically healthy obese group; IGF-1, insulin-like growth factor 1.
**4. Discussion**

In the current study (1) we described the characteristics of a relatively large population of patients with obesity admitted to an Italian third tier obesity centre; (2) we adopted an ML approach to identify the variables involved in the characterization of MHO in the study population.
Notably, we found that more women than men were hospitalized for obesity in the
study period. Moreover, male subjects were significantly more likely to be diagnosed with
MS, hypertension, dyslipidaemia and diabetes mellitus compared to the female counterpart.
This is in accordance with previous studies showing that women seek for medical attention
earlier than their male counterparts and that MS prevalence is higher among men compared
to women [34,35].
Moreover, we identified two models predicting the presence of MHO in our study
population through the use of an ML approach, including all the anthropometric, general
and biochemical data collected during hospitalisation. In both models, HOMA-IR proved
to be a robust tool for the characterisation of metabolic phenotype among patients with
obesity, as values >3.48 and <2.48 (in model 1) or >2.47 and <2.10 (in model 2) identified
MUO and MHO patients, respectively. These results are close enough to the optimal cutoffs
identified by Gayoso-Diz and colleagues, who found that HOMA-IR levels significantly
increased with rising number of MS components from 1.7 (without MS components) to 5.3
(with five components) [36]. ML confirmed that insulin resistance appears to be one of the
main players in the pathophysiology of metabolic derangement in obese patients, an aspect
that was already emphasised in the original, but now outdated, WHO definition of MS in
1998 [37], although it is no longer a requirement to make a diagnosis.
Furthermore, a previous study showed that there are age and gender-specific differences in HOMA-IR levels, with increased levels in women older than fifty [38]. Interestingly,
50 years of age is the same threshold value identified by Rulex® to discriminate between
MHO and MUO. This result provides evidence that there are age differences in the way
metabolic health is expressed and that, as already proved [39], the prevalence of MS and consequently of MUO has a steep increase with age. In this regard, recent strands of research
suggest that the prevalence of MUO increases with menopause and may partially explain
the apparent acceleration in cardiovascular diseases after menopause [40,41], although
menopause may be considered a predictor of MS independent of women’s age [42].
Although there is no doubt that insulin resistance is the major aetiological factor in
the development of MS, Osei and colleagues have recently investigated the significance
of HbA1c as a surrogate marker for MS, showing that in subjects with increased HbA1c,
some, albeit not all, of the components of MS could be defined by HbA1c [43]. In this
regard, as suggested by the Rulex® model, a glycosylated haemoglobin above 5.25%,
although not diagnostic for diabetes or prediabetes, contributes to the identification of
metabolic impaired patients. Our finding confirms that HbA1c may be a valid predictor
of MUO status [44] and the threshold value we found reflects what is currently reported
in the literature according to which a HbA1c of 5.45% can predict the presence of MS [45].
Moreover, elevated levels of serum uric acid (SUA) have been suggested to associate with
cardiovascular disease, obesity and MS [46]. In this regard, the ML analysis confirmed that
patients with normal levels of SUA, and specifically below 6.25 mg/dL, are more likely to
have MHO.
Another interesting parameter that was identified by ML in predicting MUO is the
value of liver enzymes. Specifically, ALT levels above 29.35 U/L (first model) or 28.9 U/L
(second model) describe the cohort of patients with MUO. A slight increase in liver indices,
especially AST, can be considered as a red flag for the development of nonalcoholic fatty liver
disease (NAFLD), commonly recognized as the hepatic manifestation of the MS, as reflected
by the presence of ALT, AST and BMI in the surrogate marker of NAFLD hepatic steatosis
index (HSI) [47,48]. ML confirmed that in subjects with obesity or MS, screening for NAFLD
by liver enzymes and/or ultrasound should be part of routine workup, as recommended in
the clinical practice guidelines for the management of NAFLD provided by the European
Association for the Study of Obesity [49]. ML also proved that ALT values in the normal
range may play a role in the identification of MHO patients, but failed to define a specific
threshold value for ALT in predicting MUO. Regarding γGT, which was also included
in the models, serum levels higher than 17.45 U/L (first model) or 11.1 U/L (second
model) identify the group of patients with MUO. Of interest, both AST and γGT are
already included in validated, noninvasive tools for the assessment of liver fibrosis such as
Fibrosis-4 (FIB-4), NFS (NAFLD Fibrosis Score) and fatty liver index (FLI) [50]. In light of
this, as recently suggested by Godoy-Matos et al., the proper understanding of NAFLD
spectrum—as a continuum from obesity to MS and diabetes—may contribute to the early
detection and to the establishment of a targeted treatment [47,51].
Among all the variables of fat distribution evaluated with DXA, the upper/leg fat
index was identified by ML as the best predictor of MUO. An elevated ratio (>2.01), as
reported in our analysis, indicates upper body fat accumulation and central obesity, which
both lead to metabolic complications; contrarily to lower body fat, which confers reduced
risk [52]. Additionally, as we have already described, prominent upper body fat deposition
is likely to predispose individuals to apnoea. Indeed, fat accumulation in strategic locations,
such as the head and upper airway, predisposes to pharyngeal narrowing and upper
airways collapsibility resulting in obstructive sleep apnoea syndrome (OSAS) [30]. In
turn, OSAS is a risk factor for insulin resistance and diabetes and is often found in the
setting of MS. Occasionally, in a subset of patients with OSAS, secondary polycythaemia
will develop [53].
Even though a true polycythaemia is not generally found, according to our analysis an
RBC count >4.45 × 10¹²/L is a predisposing factor for MUO. When exclusively considering
the female population, the calculated cutoff was higher (>4.74 × 10¹²/L). These results are
along the line of already published data reporting that subjects affected by MS exhibit a
higher count of RBCs compared to metabolically healthy subjects. It has been reported
that, despite the presence of chronic inflammation which has suppressive erythropoietic
effects, erythropoiesis correlates with central obesity and insulin resistance [54] and that
RBC count is, even though still within normal range, significantly higher in the presence of
MS for each sex [55].
Innumerable etiopathogenetic mechanisms responsible for the onset of MS among patients with obesity have been identified, but chronic, low-grade and systemic inflammation
has been acknowledged as the common denominator [56]. The WBC count is an objective
marker of acute infection, tissue damage and inflammation [57]. A few studies have already
confirmed that the WBC count is correlated with the increase of certain variables of MS [58].
In this regard, our analysis found that a neutrophilic leucocytosis is often common in
MUO, suggesting an altered immune response and increased susceptibility to bacterial
and viral infections, as known from the recent COVID-19 pandemic [59–62] and previous
cross-sectional studies [63].
A further key predictive factor in the development of MS is IGF-1, a polypeptide
hormone structurally similar to insulin, which promotes tissue growth and maturation
through upregulation of anabolic processes. Adult-onset growth hormone deficiency (GHD)
is relatively common in patients with obesity, being associated with a worse metabolic
profile [64,65]. Epidemiological studies have suggested that IGF-1 levels in the upper
normal range are associated with increased insulin sensitivity, better liver status and
reduced blood pressure [66–69].
Notably, the first model provided by Rulex®, including IGF-1, was predictive
of the presence/absence of metabolically healthy obesity with a precision of 66.67% and
72.15%, respectively. However, the usefulness of IGF-1 serum measurement is limited by a
poor standardization of its normal values, as both age and gender can significantly affect
serum IGF-1 concentrations. By the age of 65, daily spontaneous GH secretion is
reduced by up to 50–70%, and consequently IGF-1 levels decline progressively as they vary
significantly with gender, age and body fat, similar to what happens with bone mineral
density (BMD). This creates the need for a score that takes these factors into consideration,
such as the T- and Z-score developed to better evaluate BMD. In this regard, when IGF-1 zSDS
was added as a variable, our second model increased its precision, reaching the value of
71.84% for the presence of metabolically healthy obesity and 72.3% for its absence.
Our study suggests that ML may have a broad application in the risk stratification of
people suffering from obesity and supports its potential role in the health care system to
identify those at higher risk, among the wide population of subjects with obesity, and to
identify the parameters characterising the state of MHO, a phenotype that could represent
the first goal to be achieved in the management of chronic obesity in order to reduce the risk
of death. Moreover, we found that the surrogate marker IGF-1 zSDS, more than IGF-1 alone,
can increase the precision of the model in the prediction of the presence/absence of MHO,
suggesting its potential application in clinical practice as a marker of metabolic impairment.
The strengths and limitations of this study warrant mention. Firstly, this study was
conducted in a large cohort that was nationally representative of the Italian obese population. However, our patient cohort is not gender balanced. The main limitation of the study
is that Rulex®, like many other ML algorithms, needs a large amount of data to yield relevant results. Further prospective studies, with a larger number of patients, and comparison
studies with other supervised machine learning models, such as support vector machine,
naïve Bayes algorithm and random forest algorithm, are needed to confirm our results.
**5. Conclusions**
Integration of ML technology in medicine may help scientists understand in a deeper
way the pathogenesis of complex diseases, such as the metabolic ones. One possible
application of this ML analysis is the development of an algorithm, which, in a similar way
to the fracture risk assessment tool (FRAX) for osteoporosis [70], can accurately predict the
risk of developing MUO at 5 or 10 years in the population of patients with obesity, thus
identifying the clinical phenotype with the highest risk and encouraging more and more
precise and targeted therapeutic approaches.
**Author Contributions: L.G. and C.L. designed the study; D.M., R.R., F.B. and D.V.B. contributed to**
the data collection and manuscript writing; M.W. and S.M. contributed to the supervision, review and
editing of the manuscript; R.Z., G.G. and P.S. contributed to the formal analysis, software application
and interpretation of the results; L.G. took charge of funding acquisition. All authors have read and
agreed to the published version of the manuscript.
**Funding: This research was funded by Novo Nordisk S.p.A., which had no role in the study design,**
conduct of the study, collection, management, analysis and interpretation of the data; or the preparation and review of the manuscript. MW received salary support from PRIN 2017 Prot.2017L8Z2E,
Italian Ministry of Education, Universities and Research. This work was also funded with support
from PRIN 2017 Prot.2017L8Z2E and PRIN 2020 Prot.2020NCKXBR, Italian Ministry of Education,
Universities and Research.
**Institutional Review Board Statement: The study was conducted according to the guidelines of the**
Declaration of Helsinki, and was approved by the Medical Ethical Committee of Sapienza University
of Rome (ref. CE5475).
**Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.**
**Data Availability Statement: Data will be made available upon reasonable request to the correspond-**
ing author.
**Conflicts of Interest: The authors declare no conflict of interest.**
**References**
1. Gómez González, E.; Gómez Gutiérrez, E. Artificial Intelligence in Medicine and Healthcare: Applications, Availability and Societal Impact;
[Publications Office of the European Union: Luxembourg, 2020; ISBN 9789276184546. Available online: https://publications.jrc.ec.](https://publications.jrc.ec.europa.eu/repository/handle/JRC120214)
[europa.eu/repository/handle/JRC120214 (accessed on 13 January 2022).](https://publications.jrc.ec.europa.eu/repository/handle/JRC120214)
2. Giorda, C.B.; Pisani, F.; De Micheli, A.; Ponzani, P.; Russo, G.; Guaita, G.; Zilich, R.; Musacchio, N. Determinants of good metabolic
control without weight gain in type 2 diabetes management: A machine learning analysis. BMJ Open Diabetes Res. Care 2020, 8,
[e001362. [CrossRef]](http://doi.org/10.1136/bmjdrc-2020-001362)
3. [Chen, C. Ascent of machine learning in medicine. Nat. Mater. 2019, 18, 407. [CrossRef]](http://doi.org/10.1038/s41563-019-0360-1)
4. Dugan, T.M.; Mukhopadhyay, S.; Carroll, A.; Downs, S. Machine Learning Techniques for Prediction of Early Childhood Obesity.
_[Appl. Clin. Inform. 2015, 6, 506–520. [CrossRef]](http://doi.org/10.4338/ACI-2015-03-RA-0036)_
5. Singh, B.; Tawfik, H. Machine Learning Approach for the Early Prediction of the Risk of Overweight and Obesity in Young People.
In Proceedings of the Computational Science—ICCS 2020; Krzhizhanovskaya, V.V., Závodszky, G., Lees, M.H., Dongarra, J.J., Sloot,
P.M.A., Brissos, S., Teixeira, J., Eds.; Springer International Publishing: Cham, Switzerland, 2020; pp. 523–535.
6. Lin, Z.; Feng, W.; Liu, Y.; Ma, C.; Arefan, D.; Zhou, D.; Cheng, X.; Yu, J.; Gao, L.; Du, L.; et al. Machine Learning to Identify
[Metabolic Subtypes of Obesity: A Multi-Center Study. Front. Endocrinol. (Lausanne) 2021, 12, 843. [CrossRef]](http://doi.org/10.3389/fendo.2021.713592)
7. [World Health Organization. World Health Organization. Health Topics. Obesity. 2019. Available online: https://www.who.int/](https://www.who.int/topics/obesity/en/)
[topics/obesity/en/ (accessed on 20 November 2021).](https://www.who.int/topics/obesity/en/)
8. Watanabe, M.; Risi, R.; De Giorgi, F.; Tuccinardi, D.; Mariani, S.; Basciani, S.; Lubrano, C.; Lenzi, A.; Gnessi, L. Obesity treatment
within the Italian national healthcare system tertiary care centers: What can we learn? Eat. Weight Disord.—Stud. Anorexia, Bulim.
_[Obes. 2020, 26, 771–778. [CrossRef]](http://doi.org/10.1007/s40519-020-00936-1)_
9. [Després, J.P. Body Fat Distribution and Risk of Cardiovascular Disease: An Update. Circulation 2012, 126, 1301–1313. [CrossRef]](http://doi.org/10.1161/CIRCULATIONAHA.111.067264)
10. Tsatsoulis, A.; Paschou, S.A. Metabolically Healthy Obesity: Criteria, Epidemiology, Controversies, and Consequences. Curr.
_[Obes. Rep. 2020, 9, 109–120. [CrossRef]](http://doi.org/10.1007/s13679-020-00375-0)_
11. Donini, L.M.; Merola, G.; Poggiogalle, E.; Lubrano, C.; Gnessi, L.; Mariani, S.; Migliaccio, S.; Lenzi, A. Disability, Physical
Inactivity, and Impaired Health-Related Quality of Life Are Not Different in Metabolically Healthy vs. Unhealthy Obese Subjects.
_[Nutrients 2016, 8, 759. [CrossRef]](http://doi.org/10.3390/nu8120759)_
12. Wang, Y.; Zhu, X.; Chen, Z.; Yang, P.; Liu, L.; Liu, X.; Wu, L.; He, Q.; Li, Y. Natural histories of metabolite BMI phenotypes and
[their impacts on cardiovascular disease risk over a decade-long follow-up. Obes. Res. Clin. Pract. 2021, 15, 579–586. [CrossRef]](http://doi.org/10.1016/j.orcp.2021.10.002)
13. Samocha-Bonet, D.; Dixit, V.D.; Kahn, C.R.; Leibel, R.L.; Lin, X.; Nieuwdorp, M.; Pietiläinen, K.H.; Rabasa-Lhoret, R.; Roden, M.;
Scherer, P.E.; et al. Metabolically healthy and unhealthy obese—The 2013 Stock Conference report. Obes. Rev. Off. J. Int. Assoc.
_[Study Obes. 2014, 15, 697–708. [CrossRef]](http://doi.org/10.1111/obr.12199)_
14. Samocha-Bonet, D.; Chisholm, D.J.; Tonks, K.; Campbell, L.V.; Greenfield, J.R. Insulin-sensitive obesity in humans—A “favorable
[fat” phenotype? Trends Endocrinol. Metab. 2012, 23, 116–124. [CrossRef]](http://doi.org/10.1016/j.tem.2011.12.005)
15. Michalsen, V.L.; Wild, S.H.; Kvaløy, K.; Svartberg, J.; Melhus, M.; Broderstad, A.R. Obesity measures, metabolic health and their
association with 15-year all-cause and cardiovascular mortality in the SAMINOR 1 Survey: A population-based cohort study.
_[BMC Cardiovasc. Disord. 2021, 21, 510. [CrossRef]](http://doi.org/10.1186/s12872-021-02288-9)_
16. Poggiogalle, E.; Lubrano, C.; Gnessi, L.; Mariani, S.; Di Martino, M.; Catalano, C.; Lenzi, A.; Donini, L.M. The decline in muscle
strength and muscle quality in relation to metabolic derangements in adult women with obesity. Clin. Nutr. 2019, 38, 2430–2435.
[[CrossRef]](http://doi.org/10.1016/j.clnu.2019.01.028)
17. Stefan, N.; Häring, H.U.; Schulze, M.B. Metabolically healthy obesity: The low-hanging fruit in obesity treatment? Lancet Diabetes
_[Endocrinol. 2018, 6, 249–258. [CrossRef]](http://doi.org/10.1016/S2213-8587(17)30292-9)_
18. Rosén, T.; Bengtsson, B.Å. Premature mortality due to cardiovascular disease in hypopituitarism. Lancet 1990, 336, 285–288.
[[CrossRef]](http://doi.org/10.1016/0140-6736(90)91812-O)
19. Vahl, N.; Klausen, I.; Christiansen, J.S.; Jørgensen, J.O.L. Growth hormone (GH) status is an independent determinant of serum
[levels of cholesterol and triglycerides in healthy adults. Clin. Endocrinol. 1999, 51, 309–316. [CrossRef]](http://doi.org/10.1046/j.1365-2265.1999.00772.x)
20. Laughlin, G.A.; Barrett-Connor, E.; Criqui, M.H.; Kritz-Silverstein, D. The Prospective Association of Serum Insulin-Like Growth
Factor I (IGF-I) and IGF-Binding Protein-1 Levels with All Cause and Cardiovascular Disease Mortality in Older Adults: The
[Rancho Bernardo Study. J. Clin. Endocrinol. Metab. 2004, 89, 114–120. [CrossRef]](http://doi.org/10.1210/jc.2003-030967)
21. Colao, A.; Spiezia, S.; Di Somma, C.; Pivonello, R.; Marzullo, P.; Rota, F.; Musella, T.; Auriemma, R.S.; De Martino, M.C.; Lombardi,
G. Circulating insulin-like growth factor-I levels are correlated with the atherosclerotic profile in healthy subjects independently
[of age. J. Endocrinol. Investig. 2005, 28, 440–448. [CrossRef]](http://doi.org/10.1007/BF03347225)
22. Miller, K.K.; Biller, B.M.K.; Lipman, J.G.; Bradwin, G.; Rifai, N.; Klibanski, A. Truncal adiposity, relative growth hormone
[deficiency, and cardiovascular risk. J. Clin. Endocrinol. Metab. 2005, 90, 768–774. [CrossRef]](http://doi.org/10.1210/jc.2004-0894)
23. Bancu, I.; Navarro Díaz, M.; Serra, A.; Granada, M.; Lopez, D.; Romero, R.; Bonet, J. Low insulin-like growth factor-1 level in
[obesity nephropathy: A new risk factor? PLoS ONE 2016, 11, e0154451. [CrossRef]](http://doi.org/10.1371/journal.pone.0154451)
24. Watanabe, M.; Masieri, S.; Costantini, D.; Tozzi, R.; De Giorgi, F.; Gangitano, E.; Tuccinardi, D.; Poggiogalle, E.; Mariani, S.;
Basciani, S.; et al. Overweight and obese patients with nickel allergy have a worse metabolic profile compared to weight matched
[non-allergic individuals. PLoS ONE 2018, 13, e0202683. [CrossRef] [PubMed]](http://doi.org/10.1371/journal.pone.0202683)
25. Risi, R.; Masieri, S.; Poggiogalle, E.; Watanabe, M.; Caputi, A.; Tozzi, R.; Gangitano, E.; Masi, D.; Mariani, S.; Gnessi, L.; et al.
Nickel Sensitivity Is Associated with GH-IGF1 Axis Impairment and Pituitary Abnormalities on MRI in Overweight and Obese
[Subjects. Int. J. Mol. Sci. 2020, 21, 9733. [CrossRef]](http://doi.org/10.3390/ijms21249733)
26. [Le Roith, D. Insulin-Like Growth Factors. N. Engl. J. Med. 1997, 336, 633–640. [CrossRef]](http://doi.org/10.1056/NEJM199702273360907)
27. Colao, A.; Di Somma, C.; Cascella, T.; Pivonello, R.; Vitale, G.; Grasso, L.F.S.; Lombardi, G.; Savastano, S. Relationships between
serum IGF1 levels, blood pressure, and glucose tolerance: An observational, exploratory study in 404 subjects. Eur. J. Endocrinol.
**[2008, 159, 389–397. [CrossRef] [PubMed]](http://doi.org/10.1530/EJE-08-0201)**
28. Pasquali, R.; Casanueva, F.; Haluzik, M.; van Hulsteijn, L.; Ledoux, S.; Monteiro, M.P.; Salvador, J.; Santini, F.; Toplak, H.; Dekkers,
O.M. European Society of Endocrinology Clinical Practice Guideline: Endocrine work-up in obesity. Eur. J. Endocrinol. 2020, 182,
[G1–G32. [CrossRef]](http://doi.org/10.1530/EJE-19-0893)
29. Fukuda, I.; Hizuka, N.; Muraoka, T.; Ichihara, A. Adult growth hormone deficiency: Current concepts. Neurol. Med. Chir. 2014, 54,
[599–605. [CrossRef]](http://doi.org/10.2176/nmc.ra.2014-0088)
30. Lubrano, C.; Saponara, M.; Barbaro, G.; Specchia, P.; Addessi, E.; Costantini, D.; Tenuta, M.; Di Lorenzo, G.; Genovesi, G.; Donini,
L.M.; et al. Relationships between body fat distribution, epicardial fat and obstructive sleep apnea in obese patients with and
[without metabolic syndrome. PLoS ONE 2012, 7, e47059. [CrossRef]](http://doi.org/10.1371/journal.pone.0047059)
31. Schwendicke, F.; Samek, W.; Krois, J. Artificial Intelligence in Dentistry: Chances and Challenges. J. Dent. Res. 2020, 99, 769–774.
[[CrossRef] [PubMed]](http://doi.org/10.1177/0022034520915714)
32. Challen, R.; Denny, J.; Pitt, M.; Gompels, L.; Edwards, T.; Tsaneva-Atanasova, K. Artificial intelligence, bias and clinical safety.
_[BMJ Qual. Saf. 2019, 28, 231–237. [CrossRef]](http://doi.org/10.1136/bmjqs-2018-008370)_
33. Executive Summary of the Third Report of The National Cholesterol Education Program (NCEP) Expert Panel on Detection,
Evaluation, And Treatment of High Blood Cholesterol In Adults (Adult Treatment Panel III). JAMA 2001, 285, 2486–2497.
[[CrossRef]](http://doi.org/10.1001/jama.285.19.2486)
34. [White, A.; Witty, K. Men’s under use of health services—Finding alternative approaches. J. Mens. Health 2009, 6, 95–97. [CrossRef]](http://doi.org/10.1016/j.jomh.2009.03.001)
35. Hunt, K.; Adamson, J.; Hewitt, C.; Nazareth, I. Do women consult more than men? A review of gender and consultation for back
[pain and headache. J. Health Serv. Res. Policy 2011, 16, 108–117. [CrossRef]](http://doi.org/10.1258/jhsrp.2010.009131)
36. Gayoso-Diz, P.; Otero-González, A.; Rodriguez-Alvarez, M.X.; Gude, F.; García, F.; De Francisco, A.; Quintela, A.G. Insulin
resistance (HOMA-IR) cut-off values and the metabolic syndrome in a general adult population: Effect of gender and age: EPIRCE
[cross-sectional study. BMC Endocr. Disord. 2013, 13, 47. [CrossRef]](http://doi.org/10.1186/1472-6823-13-47)
37. Alberti, K.G.M.M.; Zimmet, P.Z. Definition, diagnosis and classification of diabetes mellitus and its complications. Part 1:
Diagnosis and classification of diabetes mellitus. Provisional report of a WHO consultation. Diabet. Med. 1998, 15, 539–553.
[[CrossRef]](http://doi.org/10.1002/(SICI)1096-9136(199807)15:7<539::AID-DIA668>3.0.CO;2-S)
38. Gayoso-Diz, P.; Otero-Gonzalez, A.; Rodriguez-Alvarez, M.X.; Gude, F.; Cadarso-Suarez, C.; García, F.; De Francisco, A. Insulin
resistance index (HOMA-IR) levels in a general adult population: Curves percentile by gender and age. The EPIRCE study.
_[Diabetes Res. Clin. Pract. 2011, 94, 146–155. [CrossRef]](http://doi.org/10.1016/j.diabres.2011.07.015)_
39. Hildrum, B.; Mykletun, A.; Hole, T.; Midthjell, K.; Dahl, A.A. Age-specific prevalence of the metabolic syndrome defined by the
International Diabetes Federation and the National Cholesterol Education Program: The Norwegian HUNT 2 study. BMC Public
_[Health 2007, 7, 1–9. [CrossRef] [PubMed]](http://doi.org/10.1186/1471-2458-7-220)_
40. Patni, R.; Mahajan, A. The Metabolic Syndrome and Menopause. J. Midlife. Health 2018, 9, 111–112.
41. Christakis, M.K.; Hasan, H.; De Souza, L.R.; Shirreff, L. The effect of menopause on metabolic syndrome: Cross-sectional results
[from the Canadian Longitudinal Study on Aging. Menopause 2020, 27, 999–1009. [CrossRef]](http://doi.org/10.1097/GME.0000000000001575)
42. Eshtiaghi, R.; Esteghamati, A.; Nakhjavani, M. Menopause is an independent predictor of metabolic syndrome in Iranian women.
_[Maturitas 2010, 65, 262–266. [CrossRef]](http://doi.org/10.1016/j.maturitas.2009.11.004)_
43. Osei, K.; Rhinesmith, S.; Gaillard, T.; Schuster, D. Is Glycosylated Hemoglobin A1c a Surrogate for Metabolic Syndrome in
Nondiabetic, First-Degree Relatives of African-American Patients with Type 2 Diabetes? J. Clin. Endocrinol. Metab. 2003, 88,
[4596–4601. [CrossRef] [PubMed]](http://doi.org/10.1210/jc.2003-030686)
44. Geva, M.; Shlomai, G.; Berkovich, A.; Maor, E.; Leibowitz, A.; Tenenbaum, A.; Grossman, E. The association between fasting
plasma glucose and glycated hemoglobin in the prediabetes range and future development of hypertension. Cardiovasc. Diabetol.
**[2019, 18, 53. [CrossRef]](http://doi.org/10.1186/s12933-019-0859-4)**
45. Sung, K.C.; Rhee, E.J. Glycated haemoglobin as a predictor for metabolic syndrome in non-diabetic Korean adults. Diabet. Med.
**[2007, 24, 848–854. [CrossRef]](http://doi.org/10.1111/j.1464-5491.2007.02146.x)**
46. Tsushima, Y.; Nishizawa, H.; Tochino, Y.; Nakatsuji, H.; Sekimoto, R.; Nagao, H.; Shirakura, T.; Kato, K.; Imaizumi, K.; Takahashi,
[H.; et al. Uric acid secretion from adipose tissue and its increase in obesity. J. Biol. Chem. 2013, 288, 27138–27149. [CrossRef]](http://doi.org/10.1074/jbc.M113.485094)
47. Risi, R.; Tuccinardi, D.; Mariani, S.; Lubrano, C.; Manfrini, S.; Donini, L.M.; Watanabe, M. Liver disease in obesity and underweight:
[The two sides of the coin. A narrative review. Eat. Weight Disord. 2021, 26, 2097–2107. [CrossRef]](http://doi.org/10.1007/s40519-020-01060-w)
48. Watanabe, M.; Risi, R.; Camajani, E.; Contini, S.; Persichetti, A.; Tuccinardi, D.; Ernesti, I.; Mariani, S.; Lubrano, C.; Genco, A.; et al.
Baseline HOMA IR and Circulating FGF21 Levels Predict NAFLD Improvement in Patients Undergoing a Low Carbohydrate
[Dietary Intervention for Weight Loss: A Prospective Observational Pilot Study. Nutrients 2020, 12, 2141. [CrossRef]](http://doi.org/10.3390/nu12072141)
49. EASL-EASD-EASO Clinical Practice Guidelines for the management of non-alcoholic fatty liver disease. J. Hepatol. 2016, 64,
[1388–1402. [CrossRef]](http://doi.org/10.1016/j.jhep.2015.11.004)
50. Angulo, P.; Hui, J.M.; Marchesini, G.; Bugianesi, E.; George, J.; Farrell, G.C.; Enders, F.; Saksena, S.; Burt, A.D.; Bida, J.P.; et al. The
NAFLD fibrosis score: A noninvasive system that identifies liver fibrosis in patients with NAFLD. Hepatology 2007, 45, 846–854.
[[CrossRef]](http://doi.org/10.1002/hep.21496)
51. Godoy-Matos, A.F.; Silva Júnior, W.S.; Valerio, C.M. NAFLD as a continuum: From obesity to metabolic syndrome and diabetes.
_[Diabetol. Metab. Syndr. 2020, 12, 60. [CrossRef]](http://doi.org/10.1186/s13098-020-00570-y)_
52. Jensen, M.D. Role of Body Fat Distribution and the Metabolic Complications of Obesity. J. Clin. Endocrinol. Metab. 2008, 93,
[s57–s63. [CrossRef] [PubMed]](http://doi.org/10.1210/jc.2008-1585)
53. Solmaz, S.; Duksal, F.; Ganida˘glı, S. Is obstructive sleep apnoea syndrome really one of the causes of secondary polycythaemia?
_[Hematology 2015, 20, 108–111. [CrossRef]](http://doi.org/10.1179/1607845414Y.0000000170)_
-----
_Nutrients 2022, 14, 373_ 14 of 14
54. Mardi, T.; Toker, S.; Melamed, S.; Shirom, A.; Zeltser, D.; Shapira, I.; Berliner, S.; Rogowski, O. Increased erythropoiesis and
[subclinical inflammation as part of the metabolic syndrome. Diabetes Res. Clin. Pract. 2005, 69, 249–255. [CrossRef]](http://doi.org/10.1016/j.diabres.2005.01.005)
55. Kotani, K.; Sakane, N.; Kurozawa, Y. Increased red blood cells in patients with metabolic syndrome. Endocr. J. 2006, 53, 711–712.
[[CrossRef] [PubMed]](http://doi.org/10.1507/endocrj.K06-074)
56. Festa, A.; D’Agostino, R.; Howard, G.; Mykkänen, L.; Tracy, R.P.; Haffner, S.M. Chronic Subclinical Inflammation as Part of the
[Insulin Resistance Syndrome. Circulation 2000, 102, 42–47. [CrossRef] [PubMed]](http://doi.org/10.1161/01.CIR.102.1.42)
57. Kannel, W.B.; Anderson, K.; Wilson, P.W.F. White Blood Cell Count and Cardiovascular Disease: Insights from the Framingham
[Study. JAMA 1992, 267, 1253–1256. [CrossRef] [PubMed]](http://doi.org/10.1001/jama.1992.03480090101035)
58. Wang, Y.-Y.; Lin, S.-Y.; Liu, P.-H.; Cheung, B.M.H.; Lai, W.-A. Association between hematological parameters and metabolic
[syndrome components in a Chinese population. J. Diabetes Complicat. 2004, 18, 322–327. [CrossRef]](http://doi.org/10.1016/S1056-8727(04)00003-0)
59. Watanabe, M.; Risi, R.; Tuccinardi, D.; Baquero, C.J.; Manfrini, S.; Gnessi, L. Obesity and SARS-CoV-2: A population to safeguard.
_[Diabetes. Metab. Res. Rev. 2020, 36, e3325. [CrossRef] [PubMed]](http://doi.org/10.1002/dmrr.3325)_
60. Watanabe, M.; Caruso, D.; Tuccinardi, D.; Risi, R.; Zerunian, M.; Polici, M.; Pucciarelli, F.; Tarallo, M.; Strigari, L.; Manfrini, S.;
et al. Visceral fat shows the strongest association with the need of intensive care in patients with COVID-19. Metabolism 2020, 111,
[154319. [CrossRef]](http://doi.org/10.1016/j.metabol.2020.154319)
61. Watanabe, M.; Balena, A.; Tuccinardi, D.; Tozzi, R.; Risi, R.; Masi, D.; Caputi, A.; Rossetti, R.; Spoltore, M.E.; Filippi, V.; et al.
Central obesity, smoking habit, and hypertension are associated with lower antibody titres in response to COVID-19 mRNA
[vaccine. Diabetes. Metab. Res. Rev. 2021, 38, e3465. [CrossRef]](http://doi.org/10.1002/dmrr.3465)
62. Maddaloni, E.; D’Onofrio, L.; Alessandri, F.; Mignogna, C.; Leto, G.; Pascarella, G.; Mezzaroma, I.; Lichtner, M.; Pozzilli, P.; Agrò,
F.E.; et al. Cardiometabolic multimorbidity is associated with a worse Covid-19 prognosis than individual cardiometabolic risk
[factors: A multicentre retrospective study (CoViDiab II). Cardiovasc. Diabetol. 2020, 19, 164. [CrossRef] [PubMed]](http://doi.org/10.1186/s12933-020-01140-2)
63. Yen, M.-L.; Yang, C.-Y.; Yen, B.L.; Ho, Y.-L.; Cheng, W.-C.; Bai, C.-H. Increased high sensitivity C-reactive protein and neutrophil
count are related to increased standard cardiovascular risk factors in healthy Chinese men. Int. J. Cardiol. 2006, 110, 191–198.
[[CrossRef] [PubMed]](http://doi.org/10.1016/j.ijcard.2005.07.034)
64. Lubrano, C.; Tenuta, M.; Costantini, D.; Specchia, P.; Barbaro, G.; Basciani, S.; Mariani, S.; Pontecorvi, A.; Lenzi, A.; Gnessi, L.
[Severe growth hormone deficiency and empty sella in obesity: A cross-sectional study. Endocrine 2015, 49, 503–511. [CrossRef]](http://doi.org/10.1007/s12020-015-0530-0)
[[PubMed]](http://www.ncbi.nlm.nih.gov/pubmed/25614038)
65. Lubrano, C.; Masi, D.; Risi, R.; Balena, A.; Watanabe, M.; Mariani, S.; Gnessi, L. Is Growth Hormone Insufficiency the Missing
[Link Between Obesity, Male Gender, Age, and COVID-19 Severity? Obesity 2020, 28, 2038–2039. [CrossRef] [PubMed]](http://doi.org/10.1002/oby.23000)
66. Clemmons, D.R.; Moses, A.C.; McKay, M.J.; Sommer, A.; Rosen, D.M.; Ruckle, J. The Combination of Insulin-Like Growth Factor I
and Insulin-Like Growth Factor-Binding Protein-3 Reduces Insulin Requirements in Insulin-Dependent Type 1 Diabetes: Evidence
[for in VivoBiological Activity1. J. Clin. Endocrinol. Metab. 2000, 85, 1518–1524. [CrossRef] [PubMed]](http://doi.org/10.1210/jcem.85.4.6559)
67. Gillespie, C.M.; Merkel, A.L.; Martin, A.A. Effects of insulin-like growth factor-I and LR3IGF-I on regional blood flow in normal
[rats. J. Endocrinol. 1997, 155, 351–358. [CrossRef]](http://doi.org/10.1677/joe.0.1550351)
68. Fornari, R.; Marocco, C.; Francomano, D.; Fittipaldi, S.; Lubrano, C.; Bimonte, V.M.; Donini, L.M.; Nicolai, E.; Aversa, A.; Lenzi, A.;
et al. Insulin growth factor-1 correlates with higher bone mineral density and lower inflammation status in obese adult subjects.
_[Eat. Weight Disord. 2018, 23, 375–381. [CrossRef]](http://doi.org/10.1007/s40519-017-0362-4)_
69. Poggiogalle, E.; Lubrano, C.; Gnessi, L.; Mariani, S.; Lenzi, A.; Donini, L.M. Fatty Liver Index Associates with Relative Sarcopenia
[and GH/IGF- 1 Status in Obese Subjects. PLoS ONE 2016, 11, e0145811. [CrossRef]](http://doi.org/10.1371/journal.pone.0145811)
70. Teeratakulpisarn, N.; Charoensri, S.; Theerakulpisut, D.; Pongchaiyakul, C. FRAX score with and without bone mineral density:
[A comparison and factors affecting the discordance in osteoporosis treatment in Thais. Arch. Osteoporos. 2021, 16, 44. [CrossRef]](http://doi.org/10.1007/s11657-021-00911-y)
-----
| 18,095
|
{
"disclaimer": "Notice: Paper or abstract available at https://pmc.ncbi.nlm.nih.gov/articles/PMC8779369, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "CCBY",
"status": "GOLD",
"url": "https://www.mdpi.com/2072-6643/14/2/373/pdf?version=1642471197"
}
| 2,022
|
[
"JournalArticle"
] | true
| 2022-01-01T00:00:00
|
[
{
"paperId": "048ceeb9a55d554ad96bff65f93981ab0b13c3dd",
"title": "Natural histories of metabolite BMI phenotypes and their impacts on cardiovascular disease risk over a decade-long follow-up."
},
{
"paperId": "e5bf3d25e5656ef59617b17fae3607490abb1041",
"title": "Machine Learning to Identify Metabolic Subtypes of Obesity: A Multi-Center Study"
},
{
"paperId": "dca5d80dd57944d5372e4e1ab48fc5bc611a2e2c",
"title": "Central obesity, smoking habit, and hypertension are associated with lower antibody titres in response to COVID‐19 mRNA vaccine"
},
{
"paperId": "8db7363e620ddee3c1e293e2408b375bcdc02fc0",
"title": "FRAX score with and without bone mineral density: a comparison and factors affecting the discordance in osteoporosis treatment in Thais"
},
{
"paperId": "a353c2a3e74ce6c987dc8ac1b4e2d053846504f2",
"title": "Obesity measures, metabolic health and their association with 15-year all-cause and cardiovascular mortality in the SAMINOR 1 Survey: a population-based cohort study"
},
{
"paperId": "63379585f075180e334b3cd2cf999a8633158c94",
"title": "Nickel Sensitivity Is Associated with GH-IGF1 Axis Impairment and Pituitary Abnormalities on MRI in Overweight and Obese Subjects"
},
{
"paperId": "bb0f065e879e253294d4b155c254c845dfe9d1ba",
"title": "Liver disease in obesity and underweight: the two sides of the coin. A narrative review"
},
{
"paperId": "6a8624bd17f6104b7dd8858d7f18c53d1704ef43",
"title": "Cardiometabolic multimorbidity is associated with a worse Covid-19 prognosis than individual cardiometabolic risk factors: a multicentre retrospective study (CoViDiab II)"
},
{
"paperId": "ef10dae86783fdaed0f1039d2e2884be781c7d87",
"title": "Determinants of good metabolic control without weight gain in type 2 diabetes management: a machine learning analysis"
},
{
"paperId": "d4f1050f789ab87f25743e2fdab2dcb641f0c538",
"title": "The effect of menopause on metabolic syndrome: cross-sectional results from the Canadian Longitudinal Study on Aging."
},
{
"paperId": "087834447da39eb478ebd3c3ea28c0934294e4f1",
"title": "Is Growth Hormone Insufficiency the Missing Link Between Obesity, Male Gender, Age, and COVID‐19 Severity?"
},
{
"paperId": "7d457c3c6b3b43b85285d083a52419a59fc18bd9",
"title": "Visceral fat shows the strongest association with the need of intensive care in patients with COVID-19"
},
{
"paperId": "a44e7a8165ddce09396d4fa212182af3fa0f97b0",
"title": "NAFLD as a continuum: from obesity to metabolic syndrome and diabetes"
},
{
"paperId": "1f91c430760f2ad072bc9bdaba2f12a659433814",
"title": "Baseline HOMA IR and Circulating FGF21 Levels Predict NAFLD Improvement in Patients Undergoing a Low Carbohydrate Dietary Intervention for Weight Loss: A Prospective Observational Pilot Study"
},
{
"paperId": "07b90f13db36faf6d1f0c176953539e009554374",
"title": "Obesity treatment within the Italian national healthcare system tertiary care centers: what can we learn?"
},
{
"paperId": "e080b5e14956ff940c0ad00d020b4c6b29f2df73",
"title": "Machine Learning Approach for the Early Prediction of the Risk of Overweight and Obesity in Young People"
},
{
"paperId": "1767b6383c331afc7cfbda34654127fa45ace134",
"title": "Obesity and SARS‐CoV‐2: A population to safeguard"
},
{
"paperId": "76b1768c4185b4b6e525e797be137964ffd46cd5",
"title": "Artificial Intelligence in Dentistry: Chances and Challenges"
},
{
"paperId": "3728e74c3308fadca8df367f09b0751b84eab9c8",
"title": "Metabolically Healthy Obesity: Criteria, Epidemiology, Controversies, and Consequences"
},
{
"paperId": "305b19ef98c1ddb882ff636c4548d296e5feed93",
"title": "European Society of Endocrinology Clinical Practice Guideline: Endocrine work-up in obesity."
},
{
"paperId": "eab712c14dc8914b668fa079df73f0312ce4df89",
"title": "The decline in muscle strength and muscle quality in relation to metabolic derangements in adult women with obesity."
},
{
"paperId": "f02ecfbae29d387a54c0c76c2689ba6165aa39bc",
"title": "The association between fasting plasma glucose and glycated hemoglobin in the prediabetes range and future development of hypertension"
},
{
"paperId": "54c7285dfb8c67ba9d7f1dd64764f13245d0e3a4",
"title": "Ascent of machine learning in medicine"
},
{
"paperId": "6e23ae3969078ecd6e59260a895c96c360b4921a",
"title": "Artificial intelligence, bias and clinical safety"
},
{
"paperId": "f92dc17be7daa518b8762180ff780fc72b7d96eb",
"title": "Overweight and obese patients with nickel allergy have a worse metabolic profile compared to weight matched non-allergic individuals"
},
{
"paperId": "1af01d890b358d9940b075abc522a82b1e117a8b",
"title": "Insulin growth factor-1 correlates with higher bone mineral density and lower inflammation status in obese adult subjects"
},
{
"paperId": "05700c7fbf081c4d373c6d07b6101777c26306dd",
"title": "Metabolically healthy obesity: the low-hanging fruit in obesity treatment?"
},
{
"paperId": "5603a429561619918e6002dfc5c63639c5f482cb",
"title": "Disability, Physical Inactivity, and Impaired Health-Related Quality of Life Are Not Different in Metabolically Healthy vs. Unhealthy Obese Subjects"
},
{
"paperId": "cd3a316d1ffff7fe8afe6fe4fd48ee9176f8dd3f",
"title": "Low Insulin-Like Growth Factor-1 Level in Obesity Nephropathy: A New Risk Factor?"
},
{
"paperId": "9c3395869e455addfb87edf04b66d03464958b9c",
"title": "EASL-EASD-EASO Clinical Practice Guidelines for the management of non-alcoholic fatty liver disease."
},
{
"paperId": "3e105ffce2a25ab4d4a74418bda08bbb342472c5",
"title": "EASL-EASD-EASO Clinical Practice Guidelines for the Management of Non-Alcoholic Fatty Liver Disease"
},
{
"paperId": "2a8c122c8241b2c13548fb58695764c8f1172560",
"title": "Fatty Liver Index Associates with Relative Sarcopenia and GH/ IGF- 1 Status in Obese Subjects"
},
{
"paperId": "bd8aee7ba37879063e219eed63d492fb2c9d31a9",
"title": "Machine Learning Techniques for Prediction of Early Childhood Obesity"
},
{
"paperId": "0fbc97908d886cd93b1a3166ccced89547f9d855",
"title": "Is obstructive sleep apnoea syndrome really one of the causes of secondary polycythaemia?"
},
{
"paperId": "48baa60e32afc1a9ebf88c1a430eafad678abf2e",
"title": "Severe growth hormone deficiency and empty sella in obesity: a cross-sectional study"
},
{
"paperId": "c108f9516f3a5c0817c114366f2910c8597179d7",
"title": "Metabolically healthy and unhealthy obese – the 2013 Stock Conference report"
},
{
"paperId": "078de7b2f60b74da28739fcba840ede4a25f5f8e",
"title": "Adult Growth Hormone Deficiency: Current Concepts"
},
{
"paperId": "2ed30fd5c4c12243adc45b794895e089b0cdbcdb",
"title": "Insulin resistance (HOMA-IR) cut-off values and the metabolic syndrome in a general adult population: effect of gender and age: EPIRCE cross-sectional study"
},
{
"paperId": "35cf868ac2194b57aa3ee49ab196e9a9f3c3f436",
"title": "Uric Acid Secretion from Adipose Tissue and Its Increase in Obesity*"
},
{
"paperId": "6184e352266e4f37d3e23e274deda9d395ada478",
"title": "Glycogen Storage Disease type 1a – a secondary cause for hyperlipidemia: report of five cases"
},
{
"paperId": "d1bc6e1471b9878c01a4a1617fb37e0e085c590c",
"title": "Relationships between Body Fat Distribution, Epicardial Fat and Obstructive Sleep Apnea in Obese Patients with and without Metabolic Syndrome"
},
{
"paperId": "b74392bfbf34eb64fe2883383f0daa99c448914b",
"title": "Body Fat Distribution and Risk of Cardiovascular Disease: An Update"
},
{
"paperId": "fc4889218062027cc714d30884fe17f5938de2f1",
"title": "Insulin-sensitive obesity in humans – a ‘favorable fat’ phenotype?"
},
{
"paperId": "769559c9c4e39a18fd10b26cf7abda231056fa8a",
"title": "Insulin resistance index (HOMA-IR) levels in a general adult population: curves percentile by gender and age. The EPIRCE study."
},
{
"paperId": "09b94e0648161792e81f0d845a591982a81e0cff",
"title": "Do women consult more than men? A review of gender and consultation for back pain and headache"
},
{
"paperId": "1daf42a54975230a5227cc52d38da3a1faa2ce22",
"title": "Menopause is an independent predictor of metabolic syndrome in Iranian women."
},
{
"paperId": "f66d9c3624bcef89af078c4ed4f465f280a167b8",
"title": "Men's under use of health services – finding alternative approaches"
},
{
"paperId": "4b2d46f4fbdfafb51ae8253c6fa8344b2ff9b8bf",
"title": "Role of body fat distribution and the metabolic complications of obesity."
},
{
"paperId": "4ac88aa7923a9fe37fb93602089c8dad29dfdd77",
"title": "Relationships between serum IGF1 levels, blood pressure, and glucose tolerance: an observational, exploratory study in 404 subjects."
},
{
"paperId": "e51a68323a972f163d4cfeb2173d6ffa04988572",
"title": "Age-specific prevalence of the metabolic syndrome defined by the International Diabetes Federation and the National Cholesterol Education Program: the Norwegian HUNT 2 study"
},
{
"paperId": "8e6e9824af1a2c55f3e03a197b8b57a15893ea6e",
"title": "Glycated haemoglobin as a predictor for metabolic syndrome in non-diabetic Korean adults"
},
{
"paperId": "62cfbbe28c1c10d01fef01a276dc0c87f66d24de",
"title": "The NAFLD fibrosis score: A noninvasive system that identifies liver fibrosis in patients with NAFLD"
},
{
"paperId": "fdc9ab8cd9bf40769cd81b907801957474e4f953",
"title": "Increased red blood cells in patients with metabolic syndrome."
},
{
"paperId": "8f3187c9c3b7b7fc35400710a742b5a563ba3910",
"title": "Increased high sensitivity C-reactive protein and neutrophil count are related to increased standard cardiovascular risk factors in healthy Chinese men."
},
{
"paperId": "ef9edf396d05efedd560bbba3e59e8e56535be64",
"title": "[Adult growth hormone deficiency]."
},
{
"paperId": "e5eebc0e1eaf3bc81d4a4cdc4b9ea21251204fb9",
"title": "Increased erythropoiesis and subclinical inflammation as part of the metabolic syndrome."
},
{
"paperId": "87a2381fb2db1946bbf72f836630f679dae2730f",
"title": "Circulating insulin-like growth factor-I levels are correlated with the atherosclerotic profile in healthy subjects independently of age"
},
{
"paperId": "e4a342c4ba6e16c545a88ce222bbf0b9391890b0",
"title": "Truncal adiposity, relative growth hormone deficiency, and cardiovascular risk."
},
{
"paperId": "cc4fa476035e8e7e51d1343ed4e062a2f3f1887b",
"title": "Association between hematological parameters and metabolic syndrome components in a Chinese population."
},
{
"paperId": "7cfae70fe6a9c6b14131ccea50f8541193e1bd28",
"title": "Is glycosylated hemoglobin A1c a surrogate for metabolic syndrome in nondiabetic, first-degree relatives of African-American patients with type 2 diabetes?"
},
{
"paperId": "48daa1bb5ccee76fe7b0af6b9d669f07f657550d",
"title": "Chronic subclinical inflammation as part of the insulin resistance syndrome: the Insulin Resistance Atherosclerosis Study (IRAS)."
},
{
"paperId": "799098478530f803869aeec0a6cf80f961085e36",
"title": "The combination of insulin-like growth factor I and insulin-like growth factor-binding protein-3 reduces insulin requirements in insulin-dependent type 1 diabetes: evidence for in vivo biological activity."
},
{
"paperId": "b170ca8132420f5f1e3c6922a737b3bfbac4a5de",
"title": "Growth hormone (GH) status is an independent determinant of serum levels of cholesterol and triglycerides in healthy adults"
},
{
"paperId": "b1fdb09b7f22b250b68f3ac505605069e3349f3b",
"title": "Definition, diagnosis and classification of diabetes mellitus and its complications. Part 1: diagnosis and classification of diabetes mellitus. Provisional report of a WHO Consultation"
},
{
"paperId": "c6d1baff006f163da82bc76a70c82a97c6c8095e",
"title": "Definition, diagnosis and classification of diabetes mellitus and its complications. Part 1: diagnosis and classification of diabetes mellitus. Provisional report of a WHO Consultation"
},
{
"paperId": "2a8fb564a333de10409473b7b9d43ffb18fa848e",
"title": "Effects of insulin-like growth factor-I and LR3IGF-I on regional blood flow in normal rats."
},
{
"paperId": "e300ade9eb6a8d2f0a41aac5d5ae336565f095f3",
"title": "White blood cell count and cardiovascular disease. Insights from the Framingham Study."
},
{
"paperId": "08910c3ad66731589f84e560806865b5d6740d33",
"title": "Premature mortality due to cardiovascular disease in hypopituitarism"
},
{
"paperId": null,
"title": "Artificial Intelligence in Medicine and Healthcare: Applications, Availability and Societal Impact; Publications Office of the European Union: Luxembourg, 2020; ISBN 9789276184546"
},
{
"paperId": "833f411c6830714cfcc581d83d62f379629167f0",
"title": "Health Topics"
},
{
"paperId": "ad2364074b61bb2333ad95af9e37dab60e955e48",
"title": "The prospective association of serum insulin-like growth factor I (IGF-I) and IGF-binding protein-1 levels with all cause and cardiovascular disease mortality in older adults: the Rancho Bernardo Study."
},
{
"paperId": "18bdea2d5f53ed4d1f9c0c755902aa0e6e0c0c56",
"title": "[Insulin-like growth factors]."
},
{
"paperId": null,
"title": "World Health Organization. World Health Organization. Health Topics. Obesity"
}
] | 18,095
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Environmental Science",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/00a032a00f9cc578c5ef5d76527521afd499b173
|
[
"Computer Science"
] | 0.886194
|
GRAPLEr: A distributed collaborative environment for lake ecosystem modeling that integrates overlay networks, high‐throughput computing, and WEB services
|
00a032a00f9cc578c5ef5d76527521afd499b173
|
Concurrency and Computation
|
[
{
"authorId": "2032809",
"name": "Kensworth C. Subratie"
},
{
"authorId": "50167195",
"name": "Saumitra Aditya"
},
{
"authorId": "1409303326",
"name": "Srinivas Mahesula"
},
{
"authorId": "144356414",
"name": "R. Figueiredo"
},
{
"authorId": "2068523",
"name": "C. Carey"
},
{
"authorId": "49017913",
"name": "P. Hanson"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"Concurr Comput Pract Exp",
"Concurrency and Computation: Practice and Experience",
"Concurr Comput"
],
"alternate_urls": [
"http://www3.interscience.wiley.com/cgi-bin/jtoc?ID=77004395",
"http://onlinelibrary.wiley.com/journal/10.1002/(ISSN)1532-0634"
],
"id": "312ca99c-9149-490d-813e-c60d5e949f65",
"issn": "1532-0626",
"name": "Concurrency and Computation",
"type": "journal",
"url": "http://www3.interscience.wiley.com/cgi-bin/jhome/77004395?CRETRY=1&SRETRY=0"
}
|
The GLEON Research And PRAGMA Lake Expedition—GRAPLE—is a collaborative effort between computer science and lake ecology researchers. It aims to improve our understanding and predictive capacity of the threats to the water quality of our freshwater resources, including climate change. This paper presents GRAPLEr, a distributed computing system used to address the modeling needs of GRAPLE researchers. GRAPLEr integrates and applies overlay virtual network, high‐throughput computing, and Web service technologies in a novel way. First, its user‐level IP‐over‐P2P overlay network allows compute and storage resources distributed across independently administered institutions (including private and public clouds) to be aggregated into a common virtual network, despite the presence of firewalls and network address translators. Second, resources aggregated by the IP‐over‐P2P virtual network run unmodified high‐throughput‐computing middleware to enable large numbers of model simulations to be executed concurrently across the distributed computing resources. Third, a Web service interface allows end users to submit job requests to the system using client libraries that integrate with the R statistical computing environment. The paper presents the GRAPLEr architecture, describes its implementation and reports on its performance for batches of General Lake Model simulations across 3 cloud infrastructures (University of Florida, CloudLab, and Microsoft Azure).
|
## GRAPLEr: A Distributed Collaborative Environment for Lake Ecosystem Modeling that Integrates Overlay Networks, High-throughput Computing, and Web Services
### Kensworth Subratie, Saumitra Aditya, Renato Figueiredo
#### University of Florida, Gainesville, FL, USA
### [email protected], [email protected], [email protected]
### Cayelan C. Carey
#### Virginia Tech, Blacksburg, VA, USA
### [email protected]
### Paul Hanson
#### University of Wisconsin-Madison, Madison, WI, USA
### [email protected]
### ABSTRACT
The GLEON Research And PRAGMA Lake Expedition –
GRAPLE – is a collaborative effort between computer science and lake ecology researchers. It aims to improve our
understanding and predictive capacity of the threats to the
water quality of our freshwater resources, including climate
change. This paper presents GRAPLEr, a distributed computing system used to address the modeling needs of GRAPLE
researchers. GRAPLEr integrates and applies overlay virtual network, high-throughput computing, and Web service
technologies in a novel way. First, its user-level IP-over-P2P (IPOP) overlay network allows compute and storage
resources distributed across independently-administered institutions (including private and public clouds) to be aggregated into a common virtual network, despite the presence of firewalls and network address translators. Second,
resources aggregated by the IPOP virtual network run unmodified high-throughput computing middleware (HTCondor) to enable large numbers of model simulations to be
executed concurrently across the distributed computing resources. Third, a Web service interface allows end users to
submit job requests to the system using client libraries that
integrate with the R statistical computing environment. The
paper presents the GRAPLEr architecture, describes its implementation and reports on its performance for batches of
General Lake Model (GLM) simulations across three cloud
infrastructures (University of Florida, CloudLab, and Microsoft Azure).
### Keywords
Climate Change, General Lake Model, Lake Modeling, HTCondor, Distributed Computing, IPOP, Overlay Networks
### 1. INTRODUCTION
The GLEON Research And PRAGMA Lake Expedition –
GRAPLE – aims to improve our understanding and predictive capacity of water quality threats to our freshwater
resources, including climate change. It is predicted that climate change will increase water temperatures in many freshwater ecosystems, potentially increasing toxic phytoplankton blooms [11, 1]. Consequently, understanding how altered climate will affect phytoplankton dynamics is paramount
for ensuring the long-term sustainability of our freshwater resources. Underlying these consequences are complex
physical-biological interactions, such as phytoplankton community structure and biomass responses to short-term weather
patterns, multi-year climate cycles, and long-term climate
trends [5]. New data from high-frequency sensor networks
(e.g., GLEON) provide easily measured indicators of phytoplankton communities, such as in-situ pigment fluorescence,
and show promise for improving predictions of ecosystem-scale wax and wane of phytoplankton blooms [18]. However,
translating sensor data to an improved understanding of coupled climate-water quality dynamics requires additional data
sources, model development, and synthesis, and it is this
type of complex challenge that requires increasing computational capacity for lake modeling.
Searching through the complex response surface associated
with multiple environmental starting conditions and phytoplankton traits (model parameters) requires executing and
interpreting thousands of simulations, and thus substantial
compute resources. Furthermore, the configuration, setup,
management, and execution of such large batches of simulations are time-consuming, both in terms of computing and
human resources.
This puts the computational requirements well beyond the
capabilities of any single desktop computer system, and to
meet the demands imposed by these simulations it becomes
necessary to tap into distributed computing resources. However, distributed computing resources and technologies are
typically outside the realm of most freshwater science projects.
Designing, assembling, and programming these systems is
not trivial, and requires the level of skill typically available to
**Figure 1: System Architecture (GRAPLEr). Users interact with GRAPLEr using R environments in their desktop (right). The client connects to a Web service tier (GWS) that exposes an endpoint to the public Internet. Job batches are prepared using GEMT and are scheduled to execute in distributed HTCondor resources across an IPOP virtual private network.**
experienced system and software engineers. Consequently,
this imposes a barrier to scientists outside information technology and computer science disciplines, and presents challenges to the acceptance of distributed computing as a solution to most lake ecosystem modelers.
GRAPLE is a collaboration between lake ecologists and computer scientists that aims to address this challenge. Through
this inter-disciplinary collaboration, we have designed and
implemented a distributed system platform that supports
compute-intensive model simulations, aggregates resources
seamlessly across an overlay network spanning collaborating institutions, and presents intuitive Web service-based interfaces that integrate with existing environments that lake
ecologists are used to, such as R.
This paper describes GRAPLEr, a cyberinfrastructure that
is unique in how it seamlessly integrates a collection of distributed hardware resources through the IP-over-P2P [6, 8]
overlay virtual network, supports existing models and the
HTCondor distributed computing middleware [17], and exposes a user-friendly interface that integrates with R-based
desktop environments through a Web service. As a multitiered distributed solution, GRAPLEr incorporates several
components into an application-specific solution. Some of
these components are pre-existing solutions which are deployed and configured for our specific uses, while others are
specifically developed to address unique needs.
The rest of this paper is organized as follows: Section 2
describes the architecture, design, and implementation of
GRAPLEr. Section 3 describes a deployment of GRAPLEr
and summarizes results from an experiment that evaluates its capabilities and performance. Section 4 discusses related work, and
Section 5 concludes the paper.
### 2. ARCHITECTURE AND DESIGN
### 2.1 System Architecture (GRAPLEr)
The system architecture of GRAPLEr is illustrated in Figure 1. Starting from the user-facing components of GRAPLEr,
users interact with the system through client-side libraries
that are called from an R development environment (e.g., R
Studio) running on their personal computer. User requests
are mapped by the R library to Application Programming
Interface (API) calls that are then sent to the GRAPLEr
Web Service (GWS) tier. The GWS tier is responsible for
interpreting the user requests, invoking the GRAPLEr Experiment Management Tools (GEMT) to set up a directory
structure for model inputs and outputs, and preparing and
queuing jobs for submission to the HTCondor pool. The
workload management tier is responsible for scheduling and
dispatching model simulations across the compute resources,
which are interconnected through the IPOP virtual network
overlay. These are elaborated below.
### 2.2 Overlay Virtual Network (IPOP)
Rather than investing significant effort in development, porting, and testing new applications and distributed computing middleware, GRAPLEr has focused on an approach in
which computing environments are virtualized and can be
deployed on-demand on cloud resources. While Virtual Machines (VMs) available in cloud infrastructures provide a
basis to address the need for a user-provided software environment, another challenge remains: how to inter-connect
VMs deployed across multiple institutions (including private
and commercial cloud providers) such that HTCondor and
the simulation models work seamlessly? The approach to
address this problem is to apply virtualization at the network layer.
The IPOP [6] overlay virtual network allows GRAPLEr to
define and deploy its own virtual private network (VPN)
that can span physical and virtual machines distributed across
multiple collaborating institutions and commercial clouds.
To accomplish this, IPOP captures and injects network traffic via a virtual network interface or “tap” device. The “tap”
**Figure 2: Workload Management (HTCondor). GRAPLEr supports unmodified HTCondor software and configuration to work across multiple sites (e.g., a private cloud at UF and a commercial cloud at MS Azure).**
device is configured within an isolated virtual private address subnet space. IPOP then encrypts and tunnels virtual network packets through the public Internet. The “TinCan” [8] tunnels used by IPOP to carry network traffic use
facilities from WebRTC (Web Real-Time Communication) to create end-to-end links that carry virtual IP traffic instead of
audio or video.
To discover and notify peers that are connected to the GRAPLEr
“group VPN”, IPOP uses the eXtensible Messaging and Presence Protocol (XMPP). XMPP messages carry information
used to create private tunnels (the fingerprint of an endpoint’s public key), as well as network endpoint information
(IP address:port pairs at which the device is reachable). For
nodes behind network address translators (NATs), public-facing address:port endpoints can be discovered using the
STUN (Session Traversal Utilities for NAT) protocol, and
devices behind symmetric NATs can use TURN (Traversal
Using Relays around NAT) to communicate through a relay in the public Internet. Put together, these techniques
handle firewalls and NATs transparently to users and applications, and allow for simple configuration of VPN groups
via an XMPP server.
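Because IPOP presents itself to the operating system as an ordinary network interface with a private subnet address, middleware and user code need no overlay-specific logic. As a minimal illustration (not part of GRAPLEr itself), the following Python sketch exchanges a message between two nodes of the same GroupVPN using nothing but standard sockets; the virtual address and port are hypothetical examples.
```
import socket

# Server side: run on a pool node whose (hypothetical) IPOP virtual
# address is 172.16.5.20. Binding to the virtual address works just
# like binding to a physical one; IPOP handles NAT and firewall
# traversal underneath.
def serve(host="172.16.5.20", port=9000):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind((host, port))
        s.listen(1)
        conn, peer = s.accept()
        with conn:
            print("received from", peer, ":", conn.recv(1024).decode())

# Client side: run on any other node in the same GroupVPN, for
# example a cloud VM behind a NAT.
def send(host="172.16.5.20", port=9000):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.connect((host, port))
        s.sendall(b"hello across the overlay")
```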
### 2.3 Workload Management (HTCondor)
A key motivation for the use of virtualization technologies,
including IPOP, is the ability to integrate existing, unmodified distributed computing middleware. In particular, GRAPLEr
integrates HTCondor [17], a specialized workload management system for compute-intensive jobs. Like other full-featured batch systems, HTCondor provides a job queueing mechanism, scheduling policy, priority scheme, resource
monitoring, and resource management. Users submit their
serial or parallel jobs to HTCondor, HTCondor places them
into a queue, chooses when and where to run the jobs based
upon a policy, carefully monitors their progress, and ultimately informs the user upon completion. Figure 2 illustrates the structure of the HTCondor pool that is deployed
for GRAPLE.
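For concreteness, the sketch below shows roughly how one grouped GRAPLE job could be described to HTCondor: a plain submit description file handed to the standard condor_submit tool. The file names and settings are illustrative stand-ins rather than the actual files GEMT generates.
```
import subprocess

# A minimal submit description for one grouped job: ship a compressed
# bundle of simulation inputs plus a runner script to an execute
# node, and bring the outputs back on exit.
SUBMIT_DESCRIPTION = """\
universe                = vanilla
executable              = gemt_job_runner.sh
transfer_input_files    = job_000.tar.gz
should_transfer_files   = YES
when_to_transfer_output = ON_EXIT
output                  = job_000.out
error                   = job_000.err
log                     = job_000.log
queue
"""

def submit_grouped_job():
    with open("job_000.sub", "w") as f:
        f.write(SUBMIT_DESCRIPTION)
    # condor_submit hands the job to the scheduler, which matches it
    # to an execute node (possibly at another site, reachable over
    # the IPOP overlay).
    subprocess.run(["condor_submit", "job_000.sub"], check=True)
```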
### 2.4 Experiment Management Tools (GEMT)
An HTCondor [17] resource pool running across distributed
resources connected by IPOP provides a general-purpose capability where it is possible to run a variety of applications
from different domains. Furthermore, application-tailored
middleware can be layered upon this general-purpose environment to enhance the performance and/or streamline the
configuration of simulations on behalf of users. GEMT (Figure 3) provides a suite of scripts for designing and automating the tasks associated with running General Lake Model
(GLM) based experiments on a very large scale. Here, we
use the term “Experiment” to refer to a collection of simulations that address a science use case question, such as
determining the effects of climate change on water quality
metrics. GEMT provides the guidelines for the design and layout of the individual simulations in an experiment.
The primary responsibility of GEMT is to identify and target the task-level parallelism inherent in the experiment by
generating proper packaging of executables, inputs, and outputs; furthermore, GEMT seeks to effectively exploit the
distributed compute resources across the HTCondor pool by
performing operations such as aggregation of multiple simulations into a single HTCondor job, compression of input
and output files, and the extraction of selected features from
output files.
For the simulations in an experiment, GEMT defines the
naming convention used by the files and directories as well
as their layout. The user may interact with GEMT in two
possible ways: 1) directly, by using a desktop computer configured with the IPOP overlay software and HTCondor job
submission software, or 2) indirectly, by issuing requests
against the GRAPLEr Web service. In the former case,
once the user has followed the GEMT specification for creating their experiment, executing it and collecting the results
becomes a simple matter of invoking two GEMT scripts.
However, the user is left with the responsibility of deploying and
configuring both IPOP and HTCondor locally. Additionally, the user's machine becomes a trusted endpoint on the VPN, which
carries its own security implications: a breach of the user's
system is a potential entry point into the VPN.
The latter case relieves the user of both these concerns.
This paper focuses on this latter approach, where GEMT
scripts are invoked indirectly through the Web service.
There are three distinct functional modes for GEMT, which
pertain to the different phases of the experiment’s lifetime.
Starting with its invocation, on the submit node, GEMT
selects a configurable number of simulations to be grouped
as a single HTCondor job. The reason why multiple simulations may be grouped into a single HTCondor job is that,
for short-running simulations, the costs of job scheduling
and transfer of executables can be significant. By grouping
**Figure 3: GRAPLEr Experiment Management Tools (GEMT). The GEMT Simulation Packager module takes a specification of the raw simulation inputs and groups them together into jobs; these are dispatched for execution through HTCondor, and their execution at the worker nodes is managed by the GEMT Job Runner module. The Result Delivery GEMT module collates results and presents them to the user.**
simulations into a single HTCondor job, redundant copies of
the input can be eliminated to reduce the bandwidth transfer cost and only a single scheduling decision is needed to
dispatch all the simulations in the job. The inputs and executables pertaining to a group of simulations are then compressed and submitted as a job to the HTCondor scheduler
for execution. When this job becomes scheduled, GEMT
is invoked in its second phase, this time on the HTCondor
execute node. The execute-side GEMT script coordinates
running each simulation within the job, and preparing the
output so it can be returned to the originator. Finally, in its
third phase, back on the submit node side, GEMT collates
the results of all the jobs that were successful and presents
them in a standard format to the end user.
GEMT implements user-configurable optimizations to fine-tune its operations for individual preferences. It can limit
how many simulations are placed in a job, and it will compress these files for transfer. GEMT can also overlap the
client-side job creation with server-side execution to minimize the wait time before results start being produced.
These features can be set via a configuration file and combine
to provide a simplified mechanism to execute large numbers
of simulations.
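The grouping and compression steps can be pictured with the following Python sketch, which bundles per-simulation input directories into compressed archives, one archive per HTCondor job. The directory layout, group size, and file names are assumptions made for illustration; GEMT's actual scripts are not reproduced in this paper.
```
import math
import os
import tarfile

def package_jobs(sim_root, group_size=50):
    """Group per-simulation input directories under sim_root into
    compressed bundles, one bundle per HTCondor job, so that many
    short simulations share a single scheduling decision and a
    single input transfer."""
    sims = sorted(os.listdir(sim_root))
    bundles = []
    for j in range(math.ceil(len(sims) / group_size)):
        group = sims[j * group_size:(j + 1) * group_size]
        bundle = f"job_{j:03d}.tar.gz"
        with tarfile.open(bundle, "w:gz") as tar:
            for sim in group:
                tar.add(os.path.join(sim_root, sim), arcname=sim)
        bundles.append(bundle)
    return bundles
```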
### 2.5 GRAPLEr Web Service (GWS)
The GWS module, as illustrated in Figure 4, is a publicly-addressable Web service available on the Internet, and serves
as a gateway for users to submit requests to run experiments.
GWS acts as a middleware service-tier which exposes an interface to R clients. Requests to run an experiment are made
via this interface over the Internet using the HTTP protocol. The functionality provided by GWS is exposed to the R
client’s user by means of publicly accessible endpoints, each
of which is associated with a corresponding method that is
invoked in the background. GWS utilizes the functionality
of GEMT for simulation processing. GWS generates simulation input files as needed based on the user’s request (e.g., to
vary air temperature according to a statistical distribution
for a climate change scenario), configures and queues jobs,
and consolidates and prepares results for download. GWS
is co-located on the same host as the GEMT client. This
host acts as the submit node to the HTCondor pool, where
it monitors job submission and execution.
Representational State Transfer (REST) is an architectural style for networked hypermedia applications that is
primarily used to build lightweight and scalable Web services. Such a Web service, referred to as RESTful, is stateless, has a uniform interface and representation of entities,
and communicates with clients via messages, addressing resources using URIs. GWS implements this paradigm and is
designed to treat every job submission independently from
any other. Note that there is per-experiment state that is
managed by GWS, such as the status of each HTCondor job
submitted by the GWS. The state of the experiment is maintained on disk, within the local filesystem, leaving the service
itself stateless. GWS implements the public-facing interface
using a combination of open-source middleware for Web service processing - Python Flask [7], and an asynchronous task
queue - Python Celery [16]. The application is hosted using
uWSGI (an application deployment solution) and supplemented by an Nginx reverse proxy server to offload the task
of serving static files from the application. The employed
technology stack facilitates rapid service development and
robust deployment.
The GWS workflow begins when a request is received from
an R client through the service interface, which is handled
by Flask. The request to evaluate a series of simulations can
be provided in one of several ways, as discussed in detail in
the section covering the R Language Package. However, only
data files are accepted as input - no user provided executable
binaries or scripts are executed as part of the experiment.
A single client-side request can potentially unfold into large
numbers (e.g., thousands) of jobs, and GWS places these
requests into a Celery task queue for asynchronous processing. Provisioning a task queue allows GWS to decouple the
time-consuming processing of the input and task submission
to HTCondor from the response to the HTTP request.
A 40-character unique identifier (UID) is randomly generated
for each simulation request received by GWS; it is used as
an identifier to reference the state of an experiment, and is
thus used for any further interactions with the service for a
given experiment. Using the UID returned by GRAPLEr,
an R client can not only configure the job, but also monitor its status, download outputs, and abort the job. Once
the input file has been uploaded to the service, GWS puts
**Figure 4: GRAPLEr Web Service (GWS). The GWS is responsible for taking Web service requests from users, interpreting them and creating tasks for remote execution using GEMT.**
the task into the task queue and responds promptly with
the UID. Therefore, the latency that the R developer experiences, from the moment the job is submitted to when the
UID is received, is minimized. A GWS worker thread then
dequeues GEMT tasks from the task queue, and processes
the request according to the parameters defined by the user.
Figure 4 shows the internal architecture and setup of GWS.
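A minimal sketch of this request/UID/queue pattern is shown below, assuming a Redis broker for Celery; the endpoint and field names are illustrative rather than GWS's actual API (the real endpoints are listed in Figure 5), and since the paper does not specify how the 40-character UID is produced, a SHA-1 hex digest of random bytes is used here purely as an example.
```
import hashlib
import os

from celery import Celery
from flask import Flask, request

app = Flask(__name__)
tasks = Celery("gws", broker="redis://localhost:6379/0")

@tasks.task
def process_experiment(uid, upload_path):
    # Placeholder for the GEMT pipeline: unpack the inputs, generate
    # simulation files, and submit jobs to HTCondor. Per-experiment
    # state lives on disk under the UID, keeping the service itself
    # stateless.
    ...

@app.route("/GrapleRun", methods=["POST"])  # illustrative endpoint name
def graple_run():
    # A SHA-1 hex digest happens to be 40 characters long.
    uid = hashlib.sha1(os.urandom(16)).hexdigest()
    os.makedirs(uid, exist_ok=True)
    upload_path = os.path.join(uid, "inputs.zip")
    request.files["experiment"].save(upload_path)
    process_experiment.delay(uid, upload_path)  # enqueue, do not block
    return {"uid": uid}  # respond promptly; the client polls with the UID
```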
A key feature of the GRAPLEr service is to automatically
create and configure an experiment by spawning a range
of simulation scenarios by varying simulation inputs, based
on the user’s request and application-specific knowledge. In
particular, the service uses application-specific information
to identify data in the input file (such as air temperature,
or precipitation), and apply transformations to these data
(e.g., adding or subtracting an offset to the base value provided by the user) to generate multiple simulation scenarios. GWS removes the burden from the user to generate,
schedule, and collate the outputs of thousands of simulations
within their own desktops, and allows them to quickly generate experiment scenarios from a high-level description that
simply enumerates which input variables to consider, what
function to apply to vary them, and how many simulations
to create. The user also has the flexibility to retrieve and
download only a selected subset of the results back to their
desktops, thereby minimizing local storage requirements and
data transfer times.
To illustrate this feature, consider API endpoint 9 in Figure 5. This endpoint exposes a method that enables the user
to generate ‘N’ runs from a single baseline set of input files
by drawing offsets to input values (e.g., air temperature)
**Figure 5: GWS Application Programming Interface (API) Endpoints**
from a random distribution. With this API endpoint, the
GRAPLEr client can upload a single baseline set of input
files, along with a short experiment description file. This
file specifies which distribution (random, uniform, binomial,
or Poisson) to choose samples from, the number of samples,
the variable(s) to be modified, and the operation (add,
subtract, multiply, or divide) that applies each randomly
generated value to the variable. From this single input and
description, GWS generates ‘N’ simulation input files, and
calls GEMT Simulation Packager scripts to submit jobs to
the HTCondor pool.
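The following Python sketch captures the essence of this endpoint: from one baseline driver file it writes 'N' modified copies, each applying a sampled offset to one column. The column and file names follow the examples in this paper, while the sampling function and operation table are simplified stand-ins for the distributions and operations GWS supports.
```
import csv
import operator
import random

OPS = {"add": operator.add, "subtract": operator.sub,
       "multiply": operator.mul, "divide": operator.truediv}

def generate_inputs(driver_csv, column, n, sample, op="add"):
    """Write n modified copies of one baseline driver file, applying
    op(value, offset) to the chosen column with a fresh offset drawn
    by sample() for each generated scenario."""
    with open(driver_csv, newline="") as f:
        rows = list(csv.DictReader(f))
    fields = list(rows[0].keys())
    for i in range(n):
        offset = sample()
        with open(f"sim_{i:04d}_{driver_csv}", "w", newline="") as out:
            writer = csv.DictWriter(out, fieldnames=fields)
            writer.writeheader()
            for row in rows:
                modified = dict(row)
                modified[column] = OPS[op](float(modified[column]), offset)
                writer.writerow(modified)

# For example, 100 scenarios with AirTemp offsets drawn uniformly:
# generate_inputs("met_hourly.csv", "AirTemp", 100,
#                 lambda: random.uniform(-2.0, 2.0))
```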
### 2.6 GRAPLEr R Language Package
The user-facing component of GRAPLEr is an R package
that serves as a thin layer of software between the Web
service and the R client development environment (IDE).
It exposes an R language application programming interface which can be programmatically consumed by client programs wanting to utilize the GRAPLEr functionality. GRAPLEr
is available on GitHub and is installed on the client desktop,
where it integrates into the R development environment. It
acts as a proxy to translate user commands written in R into
Web service calls. It also marshals data between the client
and Web service as necessary. The following example illustrates a sequence of three R calls to submit an experiment
to a GRAPLEr service running on endpoint graplerURL,
from a set of input files placed in sub-directories of a root
directory folder on the client-side (expDir), check its status,
and download results:
```
# Submit the experiment; returns a unique identifier (UID)
UID <- GrapleRunExperiment(graplerURL, expDir)
# Poll the Web service for completion status
GrapleCheckExperimentCompletion(graplerURL, UID)
# Download the collated results to the client
GrapleGetExperimentResults(graplerURL, UID)
```
The second example shows how a user can specify a parameter-sweeping experiment with 10,000 simulations which are derived from a baseline set of input files (stored in the simDir
directory at the client) by modifying the AirTemp column
time series in the GLM meteorological driver input data file
met_hourly.csv, over the range -10 to 30.
```
# Baseline inputs and the parameter to sweep
simDir <- "C:/Workspace/SimRoot/Sim0"
driverFileName <- "met_hourly.csv"
parameterName <- "AirTemp"
startValue <- -10
endValue <- 30
numberOfIncrements <- 10000
# Submit the sweep, then poll for completion and fetch results
expUID <- GrapleRunExperimentSweep(graplerURL,
  simDir, driverFileName, parameterName,
  startValue, endValue, numberOfIncrements)
GrapleCheckExperimentCompletion(graplerURL, expUID)
GrapleGetExperimentResults(graplerURL, expUID)
```
To prevent the use of the Web service interface to execute
arbitrary code, custom code – whether binary executables or
R scripts – cannot be sent as part of the simulation requests;
instead, users only provide input files and parameters for
the GLM simulations. The scenarios that can be run are
currently restricted to using GLM tools and our own scripts.
### 3. EVALUATION
In this section, we present a quantitative evaluation of a
proof-of-concept deployment of GRAPLEr. The goal of this
evaluation is to demonstrate the functionality and capabilities of the framework by deploying a large number of simulations to an HTCondor pool. The HTCondor pool is distributed across multiple clouds and connected by the IPOP
virtual network overlay. Rather than focusing solely on the
reduction in execution times, we evaluate a setup that is
representative of an actual deployment composed of execute
nodes with varying capabilities.
A GLM simulation is specified by a set of input files, which
describe model parameters and time-series data that drive
inputs to the simulation, such as air temperature over time,
derived from sensor data. The resulting output at the completion of a model run is a netCDF file containing time series of the simulated lake, with many lake variables, such as
water temperatures at different lake depths. In our experiments, we use the 1-D GLM Aquatic Eco-Dynamics (AED)
model. For a single example GLM-AED simulation of a
moderately deep lake run for eight months at an hourly time
step, the input folder size was approximately 3 MB, whereas
the size of the resulting netCDF file after successful completion of the simulation was 90 MB. The test experiment was
designed to run reasonably quickly. However, we note that
simulations run over decades and with output recorded more
frequently may increase simulation time by 1 to 2 orders of
magnitude.
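To give a sense of how such output might be post-processed, the sketch below reads one variable out of a GLM netCDF output file using the third-party netCDF4 Python package; the variable name 'temp' is an assumption and should be checked against the names actually present in the file.
```
from netCDF4 import Dataset  # third-party package: netCDF4

def mean_profile(path, varname="temp"):
    """Extract one simulated variable from a GLM netCDF output file
    and average it over time. The variable name is an assumption;
    inspect the file's metadata for the real names."""
    with Dataset(path) as nc:
        print("variables in file:", list(nc.variables))
        series = nc.variables[varname][:]  # e.g., a time x depth array
    return series.mean(axis=0)  # per-depth mean over the whole run
```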
We conducted simulation runs on different systems to obtain
a range of simulation runtimes. With the baseline parameters, GLM-AED simulation times ranged from the best case
of 6 seconds (on a CloudLab system with Intel Xeon CPU
E5-2450 with 2.10GHz clock rate and 20MB cache) to 57
seconds (on a University of Florida system with virtualized
Intel Xeon CPU X565 with 2.60GHz clock rate and 12MB
cache). Note that individual 1-D GLM-AED simulations
can be short-running; the GEMT feature of grouping multiple individual simulations into a single HTCondor job leads
to increased efficiency.
Description of Experiment setup: The GRAPLEr system
**Figure 6: Job runtimes for GRAPLEr HTCondor pool, compared to sequential execution times on CloudLab and UF slots.**
deployed for this evaluation was distributed across three
sites: University of Florida, NSF CloudLab, and Microsoft
Azure. The GWS/GEMT service front-end, HTCondor submit node, and HTC-Central Manager were hosted on virtual
machines running in Microsoft’s Azure cloud. We deployed
three HTC-Execute nodes in total, with 16 cores each. Two
nodes were hosted in virtual machines on a VMware ESX
server at the University of Florida and one on a physical
machine in the CloudLab Apt cluster at University of Utah.
All the nodes in this experiment ran Ubuntu-14.04 and HTCondor version 8.2.8; nodes were connected by an IPOP
GroupVPN virtual network, version 15.01. Each of the
nodes was configured with 16 GB of memory.
To conduct the evaluation, we carried out executions of three
different experiments containing 3000, 5000 and 10000 simulations of an example lake with varying meteorological input
data. Figure 6 summarizes the results from this evaluation.
As a reference, we also present the estimated best-case sequential execution time on a single, local machine, taking
the CloudLab and UF machines as references. For 10,000
simulations we achieved a speedup of 2.5 (with respect to
the sequential execution time of the fast workstation) and 23
(with respect to the sequential execution time at a UF virtual machine).
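Taking the quoted per-simulation runtimes at face value, the two figures are mutually consistent: 10,000 simulations at the best-case 6 seconds each amount to 60,000 s sequentially, and 60,000/2.5 = 24,000 s of pool time, while 10,000 simulations at the worst-case 57 seconds each amount to 570,000 s, and 570,000/23 ≈ 24,800 s; both baselines therefore imply a pool completion time of roughly seven hours.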
It is observed that the time taken to complete the job depended greatly on the way simulation tasks were allocated
by the HTCondor scheduler. Note that the speedups are
relatively modest compared to the best-case baseline, while
significant compared to the worst-case baseline. The actual
user-perceived speedup would be a function of which desktop environment a user would access the service from. Furthermore, because HTCondor is best-suited for simulations
that are individually long-running, the raw user-perceived
speedups of GRAPLEr over local execution tend to increase
as longer-running simulations are submitted through the service. We expect that, as demand for modeling tools by the
lake ecology community increases, so will the complexity,
**Figure 7: Input handling**
resolution and simulated epochs of climate change scenarios, further motivating users to move from a local processing
workflow to remote execution through GRAPLEr.
Submission of a job to the HTCondor pool involves processing of input (for sweep requests) and packaging of the generated simulations with GEMT. In order to evaluate this
step we carried out experiments to account for the time
taken by GRAPLEr to respond to a request to generate
a given number of simulations and submit them for execution. The results are presented in Figure 7. The metric
service response captures the time taken by GRAPLEr to
respond to a request with a UID, which is slightly more
than the time required to upload the base input. The metric input processing captures the time taken to generate and
compress all ‘N’ inputs for job submission.
Though not fully explored yet in the design of GRAPLEr,
another benefit of remote execution through a Web service
interface is the leveraging of storage and data sharing capabilities of the collaborative infrastructure aggregated by
distributed resources connected through the IPOP virtual
network. For instance, the raw output size of the 10,000
simulation scenario described above is 900 GBytes. By keeping this data on the GRAPLEr cloud and allowing users to
share simulation outputs and download selected subsets of
the raw data, the service can provide a powerful capability
to its end users in enabling large-scale, exploratory scenarios, by both reducing computational time and relaxing local
storage requirements at the client side.
### 4. RELATED WORK
Several HTCondor-based high-throughput computing systems have been deployed in support of scientific applications. One representative example is the Open Science Grid
(OSG [12]), which features a distributed set of HTCondor
clusters. In contrast to OSG, which expects each site to run
and manage its own HTCondor pool, GRAPLEr allows sites
to join a collaborative, distributed cluster by joining its virtual HTCondor pool via the IPOP virtual network overlay.
This reduces the barrier to entry for participants to contribute nodes to the network – e.g., by simply deploying one
or more VMs on a private or public cloud. Furthermore,
GRAPLEr exposes a domain-tailored Web service interface
that lowers the barrier to entry for end users.
The NEWT [3] project also provides a RESTful-based Web
service interface to High-Performance Computing (HPC) systems. NEWT is focused on providing access to a particular
set of resources (NERSC), and does not address the need for
a distributed set of (virtualized) computing resources to be
interconnected by overlay virtual networks.
### 5. CONCLUSION
GRAPLEr, a distributed computing system that integrates and applies overlay virtual networking, high-throughput computing, and Web service technologies, is a novel way to address the modeling needs of interdisciplinary GRAPLE researchers. The system’s contribution is its combination of
power, flexibility, and simplicity for users who are not software engineering experts but who need to leverage extensive
computational resources for scientific research. We have illustrated the system’s ability to identify and exploit parallelism inherent in GRAPLE experiments. Additionally,
the system scales out, by simply adding worker nodes to the pool, to manage both increasingly complex experiments and larger numbers of concurrent users.
GRAPLEr is best suited for large numbers of long-running simulations, since for short experiments the distribution and scheduling overhead would dominate the running time. As lake
models demand increased resolution and longer time scales
to address climate change scenarios, GRAPLEr provides a
platform for the next generation of modeling tools and simulations to better assess and predict the impact on our planet’s
water systems.
### 6. ACKNOWLEDGMENTS
This material is based upon work supported in part by the
National Science Foundation under Grants No. 1339737 and
1234983. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the
author(s) and do not necessarily reflect the views of the National Science Foundation.
### 7. REFERENCES
[1] J. D. Brookes and C. C. Carey. Resilience to blooms.
_Science, 334(6052):46–47, 2011._
[2] S. R. Carpenter, E. H. Stanley, and M. J.
Vander Zanden. State of the world’s freshwater
ecosystems: physical, chemical, and biological changes.
_Annual review of Environment and Resources,_
36:75–99, 2011.
[3] S. Cholia, D. Skinner, and J. Boverhof. Newt: A
restful service for building high performance
computing web applications. In Gateway Computing
_Environments Workshop (GCE), 2010, pages 1–11._
IEEE, 2010.
[4] R. J. Figueiredo, P. O. Boykin, P. S. Juste, and
D. Wolinsky. Integrating overlay and social networks
for seamless p2p networking. In Workshop on
_Enabling Technologies: Infrastructure for Collaborative_
_Enterprises, 2008. WETICE’08. IEEE 17th, pages_
93–98. IEEE, 2008.
[5] K. Flynn. Castles built on sand: dysfunctionality in
plankton models and the inadequacy of dialogue
between biologists and modellers. Journal of Plankton
_Research, 27(12):1205–1210, 2005._
[6] A. Ganguly, A. Agrawal, P. O. Boykin, and
R. Figueiredo. IP over P2P: Enabling self-configuring
virtual IP networks for grid computing. In
_International Parallel and Distributed Processing_
_Symposium, 2006._
[7] M. Grinberg. Flask Web Development: Developing
_Web Applications with Python. O’Reilly Media, Inc.,_
1st edition, 2014.
[8] P. S. Juste, K. Jeong, H. Eom, C. Baker, and
R. Figueiredo. Tincan: User-defined p2p virtual
network overlays for ad-hoc collaboration. EAI
-----
_Endorsed Transactions on Collaborative Computing,_
14(2), Oct 2014.
[9] E. Litchman and C. A. Klausmeier. Trait-based
community ecology of phytoplankton. Annual Review
_of Ecology, Evolution, and Systematics, pages_
615–639, 2008.
[10] H. W. Paerl, R. S. Fulton, P. H. Moisander, and
J. Dyble. Harmful freshwater algal blooms, with an
emphasis on cyanobacteria. The Scientific World
_Journal, 1:76–113, 2001._
[11] H. W. Paerl and J. Huisman. Blooms like it hot.
_Science, 320(5872):57–58, 2008._
[12] R. Pordes, D. Petravick, B. Kramer, D. Olson,
M. Livny, A. Roy, P. Avery, K. Blackburn, T. Wenaus,
F. Würthwein, et al. The open science grid. In Journal
_of Physics: Conference Series, volume 78, page_
012057. IOP Publishing, 2007.
[13] R Core Team. R: A Language and Environment for
_Statistical Computing. R Foundation for Statistical_
Computing, Vienna, Austria, 2015.
[14] A. Rigosi, C. C. Carey, B. W. Ibelings, and J. D.
Brookes. The interaction between climate warming
and eutrophication to promote cyanobacteria is
dependent on trophic state and varies among taxa.
_Limnology and Oceanography, 59(1):99–114, 2014._
[15] V. H. Smith, G. D. Tilman, and J. C. Nekola.
Eutrophication: impacts of excess nutrient inputs on
freshwater, marine, and terrestrial ecosystems.
_Environmental pollution, 100(1):179–196, 1999._
[16] A. Solem. Celery: Distributed Task Queue, 2013.
[17] D. Thain, T. Tannenbaum, and M. Livny. Distributed
computing in practice: The condor experience.
_Concurrency-Practice and Experience,_
17(2-4):323–356, 2005.
[18] K. Weathers, P. C. Hanson, P. Arzberger, J. Brentrup,
J. Brookes, C. C. Carey, E. Gaiser, D. P. Hamilton,
G. S. Hong, B. Ibelings, et al. The global lake
ecological observatory network (gleon): the evolution
of grassroots network science. 2013.
-----
# SoK: Not Quite Water Under the Bridge: Review of Cross-Chain Bridge Hacks
#### Sung-Shine Lee, Alexandr Murashkin, Martin Derka, Jan Gorzny
October 31, 2022
**Abstract**
The blockchain ecosystem has evolved into a multi-chain world
with various blockchains vying for use. Although each blockchain
may have its own native cryptocurrency or digital assets, there are use
cases to transfer these assets between blockchains. Systems that bring
these digital assets across blockchains are called bridges, and have
become important parts of the ecosystem. The designs of bridges vary
and range from quite primitive to extremely complex. However, they
typically consist of smart contracts holding and releasing digital assets,
as well as nodes that help facilitate user interactions between chains.
In this paper, we first provide a high level breakdown of components
in a bridge and the different processes for some bridge designs. In
doing this, we identify risks associated with bridge components. Then
we analyse past exploits in the blockchain ecosystem that specifically
targeted bridges.
### 1 Introduction
In recent years, the blockchain ecosystem has evolved into a multi-chain
world. Various blockchains, like the popular Bitcoin Network [1] or
Ethereum [2], are evolving simultaneously. These blockchains often have
their own native cryptographically-based digital asset or cryptocurrency, like
Bitcoin or Ether. Advanced blockchains like Ethereum support automatically
executed pieces of code, so-called smart contracts, which enable programs to
be developed on these blockchains. In turn, these programs often introduce
additional digital assets, like non-fungible tokens (NFTs).
-----
However, for various reasons, it is often desirable to move digital assets
from one blockchain to another. For example, a user may wish to move
their Bitcoin onto Ethereum to deposit it into various Decentralized Finance
(DeFi) protocols which may allow the user to earn interest on their Bitcoin,
akin to a savings account from a bank (see e.g., [3]). In another situation, a
user may have an Ethereum-based NFT which can be used in simulated races
executed by smart contracts. However, the gas fees – the cost of executing a
transaction on Ethereum – may be prohibitively large, and so the simulated
race may be built on a so-called layer two scaling solution built on top of
Ethereum [4]. Examples of these scaling solutions include rollups (also known
as commit-chains [5]), Plasma [6], or a side-chain (see e.g., [7]). These scaling
solutions have their security tied to Ethereum but are able to reduce the cost
of gas fees, and are therefore more attractive for applications such as an
NFT-based simulated race.
The ability to “move” a digital asset from one blockchain to another
requires a protocol, which can be implemented with the support of smart
contracts. Such a protocol is required to ensure that an asset can only be
used on one blockchain at a time, in order to prevent double spending. Double
spending is when one spends a cryptocurrency token twice [8]. In this case,
the double spend would be one spend of the token on its original blockchain
and one spend of the token on the blockchain it was moved to. However,
since a blockchain is a self-contained system, it is impossible to actually
take a digital asset from one source blockchain and put it on a different
destination blockchain. Instead, a representation of the original asset on the
source blockchain must be created on the destination blockchain. Thus such
a protocol must involve some cross-chain communication (see Figure 1), and
this communication protocol (and its implementation) is called a bridge.
Bridges are complicated protocols and software projects. They can be as
complicated as blockchains themselves (especially if they are a decentralized
protocol), and may be required to be implemented in various languages (one
for each blockchain the bridge interacts with), each with its own nuances.
Moreover, bridges need to unlock or mint digital assets on destination
blockchains. Bridges are therefore responsible for distributing valuable digital
assets, and as a result, have become targets for attackers. In the last year,
over $1 billion USD worth of digital assets have been stolen, incorrectly
minted, or locked in these systems [9]. The result is that some users are
without their digital assets, confidence in cross-chain protocols is shaken,
and protocols fail to operate as promised.
-----
Figure 1: Cross-chain communication illustrated, from the Ethereum
blockchain to another blockchain, “Another Chain.” In this case, the actor
on the left wants to send Ether (ETH), the native digital asset for Ethereum,
to another actor on the right, who uses Another Chain. The dotted line between the blockchains requires cross-chain communication, or a bridge.
In order to prevent issues like this from happening again, a deep understanding of how bridges work is required. In Section 2, we review the
general structure of a bridge. Next, we review various bridge exploits that
have occurred on the components of a bridge. In Section 3, we recount exploits that have or would have targeted parts of the bridges that hold user
assets in custody. Section 4 describes exploits that would have taken advantage of the part of the bridge issuing asset representations on a destination
blockchain. Exploits that abuse the protocol’s cross-chain communication
system are described in Section 5. Finally, Section 6 describes exploits that
arise due to poorly defined digital assets, namely, some ERC-20 tokens. We
review related work in Section 7, and Section 8 concludes.
### 2 Bridge Architecture
We now describe the high level architecture of bridges. Throughout the section, we will assume that the source blockchain for an asset is Ethereum, but
any blockchain that supports smart contracts can be considered without loss
of generality. We also assume that assets from Ethereum are bridged to
-----
another blockchain that supports smart contracts, which we will call “Another
Chain” at times.
A bridge transfers assets from a source blockchain, where the asset is
originally implemented. The digital asset is implemented either natively, as
in the case of Ether (ETH) on Ethereum, or as a smart contract. Example
of smart contract implemented tokens are ERC-20 tokens [10] and ERC721 tokens [11] (i.e., NFTs). The bridge enables unlocking or creating a
representation of this asset on a destination blockchain.
For the destination asset to be a useful representation, the bridge’s representation on the destination blockchain should mimic the behaviour of the
asset on the source blockchain. In particular, the destination representation
should be transferable to any party on the destination blockchain, if this is
a feature of the asset on the source blockchain. Moreover, the bridge smart
contracts on the destination blockchain should accept this representation
from any party on that blockchain, in order to move that asset back to the
original blockchain. This is necessary as otherwise users on the destination
blockchain may not associate the representation with the original asset. For
example, if a user is given a representation of a newly minted representation
of Ether on a non-Ethereum blockchain, but that representation cannot be
traded for Ether on Ethereum, users are not likely to associate the same
value to it or use it in the same way.
We now describe how bridges work. First, bridges have a custodian on the
source blockchain: a smart contract that locks up assets that are deposited
into it. On the destination blockchain, bridges have a debt issuer that can
_create_ (or _mint_) digital representations of tokens for those supported by the
custodian. The custodian signals (e.g., through an Ethereum event) that
a digital asset was received and that the corresponding debt issuer on the
destination blockchain can mint a representation of the asset. As the representation of the asset can be traded for the original asset, the representation
is in fact a debt token. Since each blockchain is a closed ecosystem, a _communicator_ reads the event emission on Ethereum to send a signal for debt
issuance on the destination blockchain. The blockchain is a closed ecosystem because smart contracts are passive; they cannot actively (or regularly)
read from non-transaction data, including data that only exists on other
blockchains. This process is illustrated in Figure 2.
The custodian-debt issuer architecture is designed to avoid double spending of digital assets that have been sent across a bridge; it is important that
bridges mint digital representations only after receiving the true asset
-----
Figure 2: An actor wishes to transfer ETH from Ethereum to Another Chain.
The actor sends their ETH to the bridge Custodian on Ethereum, a smart
contract that accepts the asset. A Communicator waits for the Custodian
to signal that it has received ETH from the user and signals the Debt Issuer
on Another Chain when it detects the event. The Debt Issuer then mints
acETH, the Another Chain representation of ETH.
on the source blockchain. This prevents double spending by only having one
representation of the token freely transferable at a time.
To reverse the process, a user destroys (or burns) the debt token on the
destination chain. The communicator observes the destination chain, looking
for every event corresponding to a burn. When the burn is complete, the
communicator signals the custodian that the asset can now be released on
the source blockchain. This process is illustrated in Figure 3.
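To make the two flows concrete, here is a minimal Python sketch of the bookkeeping just described. It models both chains as in-memory objects and assumes an instantaneous, honest communicator; the class and method names (`Custodian`, `DebtIssuer`, `communicator_step`) are illustrative, not any real bridge’s interface.

```python
# Minimal sketch of the deposit/mint (Figure 2) and burn/release (Figure 3)
# flows. Both "chains" are plain in-memory objects; names are illustrative.

class Custodian:
    """Source chain: locks deposited assets, releases them on burn signals."""
    def __init__(self):
        self.locked = {}   # user -> locked amount
        self.events = []   # emitted events, read by the communicator

    def deposit(self, user, amount):
        self.locked[user] = self.locked.get(user, 0) + amount
        self.events.append(("Deposit", user, amount))

    def release(self, user, amount):
        assert self.locked.get(user, 0) >= amount, "cannot release unlocked funds"
        self.locked[user] -= amount

class DebtIssuer:
    """Destination chain: mints debt tokens on deposits, burns them on exit."""
    def __init__(self):
        self.debt = {}     # user -> debt-token balance
        self.events = []

    def mint(self, user, amount):
        self.debt[user] = self.debt.get(user, 0) + amount

    def burn(self, user, amount):
        assert self.debt.get(user, 0) >= amount, "cannot burn unowned debt"
        self.debt[user] -= amount
        self.events.append(("Burn", user, amount))

def communicator_step(custodian, debt_issuer):
    """Relay each side's pending events to the other side exactly once."""
    while custodian.events:
        _, user, amount = custodian.events.pop(0)
        debt_issuer.mint(user, amount)    # deposit observed -> mint debt
    while debt_issuer.events:
        _, user, amount = debt_issuer.events.pop(0)
        custodian.release(user, amount)   # burn observed -> release asset
```

The invariant that the attacks in the rest of this paper violate is already visible here: the total debt minted must never exceed the total assets locked.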
Blockchain systems that write data to the blockchain from external
sources are called oracles (see Figure 4). An oracle is an agent that fetches
external information into a blockchain ecosystem [12]. Since the communicator of a bridge writes a signal from another blockchain (from the source
blockchain to the destination blockchain, or vice-versa), these components
are in fact oracles. These oracles are used to write a special signal that indicates that a transaction has been executed on another blockchain. Thus
bridges are a combination of two commonly seen structures in the blockchain
space: asset custodians with debt issuers, and oracles.
There are many security considerations for the various components. First,
only messages signed by the communicator should be considered valid from
the point of the custodian and debt issuer. Otherwise, anyone can send such
messages to issue debt or release assets. Decentralised and trust-less bridges
-----
Figure 3: An actor wishes to transfer acETH from Another Chain to
Ethereum. The actor sends their acETH to the Debt Issuer on Another
Chain, a smart contract that accepts the asset and burns it. A Communicator waits for the Debt Issuer to signal that it has received acETH from the
user and signals the Custodian on Ethereum when it detects the event. The
Custodian then unlocks ETH for the user.
Figure 4: Oracles watch a data source to write to an oracle smart contract
on a blockchain, to provide information that cannot be directly queried on
chain. If an oracle is decentralized, it may consist of several oracle nodes, and
further, it may read from multiple data sources or write to multiple oracle
contracts (not pictured).
-----
are possible [13], but often involve running nodes; in this way, they are similar to the blockchains that they are trying to bridge assets between. Second,
as the communicator is effectively an oracle reading from a blockchain data
source, it must take care not to signal the debt issuer incorrectly. The communicator must wait for the transaction depositing assets to be _final_: guaranteed
to be included in the blockchain. A true guarantee may be impossible, as
the blockchain may be re-organized. However, if a transaction is included in a block on the longest chain, then with each further block appended to that chain, the likelihood of a new chain appearing without the transaction approaches zero. Thus the communicator should wait until the
transaction depositing assets into the custodian has a sufficient number of
confirmations — blocks appended to a chain containing the deposit transaction — following it on the source chain (or on the destination chain, if
signalling to the custodian to release assets instead).
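As a sketch of this confirmation rule, the snippet below treats a deposit as final only once a chosen number of blocks follow it. The threshold and the chain accessors (`block_number_of`, `latest_block_number`) are assumptions made for illustration, not a real client API.

```python
CONFIRMATIONS = 30  # illustrative threshold; bridges tune this per chain

def is_final(chain, tx_hash, confirmations=CONFIRMATIONS):
    """Treat a transaction as final once enough blocks have been built on it."""
    included_at = chain.block_number_of(tx_hash)  # assumed accessor
    if included_at is None:
        return False  # not (or no longer) included, e.g. after a reorg
    return chain.latest_block_number() - included_at >= confirmations
```

Section 5.2 returns to what happens when this waiting period is short relative to the cost of rewriting the source chain.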
Other considerations are also important and may impact the bridge’s implementation. Depending on the source and destination blockchains, a user
may not have a one-to-one mapping of identities on both chains. Moreover, a digital asset may have different representations, each with a different
implementation, on each different blockchain.
Not all blockchains have the same address format. Bitcoin addresses are
different from Ethereum addresses, but Ethereum addresses and Polygon
addresses (a scaling solution for Ethereum) share address spaces. The bridge
may wish to publish a message so that anyone on a destination chain can be
issued debt when assets are deposited into the custodian. The message may
be a cryptographic puzzle, so that anyone who solves it can claim it, or only
be decoded by the user who deposited the asset. Advanced communicators
may also have built-in mixers (see e.g., [14]), to anonymize assets as they are
transferred between chains, or other unique features.
Not all blockchains implement digital assets in the same way. Bitcoin
uses the Unspent Transaction Output (UTXO) model, while Ethereum is
account based. As a result, a Bitcoin that has been transferred to Ethereum
will be implemented differently and have different semantics, even if it can
ultimately be redeemed for a Bitcoin on the Bitcoin Network via a bridge.
Finally, since bridges rely on communicators to relay messages, it is imperative that these entities can always write on the blockchains they communicate with. In particular, these entities should be guaranteed to be live
so that a communication between chains will always occur eventually.
Communicators may be required to collect a fee in order to guarantee this
-----
liveness. First, bridges may be required to pay fees to submit transactions on either the source or destination blockchain (or both). For
example, Ethereum transactions require gas fees, paid in Ether, which are
awarded to the block producers for the chain. This provides an incentive
for the inclusion of a transaction in a block and offsets the costs of block
production. On other blockchains, another asset may be used as a gas fee,
like an ERC-20 token. In testing or centralized solutions, gas fees may be
offset by other sources (e.g., the operator of the blockchain). Second, this
fee may offset the cost of operating the communicator software. Running a
node that is required to interact with a blockchain may be non-trivial. For
example, to interface with Ethereum, an RPC endpoint is required which
may be a paid service or a full Ethereum node. The former may charge per read
or write to the blockchain, while the latter may be expensive to keep online
and up-to-date.
### 3 Custodian Attacks
In this section, we review three exploits that targeted the custodian
component of bridges. The first exploit involves changing the privileged
address that can access the digital assets, using cross-chain function calls.
The second exploit aims to forge proofs that are accepted by custodians to
release assets. The third exploit aims to trick the custodian into emitting
deposits when it should not.
#### 3.1 Truncated Function Signature Hash Collisions and Forced Transaction Inclusions
Depending on the structure of the bridge’s custodian, privileged addresses
may have access to the assets in custody. This is common for centralized bridges, and is a requirement when only transactions from a particular whitelisted account are allowed to unlock these assets. This requirement
is the simplest way to ensure that only an appropriate communicator can
unlock funds.
Moreover, if a bridge is built in a modular way, a vault holding the assets
may be separate from the contracts that are written to by the communicator
directly. It may also be desirable that such a vault has its own privileged
administrator.
-----
Figure 5: The structure of a custodian with additional privileged addresses
to manage the custody of assets.
Some bridges have the ability to execute cross-chain function calls. That
is, the bridge can accept transactions on a source blockchain that include
encodings of function calls to be executed on the destination blockchain (or
vice versa). For example, a user may wish to bridge an asset onto another
chain and immediately deposit it into a DeFi application on the destination
chain. This involves calling a deposit function on the destination blockchain,
if the asset is originally on the source blockchain, or vice-versa.
The goal of this exploit is to change a bridge vault’s privileged addresses
to an attacker’s address, using cross-chain function calls.
The setting for this exploit is illustrated in Figure 5. In this situation,
the custodian has an additional field which has special roles, in addition to
receiving instructions from the communicator (possibly via another smart
contract).
With this kind of bridge structure, such an exploit occurred through the
following steps, illustrated in Figures 6 and 7:
-----
1. A bridge is deployed so that anyone can call its cross-chain communication contract, specifying a function to execute. The cross-chain
function call is specified via a truncated hash of the function signature;
this is common for Ethereum [15, 16]. Specifically, the function selector is the first four bytes of the Keccak256 [17] hash of the function’s name and its ordered argument types (Figure 6).
2. The attacker then defines a function such that (a) the function has the same argument types, and (b) when its name and ordered argument types are hashed, the truncated hash is the same as that of the changeCustodyAddress function expected by the custodian. The attacker
specifies a contract that they own with the function signature defined
above. Finally, the attacker specifies this function and its new contract for the cross-chain execution call, resulting in this transaction’s
inclusion on the destination blockchain (Figure 7).
3. The attacker notes that this transaction is now included in the destination blockchain (even if it fails), and as such can now be communicated
back to the custodian on the source blockchain with proof that executing this transaction happened. The transaction is therefore able to be
replayed on the source blockchain, where it succeeds and the attacker
becomes a privileged actor of the vault.
The sources of the error here are steps (2) and (3). Indeed, step (2) should
be nearly impossible, as hash functions are typically assumed to be _collision resistant_. A hash function is collision resistant if finding two inputs to the function that result in the same output is computationally difficult [18]. However, as the next subsection will illustrate, implementation choices
made the attack feasible in one situation. Step (3) is also a problem as a
transaction’s inclusion on the destination blockchain should be insufficient
to replay it on the source blockchain.
**3.1.1** **Real World Example**
This issue was identified in the PolyNetwork bridge [19, 20]. Critically, the
hash collision only required the first 4 bytes of the hashes to match. This
is because the function selector, which decodes the hash, only inspects the
first 4 bytes when choosing which function to call. Thus only a partial hash
-----
```
functionName(bytes,bytes,uint256) → 7dab77d8
```
Figure 6: An example of a cross-chain function call (top) and an example
of the truncated hash of a function (bottom). Anyone can call the cross-chain communication contract, specifying a function to call on any contract
on another chain. The function must be specified by a hash of the function
signature: its name and ordered argument types. An example is shown under
the figure, where 7dab77d8 is the first four bytes of the Keccak256 [17] hash
of functionName(bytes,bytes,uint256). This hash is used to execute a
call from the other end of the bridge.
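The selector computation is easy to reproduce. The sketch below uses pycryptodome’s Keccak-256 (the choice of library is incidental) and makes clear that only four bytes of the 32-byte digest take part in dispatch:

```python
from Crypto.Hash import keccak  # pycryptodome; Ethereum uses Keccak-256

def selector(signature: str) -> str:
    """First 4 bytes of the Keccak-256 hash of a canonical function signature."""
    digest = keccak.new(digest_bits=256, data=signature.encode()).hexdigest()
    return digest[:8]  # 4 bytes = 8 hex characters

print(selector("transfer(address,uint256)"))  # a9059cbb

# Because only 8 hex characters are compared, an attacker searching for a
# partial collision needs on the order of 2^32 candidate function names,
# which is entirely feasible on commodity hardware.
```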
-----
Figure 7: Illustrating which function signatures should match for a signature
collision. The contract specified by the attacker should have the same signature as the function to call the method to change the custodian’s privileged
addresses.
collision was necessary. However, finding the hash collision was necessary,
but not sufficient to accomplish this attack.
In practice, the attack required running a modified communicator component, the PolyNetwork Relayer. First, the attacker sent a transaction to
the destination blockchain, attempting to call the function _on the destination blockchain_ that is a hash collision corresponding to a legitimate transaction to change the custodian’s vault owner. This was communicated to the
destination blockchain, included in the state tree, but ultimately did not execute correctly. This was because the custodian was not on the destination
blockchain, after all; it was on the source blockchain.
However, the transaction was included on the destination chain, with
proof. That is, a transaction signed by the PolyNetwork chain operator
to update the vault owner was available and in the state database for the
destination blockchain. The attacker was then able to force this transaction
to be executed on the source blockchain, so that it was interacting with the
custodian. The proof was verified (since the transaction was included on
the destination chain) and the transaction executed successfully (since it was
now being called on the chain on which it was intended to be called). The
result was that the attacker obtained privileged roles with the vault.
-----
**3.1.2** **Solution**
The mitigation for this attack is to counter step (3). This is because step (2)
is the well-established method for resolving function signatures on Ethereum,
and likely cannot be changed without introducing compatibility issues to a
bridge or a hard fork of Ethereum.
A custodian should validate that a transaction provided to it is legitimate, even if it originates on the destination blockchain. One way
to do this is to ensure that the communicator cannot be bypassed. This
would prevent the inclusion proof from being considered legitimate, as the
attacker would not have been able to submit it to the custodian with the
communicator’s signature or from the communicator’s address.
#### 3.2 Incorrect Proof-of-Burn Verification
Depending on the structure of a bridge’s custodian, proofs are to be presented
to the custodian in order to release assets. This type of mechanism may be
common for decentralized bridges, allowing anyone with a valid proof to
interact with the custodian directly for withdrawals of assets, removing the
need for a centralized communicator.
The goal of this exploit is to craft fraudulent proofs that would be valid
for the verification process, thereby enabling seemingly correct withdrawals.
This kind of exploit could have occurred through the following steps,
illustrated in Figure 8:
1. An actor deposits funds into a custodian smart contract on the source
blockchain.
2. The communicator relays this information to the debt issuer on the
destination blockchain for the bridge, and the debt issuer provides the
actor with a debt token.
3. The actor burns the debt token by depositing it back into the debt
issuer.
4. The actor receives a so-called proof-of-burn for the token. The proof-of-burn is a string generated by the debt issuer showing that the debt
token was burned.
-----
Figure 8: An exploit flow when proof-of-burn messages are not verified correctly.
5. The actor submits a (modified) proof-of-burn to the custodian, to unlock assets on the source blockchain, and the custodian considers the
proof valid.
The source of the error here is step (5), which enables an attacker to
submit a modified proof-of-burn (alongside the original proof-of-burn) to
withdraw funds. The real-world exploit occurred because the proof had a
leading byte that was not verified by the custodian when releasing funds,
which we now describe.
**3.2.1** **Real World Example**
This exploit was detected and patched on the Polygon/Matic Ethereum-Plasma bridge before any harm could be done [21]. We outline the specifics
of how this particular issue manifested itself for completeness, though this
type of exploit may have different manifestations depending on the (incorrect)
implementation of proof generation and verification for a particular bridge.
In this case, the custodian is to release funds if a proof-of-burn for the
debt token is specified in a particular Merkle Patricia trie (see e.g., [22])
-----
representing the state of the destination blockchain. In this case, the proof-of-burn includes a path to the leaf in the Merkle Patricia trie which specifies that the debt token was burned (the transaction should be included when the actor submits it on the destination chain). This proof-of-burn
included a branchMask parameter that should be unique.
The branchMask parameter is encoded with so-called hex-prefix encoding [2]. But at some points within the system implementation, the parameter
is encoded and decoded into 256-bit unsigned integers, and during this process some information is lost. In particular, a path in the Merkle Patricia
trie may have multiple valid encodings within the system.
The system was implemented to determine the path (in a trie) length
encoded by a hex-prefix encoding. To use the encoding’s length, it is important to know if the length of the path is even or odd; this affects how the
encoding is later expanded. The system was implemented to check that the
parameter’s first nibble (4 bits) represented 1 or 3; if so, it considered the
path length to be an odd number. However, it was also implemented such
that in the event that the first nibble is not 1 or 3, the first byte (8 bits) is
discarded but verification proceeds. Thus, there are 2^8 − 2(2^4) = 224 possible ways to encode a path in the Merkle Patricia trie in the situation where the first byte is discarded. In particular, there are 2^8 encodings for every possible bit setting of the first byte, minus the cases where the first nibble is either 1 (2^4 cases; one for every configuration of the last 4 bits) or 3 (again 2^4 cases, one for each configuration of the last 4 bits).
Thus the attacker would simply find a valid proof where the initial nibble was not 1 or 3, use it, and then replay the transaction for each of the remaining 223 combinations of bits for the first byte. In each case, the proof-of-burn would look legitimate and the exit would succeed, subject to delays
in confirmations, delay periods, or other specific requirements of this bridge
and the blockchains it connected.
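The sketch below models the flawed length check; it is an illustrative reconstruction, not the actual Polygon code, and the byte values are made up.

```python
def decode_path(branch_mask: bytes) -> bytes:
    """Illustrative model of the flawed hex-prefix handling."""
    first_nibble = branch_mask[0] >> 4
    if first_nibble in (1, 3):   # odd-length path: first byte is meaningful
        return branch_mask
    return branch_mask[1:]       # otherwise the first byte is DISCARDED

# Any two masks differing only in a discarded first byte decode to the same
# path, so one valid proof yields 2**8 - 2 * 2**4 == 224 accepted encodings.
proof_tail = bytes.fromhex("abcdef")
aliases = [bytes([b]) + proof_tail
           for b in range(256) if (b >> 4) not in (1, 3)]
assert len(aliases) == 224
assert all(decode_path(m) == proof_tail for m in aliases)
```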
**3.2.2** **Solution**
The remedy for this exploit is correct implementation of proof verification.
The original reporter of the issue notes that the first byte should in fact
always be zero, reducing the number of times a valid proof-of-burn can be
used to only once [21].
If the relevant proofs are built and verified correctly, this exploit will not be feasible, subject to standard cryptographic assumptions like the collision
-----
resistance of hash functions and the inability to forge digital signatures.
#### 3.3 Inconsistent Deposit Logic
Bridges are often built for custom blockchains. For example, anyone developing a rollup may have a token that is used for governance or as payment for gas on the rollup (instead of ETH). As a result, sometimes
bridges have custom functionality for some tokens.
Moreover, a token can be “wrapped” within another token. Most commonly, Ether (ETH) is wrapped into wrapped Ether (wETH). This is helpful because some decentralized applications do not wish to treat Ether differently from ERC-20 tokens, and wrapped Ether is an ERC-20 token; native ETH lacks a `transferFrom` function, among other useful functions that are available to ERC-20 tokens. As a result, there is a wrapped Ether smart contract on the source blockchain that essentially lets anyone lock one ETH to mint one wETH.
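As a toy model of this wrapping contract (a sketch only, far simpler than the real wETH implementation):

```python
class WrappedEther:
    """Toy wETH: lock 1 ETH to mint 1 wETH; burn wETH to unlock the ETH."""
    def __init__(self):
        self.locked_eth = 0
        self.balances = {}  # user -> wETH balance

    def wrap(self, user, amount):
        self.locked_eth += amount  # ETH taken into the contract's custody
        self.balances[user] = self.balances.get(user, 0) + amount

    def unwrap(self, user, amount):
        assert self.balances.get(user, 0) >= amount
        self.balances[user] -= amount
        self.locked_eth -= amount  # ETH released back to the user
```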
The goal of this exploit is to trick the custodian into emitting events for
deposits which are not real.
This kind of exploit occurred through the following steps, illustrated in
Figure 9. It is fairly restricted in scope and requires special tokens, like
wrapped Ether, to be handled differently than unwrapped assets.
1. The bridge is established in such a way that its final logic for emitting
deposit events is after processing of wrapped assets. The bridge is
also (incorrectly) built so that unwrapped assets allow this logic to be
called, without actually supporting the transfer of those assets.
2. An attacker deposits assets into the custodian, without first wrapping
the assets.
The second step is the source of the issue, and is exemplified in the real-world manifestation we now describe.
**3.3.1** **Real World Example**
This error occurred for the Meter bridge [23]. In this occurrence, the bridge
expected all assets to be transferred in a wrapped form, and assumed a deposit of unwrapped assets was a mistake. However, the deposit of unwrapped
-----
Figure 9: Two separate paths to deposit into a custodian contract.
assets was encoded within the same event logic that accepts wrapped assets, even though unwrapped assets were not accepted by the custodian. As
a result, the custodian still emitted an event saying that funds had been
transferred, even though the custodian never received them. That is, the
caller continued to own their assets, but the custodian still emitted an event.
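In pseudocode, the bug amounts to emitting the deposit event outside the branch that actually takes custody of the funds. The sketch below is an illustrative reconstruction, not the Meter bridge’s code; `pull_wrapped`, `pull_native`, and `emit` are assumed custodian helpers.

```python
def flawed_deposit(custodian, user, token, amount):
    if token == "wETH":
        custodian.pull_wrapped(user, amount)  # custody actually transferred
    elif token == "ETH":
        pass                                  # BUG: native ETH never taken
    custodian.emit("Deposit", user, amount)   # ...but the event fires anyway

def fixed_deposit(custodian, user, token, amount):
    if token == "wETH":
        custodian.pull_wrapped(user, amount)
    elif token == "ETH":
        custodian.pull_native(user, amount)   # take custody, or revert
    else:
        raise ValueError("unsupported token")
    custodian.emit("Deposit", user, amount)   # the event now implies custody
```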
**3.3.2** **Solution**
This particular attack is not conceptually involved. Its mere existence was
enabled by a bug in the code and the branching logic. It serves as a reminder
to the bridge developers that Ethereum’s native Ether is not an ERC-20
token, and both the cases of transferring Ether and its wrapped form need
to be handled properly. Good engineering practices, including implementing
tests, should suffice to mitigate the problem in the future.
### 4 Debt Issuer Attacks
In this section we review one exploit on the debt issuer component of a
bridge. The exploit aims to arbitrarily mint debt tokens on the destination
blockchain.
#### 4.1 Bypassing Signature Verification
The exploit aims to arbitrarily mint debt tokens on the destination
blockchain. In doing so, the attacker can trade these tokens back in, honestly,
-----
Figure 10: Components required for debt token issuance.
and receive the corresponding assets on the source blockchain, as long
as such assets are available, or for other assets on the destination blockchain.
Recall that the debt issuer smart contracts live on the destination blockchain,
mint debt tokens which are representations of assets on the source blockchain,
and receive minting signals from a communicator (see Figure 10).
In the most straightforward implementation of debt issuers, these components mint tokens only after receiving a signed message from a communicator. This prevents unwanted tokens from being minted on the destination
blockchain. To check the validity of such a signature, verification logic may
be placed in a smart contract which is external to the contract issuing the
debt tokens on the destination chain (see Figure 11 for an example). Moreover, if there are several communicators in a bridge, each might have its
own verification logic, and modularising this logic may make sense from an
engineering standpoint. This would enable verification of signatures from
multiple sources, each with its own verification scheme. When a message is
received in this situation, it could therefore include the address of the verification logic to be used. The logic for determining which verification contract
should be used must be matched to the message, and including the address
of a contract that implements the verification logic is a straightforward
-----
Figure 11: Modularized signature verification logic in a debt issuer.
implementation. However, problems arise if matching allows messages to be
matched with arbitrary verification logic, as we now describe.
This exploit was executed using the following steps, illustrated in Figure 12:
1. An attacker deploys a smart contract on the destination blockchain that
has a function that the debt issuer expects to call to verify a signature.
The function is implemented so that any signature is “verified”, possibly by implementing the verification function so that it always returns `true`. This verification contract therefore accepts any string as a valid signature.
2. The attacker from step (1) sends a debt issuance signal to the debt
issuer, referencing the smart contract they deployed in step (1) as the
verification logic.
3. The debt issuer provides debt tokens to the attacker.
The source of the issue here is in steps (2) and (3). The debt issuer
should not have accepted just any smart contract as verification logic. After
the attacker gains the debt tokens, they can behave honestly to bridge the
assets back to the source blockchain, stealing funds from the custodian.
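The heart of the flaw fits in a few lines. In the sketch below (illustrative names, not Wormhole’s code), the flawed debt issuer calls whatever verifier the caller supplies:

```python
def always_true_verifier(message, signature) -> bool:
    return True  # the attacker's "verifier" accepts any signature

def flawed_issue(debt_issuer, message, signature, verifier):
    # BUG: the caller chooses the verifier, so the check proves nothing.
    if verifier(message, signature):
        debt_issuer.mint(message.recipient, message.amount)

def fixed_issue(debt_issuer, message, signature):
    # Fix: the verifier is a fixed, trusted component of the bridge.
    if debt_issuer.trusted_verifier(message, signature):
        debt_issuer.mint(message.recipient, message.amount)
```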
-----
Figure 12: Changing the signature verification logic in a debt issuer to mint
debt tokens.
**4.1.1** **Real World Example**
Unfortunately, this situation was exploited on the Wormhole bridge [24]. The
result was that about 120,000 Ether was minted on Solana, which was worth
about $323 million USD at the time the exploit occurred. Much of this Ether
was transferred back to Ethereum.
**4.1.2** **Solution**
The example in Section 4.1.1 was enabled by the attacker’s ability to provide
both the signature (used to confirm the authenticity of the transaction) and
the reference to the signature verifier (used to confirm the signer’s authorization to issue the transaction) within the user transaction. As a result,
the attacker was able to authorize any calls via a friendly custom verifier.
Therefore, a clear prevention of the attack is ensuring that verifiers cannot
be provided by users. Verifiers need to be absolutely trusted elements of the system, and as such, must be deployed only by trusted entities; users must not be given the option to choose a dishonest verifier to authorize their transaction.
-----
### 5 Communicator Attacks
In this section we review two exploits targeting the communicator component
of a bridge. The first exploit aims to trick the communicator into forwarding
invalid messages from one blockchain to the next, while the second uses a
51% attack on a blockchain to cause a blockchain re-organization after the
communicator receives a valid message. These exploits can be thought of as
polluting the data source of an oracle, the communicator.
#### 5.1 Forwarding Invalid Messages
The goal of this exploit is to trick the communicator into forwarding invalid messages from the source blockchain. This will result in incorrect debt
issuance on the destination blockchain, minting debt tokens that are not
mapped to assets in custody on the source blockchain.
The exploit proceeded according to the following steps, illustrated in Figure 13:
1. The bridge is established in such a way that its communicator watches
events emitted from the source blockchain. The communicator watches
for these events on transactions that deal with a particular address,
namely, the address of the custodian for the bridge. Notably, it watches
all events in such a transaction.
2. An attacker creates a smart contract on the source blockchain. This
smart contract has a method to interact (correctly) with the custodian
address, but also emits an event that is identical to the one emitted by
the custodian smart contract, immediately before or after interacting
with the custodian smart contract. The event emitted by the attacker’s
contract contains parameters that appear correct to the communicator.
3. The attacker interacts with the custodian (correctly), via the smart
contract deployed in step (2).
The source of the error here is step (1), which has an incorrect implementation of a communicator. In particular, the communicator watches for
_all_ events on a transaction that interacts with the custodian’s smart contract, and parses them if they look legitimate. In turn, it sends signals to
the debt issuer for each event it detected. However, because the communicator watched all events in the same transaction, regardless of which contract
-----
Figure 13: An illustration of using fake events on the Ethereum blockchain
to exploit a naive bridge communicator.
emitted them, the events emitted by the contract deployed by the attacker
in step (2) are also parsed by the communicator. The result is that the debt
issuer mints more debt tokens than exist in the custodian, and the bridge
has failed to faithfully map the digital asset.
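The difference between a flawed and a correct filter is small. A sketch, assuming each log exposes the emitting contract’s `address` and an event `topic` (the field names are illustrative):

```python
CUSTODIAN_ADDRESS = "0x..."                  # the custodian's known address
DEPOSIT_TOPIC = "Deposit(address,uint256)"   # illustrative event topic

def flawed_filter(tx_logs):
    # BUG: accepts every Deposit-shaped log in the transaction,
    # regardless of which contract emitted it.
    return [log for log in tx_logs if log.topic == DEPOSIT_TOPIC]

def fixed_filter(tx_logs):
    # Only logs emitted by the custodian contract itself are trusted.
    return [log for log in tx_logs
            if log.topic == DEPOSIT_TOPIC
            and log.address == CUSTODIAN_ADDRESS]
```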
**5.1.1** **Real World Example**
This exploit occurred with the pNetwork pBTC-on-BSC, that is, the
pNetwork-wrapped Bitcoin (pBTC) on Binance Smart Chain bridge [25].
The bridge operators themselves, pNetwork, examined the impact and determined that both legitimate and fraudulent withdrawal-request logs were processed by the communicator due to a bug in the code. Bridge execution was paused when the attack was detected, to minimize the loss of funds.
-----
**5.1.2** **Solution**
A communicator watching events is a core part of the off-chain application
logic of the bridge. This can have many forms, depending on the specific
implementation of the communicator. In order to mitigate this attack, the
developers of the communicator have to ensure that only the events emitted
by the custodian smart contract are watched and acted upon. This may be
a non-trivial task as event logs in the Ethereum protocol are not equipped
with any cryptographic means of authentication (such as digital signatures).
Moreover, these event logs are organized and associated to blocks quite specifically using so-called Bloom filters [26] so that they remain quickly accessible
when searching the blocks (see [2] for details). A thorough understanding
of the stack and libraries used for developing the communicator is crucial
in order to ensure that only the relevant events that cannot be spoofed are
taken into account before issuing assets on the destination chain. This is a
generally applicable rule for all privileged activities that the communicator
performs based on events that it listens to, i.e., it may apply to much more
than just issuing assets.
#### 5.2 Short Term 51% Attacks on the Source Blockchain
The goal of this exploit is to trick the communicator into forwarding messages
that will disappear after a re-organization of the source blockchain. Again,
this will result in incorrect debt issuance on the destination blockchain, minting debt tokens that are not mapped to assets in custody on the source
blockchain.
A blockchain is rarely a real “chain.” Instead, actors who propose new
blocks onto a blockchain are competing for their block to be included. In a
so-called proof-of-work (see e.g., [27]) system like Ethereum or Bitcoin, proposers must solve a cryptographic puzzle for this right; proposers who solve
this problem are miners. Moreover, rules like heaviest computation dictate
which blocks should be considered the so-called canonical chain, onto which
new blocks should be appended [28]. Along the way, forking of the chain
occurs and some blocks are orphaned from the canonical chain. However,
colluding actors can influence which chain satisfies the rules for canonicality, if they have sufficient computational power. This is the
basis of a so-called 51% attack, as it requires a majority of the computational
power (among all miners) to execute.
-----
A 51% attack is typically expensive. However, the cost of the attack
is tied to the computational power in the network and the duration of the
attack. More computational power and a longer attack duration increase the
cost, while shorter attacks are cheaper. For blockchains which use different
methods to append blocks, like those which use so-called proof-of-stake (see
e.g., [27]) consensus algorithms, the cost of this attack will depend on different factors. For example, proof-of-stake systems may have an increased risk of this attack if there are too few validators who propose blocks, if the validators can be bribed or collude, or if the stake required of validators is too low.
The exploit would proceed according to the following steps:
1. An attacker honestly deposits funds into a bridge via the source
blockchain’s custodian, which issues debt on the destination blockchain.
2. After waiting a small-but-not-too-small amount of time (enough for several confirmations of the transaction, perhaps about 5; this is about 15
additional blocks on Ethereum at the time of writing), the attacker
rents computational power to enact a moderately long 51% attack
(about 1 hour). This attack establishes another chain which is canonical after the attack but does not have the attacker’s transaction from
step (1) included on the chain.
The source of the exploit here is contained in step (2), where the attacker
re-organizes the source blockchain but also has the issued funds on the destination blockchain. This attack is a specific instance of double-spending a
token [8].
**5.2.1** **Real World Example**
This exploit has not yet been executed (on Ethereum). At the time of writing, the website Crypto51.app [29] reports that a 1-hour 51% attack on Ethereum would cost $600,374 USD. This value is lower for
chains that have less computational power associated with them. Nevertheless, as the real-world examples of bridge attacks referenced in this paper
demonstrate, the attacker’s profit often exceeds this number.
-----
**5.2.2** **Solution**
The inherent cause of the attack is the bridge’s incorrect assumptions about
the finality of blocks. The bridge needs to ensure that if a reorganization of
the source chain happens and the deposit transaction becomes invalidated,
the same invalidation happens on the target chain. This is a difficult task
for bridges that are not implemented and operated natively by the target
blockchain itself, and reside on it in the form of a third party application.
The native bridges may implement a mechanism that keeps track of deposit
nonces and the total bridged value on the source chain, and subsequently
require “committing” the nonce and value sequence in the transactions that
release the assets from the custodian. The nonces would have to have a
fixed sequence that prohibits skipping (e.g., integers that increment by 1
with every deposit) so that they guarantee that if a deposit transaction is
dropped, or its value changes, all the subsequent deposits to the target chain
and withdrawals from it become invalid (i.e., result in failed transactions).
Consequently, the bridge would need to ensure that such a reorganization is
properly reflected on the target chain so that the users whose assets were now
not deposited to the target chain or withdrawn have their balances adjusted
accordingly, and the integrity of the asset amounts between the two chains
is not violated.
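A sketch of that nonce discipline, assuming each relayed deposit carries an integer nonce that must increase by exactly one:

```python
class NonceCheckedIssuer:
    """Debt issuer that refuses out-of-sequence deposits (sketch)."""
    def __init__(self):
        self.expected_nonce = 0
        self.balances = {}

    def apply_deposit(self, nonce, user, amount):
        if nonce != self.expected_nonce:
            # A gap means an earlier deposit was dropped or altered
            # (e.g., by a reorg): fail closed until reconciled.
            raise RuntimeError(
                f"expected nonce {self.expected_nonce}, got {nonce}")
        self.expected_nonce += 1
        self.balances[user] = self.balances.get(user, 0) + amount
```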
It is also important to note that the roles of a source and target chain are to a certain extent symmetric: a chain that is in the role of a source
during the deposit may be in the role of a target during the withdrawal.
While a bridge may be a native component of one of the chains and may
ensure that the chain can respond to the reorganization of another chain, it
is unlikely that it would be able to guarantee such a response for both
the chains, in particular, for the Ethereum mainnet.
The prevention of the attack described in this section is a difficult problem
and it would make for a great subject for future research.
### 6 Token Interface Attacks
In this section we describe some exploits based on the token interfaces used
in bridges. The first exploit relates to token approvals for bridges, while the
second exploits the EIP-2612 [30] interface function built into some ERC-20
tokens.
-----
#### 6.1 Infinite Approvals and Arbitrary Executions
The goal of this exploit is to take users’ funds directly, rather than stealing
them from the custodian or debt issuer, by leveraging bridge components
which can call other smart contracts.
A valuable use case of ERC-20 tokens is the ability to approve others to
spend your tokens. For example, you may wish to approve a bridge to spend
your tokens, so that in the future, if you use a decentralized application to interact with the bridge, it can take funds on your behalf. This is achieved by having
the user call `approve` (possibly specifying some specific amount) on the token, listing the bridge’s relevant smart contract address, and later having the bridge call `transferFrom` to take funds from the user. The latter call will
only succeed if the user has approved the bridge to act on the user’s behalf.
Due to gas costs, users often grant applications and bridges _infinite approvals_. This is because the `approve` call is a transaction that must be executed on chain, for which gas must be paid. As a result, an infinite
approval removes the requirement of subsequent approval calls, saving the
user gas fees in the future.
Moreover, recall from Section 3.1 that, due to the composability of smart contracts (especially popular in DeFi applications), bridges often call other smart contracts directly. To do this, a bridge may have an arbitrary execution function `execute` which takes an ABI-encoded description of the function to call (see also Section 3.1 and [16]).
This exploit proceeded according to the following steps, illustrated in
Figure 14:
1. A user provides a bridge that can call smart contracts with an infinite
approval to a token.
2. An attacker calls `execute` with an encoding of `transferFrom` to take
the honest user’s tokens, rather than their own. This succeeds since
the bridge executes the `transferFrom` call, and it has approval to take
the user’s tokens. However, since the attacker initiated the call, the
debt issuer on the destination chain issues the debt in the name of the
attacker (see the sketch below).
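Expressed with the toy Token model above, the whole attack is a few lines; the `bridge_execute` helper is a hypothetical stand-in for the bridge's generic `execute` entry point:

```python
# Continuing the toy model: a generic `execute` lets anyone make calls
# *as the bridge*, which holds the victim's infinite approval.
INFINITE = 2**256 - 1

token = Token()
token.balances["user"] = 100
token.approve("user", "bridge", INFINITE)        # step 1: infinite approval

def bridge_execute(fn, *args):
    # The bridge blindly performs the requested call, as itself.
    return fn("bridge", *args)

# Step 2: the attacker has the bridge pull the user's tokens to itself;
# the destination chain then issues the resulting debt to the attacker.
bridge_execute(token.transfer_from, "user", "bridge", 100)
assert token.balances["bridge"] == 100           # the user's funds are gone
```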
Figure 14: An illustration of exploiting bridges with arbitrary execution with
infinite approvals from users.

The source of the error for this exploit is in step (2), in which the debt
is incorrectly issued to the wrong party. After the attacker receives the debt
token on the destination blockchain, they can bridge the asset back to the
source blockchain and withdraw the funds. The user is now powerless to
recover those tokens.
**6.1.1** **Real World Example**
This exploit was possible on an earlier version of the Multichain (formerly
“AnySwap”) project [31]. The attack vector was documented before it was
exploited, and the finders were awarded a $1,000,000 USD bounty for finding
and reporting the issue, which they first demonstrated on a local fork of
Ethereum. At the time the exploit was reported, almost 5,000 accounts had
granted infinite approval to the bridge in question.
**6.1.2** **Solution**
As the vulnerability is enabled by the bridge issuing debt tokens to a user
who did not supply the tokens on the source chain, one possible remedy is
to ensure that the debt is always issued to the account that provided the
tokens on the source chain. However, this strongly limits the design of the
bridge and may cause problems when bridging tokens in the custody of a
smart contract. A specific concern with this solution is the use of a so-called
_multisig wallet_ (see, e.g., [32]): a smart contract that holds tokens on behalf
of multiple users whose joint signatures are required to release such tokens.
Such a smart contract may be available on the source chain; however, due to
how smart contract addresses are determined (see [2]), the same multisig
wallet may not be available on the target chain. Other measures, such as
disallowing generic calls to functions such as `execute`, may negatively impact
the required features of the bridge, and thus do not appear viable without
disrupting the business logic. A minimal version of the first remedy is sketched below.
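In the toy model's terms, the first remedy amounts to one rule in the debt issuer (a hypothetical sketch, not any specific bridge's code): mint the debt to the account whose tokens were actually pulled, ignoring who initiated the cross-chain call.

```python
# Hypothetical sketch: always credit the account that supplied the tokens.
def issue_debt(debt_ledger, funds_source, call_initiator, amount):
    # `call_initiator` is deliberately ignored: even if an attacker triggered
    # the transfer, the debt token is minted to the funds' source account.
    debt_ledger[funds_source] = debt_ledger.get(funds_source, 0) + amount
```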
#### 6.2 Permits and Non-Reverting Fallback Functions
As in Section 6.1, the goal of this exploit is to take users’ tokens directly,
rather than stealing them from the custodian or debt issuer, by leveraging
bridge components which can call functions in poorly implemented token
contracts.
Some ERC-20 tokens have a permit function, which enables a user to sign
a message enabling others to use one’s tokens; these implement EIP-2612 [30].
A message is signed using the account’s private key. These messages are not
transactions (they are not executed on-chain), and do not use gas; as a result,
they are attractive in some settings as they are free for users. Once someone
has obtained a signed permit message, they can go to the smart contract for
the token and call a function to verify the signed message. If the verification
succeeds – that is, a verify function for the permit does not revert – the
holder of the permit is approved to use the signer’s tokens (e.g., via the
`transferFrom` function). A function reverts on Ethereum if it runs out of
gas, an error occurs, or an assertion (written as a `require` or `assert`) fails
in the code being executed. The permit holder can obtain the approval by
calling a function `redeemPermit`, “using up” the permit. The fact that
the verify permit function is expected to revert if the verification fails is key,
as we explain next.
The expectation that a function reverts is problematic when developers
are not aware of it. Smart contracts on Ethereum have a `fallback`
function, which is called whenever a function that is supposed to
be called on a smart contract cannot be found (i.e., it is not implemented).
If an ERC-20 smart contract implements a fallback function that does not
revert, then every time a function that does not exist within the implementation
is called, the call will succeed. This can be problematic, as we now exemplify
in this exploit.
The exploit proceeded according to the following steps, illustrated in Figure 15:
1. A user wishes to bridge tokens to another chain from Ethereum where
(a) the bridge supports permit redemption for tokens, and (b) the
bridge has custody of at least one token that does not implement the
EIP-2612 permit functions but implements a fallback function that
does not revert.
2. The user gives the bridge infinite approvals (and may or may not successfully send some tokens over the bridge).
3. An attacker asks the bridge to redeem a string, claiming it is a permit,
for approval on the token used by the honest user in steps (1) and
(2). The bridge attempts to verify the supplied string; as the verify
function is not implemented for any permit but the fallback function
never reverts, the bridge accepts the permit. Next, the bridge
contract calls the redeem function for the permit, which is not implemented
either, but since the fallback function never reverts, the
bridge believes it has succeeded in obtaining the approval. The bridge
then calls `transferFrom` to move the user’s tokens to itself on the
attacker’s behalf, and issues debt in the attacker’s name (see the sketch below).
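The failure mode can be reproduced with a few lines of Python mimicking dynamic dispatch with a non-reverting fallback (the names are ours; this is not EVM code):

```python
# Toy model: a token whose "fallback" swallows calls to missing functions.
class TokenWithFallback:
    def call(self, fn_name, *args):
        if hasattr(self, fn_name):
            return getattr(self, fn_name)(*args)
        return None  # fallback: unknown call "succeeds" instead of reverting

def bridge_redeem_permit(token, permit):
    token.call("verifyPermit", permit)  # not implemented -> fallback, no error
    token.call("redeemPermit", permit)  # same: the bridge believes it worked
    # ... the bridge now calls transferFrom and issues debt to the caller ...

bridge_redeem_permit(TokenWithFallback(), "not-a-real-permit")  # no exception
```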
As in Section 6.1, the source of the error for this exploit is in step (3),
in which the debt is incorrectly issued to the wrong party. After the attacker
receives the debt token on the destination blockchain, they can bridge the
asset back to the source blockchain and withdraw the funds. The root cause,
however, is different: the bridge was not implemented to check that the verify
function, rather than the fallback function, was actually executed.
Figure 15: An illustration of exploiting bridges supporting tokens with the
`permit` functionality.
**6.2.1** **Real World Example**
The situation above was identified as a feasible bug in the deprecated Polygon
Bridge Zap [33]. The operators had fixed the issue in subsequent versions even
before it was publicly discovered, but because a vulnerable version was already
deployed on the blockchain, the exploit still happened. To mitigate it, the
bridge operators submitted transactions to withdraw the funds themselves, but
were front-run by an arbitrage bot. The bot operator, however, later returned
the front-running profits upon learning that the withdrawal transaction had
been submitted in good faith. The vast majority of funds were subsequently
held in escrow by the Polygon team, resulting in no significant loss of funds
from the bridge.
**6.2.2** **Solution**
The core of the vulnerability lies in the fact that the bridge calls a
non-existent function on a token to redeem the permit, and that the token’s
implementation allows calls to non-existent functions without reverting. To
eliminate the vulnerability, the bridge needs to break this condition. This means
that the bridge needs to be aware of the implementation of the token, and
must avoid attempting to use the permit logic if the token does not implement it,
or does not implement it correctly. As the Ethereum blockchain does not
allow checking a contract’s interface and implementation on-chain, a list of tokens
maintained by the bridge operator, or some off-chain logic, would have to
be used to detect whether the permit mechanism should be available, as sketched below.
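A sketch of this mitigation might look as follows (the allowlist and the function names are hypothetical):

```python
# Hypothetical sketch: operator-curated list of tokens known to implement
# EIP-2612; permit logic is attempted only for those tokens.
PERMIT_CAPABLE_TOKENS = {"0x...token_with_real_eip2612_support"}

def redeem_permit_safely(token_address, permit):
    if token_address not in PERMIT_CAPABLE_TOKENS:
        raise RuntimeError("token not known to implement EIP-2612; refusing")
    # ... only now invoke the token's permit verification and redemption ...
```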
### 7 Related and Future Work
To our knowledge, there is no systematization of the attacks that have occurred
on bridges in recent years. Others have studied these components, but they
largely do so from a theoretical point of view, rather than reviewing the
faults of deployed systems. We first list a few of the relevant works.
McCorry et al. [34] review the literature involving bridges and provide
a detailed breakdown of roles and components. Although their terminology
differs from ours, the key concepts they identify for bridges are compatible
with the major components we define in Section 2. Their work provides
a more fine-grained overview; for example, they dive into concepts like the
operator of a communicator (i.e., whether it is centralized, involves a multi-signature wallet, or is purely trustless), protocol assumptions (e.g., expanding
properties beyond liveness), and things like rate-limiting transactions for the
bridge. They offer research directions for improvements based on various
assumptions and dilemmas that bridges aim to solve or sidestep. However,
their threat models are high level and theoretical; they are not reviews of
attacks that have occurred in practice. For example, they emphasize the
need to prevent censorship of communicators. This is critical for bridge
design and complementary to the concrete issues we review in this
work. Censorship is discussed somewhat less formally for layer-two solutions in [35].
Zamyatin et al. [36] study communication between blockchains, which is necessary for the communicator component of a bridge. They examine which
assumptions are required, classify and evaluate existing cross-chain communication protocols, and generalise the systems we describe in this work. They
also list challenges that must be overcome for safe and effective cross-chain communication. These challenges show that cross-chain communication is difficult
and indicate why bridges are so complex and therefore prone to implementation errors.
For future work, it would be interesting to study preventative measures for
these attacks. While many of the attacks presented are implementation-specific,
we wonder whether a general framework or set of standards could help mitigate these
issues. In particular, custodian or debt issuer standards may reduce errors
with cross-chain calls or decrease erroneous event emissions. Such a standard
could be an interface akin to the ERC-20 or ERC-721 standards.
Moreover, a wish list of specific security properties should be made explicit. The related work targets high-level properties, but these are insufficient
to guide new bridge developers. High-level properties like liveness may
be fairly obvious, but many of the attacks reviewed in this paper may fade
into obscurity and become unknown to new community members. We fear
that new members may repeat these mistakes, and hope that guidelines for
bridge construction can be established that would improve the quality of
future bridges.
### 8 Conclusion
We explained several bridge attacks and suggested mitigations for most of
them. Our work should not be seen as an exhaustive list of security issues to
prevent, but rather as a survey of principles that were exploited in the past
and that bridge developers should keep in mind, and as an illustration of the
complexity of bridge systems and the size of the attack surface they expose.
**Acknowledgement.** The survey and research of bridge attacks was started
at Quantstamp’s Winter 2021 Research Retreat. The vulnerabilities, attacks,
possible remedies, and solutions were discussed over time with
many security researchers and engineers at Quantstamp. The authors would
like to thank especially their colleagues Kacper Bak, Sebastian Banescu,
Mohsen Ahmadvand, and Marius Guggenmos for their insight and fruitful
input that was incorporated in this paper.
### References
[1] Satoshi Nakamoto. Bitcoin: A peer-to-peer electronic cash system, 2008. https://bitcoin.org/bitcoin.pdf.

[2] Gavin Wood. Ethereum: A secure decentralised generalised transaction ledger, 2014. Ethereum Project Yellow Paper, https://ethereum.github.io/yellowpaper/paper.pdf.
[3] Johannes Rude Jensen, Victor von Wachter, and Omri Ross. An introduction to decentralized finance (DeFi). Complex Syst. Informatics Model. Q., 26:46–54, 2021.

[4] Lewis Gudgeon, Pedro Moreno-Sanchez, Stefanie Roos, Patrick McCorry, and Arthur Gervais. SoK: Layer-two blockchain protocols. Cryptology ePrint Archive, Paper 2019/360, 2019. https://eprint.iacr.org/2019/360.

[5] Rami Khalil, Alexei Zamyatin, Guillaume Felley, Pedro Moreno-Sanchez, and Arthur Gervais. Commit-chains: Secure, scalable off-chain payments. Cryptology ePrint Archive, Paper 2018/642, 2018. https://eprint.iacr.org/2018/642.

[6] Joseph Poon and Vitalik Buterin. Plasma: Scalable autonomous smart contracts, 2017. https://www.plasma.io/plasma.pdf.
[7] Aggelos Kiayias and Dionysis Zindros. Proof-of-work sidechains. In Andrea Bracciali, Jeremy Clark, Federico Pintore, Peter B. Rønne, and Massimiliano Sala, editors, Financial Cryptography and Data Security - FC 2019 International Workshops, VOTING and WTSC, St. Kitts, St. Kitts and Nevis, February 18-22, 2019, Revised Selected Papers, volume 11599 of Lecture Notes in Computer Science, pages 21–34. Springer, 2019.

[8] Mubashar Iqbal and Raimundas Matulevicius. Exploring sybil and double-spending risks in blockchain systems. IEEE Access, 9:76153–76177, 2021.

[9] Lily Hay Newman. Blockchains have a ‘bridge’ problem, and hackers know it, 2022. https://www.wired.com/story/blockchain-network-bridge-hacks/.

[10] Fabian Vogelsteller and Vitalik Buterin. ERC-20 token standard, 2015. https://github.com/ethereum/EIPs/blob/master/EIPS/eip-20.md.

[11] William Entriken, Dieter Shirley, Jacob Evans, and Nastassia Sachs. ERC-721 token standard, 2018. https://github.com/ethereum/EIPs/blob/master/EIPS/eip-721.md.

[12] Shahinaz Kamal Ezzat, Yasmine N. M. Saleh, and Ayman A. AbdelHamid. Blockchain oracles: State-of-the-art and research directions. IEEE Access, 10:67551–67572, 2022.

[13] Drew Stone. Trustless, privacy-preserving blockchain bridges. CoRR, abs/2102.04660, 2021.

[14] Ferenc Béres, István András Seres, András A. Benczúr, and Mikerah Quintyne-Collins. Blockchain is watching you: Profiling and deanonymizing ethereum users. In IEEE International Conference on Decentralized Applications and Infrastructures, DAPPS 2021, Online Event, August 23-26, 2021, pages 69–78. IEEE, 2021.

[15] Hideyoshi Moriya. How to get ethereum encoded function signatures, 2018. https://piyopiyo.medium.com/how-to-get-ethereum-encoded-function-signatures-1449e171c840.
[16] Contract ABI specification (revision e14f2714), 2021. https://docs.soliditylang.org/en/v0.8.15/abi-spec.html.

[17] Guido Bertoni, Joan Daemen, Michaël Peeters, and Gilles Van Assche. Keccak. IACR Cryptol. ePrint Arch., page 389, 2015.

[18] Alfred Menezes, Paul C. van Oorschot, and Scott A. Vanstone. Handbook of Applied Cryptography. CRC Press, 1996.

[19] Yufeng Hu, Siwei Wu, Lei Wu, and Yajin Zhou. The initial analysis of the Poly Network attack, 2021. https://blocksecteam.medium.com/the-initial-analysis-of-the-polynetwork-hack-270ac6072e2a.

[20] Yufeng Hu, Siwei Wu, Lei Wu, and Yajin Zhou. The further analysis of the Poly Network attack, 2021. https://blocksecteam.medium.com/the-further-analysis-of-the-poly-network-attack-6c459199c057.

[21] Gerhard Wagner and Immunefi. Polygon double-spend bugfix review - $2m bounty, 2021. https://medium.com/immunefi/polygon-double-spend-bug-fix-postmortem-2m-bounty-5a1db09db7f1.

[22] Sota Sato, Ryotaro Banno, Jun Furuse, Kohei Suenaga, and Atsushi Igarashi. Verification of a Merkle Patricia tree library using F. CoRR, abs/2106.04826, 2021.

[23] Collin Adams. Breaking down the Meter hack, 2022. https://blog.chainsafe.io/breaking-down-the-meter-io-hack-a46a389e7ae4.

[24] Dan Goodin. How $323m in crypto was stolen from a blockchain bridge called wormhole, 2022. https://arstechnica.com/information-technology/2022/02/how-323-million-in-crypto-was-stolen-from-a-blockchain-bridge-called-wormhole/.

[25] pNetwork. pNetwork post mortem: pBTC-on-BSC exploit, 2021. https://medium.com/pnetwork/pnetwork-post-mortem-pbtc-on-bsc-exploit-170890c58d5f.

[26] Burton H. Bloom. Space/time trade-offs in hash coding with allowable errors. Commun. ACM, 13(7):422–426, 1970.
[27] Yang Xiao, Ning Zhang, Wenjing Lou, and Y. Thomas Hou. A survey of distributed consensus protocols for blockchain networks. IEEE Commun. Surv. Tutorials, 22(2):1432–1465, 2020.

[28] Richard Ma, Jan Gorzny, Edward Zulkoski, Kacper Bak, and Olga V. Mack. Fundamentals of Smart Contract Security. Momentum Press, 2019.

[29] Crypto51. https://www.crypto51.app/.

[30] Martin Lundfall. EIP-2612: permit - 712-signed approvals, 2020. https://eips.ethereum.org/EIPS/eip-2612.

[31] Yannis Smaragdakis. Phantom functions and the billion-dollar no-op, 2022. https://media.dedaub.com/phantom-functions-and-the-billion-dollar-no-op-c56f062ae49f.

[32] Dan Boneh, Manu Drijvers, and Gregory Neven. Compact multi-signatures for smaller blockchains. In Thomas Peyrin and Steven D. Galbraith, editors, Advances in Cryptology - ASIACRYPT 2018 - 24th International Conference on the Theory and Application of Cryptology and Information Security, Brisbane, QLD, Australia, December 2-6, 2018, Proceedings, Part II, volume 11273 of Lecture Notes in Computer Science, pages 435–464. Springer, 2018.

[33] Suhail Gangji. Post-mortem - Polygon bridge vulnerability, 2021. https://medium.com/zapper-protocol/post-mortem-polygon-bridge-vulnerability-cb8029275622.

[34] Patrick McCorry, Chris Buckland, Bennet Yee, and Dawn Song. SoK: Validating bridges as a scaling solution for blockchains. Cryptology ePrint Archive, Paper 2021/1589, 2021. https://eprint.iacr.org/2021/1589.

[35] Bartek Kiepuszewski. L2Bridge risk framework, 2022. https://gov.l2beat.com/t/l2bridge-risk-framework/.

[36] Alexei Zamyatin, Mustafa Al-Bassam, Dionysis Zindros, Eleftherios Kokoris-Kogias, Pedro Moreno-Sanchez, Aggelos Kiayias, and William J. Knottenbelt. SoK: Communication across distributed ledgers. Cryptology ePrint Archive, Paper 2019/1128, 2019. https://eprint.iacr.org/2019/1128.
| 15,640
|
{
"disclaimer": "Notice: Paper or abstract available at https://arxiv.org/abs/2210.16209, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": null,
"status": "GREEN",
"url": "https://arxiv.org/pdf/2210.16209"
}
| 2,022
|
[
"JournalArticle",
"Conference",
"Review"
] | true
| 2022-10-28T00:00:00
|
[
{
"paperId": "a82bc15ede84502b4186ac6f5aac4004ddd48e74",
"title": "A Survey on Cross-chain Technologies"
},
{
"paperId": "a540736b33c9418a134454d3f434222895585999",
"title": "An overview on cross-chain: Mechanism, platforms, challenges and advances"
},
{
"paperId": "52e97e6ea6431473c8eecb1fb5aca6b2691da436",
"title": "Security and Privacy Challenges in Blockchain Interoperability - A Multivocal Literature Review"
},
{
"paperId": "34998be86a77550314573abeb7fd992a7d65d52f",
"title": "Verification of a Merkle Patricia Tree Library Using F"
},
{
"paperId": "015ed213c9398c10a4fea0eb09a29b7bd9816d81",
"title": "An Introduction to Decentralized Finance (DeFi)"
},
{
"paperId": "18394dcf6ac93ce8cdddb68da83ccff55b73e909",
"title": "Trustless, privacy-preserving blockchain bridges"
},
{
"paperId": "b1adebbdfa010e0eaceb191f19174b261d4366bb",
"title": "Verifiable Observation of Permissioned Ledgers"
},
{
"paperId": "21d2d159e5bf6088078b79b57766b7083f438340",
"title": "Blockchain is Watching You: Profiling and Deanonymizing Ethereum Users"
},
{
"paperId": "2fc4ec8625b33a74412dd6963b343533b1365588",
"title": "SoK: Layer-Two Blockchain Protocols"
},
{
"paperId": "437fe70826fe5ac9ebed17300504d51e87a6f8ad",
"title": "Enabling Enterprise Blockchain Interoperability with Trusted Data Transfer (Industry Track)"
},
{
"paperId": "20d82e2cbf460df9fd7d1b461511e729d0e54f90",
"title": "A Survey of Distributed Consensus Protocols for Blockchain Networks"
},
{
"paperId": "9a9961bc656739be93567a9ac61d4b5da761bd01",
"title": "Proof-of-Work Sidechains"
},
{
"paperId": "83721103a6fd5535e943b1b575cf70862c2322a8",
"title": "Handbook of Applied Cryptography"
},
{
"paperId": "6a03ab12bdd5cc21579dda850d1b4fd0403085c1",
"title": "Compact Multi-Signatures for Smaller Blockchains"
},
{
"paperId": "f39a2c11983b21fd5054d5393614959bfbc4e50f",
"title": "Space/time trade-offs in hash coding with allowable errors"
},
{
"paperId": "a762bf9379b6a572df97215329f4d47368ee3802",
"title": "Blockchain Oracles: State-of-the-Art and Research Directions"
},
{
"paperId": null,
"title": "Breaking down the Mete hack, 2022. https://blog .chainsafe.io/breaking-down-the-meter-io-hack-a46a389e7ae4"
},
{
"paperId": null,
"title": "How $323m in crypto was stolen from a blockchain bridge called wormhole"
},
{
"paperId": null,
"title": "L2Bridge risk framework"
},
{
"paperId": null,
"title": "Blockchains have a ‘bridge"
},
{
"paperId": "b805a6f6f832798a4bd4a91e0e1a87ac7d838dde",
"title": "SoK: Validating Bridges as a Scaling Solution for Blockchains"
},
{
"paperId": "0fb84aa0914355d0a3714b7c613b723b6a80c778",
"title": "Exploring Sybil and Double-Spending Risks in Blockchain Systems"
},
{
"paperId": "31bdf37673eba41523003166db1aaa1a6ef1e128",
"title": "SoK: Exploring Blockchains Interoperability"
},
{
"paperId": null,
"title": "Post-mortem — Polygon bridge vulnerability, 2021. https://medium.com/zapper-protocol/post-mortem-polyg on-bridge-vulnerability-cb8029275622"
},
{
"paperId": null,
"title": "Polygon double-spend bugfix review - $2m bounty"
},
{
"paperId": null,
"title": "Contract ABI specification (revision e14f2714)"
},
{
"paperId": null,
"title": "pNetwork post mortem: pBTC-on-BSC exploit"
},
{
"paperId": null,
"title": "EIP-2612: permit -712-signed approvals"
},
{
"paperId": "e9821e278efb615ded08c51aa6f7315d00e3841c",
"title": "SoK: Communication Across Distributed Ledgers"
},
{
"paperId": "b3f66e7d3ba87767792b62b7e00d2aa7e6cc5033",
"title": "Commit-Chains: Secure, Scalable Off-Chain Payments"
},
{
"paperId": null,
"title": "Fundamentals of Smart Contract Security"
},
{
"paperId": null,
"title": "Cosmos whitepaper: A network of distribtued ledgers"
},
{
"paperId": null,
"title": "ERC-721 token standard"
},
{
"paperId": null,
"title": "How to get ethereum encoded function signatures"
},
{
"paperId": "cbc775e301d62740bcb3b8ec361721b3edd7c879",
"title": "Plasma : Scalable Autonomous Smart Contracts"
},
{
"paperId": "f76f652385edc7f49563f77c12bbf28a990039cf",
"title": "POLKADOT: VISION FOR A HETEROGENEOUS MULTI-CHAIN FRAMEWORK"
},
{
"paperId": "3c50bb6cc3f5417c3325a36ee190e24f0dc87257",
"title": "ETHEREUM: A SECURE DECENTRALISED GENERALISED TRANSACTION LEDGER"
},
{
"paperId": "86af62ff8f7f3957f6ce37e977509be3a6dec327",
"title": "Keccak"
},
{
"paperId": "4e9ec92a90c5d571d2f1d496f8df01f0a8f38596",
"title": "Bitcoin: A Peer-to-Peer Electronic Cash System"
},
{
"paperId": "6fc1565ab3ccb4d2e119986cd6cb5d863df28035",
"title": "on further"
},
{
"paperId": null,
"title": "The initial analysis of the Poly Network attack , 2021"
},
{
"paperId": null,
"title": "The communicator relays this information to the debt issuer on the destination blockchain for the bridge, and the debt issuer provides the actor with a debt token"
},
{
"paperId": null,
"title": "Authorized licensed use limited to the terms of the applicable"
},
{
"paperId": null,
"title": "An attacker asks the bridge to redeem a string, claiming it is a permit, for approval on the token used by the honest user in steps (1) and (2)"
},
{
"paperId": null,
"title": "The bridge is established in such a way that its final logic for emitting deposit events is after processing of wrapped assets"
},
{
"paperId": null,
"title": "Crypto51"
},
{
"paperId": null,
"title": "A bridge is deployed so that anyone can call its cross-chain communication contract, specifying a function to execute"
},
{
"paperId": null,
"title": "An attacker deploys a smart contract on the destination blockchain that has a function that the debt issuer expects to call to verify a signature"
},
{
"paperId": null,
"title": "The actor burns the debt token by depositing it back into the debt issuer"
},
{
"paperId": null,
"title": "The user gives the bridge infinite approvals (and may or may not successfully send some tokens over the bridge)"
},
{
"paperId": null,
"title": "After waiting a small-but-not-too small amount of time (enough for several confirmations of the transaction, perhaps about 5; this is about 15 additional blocks on Ethereum at the time of writing),"
},
{
"paperId": null,
"title": "The actor receives a so-called proof-of-burn for the token"
},
{
"paperId": null,
"title": "An attacker calls execute with an encoding of transferFrom to take the honest user’s tokens, rather than their own"
},
{
"paperId": null,
"title": "Secure Asset Transfer Protocol"
}
] | 15,640
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/00a4188f2bb959f2e55369d89e86ca5eabe25479
|
[
"Computer Science"
] | 0.88654
|
A Fair Decentralized Scheduler for Bag-of-Tasks Applications on Desktop Grids
|
00a4188f2bb959f2e55369d89e86ca5eabe25479
|
2010 10th IEEE/ACM International Conference on Cluster, Cloud and Grid Computing
|
[
{
"authorId": "2060786744",
"name": "Javier Celaya"
},
{
"authorId": "1718549",
"name": "L. Marchal"
}
] |
{
"alternate_issns": null,
"alternate_names": null,
"alternate_urls": null,
"id": null,
"issn": null,
"name": null,
"type": null,
"url": null
}
| null |
# A Fair Decentralized Scheduler for Bag-of-tasks Applications on Desktop Grids
### Javier Celaya[1] and Loris Marchal[2]
February 2010
1: Aragón Institute of Engineering Research (I3A)
Dept. de Informática e Ingeniería de Sistemas
Universidad de Zaragoza, Zaragoza, Spain
[email protected]

2: CNRS & University of Lyon
LIP (ENS-Lyon-CNRS-INRIA-UCBL)
Lyon, France
[email protected]

# LIP Research Report RR-LIP 2010-07
**Abstract**
Desktop Grids have become very popular nowadays, with projects that include
hundreds of thousands of computers. Desktop grid scheduling faces two challenges. First,
the platform is volatile, since users may reclaim their computers at any time, which
makes centralized schedulers inappropriate. Second, desktop grids are likely to be
shared among several users, so we must be particularly careful to ensure a fair
sharing of the resources.
In this paper, we propose a decentralized scheduler for bag-of-tasks applications
on desktop grids, which ensures a fair and efficient use of the resources. It aims
to provide a similar share of the platform to every application by minimizing their
maximum stretch, using completely decentralized algorithms and protocols. After
presenting our algorithms, we evaluate them through extensive simulations. We
compare our solution to already existing centralized ones under similar conditions,
and show that its performance is close to the best centralized algorithms.
## 1 Introduction
Taking advantage of unused cycles of networked computers has emerged as a cheap alternative to expensive computing infrastructures. Due to the increasing number of personal
desktop computers connected to the Internet, tremendous computing power is potentially
at hand. Desktop Grids gathering some of these machines have become widespread thanks
to popular projects such as Seti@home [19] or Folding@home [20]. Nowadays, projects such as
World Community Grid [23] include hundreds of thousands of available computers.
At a smaller scale, software solutions have been proposed to harness idle cycles of
machines at the local area network scale [16]. This makes it possible to use idle desktop computers
located in places like a computer science laboratory or a company for processing compute-intensive applications.
The key characteristic of these platforms that strongly limits their performance is
volatility: a machine can be reclaimed by its owner at any time, and thus disappear
from the pool of available resources [13]. This motivates the use of a robust distributed
architecture to manage resources, and the adaptation of peer-to-peer systems to computing
grids is natural [8,9].
Target applications of these desktop grids are typically embarrassingly parallel. In the
context of computing Grids, a common model for such applications is the bag-of-tasks:
each application is then described as a set of similar tasks, i.e., tasks that have a common data
file size and computing demand [7,21].
Due to the distributed nature of desktop grids, several concurrent applications, originating from different users, are likely to compete for the resources. Traditionally, schedulers of
desktop grids aim at minimizing the overall completion time of an application. However,
in a multi-application setting, it is important to maintain some fairness between users: we
do not want to favor an application with a large number of small jobs over another
application with fewer, larger jobs. Similarly, if applications can be submitted at different
entry points of the distributed system, we do not want the location of a user to impact
the running time they experience. To discourage users from tampering with their applications to get
better performance, we must provide a scheduler that gives a fair share of the available
resources to each user. Similar problems have been addressed in computing Grids [7,14].
However, these schedulers are centralized and assume perfectly updated information on
the whole platform. In the context of desktop grids, a scheduler needs to be decentralized
and to rely only on local information.
In this paper, we propose and evaluate a decentralized scheduler for processing bagof-tasks applications on desktop grids. Our study relies on previous work which proposes
a peer-to-peer architecture to distribute tasks provided with deadlines [9]. We also build
upon a previous study on a centralized scheduler for multiple bag-of-tasks applications on
a heterogeneous platform [7].
## 2 Related work
### 2.1 Desktop grid and scheduling
Desktop grids are now widespread, and many platform management software packages are available [11]. Among others, BOINC [4] is probably the most common; it uses a classical
client/server architecture. Other types of architecture have also been proposed, some of them
inspired by peer-to-peer systems [8].
To cope with node volatility, several mechanisms have been proposed, such as checkpointing and job migration [3,5,24]. These mechanisms make it possible to efficiently manage computing resources that are likely to be reclaimed by their owners at any time. Similarly,
Kondo et al. [13] emphasize the need for resource selection when processing short-lived
task-parallel application on desktop grids. These studies are complementary to our work,
which focuses on how to share the available resources among several users.
The common scheduling policy in desktop grids is usually FCFS (First Come, First
Served), but more complex strategies have also been proposed. In particular, Al-Azzoni
et al. [2] propose to use linear programming to compute a mapping of applications to the
platform. However, reactivity is achieved at the price of solving a linear program at each
change of the platform, which makes the approach poorly suited to volatile platforms. Besides, only
a centralized scheduler can gather all the information needed for the optimization problem.
### 2.2 Scheduling multiple applications
In the context of classical computing Grids, the problem of scheduling multiple applications
has already been studied. As far as fairness is concerned, the most suitable metric seems
to be the maximum stretch, or slowdown [14]. The stretch of an application is defined as
the ratio of its response time under the concurrent scheduling policy over its response time
in dedicated mode, i.e., when it is the only application executed on the platform. The
objective is then to minimize the maximum stretch of any application, thereby enforcing
a fair trade-off between all applications.
Previously, we have studied the minimization of the maximum stretch for concurrent applications in a centralized setting [7]. In particular, our study shows that interleaving the tasks
of several concurrent bag-of-tasks applications achieves better performance than
scheduling each application after the other.
### 2.3 Distributed scheduling
Distributed scheduling has been widely studied in the context of real-time systems, when
tasks have deadline constraints. Among others, Ramamritham et al. [18] propose several
decentralized heuristics to schedule real-time applications in distributed environments.
More recently, Modi et al. [17] present a distributed algorithm to solve general constraint
optimization problem with a guaranteed convergence.
However, these studies are dedicated to tasks with deadlines, and stable environments
where complex distributed algorithms can converge. In our large-scale and fault-prone system, we cannot hope to reach optimality, and we will rather design fault-tolerant heuristics
inspired by peer-to-peer techniques.
Closer to our problem, Viswanathan et al. [22] proposed a distributed scheduling strategy for computing grids. However, a centralized entity is used to gather the information on
the platform and the applications.
In earlier work, we compared centralized and decentralized strategies for scheduling
bag-of-tasks applications in computing grids [6]; however, in that study, applications were
assumed to be available at the same time on a given master node, whereas in a desktop grid
applications are likely to be issued at any time and at any place.
## 3 Problem description
In this section, we formally define the problem we target. Our goal is to design a fully
decentralized scheduling architecture for bag-of-tasks applications, oriented to desktop grid
platforms. Our main objective while scheduling tasks of concurrent applications is to ensure
fairness among users.
### 3.1 Application model
Each application Ai consists of a set of ni tasks with computing demand ai, measured in
millions of flops. Let wi = niai be the overall computing size of the application. Each
application Ai has a release time ri, corresponding to the time when the request for its
processing is issued, and a finish time Ci, when the last task of this application terminates.
When scheduling multiple applications, as far as fairness is concerned, the most suited
metric seems to be the maximum stretch, or slowdown [14]. The stretch of an application
is defined as the ratio of its response time under the concurrent scheduling policy (Ci − _ri)_
over its response time in dedicated mode, i.e., when it is the only application executed on
the platform. The objective is then to minimize the maximum stretch of any application,
thereby enforcing a fair trade-off between all applications.
In a distributed context, it is hard to evaluate the response time of an application
on a dedicated platform (needed to compute the stretch), since we do not even know the
overall number of nodes. If we knew the aggregated computing speed sagg of
the whole platform, then we could approximate this response time as wi/sagg, assuming a
perfect distribution of the application over the computing nodes. The stretch for application
Ai would then be (Ci − ri)/(wi/sagg).
In practice, we do not know the value of sagg, but we assume that it does not vary much,
and that its variations should not be taken into account when computing the slowdown of
each application. Thus, we assume that the aggregated speed has a constant value, and
we approximate the stretch with (Ci − ri)/wi.
### 3.2 Platform model
The platform model which we are using is inherited from the framework described in [9].
Nodes are organized in a network overlay based on a balanced binary tree, where every
leaf node is a processing node, and every internal node is a routing node. The actual
implementation of such an overlay is not detailed in this paper, since significant work has
already been done on this subject [1, 12, 15]. These solutions propose tree-based peer-to-peer
overlays with good scalability and fault-tolerance properties. Each machine taking part
in the computation acts simultaneously as a processing node and a routing node of the
overlay. The computational speed of a computing (leaf) node of the overlay is denoted by
su, measured in millions of flops per second.
### 3.3 Scheduling on the overlay
We use the tree structure of the overlay both for gathering information on the platform
and for scheduling. The information on platform availability is aggregated from the
leaf nodes to the root node, as detailed below in Section 5.
When an application is released, the corresponding request, containing all necessary
information on the application, is received by some machine in the system. The routing node
of this machine processes the request based on the information it holds on the platform.
The routing node can either decide to process the application locally in its own subtree, if
the application is small enough and will not cause a large load imbalance, or it can decide
to forward the application to its father in the overlay, which then faces the same choice.
When, finally, a node (possibly the root node) decides to schedule the application
in its subtree, it splits the application and allocates a number of its tasks to each of its
children. The children then take the same decision, until the tasks reach the leaf
nodes. The leaf nodes insert the incoming tasks into their task queues and process them.
In the following, we first present the local scheduling policy, used by the leaf node to
order their task queue (Section 4). Then we explain how the availability of the platform
is gathered along the tree (Section 5), and finally we describe the global scheduling policy
(Section 6).
## 4 Local scheduler
Each execution node has a local scheduler which decides the processing order of tasks
allocated to this node. The local scheduler has the same objective as the whole platform:
minimizing the maximal stretch among all applications. We rely on a relation between
stretch and deadlines:
Si = (di − ri)/wi  =⇒  di = ri + Si · wi    (1)
Given a value S for this maximum stretch, we can thus compute a deadline for all the
tasks of every application. Then, we can schedule all tasks using an Earliest Deadline
First (EDF) policy, as detailed in Algorithm 1: if the deadlines are achievable, the EDF
policy finds a suitable schedule. Finally, we apply a binary search to find the minimal
possible value for the stretch: for a given stretch value, we compute the deadlines with the
previous formula, and apply the EDF policy; if the deadlines are met, we start again with
a smaller stretch value; if they are not met, we increase the stretch value. Algorithm 2
details this binary search.
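To fix ideas, here is a Python transcription of Algorithms 1 and 2 (shown below). The data layout is ours, not the paper's: each queue entry carries its release date r, total size w, remaining task count n and task size a, and s_u is the node speed in Mflop/s.

```python
# Python sketch of Algorithms 1 and 2 (our naming and data layout).
def meet_deadlines(queue, s_u, now):
    """EDF feasibility test: all tasks of an application share its deadline,
    so accumulating the finish time per application is equivalent."""
    e = now
    for app in sorted(queue, key=lambda app: app["deadline"]):
        e += app["n"] * app["a"] / s_u   # time to finish this app's tasks
        if e > app["deadline"]:
            return False
    return True

def max_stretch(queue, s_u, now, eps=1e-3):
    """Binary search for the minimal stretch S with feasible deadlines."""
    def set_deadlines(S):
        for app in queue:
            app["deadline"] = app["r"] + S * app["w"]
    s_min, s_max = 0.0, 1.0
    set_deadlines(s_max)
    while not meet_deadlines(queue, s_u, now):   # geometric growth phase
        s_min, s_max = s_max, 2 * s_max
        set_deadlines(s_max)
    while s_max - s_min > eps:                   # bisection phase
        s_mid = (s_min + s_max) / 2
        set_deadlines(s_mid)
        if meet_deadlines(queue, s_u, now):
            s_max = s_mid
        else:
            s_min = s_mid
    set_deadlines(s_max)                         # leave feasible deadlines set
    return s_max
```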
Desktop Grid environments are particularly error prone. Thus, fault tolerance is a
required capability of algorithms dedicated to such platforms. In the context of local
scheduling, if a node fails, any tasks in its queue are aborted. For now,
we simply resubmit tasks when they are detected to have failed. This affects the stretch of
the applications involved, since they last longer than expected. On the other hand, as
empirically shown in Section 7, having some tasks of all or part of the applications
resubmitted gives the scheduler the opportunity to recompute their stretch and achieve
more similar values across applications.
**Algorithm 1: Algorithm meetDeadlines(Q).**
**Input: Q is a task queue.**
**Output: Whether all tasks in Q meet their deadlines.**
_e = currentTime_
Order tasks by non-decreasing deadlines: d1 ≤ _d2 ≤· · ·_
**forall tasks in Q do**
_e = e + ai/su_
**if e > di then return false**
**return true**
## 5 Platform’s availability
In this section, we detail the process that gathers information about the state of the
platform and communicates it to distant nodes, so that each node is able to efficiently
schedule an application. The state of the platform is based on the availability of nodes to
receive new tasks. Each computing node builds an availability summary, which states how
many new tasks it can receive. This availability summary is then aggregated by the routing
nodes of the tree, until it reaches the root node. The summary is designed both
to provide complete availability information on the platform and to induce a limited communication
overhead.
### 5.1 Computing the availability of nodes
In order for the global scheduler to correctly allocate tasks to the platform, each computing
node must provide an availability summary which describes its capacity to process tasks
from new applications. The availability of a node consists of the number of tasks of a new
application that this node is able to process. Of course, this number of tasks both depends
**Algorithm 2: Algorithm for function maxStretch(Q, ϵ).**
**Input: Q is the task queue, ϵ is the error tolerance.**
**Output: Optimal maximum stretch that makes all applications meet deadlines.**
Smin = 0, Smax = 1
**for i = 1 to |Q| do di = ri + Smax · wi**
Sort applications by increasing deadlines di.
**while meetDeadlines(Q) is false do**
  Smin = Smax
  Smax = 2 · Smax
  **for i = 1 to |Q| do di = ri + Smax · wi**
**while Smax − Smin > ϵ do**
  Smid = (Smax + Smin)/2
  **for i = 1 to |Q| do di = ri + Smid · wi**
  **if meetDeadlines(Q) then**
    Smax = Smid
  **else**
    Smin = Smid
**return Smax**
on the target maximum stretch and on the application itself. Formally, for a given target
stretch S, and assuming that the new application is released at time rnew, is made of
tasks of length anew, and has total size wnew, we compute n(S, rnew, wnew, anew), the maximal number
of tasks that the local computing node can handle locally.
Figure 1: Example task queue. Tasks from applications A1 through A3 are processed as
soon as possible, while applications A4 and A5 are processed as late as possible. The
available computation in the gap between them is h(S, rnew, wnew).
We briefly describe the evaluation of this function through the example depicted in
Figure 1. The complete algorithm to compute n is available in the companion research
report [10]. Given a tuple (S, rnew, wnew, anew), that is, assuming that a new application
_Anew with tasks of size anew, and total size wnew will be released at time rnew, we have to_
evaluate the number of such tasks which can be processed while ensuring a stretch at most
S. The first step is to construct the task queue that the node expects to have at time rnew: it
discards from its actual task queue the tasks that will already have been processed by time rnew (the shaded
tasks in Figure 1). Then, we compute the deadline that each remaining application Ai
would have with stretch S, using Equation (1). The deadline dnew of the new application
_Anew is also computed, and all deadlines are sorted in a non-decreasing order, since the_
tasks are going to be scheduled using an EDF policy. In the example of Figure 1, dnew
lies between d3 and d4. The earliest starting time for the tasks of Anew is after the tasks of
A1, A2, and A3 (that is, all tasks with di < dnew) have been computed. Similarly, the latest
completion time for the tasks of Anew must ensure that the tasks of A4 and A5 (that is, all tasks
with di > dnew) will not miss their deadlines, and also that the deadline dnew is not exceeded.
This allows us to compute the duration of the time slot that can be devoted to Anew, denoted
by h(S, rnew, wnew). Then, the number of atomic tasks that can be processed by the node
is given by:
n(S, rnew, wnew, anew) = ⌊ h(S, rnew, wnew) / anew ⌋    (2)
Algorithm 3 computes the function h(S, rnew, wnew). It first calculates the deadline di
of every application in the queue with stretch S, and dnew for a potentially new application
of parameters rnew and wnew. Then the queue is sorted in EDF order, and the algorithm
first computes the latest starting time xi of each application Ai such that no application
_Aj with j ≥_ _i misses its deadline. For any combination of parameters which makes any_
application miss its deadline, the function returns 0 (the stretch is not achievable). Then,
the amount of computation (number of flops) available between rnew and dnew is calculated.
For the sake of simplicity, we assume that applications are ordered by non-decreasing value of
di (di ≤ di+1), and that the remaining number of tasks of application Ai is Ni^u.
**Algorithm 3: Algorithm for function h(S, rnew, wnew).**
**Input: S is the desired stretch, rnew is the release date of the new application, wnew is its size.**
**Output: number of flops available for the new application.**
**for i = 1 to n do di = ri + S · wi**
dnew = rnew + S · wnew
Order tasks by non-decreasing deadlines: d1 ≤ d2 ≤ · · ·
xn = dn − Nn^u · an/su
**for i = n − 1 to 1 do**
  xi = min(di, xi+1) − Ni^u · ai/su
**if x1 < current time then return 0 (at least one application misses its deadline)**
Get k so that dk−1 < dnew ≤ dk
ek = rnew + Σ_{i=1}^{k−1} Ni^u · ai/su
**return (min(dnew, xk) − ek) · su**
Deadline constraints are checked with the use of x1. If the first application is forced
to start before the current time, then it means that one or more applications are missing
their deadline with the selected stretch S. Then, the position of the new application in
the queue is calculated. The result is the number of flops that can be executed between
the moment at which the previous application is going to finish, and the deadline of the
new application or the last moment at which the next application must start, whichever comes
first. Let ek be the moment at which application k − 1 in the queue is expected to finish; it
can be calculated by adding the remaining execution time of the k − 1 first applications to
rnew. Then, if the new application were at position k in the queue, we would have:

h(S, r, w) = (min(dnew, xk) − ek) · su    (3)
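The following Python sketch mirrors Algorithm 3, using the same queue layout as the transcription in Section 4, and assuming the queue already reflects the state the node expects at time rnew:

```python
# Python sketch of Algorithm 3 (same queue layout as in Section 4).
def h(queue, s_u, now, S, r_new, w_new):
    """Flops available for a new application (release r_new, size w_new)
    under target stretch S; returns 0 if S is not achievable."""
    d_new = r_new + S * w_new
    apps = sorted(queue, key=lambda app: app["r"] + S * app["w"])
    d = [app["r"] + S * app["w"] for app in apps]        # EDF deadlines
    if not apps:
        return max(0.0, (d_new - r_new) * s_u)           # empty queue
    n = len(apps)
    # x[i]: latest start of app i so that apps i..n-1 all meet deadlines.
    x = [0.0] * n
    x[-1] = d[-1] - apps[-1]["n"] * apps[-1]["a"] / s_u
    for i in range(n - 2, -1, -1):
        x[i] = min(d[i], x[i + 1]) - apps[i]["n"] * apps[i]["a"] / s_u
    if x[0] < now:
        return 0.0            # some application already misses its deadline
    # k: EDF position of the new application (d_{k-1} < d_new <= d_k).
    k = next((i for i, di in enumerate(d) if d_new <= di), n)
    e_k = r_new + sum(apps[i]["n"] * apps[i]["a"] / s_u for i in range(k))
    end = d_new if k == n else min(d_new, x[k])          # x_{n+1} = infinity
    return max(0.0, (end - e_k) * s_u)
```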
### 5.2 Availability summary
One of the main tasks of a routing node consists in dividing the set of tasks of a new
application so as to send a subset of these tasks to each of its sub-branches (recall that
routing nodes are organized in a tree-based overlay). In order to decide how to split a set
of tasks, the routing nodes must know how many tasks can be processed by the subtree
rooted at each of their children, and how the stretch of the nodes in that subtree is going to
be affected by the new application. This information is provided by the local schedulers
through the availability function, and aggregated at each level of the tree as a summary of
the availability of the nodes of each branch.
The availability function n provided by each execution node is not directly suited for
this aggregation. For this reason, it is summarized in a four-dimensional matrix, called the
_availability summary_, which contains samples of this function: each cell of the matrix,
identified by a tuple (S^(i), r^(j), w^(k), a^(l)), contains a conservative approximation of the
function for these parameters. A routing node receives a similar matrix (for the same
selected values of the parameters) from each of its children. In order to report a global
availability summary to its father, it simply aggregates all the received matrices by adding
them. By doing so, when a new application is released, the resulting matrix provides the
number of tasks that can be sent to that node, with a guaranteed maximum stretch.
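Since the aggregation is a plain element-wise sum of the sampled matrices, it can be stated in a couple of lines of numpy (the shapes and values below are arbitrary toys, not the summary sizes used in our experiments):

```python
import numpy as np

# Each child summary: a 4-D array sampled at (S^(i), r^(j), w^(k), a^(l)).
def aggregate(child_summaries):
    return np.sum(np.stack(child_summaries), axis=0)  # element-wise sum

left = np.random.randint(0, 5, size=(4, 4, 4, 4))    # toy child matrices
right = np.random.randint(0, 5, size=(4, 4, 4, 4))
parent = aggregate([left, right])
assert (parent == left + right).all()
```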
In order to extract the correct information from the availability matrix, routing nodes
look for the cells associated with the parameters ri, wi, ai contained in each request. However, not every combination of parameters can be taken into account when creating the
availability matrix, due to space limitations. These parameters are discretized into a set
of selected values. Thus, whenever a new application arrives, the cells used in the division
process are those identified by the values nearest to the application parameters. What is
more, since the availability summary contains only samples of the original function, we do
not know a priori what happens between two selected values of the parameters. In order
to better understand the behavior of the n function based on the availability summary,
and to be able to get guaranteed interpolated availability between the selected values of
the parameters, we carefully study the evolution of n for each parameter. This also helps
us to decide which are the best selected values for the parameters S^(i), r^(j), w^(k) and a^(l). In
the rest of this section, we will abbreviate n(S, r, w, a) by n(S) when the other parameters
(r, w and a) are fixed, and similarly for the h function defined in Section 4 or other parameters. Note that we study the evolution of the n function on a given computing node
Pu (with speed su). Then, when the availability of several subtrees is aggregated into a
single summary, we simply add the n functions coming from each subtree. The properties of
this function exhibited below concern its monotonicity in the various parameters, so they are
naturally preserved by the aggregation.
**5.2.1** **Evolution of n with task size a**
From Equation 2, it is clear that n(a) evolves as a descending staircase with increasing
values of a. These steps can be calculated as:

n(a) = i, if a ∈ ( h(S, r, w)/(i + 1), h(S, r, w)/i ] for i ∈ ℕ, and n(a) = 0 if a > h(S, r, w).
It can be seen that more precision is needed for smaller values of a. For this reason,
the values a^(i) are taken from a geometric succession a^(i) = b^i. The base b can be empirically
determined, but for now we use b = 2. The execution nodes eventually decide
for which of these values they provide information, because it is useless to calculate the
function n for tasks with a length under or over certain limits. For example, a task with
ai = 2^16 needs around one minute to execute on a node with su = 2^10, which given
the nature of the platform seems to be a suitable minimum value.
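A quick numeric illustration of the staircase n(a) = ⌊h/a⌋ and of the geometric sampling a^(i) = b^i (values chosen arbitrarily):

```python
# The staircase n(a) = floor(h / a), sampled at geometric task sizes b**i.
def n_of_a(h_val, a):
    return int(h_val // a) if a <= h_val else 0

b, h_val = 2, 10_000.0
for i in range(10, 15):                    # sampled task sizes a = b**i
    a = b ** i
    print(a, n_of_a(h_val, a))             # 1024->9, 2048->4, ..., 16384->0
# Consecutive samples differ by a factor of b, so any real task size lies
# within a factor b of some sampled value, with finer steps for small a.
```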
**5.2.2** **Evolution of n with application size w**
For the other three parameters, the evolution of the function h will be studied first. First of
all, we will prove that the larger the new application, the more time will be devoted to it on
node Pu, which is expressed by the following proposition:

**Proposition 1. h(w) is monotonically non-decreasing.**

_Proof._ As can be deduced from Equation (1), for fixed r and S the deadline of a new
application is a linear function of w; let us call it d(w) = S · w + r. Due to the EDF ordering,
applications are sorted by non-decreasing deadlines. Let us assume that applications are
numbered such that di−1 < di for all i; then the new application maintains its position k in
the application queue as long as d(w) ∈ (dk−1, dk], which means w ∈ ((dk−1 − r)/S, (dk − r)/S].

Given w such that d(w) ∈ (dk−1, dk], from Equation (3) we have h(w) = (min(d(w), xk) −
ek) · su. As w increases, the first situation is d(w) < xk, and then h(w) is a linear function
of d(w) with slope su, so it is a linear function of w with slope S · su. Afterward, when
d(w) ≥ xk, h(w) has the constant value (xk − ek) · su. An example of both situations can
be seen in Figure 2. In conclusion, h(w) is monotonically non-decreasing on the interval
((dk−1 − r)/S, (dk − r)/S] for every k. In the special case k = 1, d0 makes no sense and can
be replaced in the deduction by r. In the special case where the new application is
executed after the n applications in the queue, so that k = n + 1, dn+1 makes no sense and
xn+1 = ∞; thus h(w) is a linear function on the interval ((dn − r)/S, ∞) with slope S · su.
Figure 2: At d(w) = d′, h(w) = (d(w) − ek) · su. At d(w) = d′′, h(w) = (xk − ek) · su.
Finally, when d(w) reaches the deadline dk of some other application, we have xk < dk
and thus xk < d(w), so h(w) = (xk − ek) · su, as explained before. Now, if we add and
subtract the size of application Ak:

h(w) = (xk − ek) · su = (xk + Nk^u · ak/su − ek − Nk^u · ak/su) · su

ek + Nk^u · ak/su trivially equals ek+1, and from the definition of xk in Algorithm 3 we have:

xk = min(dk, xk+1) − Nk^u · ak/su  ⇒  xk + Nk^u · ak/su = min(dk, xk+1)

To sum up, with d(w) = dk,

h(w) = (min(dk, xk+1) − ek+1) · su ≤ (min(d′, xk+1) − ek+1) · su,  ∀d′ ∈ (dk, dk+1]

So, the maximum value of h(w) with w in interval k is lower than or equal to any value of
h(w) in interval k + 1, and thus h(w) is monotonically non-decreasing on all its domain.
h(w) being monotonically non-decreasing means that n(w) is also non-decreasing. This
result is important in order to compute the correct value of n(S, r, w, a) for each cell of the
matrix in an availability summary. For the cells with parameter w = w^(i), the stored value
must be the number of tasks that the node can execute for a new application
with parameter w ∈ [w^(i), w^(i+1)). The function n(w) being non-decreasing, this number
can be obtained just by sampling the function at w = w^(i).
For the actual values of w^(i), the same geometric succession as for a^(i) is used, since wi
is a multiple of ai for all applications Ai. This also guarantees that the ratio between the
actual value of w and the one used in the matrix is at most b.
**5.2.3** **Evolution of n with stretch S**
For the stretch parameter, we derive similar results as for w:
**Proposition 2. h(S) is monotonically non-decreasing.**
_Proof._ Modifying the value of S modifies the deadlines of all the applications in a node,
so the order of their execution may change. In fact, two applications exchange their
positions in the queue when their deadlines become equal:

di = dj ⇔ ri + Si,j · wi = rj + Si,j · wj ⇔ Si,j = (rj − ri)/(wi − wj)

Si,j only makes sense when it is positive. If it is not, then applications i and j never
exchange positions. Assuming that applications are ordered by their release dates, so that
ri < rj ⇔ i < j, it is trivial that Si,j > 0 ⇔ wi > wj. That is, applications i and j
exchange positions only if wi is greater than wj. When S < Si,j, application Ai executes
before Aj, and when S > Si,j they execute in reverse order.
Again, the available computation for the new application when it occupies position
k in the queue is h(S) = (min(d(S), x_k(S)) − e_k)s_u. It is trivial to see that, when two
applications before position k exchange positions, e_k does not change, as it is the sum
of the remaining time of the k − 1 first applications in the queue. The same is true for
x_k: it does not change when two applications exchange positions after position k. From
Algorithm 3 we have:

$$x_k = \min(d_k, x_{k+1}) - \frac{N_k^u a_k}{s_u}$$
Suppose that application A_i is at position k and application A_j is at position k + 1.
When S = S_{i,j} we have d_i = d_j, and thus x_k has the same value no matter in which order
applications A_i and A_j are:

$$\begin{aligned}
x_k &= \min\!\left(d_i, \min(d_j, x_{k+2}) - \tfrac{N_j^u a_j}{s_u}\right) - \tfrac{N_i^u a_i}{s_u} \\
&= \min\!\left(d_i, \min(d_i, x_{k+2}) - \tfrac{N_j^u a_j}{s_u}\right) - \tfrac{N_i^u a_i}{s_u} \\
&= \min(d_i, x_{k+2}) - \tfrac{N_j^u a_j}{s_u} - \tfrac{N_i^u a_i}{s_u} \\
&= \min(d_i, x_{k+2}) - \tfrac{N_i^u a_i}{s_u} - \tfrac{N_j^u a_j}{s_u} \\
&= \min\!\left(d_i, \min(d_i, x_{k+2}) - \tfrac{N_i^u a_i}{s_u}\right) - \tfrac{N_j^u a_j}{s_u} \\
&= \min\!\left(d_j, \min(d_i, x_{k+2}) - \tfrac{N_i^u a_i}{s_u}\right) - \tfrac{N_j^u a_j}{s_u}
\end{aligned}$$
So, the only values of the stretch which influence the available computation for the new
application are:
$$S_i = \frac{r - r_i}{w_i - w}$$
When S = 0, the new application will be the last in the queue. As S increases, the new
application advances in the queue over each application with w_i > w when S reaches S_i. For simplicity
of notation, we assume that ∀i, S_i > 0 and S_i > S_j ⟺ i < j. Thus, when S ∈ (S_k, S_{k−1}],
the new application would be in the kth position of the queue, with applications A_1, . . ., A_{k−1}
before it and A_k, . . ., A_n after it.
When S is at the beginning of the interval, the new application would have just
exchanged position with application Ak, so d(S) ≥ _xk(S) and h(S) = (xk(S) −_ _ek)su._
x_k(S) is a piecewise function where each segment is a linear function which depends on
d_i, k ≤ i ≤ n, so its slope is one of w_i, k ≤ i ≤ n. Thus, the minimum slope of x_k(S) is
min_{i=k..n} w_i > w. x_k(S) grows faster than d(S), whose slope is w, so for a certain value of
S we have d(S) < x_k(S), and then h(S) = (d(S) − e_k)s_u = (r + Sw − e_k)s_u is a linear
function of slope w s_u.

So the function h(S), when S ∈ (S_k, S_{k−1}], consists of two linear segments, the first with
slope greater than w s_u and the next with slope w s_u. It is trivially continuous where
d(S) = x_k(S), so we conclude that h(S) is monotonically increasing in the interval (S_k, S_{k−1}].
When S ∈ (0, S_n], the new application is the last one, so x_k makes no sense and the slope
of h(S) is w s_u in the whole interval. When S > S_1, the new application does not advance in
the queue any more, so h(S) grows forever with slope w s_u once d(S) < x_1(S). Also, it is
worth noting that h(S) is zero while e_k > min(d(S), x_k(S)).
Now we study the situation when S = Sk. In that situation d(S) = dk(S), so xk(S) ≤
_d(S) ≤_ _d(S[′]), ∀S[′]_ _∈_ (Sk, Sk−1]. Before the exchange, h(S) = (min(d(S), xk+1(S))−ek+1)su.
Again, if we add and subtract the size of application A_k:

$$\begin{aligned}
h(S) &= (\min(d(S), x_{k+1}(S)) - e_{k+1}) s_u \\
&= \left(\min(d_k(S), x_{k+1}(S)) - \tfrac{N_k^u a_k}{s_u} - e_{k+1} + \tfrac{N_k^u a_k}{s_u}\right) s_u \\
&= (x_k(S) - e_k) s_u \\
&\leq (\min(d(S'), x_k(S')) - e_k) s_u, \quad \forall S' \in (S_k, S_{k-1}]
\end{aligned}$$
So, the maximum value of h(S) with S in interval k is at most any
value of h(S) in interval k − 1, and thus h(S) is monotonically non-decreasing on its whole
domain.
The maximum value of S can be arbitrarily large, because it may increase as new
applications enter the system. It is only limited by the capacity of the queue at each node.
The theoretical stretch is lower bounded by one, but with our approximation its minimum
value tends to zero. The selected set of values for S^(i) is again a geometric succession,
but this time b is a real number between 0 and 1. The cells of the matrix with parameter
S = S^(i) will provide the number of tasks for a new application that raises the stretch to a
value in the interval (S^(i+1), S^(i)]. Again, with Proposition 2 the value of each cell is simply
calculated by sampling function n at S = S^(i).
Unlike parameters a and w, for which the minimum and maximum values are fixed
during the system runtime, the minimum and maximum values of S for which information
is provided by execution nodes depend on the context. The minimum stretch will be the
one for which some application in the system misses its deadline, and thus the function
n returns 0. The total number of sample values for S will be fixed, which imposes the
maximum value for S^(i).
**5.2.4** **Evolution of n with release time r**
Finally, we consider the evolution of function n with respect to parameter r. If we consider
an empty queue at node Pu, we have:
h(r) = (d(r) − r) · s_u = (S · w + r − r) · s_u = S · w · s_u
In an empty queue, the computation available to a new application does not depend
on r. In any other situation, node Pu will not process more tasks for the new application
than when it is totally free, so h will be at most S · w · s_u. In Figure 3, it can
be seen that function h is not monotonic with respect to parameter r. In the first case,
available computation increases as r reaches ek. In the second case, when d(r) reaches xk,
available computation will decrease as r increases.
Figure 3: Two situations where (a) h(r) increases as r increases and (b) h(r) decreases as
_r increases. Thus h(r) is not monotonic._
Besides this lack of monotonicity, we can question the interest of computing and
aggregating availability summaries for many values of r: if r is too far in the future, it is
very likely that an updated summary will be received in the meantime. Empirical results
show that the best solution is to use just one value for parameter r, in the near future,
and to update the availability information periodically. By doing so, information is always
up to date and no bandwidth is wasted, because smaller availability summary matrices are
sent. The right value for parameter r depends on the update period, which impacts the
bandwidth consumption. In our tests, we used a period of five minutes, and r is set to a
time ahead of the current time whose number of minutes is a multiple of five. In this
way, two different matrices created at two different moments will have the same value for
r if they are created less than five minutes apart.
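As a concrete illustration, the rounding of r can be sketched as follows. This is a hedged example, not the authors' code; the period value and the function name are assumptions.

```python
PERIOD = 5 * 60  # update period in seconds (five minutes in our tests)

def next_release_time(now: float) -> float:
    """Next multiple of the period strictly after `now`."""
    return (int(now) // PERIOD + 1) * PERIOD
```

Two summaries created less than five minutes apart thus share the same r and can be aggregated cell by cell.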
### 5.3 Updating the availability information
The availability information stored at each routing node of the tree may be updated for
two reasons: (1) when the availability of a node in that branch changes, or (2) when a
request is routed by that routing node.
Each time the availability of an execution node changes (for example when a new child
arrives or leaves), it creates a new summary and sends it to its father node. The father
will aggregate it with the ones coming from its other children and report the result to
the next ancestor, until the root node is reached. If this update is performed in such a
reactive way, each change in the leaves would lead to a message going up the tree, which
would quickly flood the upper levels of the tree with updates. To avoid this situation, an
update rate limitation is set: when update messages would be sent at a higher rate, older
pending messages are discarded in favor of newer ones.
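A minimal sketch of this rate limitation is given below; the class shape, the names, and the use of wall-clock time are assumptions for illustration, not the actual implementation.

```python
import time

class RateLimitedUpdater:
    """Keep only the newest pending summary; forward it at a bounded rate."""

    def __init__(self, min_interval, send_to_father):
        self.min_interval = min_interval      # minimum seconds between sends
        self.send_to_father = send_to_father  # forwards a summary upwards
        self.last_sent = 0.0
        self.pending = None                   # newer summaries supersede older ones

    def on_new_summary(self, summary):
        self.pending = summary                # an older pending summary is discarded
        self.flush()

    def flush(self):  # assumed to also be called periodically by a timer
        if self.pending is not None and time.time() - self.last_sent >= self.min_interval:
            self.send_to_father(self.pending)
            self.pending, self.last_sent = None, time.time()
```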
A node must update the availability information of its children when it allocates some
tasks to them, in order to avoid routing the next request to the same execution nodes.
However, since the summary reports the availability of a whole subtree, it is difficult to
predict the impact of allocated tasks. We adopt here a conservative approach, so that the
summary matrix always contains a lower bound on the number of tasks the subtree is able
to compute, for given values of the parameters. Assume we have just sent N tasks of a
new application with task size a_new. Then, we first subtract N from all cells with a similar a
value:

n(S, r, w, a_new) ← n(S, r, w, a_new) − N

All other cells are updated in a similar way, to account for the compute time of the new
tasks:

n(S, r, w, a) ← n(S, r, w, a) − ⌈N · a_new / a⌉
This allows us to (roughly) estimate the new occupation of the subtree, before a real
summary update is received.
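A hedged sketch of this conservative update follows; the dictionary layout of the summary is an assumption, and the second rule (⌈N · a_new / a⌉ size-a tasks displaced) is a reconstruction from dimensional analysis of the surrounding text.

```python
import math

def account_allocation(summary, N, a_new):
    """Lower-bound the remaining availability after sending N tasks of size a_new.

    summary: dict mapping (S, r, w, a) -> number of executable tasks.
    """
    for (S, r, w, a), n in summary.items():
        # size-a tasks displaced by the N * a_new units of new work
        used = N if a == a_new else math.ceil(N * a_new / a)
        summary[(S, r, w, a)] = max(0, n - used)
```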
## 6 Global scheduler
The global scheduling policy strongly relies on the availability information which is aggregated in the tree using the mechanism described in the previous section. The routing
nodes perform the functionality of the global scheduler in a decentralized fashion: they
receive an application allocation request at rnew, which contains the values Nnew, anew and
_wnew for a new application Anew, and route it throughout the tree to the execution nodes,_
trying to keep the global maximum stretch as low as possible.
When a branch node receives a request for the allocation of a new application, it uses
the availability summaries of its children to calculate the minimum stretch that can be
achieved by using the resources in its own subtree, using a binary search among the stretch
samples, as detailed in Algorithm 4. Specifically, the algorithm looks for the minimum
sample value S[(][i][)] such that its own availability summary guarantees that all Nnew tasks of
the new application can be processed locally with stretch S[(][i][)].
**Algorithm 4: Algorithm to compute the minimum local stretch**
**Input: availability summary n^(j) for each child j**
Let S^(k) be the smallest sample value for the stretch, and S^(l) the largest one
**while k ≠ l do**
  mid = ⌈(l + k)/2⌉
  **if** Σ_j n^(j)(S^(mid)) ≥ N_new **then**
    k = mid
  **else**
    l = mid − 1
**return S^(k)**
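A hedged sketch of this search is shown below. Because the paper's index convention for the stretch samples is easy to misread, the sketch simply scans the samples in increasing order of value; a binary search works equally well, since availability is non-decreasing in the stretch (Proposition 2). All names are assumptions.

```python
def minimum_local_stretch(children_summaries, samples, n_new):
    """Smallest sampled stretch at which the subtree absorbs all n_new tasks.

    children_summaries: list of functions n_j(S) -> available task count.
    samples: stretch sample values S^(i), sorted in increasing order.
    Returns None when even the largest sample is insufficient.
    """
    for s in samples:
        if sum(n_j(s) for n_j in children_summaries) >= n_new:
            return s
    return None
```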
The new application can thus be scheduled locally in the subtree, but all applications in
this subtree will see their stretch increase to S[(][i][)], which might be large. In some cases, this
would lead to an unacceptable load imbalance in the platform. To prevent such situations,
an additional piece of information is used: the minimum stretch. The minimum stretch of the platform
is periodically aggregated from the leaf nodes to the root node, and then spread from the
root to all other nodes. Since we simply compute and transfer a minimum value, its size
is negligible, and this information may be included in other update messages.
Once the minimum local stretch S[(][i][)] is computed, we accept this local solution if and
only if S[(][i][)] _≤_ _B × Smin, for a given bound B. Otherwise, the entire request is sent to the_
father to look for a better allocation. This implements a tradeoff between performance
and load-balancing, so that small applications will remain local, and large applications can
go up the tree until they induce an acceptable slowdown. If parameter B is close to one,
most requests would need to go up the tree until enough execution nodes are in sight; if it
is larger, the ratio between the maximum and the minimum stretch in the whole platform
may be large, and the allocation less fair.
Once a routing node finds a suitable value for the minimum stretch on its local subtree,
the tasks in the request are ready to be split among the children. This is done following the
values of n(S) in the availability summary of each child: child j gets a share of the total
number of tasks proportional to its availability n[(][j][)](S). A new request is then sent to each
child, with the characteristics of the application, and the number of tasks it is in charge
of. This request is treated as previously, except that it cannot be rejected and forwarded
to its father.
When a routing node fails, the information on the availability of its subtree is lost.
As we pointed out in section 3.2, we assume that our scheduler is based on a tree-based
network overlay which already guarantees a high probability of reaching a stable state
when a node fails, by reconstructing and balancing the tree. Usually, the tree overlay
will recover by replacing the failed node by another one and performing some balancing
operations, like in [12]. While some routing nodes may change their positions, the sets
of execution nodes that lie under their branches are maintained. Thus, the availability
summaries must be recalculated only in such nodes, and possibly in the ancestor nodes up
to the root. The cost of this process is not higher than the cost of a normal update, so the
impact of such a failure on the global scheduling is mostly limited by the time needed
to recover the overlay. During the update of availability summaries, some applications
might be scheduled using improper information; however, this effect is mitigated if we rely
on an overlay which only performs local moves to balance the tree.
## 7 Experimental evaluation
Our proposal has been implemented in a platform simulator in order to test and validate
its performance. Using simulations allows us to conduct reproducible experiments, so as to
compare performance of several schedulers on the same scenarios. Our simulations do not
only provide performance measurements of the decentralized scheduler, but also several
results on the impact of its algorithms and protocols on the network and computational
resources of the platform. As we show, these results demonstrate the benefits of adopting
a decentralized scheduler in a desktop grid platform over its centralized counterpart.
### 7.1 Simulation settings
Simulations have been performed on networks including between 50 and 1000 nodes, where
execution nodes have a computing power in the interval [1000, 3000] with steps of 200, in
millions of instructions per second. The network is fully connected, i.e., there is a link
between any two nodes, with a mean link delay of 50 milliseconds and link bandwidth of
1 megabit per second. This scenario represents a desktop grid of home computers with
a modest Internet connection, with different sizes. Workload is generated as a Poisson
process, with a mean inter-arrival time of 10 seconds. With that time, applications arrive
while others are still being executed. The number of tasks per application is ten times the
size of the network, with a random variation of ±20%.
Our decentralized scheduler is tested with and without failures. Failures occur in each
node as a Poisson process with a rate of one failure every four hours. Failures are instantaneous, so
that nodes recover immediately, but with reset state. A failed node retains no availability
summaries from its children, and all the tasks that were waiting in its queue are aborted.
The tree is supposed to recover automatically.
Along with the simulation of our decentralized scheduler, two centralized schedulers
have also been tested with the same set of applications and under the same conditions.
Both simulate an online centralized scheduler with perfect information about the execution
nodes. The first one tries to minimize the maximum stretch, as in the decentralized version;
we call it MinCent for short. The second one implements a typical FCFS centralized
scheduler, similar to the one used by other popular desktop grid platforms, and thus is
not expected to achieve very good fairness among applications. The comparison of our
decentralized scheduler against both centralized models is interesting because it shows
the inevitable performance loss caused by the use of decentralized information compared to the
global scheduling algorithms, and the performance gain compared to classical schedulers
that do not focus on fairness.
### 7.2 Simulation results
In order to compare the performance of the different schedulers, we issued several simulations for various network sizes, with one hundred applications each, registering the
maximum stretch and calculating the mean value for each network size. Figure 4(a) plots
these values for the decentralized and FCFS schedulers, relative to the values obtained with
MinCent, against network size. As can be seen, with one thousand nodes, the performance loss of the decentralized scheduler without failures is still under 25%. As the
number of nodes increases, the performance is slightly reduced, because in a higher tree,
the information used by the upper levels is less detailed. In the scenario with failures, the
performance is noticeably reduced, as expected, but still better than that achieved by
the FCFS centralized scheduler. As expected, the classic scheduling policy behaves much
worse in terms of fairness, providing a maximum stretch almost twice as high under the
same conditions.
On the other hand, the appearance of failures has the side effect of reducing the difference between the stretches of different applications, actually providing better fairness. For
instance, for a network of one thousand nodes, the stretch of the applications without failures was in the interval [220 · 10^−8, 10876 · 10^−8], while in the case of failures it was in the
interval [10303 · 10^−8, 12533 · 10^−8]. We deduce that this is due to the fact that the scheduler
is able to further adjust the stretch of the applications with the resubmitted tasks.
The good performance ratio against the centralized scheduler with minimum stretch
objective shown in the previous results is due partly to the dynamic and fast update of
availability information throughout the tree when tasks arrive or finish at the execution
nodes. Having up-to-date availability information is decisive for the global scheduler efficiency. Figure 4(b) shows the maximum update time needed for different network sizes,
with an update rate of 10000 bytes per second. As can be seen, for the network of one
thousand nodes, a change in the local scheduler of any execution node can be propagated
Figure 4: Experimental results. (a) Maximum stretch obtained by the decentralized and FCFS centralized schedulers, relative to MinCent, against network size. (b) Maximum update time needed in networks of different sizes, for an update rate limit of 10000 Bps.
| Update rate (Bps) | 2500 | 5000 | 10000 | 20000 | 40000 |
|---|---|---|---|---|---|
| Max. update time (s) | 12.3 | 6.61 | 3.68 | 2.33 | 1.81 |
| Mean link usage | 2.41% | 3.78% | 4.28% | 5% | 5.84% |
| Peak link usage | 11.41% | 16.43% | 24.88% | 40.45% | 72.63% |

Table 1: Update time and link bandwidth usage for different update rate limits in a network of 1000 nodes.
in less than four seconds. As expected, larger network sizes make the distribution of the
update time shift to higher values. Even so, the difference is very small due to the
logarithmic increase of the tree height.
With higher update rate limits, shorter times can be achieved, nearly in inverse proportion. However, the
update rate limit is a critical parameter: we have recorded in the
simulation tests that update messages represent more than 95% of the traffic, due to the
size of an availability summary, so it must be increased carefully when better reactivity is
needed. Table 1 presents the update time and the mean and peak link bandwidth usage for different
update rate limits in a network of 1000 nodes. While mean link usage is quite low in every
case, peak usage rapidly increases. From a general point of view, mean usage shows that
the traffic generated by the platform protocols to schedule the submitted applications is
very low, even at the root node of the tree. However, traffic peaks may be too intrusive for
a desktop grid. Since the corresponding gain in update time is small, we consider the value
of 10000 bytes per second adequate for the kind of links used in the simulation.
The asymmetry of the tree causes higher levels to cope with higher peak bandwidth
usage than lower ones. However, the increase at each level is not constant, since after a
certain level the update rate limit prevents peak usage from growing further. For instance,
with an update rate limit of 10000 bytes per second in a network of 1000 nodes, the peak
bandwidth usage was between 24.5% and 25% in any level over the third lowest one.
Moreover, although not used in the scheduling algorithm, the traffic generated by task
data transfers has been measured and compared to the traffic generated by the platform
protocols. We use a task data size of 512 kilobytes, low enough to represent an application
whose transmission time does not affect scheduling, and we assume that repositories where
data is stored are not limited in bandwidth. Under these conditions, on each node, the
average data traffic is still 10 times larger than the traffic generated by the platform
protocols.
## 8 Conclusions and future work
In this paper, we have focused on the problem of scheduling concurrent bag-of-tasks applications on desktop grids. We have proposed a decentralized scheduling algorithm, which
makes it particularly convenient for large-scale distributed environments. The objective
of our scheduler is to ensure fairness among applications, by minimizing the maximum
slowdown, or stretch, of all applications.

Through extensive simulation tests, we have compared this scheduler to an online centralized version that also minimizes the maximum stretch, but needs perfect knowledge
of the platform, and to a classical FCFS scheduler, which is commonly used on desktop
grids. We show that the performance loss compared to the centralized scheduler is reasonable (below 25%), and that the achieved fairness is much better than with FCFS, even
under frequent node failures. We also carefully studied the resource usage of our algorithm,
which proves to use only a small quantity of network resources at each node of the
platform. Moreover, the CPU consumption of our approach at each node of the platform
is very low, due to the simplicity of the proposed algorithms. Thus, our decentralized
algorithm has a very low overhead together with great flexibility and robustness, which
makes it well suited for desktop grid platforms.

Our future work includes simulations to study the adaptation of our scheduler to
communication-intensive applications, by taking file size into account when allocating tasks
onto the platform. We also intend to improve our scheduler with more complex overlays
proposed for peer-to-peer platforms.
## Acknowledgment
The work of Javier Celaya has been supported by the CICYT DPI2006-15390 project of
the Spanish Government, grant B018/2007 of the Aragonese Government, grant TME2008-01125 of the Spanish Government, and GISED, a group of excellence recognized by the
Aragonese Government.
## References
[1] Aberer, K., Cudr´e-Mauroux, P., Datta, A., Despotovic, Z., Hauswirth, M., Punceva,
M., Schmidt, R.: P-Grid: A Self-organizing Structured P2P System. SIGMOD Rec.
32(3), 29–33 (2003)
[2] Al-Azzoni, I., Down, D.G.: Dynamic scheduling for heterogeneous Desktop Grids.
In: GRID ’08: Proceedings of the 9th IEEE/ACM International Workshop on Grid
Computing. pp. 136–143 (2008)
[3] Al-Kiswany, S., Ripeanu, M., Vazhkudai, S.S., Gharaibeh, A.: stdchk: A Checkpoint
Storage System for Desktop Grid Computing. In: ICDCS ’08: Proceedings of the
28th International Conference on Distributed Computing Systems. pp. 613–624. IEEE
Computer Society, Washington, DC, USA (2008)
[4] Anderson, D.P.: BOINC: A System for Public-Resource Computing and Storage.
In: GRID ’04: Proceedings of the 5th IEEE/ACM International Workshop on Grid
Computing. pp. 4–10. IEEE Computer Society, Washington, DC, USA (2004)
[5] Anglano, C., Brevik, J., Canonico, M., Nurmi, D., Wolski, R.: Fault-aware scheduling
for bag-of-tasks applications on desktop grids. In: GRID ’06: Proceedings of the 7th
IEEE/ACM International Conference on Grid Computing. pp. 56–63. IEEE Computer
Society, Washington, DC, USA (2006)
[6] Beaumont, O., Carter, L., Ferrante, J., Legrand, A., Marchal, L., Robert, Y.: Centralized versus Distributed Schedulers for Bag-of-Tasks Applications. IEEE Trans.
Parallel Distrib. Syst. 19(5), 698–709 (2008)
[7] Benoit, A., Marchal, L., Pineau, J.F., Robert, Y., Vivien, F.: Scheduling concurrent bag-of-tasks applications on heterogeneous platforms. To appear in IEEE
Transactions on Computers (2009). Preprint available online at
http://doi.ieeecomputersociety.org/10.1109/TC.2009.117
[8] Brasileiro, F., Araujo, E., Voorsluys, W., Oliveira, M., Figueiredo, F.: Bridging the
High Performance Computing Gap: the OurGrid Experience. In: CCGRID ’07: Proceedings of the 7th IEEE International Symposium on Cluster Computing and the
Grid. pp. 817–822. IEEE Computer Society, Washington, DC, USA (2007)
[9] Celaya, J., Arronategui, U.: YA: Fast and Scalable Discovery of Idle CPUs in a P2P
network. In: GRID ’06: Proceedings of the 7th IEEE/ACM International Conference
on Grid Computing. pp. 49–55. IEEE Computer Society, Washington, DC, USA (2006)
[10] Celaya, J., Marchal, L.: A fair distributed scheduler for bag-of-tasks applications
on desktop grids. Research Report RRLIP2010-07, LIP (2010). Available at
http://graal.ens-lyon.fr/~lmarchal
[11] Choi, S., Kim, H., Byun, E., Baik, M., Kim, S., Park, C., Hwang, C.: Characterizing
and Classifying Desktop Grid. In: CCGRID ’07: Proceedings of the Seventh IEEE
International Symposium on Cluster Computing and the Grid. pp. 743–748. IEEE
Computer Society, Washington, DC, USA (2007)
[12] Jagadish, H., Ooi, B.C., Vu, Q.H., Zhang, R., Zhou, A.: VBI-Tree: A Peer-to-Peer
Framework for Supporting Multi-Dimensional Indexing Schemes. In: Proceedings of
the 22nd International Conference on Data Engineering, 2006. ICDE ’06. p. 34 (2006)
[13] Kondo, D., Chien, A.A., Casanova, H.: Resource Management for Rapid Application Turnaround on Enterprise Desktop Grids. In: SC ’04: Proceedings of the 2004
ACM/IEEE conference on Supercomputing. p. 17. IEEE Computer Society, Washington, DC, USA (2004)
[14] Legrand, A., Su, A., Vivien, F.: Minimizing the stretch when scheduling flows of
biological requests. In: SPAA ’06: Proceedings of the 18th annual ACM symposium
on Parallelism in algorithms and architectures. pp. 103–112. ACM, New York, NY,
USA (2006)
[15] Li, M., Lee, W.C., Sivasubramaniam, A.: DPTree: A Balanced Tree Based Indexing Framework for Peer-to-Peer Systems. In: ICNP '06: Proceedings of the 2006 IEEE International Conference on Network Protocols. pp. 12–21.
IEEE Computer Society (2006)
[16] Litzkow, M.J., Livny, M., Mutka, M.W.: Condor-a hunter of idle workstations. In:
ICDCS: Proceedings of the 8th International Conference on Distributed Computing
Systems (1988)
[17] Modi, P.J., Shen, W.M., Tambe, M., Yokoo, M.: Adopt: asynchronous distributed
constraint optimization with quality guarantees. Artif. Intell. 161(1-2), 149–180 (2005)
[18] Ramamritham, K., Stankovic, J.A., Zhao, W.: Distributed Scheduling of Tasks with
Deadlines and Resource Requirements. IEEE Trans. Comput. 38(8), 1110–1123 (1989)
[19] SETI@home, http://setiathome.berkeley.edu/
[20] Shirts, M., Pande, V.: Screen Savers of the World Unite! Science 290(5498), 1903–
1904 (2000)
[21] da Silva, D.P., Cirne, W., Brasileiro, F.V.: Trading Cycles for Information: Using
Replication to Schedule Bag-of-Tasks Applications on Computational Grids. In: Proceedings of the 9th International Euro-Par Conference. pp. 169–180 (2003)
[22] Viswanathan, S., Veeravalli, B., Robertazzi, T.G.: Resource-Aware Distributed
Scheduling Strategies for Large-Scale Computational Cluster/Grid Systems. IEEE
Trans. Parallel Distrib. Syst. 18(10), 1450–1461 (2007)
-----
[23] World community grid, http://www.worldcommunitygrid.org/
[24] Zhou, D., Lo, V.M.: WaveGrid: a scalable fast-turnaround heterogeneous peer-based
desktop grid system. In: IPDPS ’06: Proceedings of the 20th International Parallel
and Distributed Processing Symposium (2006)
_Security Comm. Networks 2015; 00:1–11_
DOI: 10.1002/sec
### RESEARCH ARTICLE
# A Speculative Approach to Spatial-Temporal Efficiency with Multi-Objective Optimisation in a Heterogeneous Cloud Environment
### Qi Liu[1], Weidong Cai[1], Jian Shen[2], Zhangjie Fu[3][*], Xiaodong Liu[4], and Nigel Linge[5]
1School of Computer and Software, Nanjing University of Information Science and Technology, Nanjing, China
2Jiangsu Engineering Centre for Network Monitoring, Nanjing University of Information Science and Technology, Nanjing, China
3Jiangsu Collaborative Innovation Center of Atmospheric Environment and Equipment Technology (CICAEET), Nanjing University of
Information Science and Technology, Nanjing, China
4School of Computing, Edinburgh Napier University, 10 Colinton Road, Edinburgh EH10 5DT, UK
5School of Computing Science and Engineering, University of Salford, Salford, UK
## ABSTRACT
A heterogeneous cloud system, e.g. a Hadoop 2.6.0 platform, provides distributed but cohesive services with rich features
on large-scale management, reliability and error tolerance. As far as big data processing is concerned, newly built cloud clusters
meet the challenges of performance optimisation focusing on faster task execution and more efficient usage of computing
resources. Currently proposed approaches concentrate on temporal improvement, i.e. shortening MapReduce (MR) time,
but seldom focus on storage occupation; however, unbalanced cloud storage strategies could exhaust those nodes with
heavy Map/Reduce cycles, and further challenge the security and stability of the entire cluster. In this paper, an adaptive
method is presented aiming at spatial-temporal efficiency in a heterogeneous cloud environment. A prediction model
based on an optimised K-ELM algorithm is proposed for faster forecast of job execution duration and space occupation,
which consequently facilitates the process of task scheduling through a multi-objective algorithm called TS-NSGA-II.
Experiment results have shown that, compared to the original load-balancing scheme, our approach can save approximately
47-55 seconds on average on each task execution. Simultaneously, a difference of only 1.254‰ in hard disk occupation was
achieved among all scheduled reducers, a 26.6% improvement over the original scheme. Copyright © 2015
John Wiley & Sons, Ltd.
**KEYWORDS**
MapReduce; Cloud Storage; Load Balancing; Multi-Objective Optimisation; Prediction Model
*Correspondence
Zhangjie Fu, School of Computer and Software, Nanjing University of Information Science and Technology Nanjing, China. E-mail:
[email protected]
Received . . .
## 1. INTRODUCTION
In recent years, distributed computing has been widely
investigated and deployed in both academic and industrial
fields due to its features of large scale, virtualization,
failure control among connected components, and asynchronous communication. Cloud computing, as one of the
successful commercial distributed systems, provides users
with on-demand services by allocating rational computing
and storage resources transparently [1, 2].
The MapReduce paradigm proposed by Google is being
exploited by a fast-growing number of companies and
research institutes [3]. Hadoop, as an open-source implementation provided by Apache, gives them
a good chance to conduct efficient big data processing
and discover potential and valuable information in a
non-traditional way. Enterprises and companies therefore
benefit from analysing and dealing with real-time data. At
the moment, data analysis applications in a cloud have
shown different complexity, resource requirements and
data delivery deadlines; such diversity has created new
requirements of job scheduling, workload management
and program design in a cloud. Several projects have
been launched to reduce the challenges of writing complex
programs for data analysis and/or data mining, e.g. Pig
[4] built upon the MapReduce engine in the Hadoop
environment. In addition, HBase [5] and Hive [6],
implemented by Apache, are widely used in a cloud
environment to achieve better performance. In these
applications, however, low-level improvement based on
MapReduce is still required due to its direct interaction
with HDFS (Hadoop Distributed File System) [7]. An
outstanding strategy that improves the security and
stability of a cloud system is necessary.
While the optimization of job scheduling in MapReduce
has been widely studied in recent work [8-20],
current Hadoop systems still suffer from poor load-scheduling strategies due to their lack of consideration of
cloud storage usage, which can bring heavy loads
on certain datanodes and therefore cause a long delay on
the total execution. Although theoretically infinite computing
resources can be provided in a cloud system, unreasonably
increasing the number of mappers/reducers cannot achieve processing
efficiency, and may even waste more storage.
A scheme is therefore presented in this paper to achieve
process efficiency and load balance in a cloud system both
spatially and temporally. Our contributions are threefold,
as follows:
(1) A prediction model called PMK-ELM is first built,
providing predictions of the number of reducers
needed for newly arriving tasks, as well as of the
execution duration and storage space they may take.

(2) An optimized algorithm based on NSGA-II [21],
called TS-NSGA-II, is then designed to maintain
an equalized status in which the total time to
complete the job parts distributed to each reducer is
almost the same, while keeping the hard disk
space ratios similar.
(3) A practical Hadoop environment is constructed
to verify the feasibility and performance of the
scheme.
The remainder of this paper is organized into five
sections. Related work on load balancing is reviewed in
Section II. In Section III, preliminaries of core algorithms
manipulated in our approach are introduced. Section IV
explains the adaptive method to achieve fair loads during
map and reduce processes. Results are presented and
evaluated in Section V with a comparison of corresponding
algorithms. Finally, Section VI concludes the paper and
identifies potential future work.
## 2. RELATED WORK
A balanced load is hard to achieve due to the
imbalanced input data of the Reduce phase. [9] proposed an
optimization method: by repartitioning the input data
of map and reduce tasks, all available datanodes can
complete their tasks at the same time. This method can handle
all kinds of load skew, but it is difficult to
implement and it greatly changes Hadoop. Also,
the extra task reassignment would produce additional network
overhead. Partition methods are also research hotspots,
such as methods based on historical data [10] and
on sampling results [11], which can allocate input data
to different nodes more flexibly. Though these methods
can achieve dynamic load balancing, their performance
was not verified in an actual Hadoop system.
Through offline and online analysis, resource requirements
can be predicted by using a benchmark or real application
workloads, for example, [12] proposed a prediction model
based on SVM in a heterogeneous environment. Combined
with an adaptive algorithm HAP, it can be used for
predicting the amount of data assigned to different task
nodes. However, the reduce tasks required repeated cutting
and consolidation of data blocks, which can lead to
extra time costs. In addition, the training phase of SVM
would require much time. A prediction model focused on
resource consumption of MapReduce processes, based on
a classification and regression tree, was presented by Jing
et al. [16].
The efficiency of virtualization deployment has been
extensively studied. [13] proposed a general method
for estimating resource requirements when running
applications in a virtual environment. [14] studied the
resource requirement of starting a new virtual instance.
Through a resource prediction model, dynamic resource
provision was achieved in a cloud environment. Metrics
for performance and load efficiency assessment in cloud
systems have also gained much attention. [15] described a
method for more accurate assessment of distributed cloud
application performance.
Besides the above methods, some researchers are
studying how to optimize the speculative execution strategy
in MapReduce. A key advantage of MapReduce is its
automatic handling of failures. Its high fault tolerance makes
it easier for a programmer to use. If a node collapses,
MapReduce will restart the task on a different machine.
Several speculative execution strategies have been proposed
in the literature. Google only started backup tasks
when a job was close to completion; their experiments
showed that the proposed speculative execution can reduce
job execution time by 44% [3]. In order to improve
the performance of the cluster, Hadoop and Microsoft
Dryad [31] also provided implementations of a speculative
execution strategy. At first, their strategy was roughly the
same as that proposed by Google. However, an optimized
speculative execution called Longest Approximate Time
to End (LATE) algorithm was proposed in which a
different metric was defined to start tasks for speculative
execution. The remaining time was estimated, rather than
considering the progress of the current task. LATE gave
a clearer assessment of the impact of struggling tasks
on the overall job running time. However, the time that
every stage occupies is not stable, and the standard deviation (std) used in LATE cannot be
applied to all applications. Qi et al. therefore proposed
MCP to overcome the disadvantages of LATE. MCP
identifies slow tasks based on the average progress rate of
the whole cluster, though in reality the progress rate can
be unstable. Struggling tasks can be appropriately judged in
homogeneous environments. However, there are still disadvantages
in MCP, including its reliance on the average progress rate and
its mediocre performance in heterogeneous environments.
Data placement schemes have also been researched. To
address this problem, a new Data-gRouping-Aware data
placement scheme was proposed in [19]. It extracts optimal
data groupings and re-organizes data layouts to achieve
load balancing per group. CoHadoop was proposed
in [20]; it permits applications to decide where data
should be stored. However, these schemes are aimed at
data placement at the time the data are stored and are not fit for
MapReduce. Furthermore, they cannot be applied once the
data have been stored.
Comprehensive load and usage efficiency have been largely
improved in distributed environments. However,
it is still challenging to achieve spatial-temporal efficiency
in a cloud system, especially in a heterogeneous one.
## 3. PRELIMINARIES
A detailed introduction to some advanced techniques used
in this paper is given in this section.
**3.1. MapReduce**
In MapReduce, computation is implemented
through map tasks and reduce tasks. Map tasks put
different pairs of data into multiple lists grouped by
different keys. So, data having the same key are distributed
to the same list. Then, results generated by map tasks, as
intermediate data, are pulled by reduce tasks to process
further and get the final results [22].
MapReduce jobs are divided into multiple tasks, then,
these generated tasks are distributed to nodes and executed
in the cluster. Map tasks are partitioned into different
datanodes according to a logical split of input data
that generally resides on HDFS [23]. Reduce tasks are
produced according to an equation in the reduce stage. A
map task reads data from HDFS as input; the map
functions designed by the user are then applied and their
results are put into buffers. These data are written to the memory of
the node executing the map task while they are smaller than the
user-set threshold; otherwise, they are spilled to
the hard disk of the node. There are three phases in reduce
tasks, called shuffle (copy), sort (merge), and reduce. In the
shuffle phase, the reduce tasks pull the intermediate data
files generated by the map tasks. Then, the intermediate
files from all the map tasks are sorted in the following
phase. After all the intermediate data are shuffled and
transferred, the reduce phase starts working.
Job scheduling in Hadoop is performed by the
namenode, which manages a number of datanodes in the
cluster. In MapReduce 2.x, each datanode prepares
containers for map tasks and reduce tasks; a container can be
seen as an abstraction of resources used to execute
a task. The numbers of map and reduce containers
are calculated from the configuration file. The Application Master
periodically checks the heartbeats coming from datanodes
and evaluates the reported state of free resources and the
current progress of the tasks they are currently executing.
**3.2. Basic ELM**
Recently, Artificial Neural Networks (ANNs) have been
widely applied in applications involving classification
or function approximation [24]. However, they also
suffer from low learning speed, which has become the
main bottleneck when applying an ANN algorithm to
practical applications. In order to overcome this drawback,
many researchers explore the approximation capability
of feedforward neural networks, especially in a limited
training set, from the point of view of mathematics. A
novel machine learning algorithm called Extreme Learning
Machine (ELM) [25, 26] was therefore designed based
on Single-hidden Layer Feedforward Neural networks
(SLFNs) [27].
Let X = {x_1, x_2, ..., x_N | x_i ∈ R^D, i = 1, 2, ..., N}
denote the training set with N samples, where D is the
dimension. Let Y = {y_1, y_2, ..., y_N} denote the
vectorised labels, where column j (j = 1, 2, ..., P) is set
to 1 for class j while the other columns are set to 0, and P is
the number of classes. Then, the model of a single
hidden layer neural network having L hidden neurons and
an activation function g(x) can be expressed as

$$\sum_{j=1}^{L} \beta_j \cdot g(\langle w_j, x_i \rangle + b_j) = y_i \qquad (1)$$
where i = 1, 2, ..., N; w_j and β_j represent the weight
vectors from the inputs to the hidden layer and from the hidden layer
to the output layer, respectively; b_j is the bias of the jth hidden
neuron; and g(⟨w_j, x_i⟩ + b_j) is the output of the jth hidden
neuron with respect to the input sample x_i. Note that (1)
can be rewritten in a compact form as
H · β = Y′ (2)
where H is the hidden layer output matrix of SLFNs
and β is the output weight matrix, Y _[′]_ is the transpose
of Y . Optimal weights and bias of SLFNs can be found
by using back propagation learning algorithms, which
requires users to specify learning rates and momentum.
However, there is no guarantee that the global minimum
error rate can be found. Thus, the learning algorithm
suffers from local minima and over-training. In exploring
the approximation capability of feedforward neural
networks on a finite training set, it has been found that SLFNs
can reach the approximation capacity at a specified error
level ε (ε > 0) with far fewer hidden layer neurons
than training samples. Based on the
minimum norm least-squares solution, the weight matrix
β in (2) can be solved by
β = H^+ · Y (3)

where H^+ is the Moore–Penrose generalized inverse of the matrix H.
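A hedged NumPy sketch of this training procedure follows (not the paper's code): hidden weights and biases are drawn at random, and the output weights come from the pseudo-inverse of the hidden-layer output matrix H, as in Eq. (3). The choice g = tanh is an assumption of the example.

```python
import numpy as np

def elm_train(X, Y, L, seed=0):
    """X: (N, D) inputs; Y: (N, P) targets; L: number of hidden neurons."""
    rng = np.random.default_rng(seed)
    D = X.shape[1]
    W = rng.standard_normal((D, L))   # input-to-hidden weights w_j
    b = rng.standard_normal(L)        # hidden biases b_j
    H = np.tanh(X @ W + b)            # hidden-layer output matrix (g = tanh)
    beta = np.linalg.pinv(H) @ Y      # Eq. (3): beta = H^+ Y
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```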
**3.3. K-ELM**
K-ELM (Kernel ELM) simplifies the ELM algorithm and
improves its running speed. Meanwhile, it improves the simulation precision
and the fitting ability of the original
ELM algorithm. In K-ELM, a positive number is added to
the diagonal of H^T H or HH^T, which makes the ELM
algorithm more stable and gives it better generalization
performance [28, 29]. The prediction model established
from the training set can be described as:
Minimum value:

$$L_{PELM} = \frac{1}{2}\|\beta\|^2 + \frac{C}{2}\sum_{i=1}^{N}\|\xi_i\|^2 \qquad (4)$$
Constraint:

$$h(x_i)\beta = y_i^T - \xi_i^T, \quad i = 1, 2, ..., N \qquad (5)$$

where β = [β_1, β_2, ..., β_L] is the weight vector of the hidden
layer outputs, C is the ridge regression parameter, ξ_i is the
error vector between the expected outputs and the training outputs,
and h(x_i) is the output vector of the hidden neurons corresponding to
the training sample x_i. Finally, the output function of ELM
regression can be expressed as
$$f(x) = h(x)H^T\left(\frac{I}{C} + HH^T\right)^{-1}T = \begin{bmatrix} K(x, x_1) \\ \vdots \\ K(x, x_N) \end{bmatrix}^T \left(\frac{I}{C} + \Omega_{ELM}\right)^{-1}T \qquad (6)$$
Similar to SVM, kernel ELM (kernel-based ELM,
K-ELM) does not require setting the number of neurons in the
hidden layer or the type of activation function. Common
kernel functions are shown below.

Linear: K(x_i, x_j) = x_i · x_j
Polynomial: K(x_i, x_j) = (x_i · x_j + b)^d, b ≥ 0
RBF: K(x_i, x_j) = exp(−σ||x_i − x_j||²), σ > 0
Sigmoid: K(x_i, x_j) = tanh(a x_i · x_j + b), a > 0, b < 0
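The kernel form of Eq. (6) can be sketched as follows with an RBF kernel; this is a hedged illustration (the function names and the use of a linear solve instead of an explicit inverse are choices of the example, not the paper's code).

```python
import numpy as np

def rbf_kernel(A, B, sigma):
    """K(a, b) = exp(-sigma * ||a - b||^2) for all pairs of rows."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-sigma * d2)

def kelm_fit(X, T, C, sigma):
    """Precompute alpha = (I/C + Omega_ELM)^{-1} T from Eq. (6)."""
    omega = rbf_kernel(X, X, sigma)
    return np.linalg.solve(np.eye(len(X)) / C + omega, T)

def kelm_predict(Xq, X, alpha, sigma):
    """f(x) = [K(x, x_1), ..., K(x, x_N)] alpha."""
    return rbf_kernel(Xq, X, sigma) @ alpha
```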
**3.4. NSGA-II**

NSGA-II, as one of the multi-objective optimization algorithms, has many operations that are the same as those
in a GA. For example, in NSGA-II the population undergoes
initialization, crossover and mutation as usual. However,
there are three main differences:

(1) each chromosome is sorted, based on non-domination sorting, into a front to obtain a fitness
value;

(2) the crowding distance, used to measure the diversity of
the population, is employed to decide the distance
between individuals;

(3) the population formed by the current population and the
current offspring (obtained by crossover and
mutation) is sorted again based on the rank and the
crowding distance.

After that, the best N (population size) individuals are
selected to be the next generation. The main consideration
in the design of the NSGA-II algorithm consists of six
aspects, involving code generation, determination of the
initial population, fitness evaluation, selection, crossover
and mutation. The detailed procedure is shown in Figure 1.

**Figure 1. Flow chart of NSGA-II**
## 4. APPROACH TO LOAD BALANCING
**4.1. A method for partition reconstruction**
MapReduce uses a hash function as the original partition
function, by which splits are generated and distributed
to different reducers. The original hash function may
lead to severe load skew, especially in a heterogeneous
environment, which will decrease the speed of some nodes.
Moreover, the overall job finishing time is decided by the
node that finishes its task last, according to the wooden-barrel
effect. Algorithm 1 depicts the way a fair size of
splits is ensured for distribution, which helps the
system dispense different volumes of data to nodes with
different computing capacities. Before starting
the work, we run the WordCount application on each node
separately to get the approximate capacity of each node.
Then, the volume of data is handed out according to the
different capacities. According to Algorithm 1, the partition
list has a relatively balanced data amount with respect to
the different capacities.
**Algorithm 1 Partition Reconstruction**
**Input:**
The input size of the reduce stage, size;
The number of data chunks, number;
**Output: partition list**
Get the list of capacities of each server Lc
Set iterator = 1
**for iterator < number, iterator++ do**
  Get the ratio_list according to ratio_list = capacity / avg_capacity
  Maxr = Max(ratio_list)
  Minr = Min(ratio_list)
  **if Maxr/Minr > 1.5 then**
    Maxr = Minr = (Maxr + Minr)/2
    Add Minr and Maxr to ratio_list
  **else**
    Break
  **end if**
**end for**
partition_list = size ∗ ratio_list
**return partition_list**
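A hedged Python sketch of this procedure is given below. The pseudocode leaves some details open (in particular how the averaged ratios replace the extreme ones, and how the final list is normalized), so those choices are assumptions of the example.

```python
def partition_reconstruction(size, capacities, number):
    """Split `size` units of reduce input proportionally to node capacities,
    clipping extreme ratios until max/min <= 1.5 (at most `number` rounds)."""
    avg = sum(capacities) / len(capacities)
    ratios = [c / avg for c in capacities]
    for _ in range(number):
        i_hi = ratios.index(max(ratios))
        i_lo = ratios.index(min(ratios))
        if ratios[i_hi] / ratios[i_lo] <= 1.5:
            break
        mid = (ratios[i_hi] + ratios[i_lo]) / 2
        ratios[i_hi] = ratios[i_lo] = mid      # assumed: extremes are replaced
    total = sum(ratios)
    return [size * r / total for r in ratios]  # assumed normalization
```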
**4.2. A prediction model for load balancing based**
**on K-ELM**
In this section, the training set is set as TS = {time,
reducer_no, datanode_no, input_size, shuffle_size},
where reducer_no represents the reducer number;
it also indicates the order in which the reducers
run. datanode_no represents the number of a datanode.
Generally, a datanode can be mapped to several reducers.
Here input size does not represent the input size of the
whole task, but the input size of reducers at the reduce
stage. shuffle size denotes the data size of a reducer
that needs to shuffle when map processes have finished.
In detail, the building process of the execution-time
prediction model based on K-ELM (PMK-ELM) is as
follows:
Step 1: Data pre-processing. First, samples that contain
great network congestion are removed. Then the trimmed
datasets are divided into training samples and test samples.
The training samples are used for training the prediction
model, whereas the test ones are for checking if the
prediction model has been well trained.
Step 2: Model training. To build the K-ELM prediction
model (PMK-ELM), training parameters of the model are
obtained by using the training set sample generated by Step
1. The specific processes are as follows:
(1) Randomly generate the weights w between the input
layer and the hidden layer, and the threshold
values b of the hidden layer neurons;

(2) Use the hidden layer neuron activation function to
calculate the hidden layer output matrix H;

(3) Work out the output layer weights.
Step 3: Data validation. Datasets generated by Step 1
are used to validate the PMK-ELM algorithm. According
to the parameters trained in Step 2, the predictive values
of test sets can be retrieved, which are then compared with
the actual values to verify the prediction performance of
the model.
**4.3. TS-NSGA-II**
**4.3.1. Mathematical model**
When a map task is completed, the data will be shuffled
and merged, and then assigned to different reducers;
however, the amount of data assigned to each reducer is
not equal, which consequently causes an uneven allocation
of reducers to datanodes. In order to make reduce tasks
consume less time and less hard disk space, the
following conditions should be satisfied:
(1) The data amount handled by a reducer assigned to
a datanode cannot be more than disk usage of the
datanode;
(2) A reducer can only be assigned to a datanode, but a
datanode can handle multiple reducers, as in Figure
2.
Although an actual reduce process is parallel, it is
modelled as a virtual serialized line. A datanode called
F is further abstracted so that when the procedure arrives
at F, the reduce task is completed, as shown in Figure 3.
Assume that the output of the map tasks can be randomly
divided into m data chunks and that there are n datanodes in
the cluster. If t_mn represents the execution time that each
reducer needs, then the execution time of each split can be
noted as a matrix M_t, as shown below:

$$M_t = \begin{pmatrix} t_{11} & \cdots & t_{1n} \\ \vdots & \ddots & \vdots \\ t_{m1} & \cdots & t_{mn} \end{pmatrix}$$
**Figure 2. Relationship between datanodes and reducers**
**Figure 3. Virtual serialization**
In order to evaluate the usage of storage space, the
percentage ps_mn of the input size s_mn over the total unused
size sl_mn is calculated:

ps_mn = s_mn / sl_mn (7)

Then the hard disk space ratio of each split can be
described by a matrix M_s:

$$M_s = \begin{pmatrix} ps_{11} & \cdots & ps_{1n} \\ \vdots & \ddots & \vdots \\ ps_{m1} & \cdots & ps_{mn} \end{pmatrix}$$
Finally, the elements of M_t and M_s are combined to
form a new matrix M with elements (t, ps)_mn, as shown below:

$$M = \begin{pmatrix} (t, ps)_{11} & \cdots & (t, ps)_{1n} \\ \vdots & \ddots & \vdots \\ (t, ps)_{m1} & \cdots & (t, ps)_{mn} \end{pmatrix}$$
The real execution time of datanode i can be described
as ti, whereas the split size can be represented as Si.
Accordingly, the real processing results list L can be
calculated as:
_L = {(t, ps)1, (t, ps)2, ..., (t, ps)n}_
Here, two objective functions can be formulated as shown
in (8) and (9), whereas the constraints are shown in (10)
and (11); in (10), InSum represents the total sum
of the reduce input sizes, and $\bar t$ and $\overline{ps}$ denote the corresponding average values.

$$\min T = \sum_{i=1}^{n} \left|\frac{\bar t - t_i}{\bar t}\right| \qquad (8)$$

$$\min S = \sum_{i=1}^{n} |ps_i - \overline{ps}| \qquad (9)$$

$$InSum = \sum_{i=1}^{n} s_i \qquad (10)$$

$$t_i > 0, \quad ps_i > 0 \qquad (11)$$
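A hedged sketch of how a candidate assignment can be scored against objectives (8) and (9) follows; the list-based representation of the assignment is an assumption of the example.

```python
def objectives(assignment, Mt, Ms):
    """assignment[i] = datanode chosen for reducer i; Mt[i][j], Ms[i][j] as above.

    Returns (T, S): deviation of per-reducer times from their mean (Eq. 8)
    and deviation of space ratios from their mean (Eq. 9).
    """
    t = [Mt[i][j] for i, j in enumerate(assignment)]
    ps = [Ms[i][j] for i, j in enumerate(assignment)]
    t_bar = sum(t) / len(t)
    ps_bar = sum(ps) / len(ps)
    T = sum(abs((t_bar - ti) / t_bar) for ti in t)
    S = sum(abs(pi - ps_bar) for pi in ps)
    return T, S
```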
**4.3.2. Design of TS-NSGA-II**
The design of the algorithm consists of six aspects,
including determination of the initial population, fitness
evaluation, selection, mutation, code generation and
crossover. Major changes have been made to the latter two.
(1) Code generation
Non-negative integers are used as the indices of
reducers, i.e. 0, 1, 2, ..., M − 1 for M reducers.
On the other hand, the N datanodes are
indexed using positive integers, i.e. 1, 2, ..., N. In
this case, the distribution of M reducers to N datanodes
may generate N^M possible combinations.
(2) Crossover
The original NSGA-II algorithm uses Simulated
Binary Crossover (SBX) [19] in this stage; however,
in our scheme, a crossover probability called pc is
used for better grouping after selection. The
crossover stage in this scheme consists of two
steps:
1) Randomly match a group of chromosomes;
2) During matching chromosomes, randomly
set intersections to make matched individual
chromosomes exchange their information.
Chromosomes should always be kept as permutations,
so the procedure of crossover is: after randomly
selecting paired chromosomes, two crossover
positions are randomly generated; the cross section
of elements on the other side of the parent is also
removed. Then, the new cross section is added to the
sequence of the parent from which some of the
elements have been cut out. Take a pair of chromosomes as an
example, where chromosome A = 2313|1122|32 and
chromosome B = 3123|2213|12; the cross sections
are delimited by vertical bars. First, the elements
corresponding to |1122| of A are removed from B, so
B′ = 312312; then the gene fragment of A is added to
B, so the offspring B″ is 3123|1211|22. Similarly,
the offspring A″ is 2313|3222|13. For the newly produced
offspring A″ and B″, it must be decided whether
the total data size exceeds the storage quota.
If not, they are regarded as valid; otherwise,
the operation is iterated. The complete procedure
of the algorithm is shown in Algorithm 2:
**Algorithm 2 Crossover**
**Input:**
The list of chromosomes, Li;
Crossover probability, pc;
The hard disk space ratio of each split, PSmn;
**Output: New list of chromosomes, NewLi**
Randomly match a pair of individuals in Li according
to pc, noted as A and B
**while true do**
Randomly generate two numbers not larger than the
length of A, described as m, n (m ≤ n)
Divide A into 3 parts: SeqAm, SeqAc, SeqAn
Do the same operation to B
Get SeqBm, SeqBc, SeqBn
A′ = SeqAm ∪ SeqAn, B′ = SeqBm ∪ SeqBn
A′′ = A′ ∪ SeqBc, B′′ = B′ ∪ SeqAc
Get the ps according to psmn
**if ps is smaller than 1 then**
Break
**else**
Continue
**end if**
**end while**
Replace A with A′′ and B with B′′ in NewLi
**return NewLi**
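A hedged Python sketch of a simplified two-point variant of this crossover follows; the feasibility callback and the retry loop are assumptions of the example, not the authors' implementation.

```python
import random

def crossover(A, B, ps_ok, max_tries=100):
    """Exchange a random middle section between assignment vectors A and B.

    ps_ok(child) should return True when every datanode's disk-space ratio
    stays below 1 for the candidate child assignment.
    """
    for _ in range(max_tries):
        m, n = sorted(random.sample(range(len(A) + 1), 2))
        A2 = A[:m] + B[m:n] + A[n:]   # A's middle section replaced by B's
        B2 = B[:m] + A[m:n] + B[n:]   # and vice versa
        if ps_ok(A2) and ps_ok(B2):
            return A2, B2
    return A, B                        # fall back to the parents
```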
## 5. EXPERIMENT AND ANALYSIS
In order to test the performance and benefits of the
load balancing scheme, a practical heterogeneous cloud
testing environment was implemented, which consists of
a desktop computer and a server. The server has 288 GB
of memory and 10 TB of SATA hard disks. The desktop
contains 12GB of memory, a single 500GB disk and a Core
2 Quad processor. Eight virtual machines were created in
the server with different amounts of memory and number
of shared processors. The detailed information is shown in
Table I.
**Table I. The detailed information of each virtual machine**
**NodeId** **Memory(GB)** **Core processors**
Node1 10 8
Node2 8 4
Node3 8 1
Node4 8 8
Node5 4 8
Node6 4 4
Node7 18 4
Node8 12 8
The K-means (KM) and WordCount applications were used to evaluate the performance of the load balancing scheme. The K-means clustering workload, with 26 GB of free datasets, was taken from the Purdue MapReduce Benchmarks Suite, and a free dataset of 50 GB was selected as the input for the WordCount workload [30]. The dataset sizes are listed in Table II.
A Genetic Algorithm (GA) was employed to generate the parameters required by PM-SVM and PMK-ELM. In the experiments, max gen was set to 200, the ranges of C and b were 0 to 1000, σ and p were set between 0 and 100, and the population size was 50. The parameters generated by the GA are shown in Table III. MAPE was used to evaluate the results, following the method in [12].
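For reference, a small sketch of the MAPE metric under its standard definition (we assume nonzero actual values):

```python
def mape(actual, predicted):
    """Mean Absolute Percentage Error, assuming the standard
    definition: mean(|a - p| / a) * 100%."""
    return 100.0 * sum(abs(a - p) / a
                       for a, p in zip(actual, predicted)) / len(actual)
```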
**Table III. The best parameters generated by GA**

           K-Means             WordCount
           PMK-ELM   PM-SVM    PMK-ELM   PM-SVM
C           15.838     -        20.521     -
σ            0.069     -         0.867     -
b              -      2.285        -      6.961
p              -     41.967        -     16.583
MAPE        10.05%   10.60%    12.64%    13.42%
The results in Table IV are averages over 50 runs. The training time of PMK-ELM is almost 80 times shorter than that of PM-SVM and, for both groups, the test time of PMK-ELM is about 80 times shorter as well. The accuracy of PMK-ELM is also higher than that of PM-SVM.
All our test applications were built on Hadoop 2.6.0. Following the Apache Hadoop documentation, mapreduce.tasktracker.reduce.tasks.maximum was set to 1.
Overall, the testing process was conducted in three stages.
(1) Dataset Collection. A Hadoop analysis tool was implemented to collect historical data.
(2) Execution Time Prediction. PMK-ELM was used to predict the execution time of the next reduce tasks.
(3) Load Balancing. The core MRContainerAllocator class of the Hadoop system was modified to apply the results generated by TS-NSGA-II.
**5.1. Evaluation of PMK-ELM**
To evaluate the performance of PMK-ELM, different input sizes and different numbers of reducers were tested in the experiments, as listed in Table II. The SVM-based predictor (PM-SVM) proposed in [12] was also replicated in the testing environment for comparison purposes. A log analysis tool was developed to collect the training and test sets.
**Table II. Experiment parameters**

            Dataset size   Training dataset   Testing dataset
            (pieces)       size (pieces)      size (pieces)
K-Means        910              800                110
WordCount      800              700                100
**Table IV. The performance comparison between PMK-ELM and PM-SVM**

                       Training Time (sec)   Testing Time (sec)
K-Means    PMK-ELM           0.055                 0.004
           PM-SVM            4.462                 0.250
WordCount  PMK-ELM           0.043                 0.030
           PM-SVM            3.324                 0.307
**Figure 4. Comparison between PMK-ELM and PM-SVM in**
execution time of K-Means
**Figure 5. Comparison between PMK-ELM and PM-SVM in**
execution time of WordCount
Figures 4, 5, 6 and 7 depict the detailed results of PMK-ELM and PM-SVM. In Figures 4 and 5, the PMK-ELM curve lies closer to the real values than that of PM-SVM in both groups; this is most apparent at the peaks. Although the values predicted by PMK-ELM are not very accurate under some circumstances, its accuracy is relatively high compared with PM-SVM. In Figures 6 and 7, the errors of PMK-ELM are concentrated near 0, while those of PM-SVM are more widely spread. The trend shown in these figures is consistent with that of Figures 4 and 5, confirming that the performance of PMK-ELM is better than that of PM-SVM. Furthermore, when the training and test times are taken into consideration, PMK-ELM is clearly the better choice.
**Figure 6. Distribution of error of K-Means**
**Figure 7. Distribution of error of WordCount**
**5.2. The performance of proposed load**
**balancing scheme**
In this section, the K-Means experiment is first run once with its execution time and hard disk usage recorded. The corresponding results are shown in Tables V and VI. From Tables V and VI, we can see that Reducer3 and Reducer6 consumed the most time when executing the task, so the overall execution time is determined by the longest-running reducer. In Table VI, Node7 did not take part in the task, even though it has good performance and might help the overall task finish earlier.
Then, we deleted the results generated by the application, applied PMK-ELM and TS-NSGA-II to it, and obtained better performance.
The points shown in Figure 8 and Figure 9 are all
**Table V. Hard disk space change with original Hadoop settings**
**Before** **After**
**NodeId**
**Execution(GB)** **Execution(GB)**
Node1 405.16 403.08
Node2 406.79 404.69
Node3 404.82 402.75
Node4 412.36 410.23
Node5 405.09 402.83
Node6 413.44 411.32
Node7 404.71 404.71
Node8 404.51 402.11
the feasible solutions created by our scheme in the two groups of experiments. Our scheme randomly chooses one solution from each group: Group A = {1,4,6,2,8,5,3}, which assigns reducer0 to datanode1, reducer1 to datanode4, and so on; the other is Group B = {1,5,6,4,7,8,3}. The resulting benefits are shown in Figures 8 and 9 and in Tables VII, VIII and IX.
**Table VI. Execution time of different reducers**

NodeId   Reducer Group   Reducer Execution Time (sec)
Node1    Reducer0        196
Node2    Reducer5        199
Node3    Reducer1        227
Node4    Reducer4        226
Node5    Reducer3        240
Node6    Reducer6        269
Node7    -               -
Node8    Reducer2        181
**Figure 8. Results of Group A**
As shown in Figure 10, the maximum reducer execution time of Groups A and B is shorter than that of the original group, so Groups A and B finish the reduce stage faster than the original configuration; the results in Table IX also confirm this. Our load balancing scheme not only makes the application run faster, but also makes the hard
**Figure 9. Results of Group B**
**Table VII. Hard disk space change with the optimized schemes A and B**

                                 After Execution (GB)
NodeId   Before Execution (GB)       A          B
Node1         405.16               403.08     403.08
Node2         406.79               404.53     406.79
Node3         404.82               402.70     402.70
Node4         412.36               410.29     408.03
Node5         405.09               402.99     403.03
Node6         413.44               411.05     411.05
Node7         404.71               404.71     402.58
Node8         404.51               402.38     402.41
disk occupation more reasonable. Table VII shows the hard disk occupation when PMK-ELM and TS-NSGA-II are applied; S in Table VIII is the evaluation parameter described in Eq. (9) in Section 4, which shows that our scheme also achieves better disk balancing, while Table IX confirms the improvement in overall job execution time.

**Figure 10. Comparison between original and optimized schemes in reducer execution time**
**Table VIII. Comparison between original and optimized schemes in disk balancing (S)**

         Original      A         B
S (‰)     1.709      1.415     1.125
**Table IX. The overall execution time change with PMK-ELM and TS-NSGA-II**

                              Original (sec)   A (sec)   B (sec)
Overall Job Execution Time         615           560       568
## 6. CONCLUSIONS
In this paper, an adaptive approach is proposed that combines a prediction model, PMK-ELM, with a multi-objective selection algorithm, TS-NSGA-II. PMK-ELM facilitates the prediction of task execution times, whereas TS-NSGA-II is designed to facilitate the selection of suitable reducer assignments. The experimental results show that both models achieve good performance: about 47-55 seconds were saved in the experiments. In terms of storage efficiency, the difference in hard disk occupation among all scheduled reducers was only 1.254‰, a 26.6% improvement over the original scheme. In the future, we would like to optimize the speculative strategy in MapReduce and try to improve its performance.
## ACKNOWLEDGEMENTS
This work is supported by the NSFC (61300238,
61300237, 61232016, 1405254, 61373133), Marie Curie
Fellowship (701697-CAR-MSCA-IFEF-ST), the 2014
Project of six personnel in Jiangsu Province under Grant
No. 2014-WLW-013, the 2015 Project of six personnel
in Jiangsu Province under Grant No. R2015L06, Basic
Research Programs (Natural Science Foundation) of
Jiangsu Province (BK20131004) and the PAPD fund.
## REFERENCES
1. Armbrust M, Fox A, Griffith R, Joseph A, Katz
R, Konwinski A, Zaharia M. A view of cloud
computing. Communications of the ACM 2010;
**53(4): 50-58.**
2. Fu Z, Sun X, Liu Q, Zhou L, Shu J. Achieving Efficient Cloud Search Services: Multi-keyword Ranked Search over Encrypted Cloud Data Supporting Parallel Computing. _IEICE Transactions on Communications_ 2015; E98B(4): 190-200.
3. Sandholm T, Lai K. MapReduce optimization using regulated dynamic prioritization. _ACM Sigmetrics Performance Evaluation Review_ 2015; 37(1): 299-310.
4. Anyanwu K, Kim H S, Ravindra P. Algebraic optimization for processing graph pattern queries in the cloud. _IEEE Internet Computing_ 2013; 17(2): 52-61.
5. Shamsi J, Khojaye M A, Qasmi M A. Data-intensive cloud computing: requirements, expectations, challenges, and solutions. _Journal of Grid Computing_ 2013; 11(2): 281-310.
6. Lee K H, Lee Y J, Choi H, Chung Y D, Moon B.
Parallel data processing with MapReduce: a survey.
_Acm Sigmod Record 2012; 40(4): 11-20._
7. Wu Y, Ye F, Chen K, Zheng W. Modeling of distributed file systems for practical performance analysis. _IEEE Transactions on Parallel and Distributed Systems_ 2014; 25(1): 156-166.
8. Kwon Y, Balazinska M, Howe B, Rolia J. Cost-effective resource provisioning for MapReduce in a cloud. _IEEE Transactions on Parallel and Distributed Systems_ 2015; 26(5): 1265-1279.
9. Palanisamy B, Singh A, Liu L. SkewTune: Mitigating Skew in MapReduce Applications. _ACM SIGMOD International Conference on Management of Data_ 2012; 25-36.
10. Fu J, Du Z. Load Balancing Strategy on Periodical MapReduce Job. _Computer Science_ 2013; 40(30): 38-40 (In Chinese).
11. Gufler B, Augsten N, Reiser A, Kemper A. Handling
data skew in MapReduce. International Conference
_on Cloud Computing and Services Science 2011; 574-_
583.
12. Yuanquan F, Weiguo W, Yunlong X, Heng C.
Improving MapReduce performance by balancing
skewed loads. China Communications 2014; 11(8):
85-108.
13. Mei Y, Liu L, Pu X, Sivathanu S, Dong X.
Performance analysis of network I/O workloads
in virtualized data centers. IEEE Transactions on
_Services Computing 2013; 6(1): 48-63._
14. Islam S, Keung J, Lee K, Liu A. Empirical prediction models for adaptive resource provisioning in the cloud. _Future Generation Computer Systems_ 2012; 28(1): 155-162.
15. Breß S, Beier F, Rauhe H, Sattler K U, Schallehn E, Saake G. Efficient co-processor utilization in database query processing. _Information Systems_ 2013; 38(8): 1084-1096.
16. Piao J T, Yan J. Computing Resource Prediction for
MapReduce Applications Using Decision Tree. In
_Web Technologies and Applications 2012; 570-577._
17. Xiao Z, Song W, Chen Q. Dynamic resource allocation using virtual machines for cloud computing environment. _IEEE Transactions on Parallel and Distributed Systems_ 2013; 24(6): 1107-1117.
18. Chen Q, Liu C, Xiao Z. Improving MapReduce
performance using smart speculative execution
strategy. IEEE Transactions on Computers 2014;
**63(4): 954-967.**
19. Lam Thu Bui, Abbass H, Barlow M, Bender A.
Robustness Against the Decision-Maker’s Attitude to
Risk in Problems With Conflicting Objectives. IEEE
_Transactions on Evolutionary Computation 2012;_
**16(1): 1-19.**
20. Wang J, Shang P, Yin J. DRAW: A New DatagRouping-AWare Data Placement Scheme for Data
Intensive Applications with Interest Locality. IEEE
_Transactions on Magnetics 2013; 49(6): 2514-2520._
21. Eltabakh M Y, Tian Y, Zcan F, Gemulla R, Krettek A,
Mcpherson J. CoHadoop: flexible data placement and
its exploitation in Hadoop. Proceedings of the Vldb
_Endowment 2011; 4(9): 575-585._
22. Aji A, Wang F, Vo H, Lee R, Liu Q, Zhang X,
Saltz J. Hadoop GIS: a high performance spatial data
warehousing system over mapreduce. Proceedings of
_the VLDB Endowment 2013; 6(11): 1009-1020._
23. Lu X, Islam S, Wasi-ur-Rahman M, Jose J, Subramoni H, Wang H, Panda D. Improving Mapreduce Performance in Heterogeneous Environments.
_42nd International Conference on Parallel Process-_
_ing (ICPP) 2013; 641-650._
24. Huang G, Huang G B, Son S, You K. Trends
in extreme learning machines: a review. Neural
_Networks 2015; 61: 32-48._
25. Samat A, Du P, Liu S, Li J, Cheng L. Ensemble
Extreme Learning Machines for Hyperspectral Image
Classification. IEEE Journal of Selected Topics in
_Applied Earth Observations and Remote Sensing_
2014; 77(4): 1060-1069.
26. Huang G, Zhou H, Ding X, Zhang R. Extreme
learning machine for regression and multiclass
classification. IEEE Transactions on Systems, Man,
_and Cybernetics, Part B: Cybernetics 2012; 42(2):_
513-529.
27. Savitha R, Suresh S, Kim H J. A meta-cognitive learning algorithm for an extreme learning machine classifier. _Cognitive Computation_ 2014; 6(2): 253-263.
28. Jemai J, Manel Z, Khaled M. An NSGA-II algorithm
for the green vehicle routing problem. Evolutionary
_computation in combinatorial optimization 2012; 37-_
48.
29. Ahmad F, Chakradhar S, Raghunathan A, Vijaykumar T. K-means clustering in the cloud - a Mahout test. _IEEE Workshops of International Conference on Advanced Information Networking and Applications (WAINA)_ 2011; 514-519.
30. Esteves R, Pais R, Rong C. Tarazu: optimizing MapReduce on heterogeneous clusters. ACM
_SIGARCH Computer Architecture News 2012; 61-74._
31. Isard M, Budiu M, Yu Y, Birrell A, Fetterly D: Dryad:
distributed data-parallel programs from sequential
building blocks. ACM SIGOPS Operating Systems
_Review 2007; 59-72._
| 11,637
|
{
"disclaimer": "Notice: The following paper fields have been elided by the publisher: {'references', 'abstract'}. Paper or abstract available at https://api.unpaywall.org/v2/10.1002/sec.1582?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1002/sec.1582, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "other-oa",
"status": "GREEN",
"url": "https://napier-repository.worktribe.com/file/451187/1/A%20Speculative%20Approach%20to%20Spatial-Temporal%20Efficiency%20with%20Multi-Objective%20Optimisation%20in%20a%20Heterogeneous%20Cloud%20Environment"
}
| 2,016
|
[
"JournalArticle"
] | true
| 2016-11-25T00:00:00
|
[] | 11,637
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Engineering",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/00a7577173c6e8447a139293ccdd023e44d3b41f
|
[
"Computer Science"
] | 0.895039
|
Fault-Tolerant Adaptive Parallel and Distributed Simulation
|
00a7577173c6e8447a139293ccdd023e44d3b41f
|
IEEE International Symposium on Distributed Simulation and Real-Time Applications
|
[
{
"authorId": "1397402663",
"name": "Gabriele D’angelo"
},
{
"authorId": "143857076",
"name": "S. Ferretti"
},
{
"authorId": "1804913",
"name": "M. Marzolla"
},
{
"authorId": "1410748512",
"name": "Lorenzo Armaroli"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"ISDSRTA",
"Distributed Simulation and Real-Time Applications",
"DS-RT",
"IEEE int symp Distrib Simul Real-time Appl",
"IEEE Int Symp Distrib Simul Real-time Appl",
"Distrib Simul Real-time Appl",
"IEEE international symposium on Distributed Simulation and Real-Time Applications"
],
"alternate_urls": null,
"id": "acb0d6ae-0245-43d4-9630-2e2904f50c9d",
"issn": null,
"name": "IEEE International Symposium on Distributed Simulation and Real-Time Applications",
"type": "conference",
"url": "http://www.wikicfp.com/cfp/program?id=771"
}
|
Discrete Event Simulation is a widely used technique that is used to model and analyze complex systems in many fields of science and engineering. The increasingly large size of simulation models poses a serious computational challenge, since the time needed to run a simulation can be prohibitively large. For this reason, Parallel and Distributes Simulation techniques have been proposed to take advantage of multiple execution units which are found in multicore processors, cluster of workstations or HPC systems. The current generation of HPC systems includes hundreds of thousands of computing nodes and a vast amount of ancillary components. Despite improvements in manufacturing processes, failures of some components are frequent, and the situation will get worse as larger systems are built. In this paper we describe FT-GAIA, a software-based fault-tolerant extension of the GAIA/ARTIS parallel simulation middleware. FT-GAIA transparently replicates simulation entities and distributes them on multiple execution nodes. This allows the simulation to tolerate crash-failures of computing nodes, furthermore, FT-GAIA offers some protection against Byzantine failures since synchronization messages are replicated as well, so that the receiving entity can identify and discard corrupted messages. We provide an experimental evaluation of FT-GAIA on a running prototype. Results show that a high degree of fault tolerance can be achieved, at the cost of a moderate increase in the computational load of the execution units.
|
# Fault-Tolerant Adaptive Parallel and Distributed Simulation
### Gabriele D’Angelo Stefano Ferretti Moreno Marzolla
Dept. of Computer Science and Engineering, University of Bologna, Italy
Email: {g.dangelo,s.ferretti,moreno.marzolla}@unibo.it
### Lorenzo Armaroli
Email: [email protected]
**_Abstract—Discrete Event Simulation is a widely used technique_**
**that is used to model and analyze complex systems in many**
**fields of science and engineering. The increasingly large size of**
**simulation models poses a serious computational challenge, since**
**the time needed to run a simulation can be prohibitively large.**
**For this reason, Parallel and Distributed Simulation techniques**
**have been proposed to take advantage of multiple execution units**
**which are found in multicore processors, cluster of workstations**
**or HPC systems. The current generation of HPC systems includes**
**hundreds of thousands of computing nodes and a vast amount of**
**ancillary components. Despite improvements in manufacturing**
**processes, failures of some components are frequent, and the**
**situation will get worse as larger systems are built. In this paper**
**we describe FT-GAIA, a software-based fault-tolerant extension**
**of the GAIA/ARTÌS parallel simulation middleware. FT-GAIA**
**transparently replicates simulation entities and distributes them**
**on multiple execution nodes. This allows the simulation to tolerate**
**crash-failures of computing nodes; furthermore, FT-GAIA offers**
**some protection against byzantine failures since synchronization**
**messages are replicated as well, so that the receiving entity**
**can identify and discard corrupted messages. We provide an**
**experimental evaluation of FT-GAIA on a running prototype.**
**Results show that a high degree of fault tolerance can be achieved,**
**at the cost of a moderate increase in the computational load of**
**the execution units.**
I. INTRODUCTION
Computer-assisted modeling and simulation plays an important role in many scientific disciplines: computer simulations
help to understand physical, biological and social phenomena.
Discrete Event Simulation (DES) is of particular interest, since
it is frequently employed to model and analyze many types
of systems, including computer architectures, communication
networks, street traffic, and others.
In a DES, the system is described as a set of interacting
entities; the state of the simulator is updated by simulation
_events, which happen at discrete points in time. The overall_
structure of a sequential event-based simulator is relatively
simple: the simulator engine maintains a list, called Future
Event List (FEL), of all pending events, sorted in non decreasing time of occurrence. The simulator executes a loop, where
at each iteration, the event with lower timestamp t is removed
0The publisher version of this paper is available at
[https://doi.org/10.1109/DS-RT.2016.11. Please cite this paper as: “Gabriele](https://doi.org/10.1109/DS-RT.2016.11)
**D’Angelo, Stefano Ferretti, Moreno Marzolla, Lorenzo Armaroli. Fault-**
**Tolerant Adaptive Parallel and Distributed Simulation. Proceedings of**
**the IEEE/ACM International Symposium on Distributed Simulation and**
**Real Time Applications (DS-RT 2016)”.**
Fig. 1. Structure of a Parallel and Distributed Simulation.
from the FEL, and the simulation time is advanced to t. Then,
the event is executed, possibly triggering the generation of new
events to be scheduled for execution at some future time.
Continuous advances in our understanding of complex systems, combined with the need for higher model accuracy,
demand an increasing amount of computational power and
represent a major challenge for the capabilities of the current
generation of high performance computing systems. Therefore,
sequential DES techniques may be inappropriate for analyzing
large or detailed models, due to the huge number of events
that must be processed. Parallel and Distributed Simulation
(PADS) aims at taking advantage of modern high performance
computing architectures – from massively parallel computers
to multicore processors – to handle large models efficiently [1].
The general idea of PADS is to partition the simulation model
into submodels, called Logical Processes (LPs), which can
be evaluated concurrently by different Processing Elements
(PEs). More precisely, the simulation model is described in
terms of multiple interacting Simulated Entities (SEs) which
are assigned to different LPs. Each LP is executed on a
different PE, and is in practice the container of a set of entities.
The execution of the simulation is obtained through the
exchange of timestamped messages (representing simulation
events) between entities. Each LP has a queue where messages are inserted before being dispatched to the appropriate
entities. Figure 1 shows the general structure of a parallel and
distributed simulator.
Execution of long-running applications on increasingly
larger parallel machines is likely to hit the reliability wall [2].
[Plot: system reliability R(N, t) for N = 10, 100, 1000 LPs, over time horizons from one hour to one year (Time (t) on the x axis)]
Fig. 2. System reliability R(N, t) assuming a MTTF for each LP of one
year; higher is better, log scale on the x axis.
This means that, as the system size (number of components)
increases, so does the probability that at least one of those
components fails, therefore reducing the system Mean Time
To Failure (MTTF). At some point the execution time of the
parallel application may become larger than the MTTF of its
execution environment, so that the application has little chance
to terminate normally.
As a purely illustrative example, let us consider a parallel
machine with N PEs. Let Xi be the stochastic variable
representing the duration of uninterrupted operation of the ith PE, taking into account both hardware and software failures.
Assuming that all Xi are independent and exponentially distributed (this assumption is somewhat unrealistic but widely
used [3]), we have that the probability P (Xi > t) that LP i
operates without failures for at least t time units is
P(X_i > t) = e^{-λt}
where λ is the failure rate. The joint probability that all N LPs
operate without failures for at least t time units is therefore
R(N, t) = ∏_i P(X_i > t) = e^{-Nλt}; this is the formula for
the reliability of N components connected in series, where
each component fails independently, and a single failure brings
down the whole system.
Figure 2 shows the value of R(N, t) (the probability of
no failures for at least t consecutive time units) for systems
with N = 10, 100, 1000 LPs, assuming a MTTF of one year
(λ ≈ 2.7573 × 10^{-8} s^{-1}). We can see that the system reliability
quickly drops as the number of LPs increases: a simulation
involving N = 1000 LPs and requiring one day to complete
is very unlikely to terminate successfully.
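As a quick check, the following snippet reproduces this reliability estimate using the paper's quoted failure rate (the exact printed values are ours):

```python
import math

# R(N, t) = exp(-N * lambda * t), with the paper's quoted failure
# rate for a roughly one-year MTTF (lambda ~ 2.7573e-8 per second).
lam = 2.7573e-8
day = 24 * 3600

for n_lps in (10, 100, 1000):
    print(f"N={n_lps}: P(no failure for one day) = "
          f"{math.exp(-n_lps * lam * day):.3f}")
# N=1000 yields roughly 0.09: a day-long run almost surely hits a failure.
```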
Although the model above is overly simplified, and is not
intended to provide an accurate estimate of the reliability of
actual parallel simulations, it does show that building a reliable
system out of a large number of unreliable parts is challenging.
Two widely used approaches for handling hardware-related
reliability issues are those based on checkpointing, and on
_functional replication. The checkpoint-restore paradigm re-_
quires the running application to periodically save its state on
non-volatile storage (e.g., disk) so that it can resume execution
from the last saved snapshot in case of failure. It should be
observed that saving a snapshot may require considerable time;
therefore, the interval between checkpoints must be carefully
tuned to minimize the overhead.
Functional replication consists of replicating parts of the
application on different execution nodes, so that failures can
be tolerated if there is some minimum number of running
instances of each component. Note that each component must
be modified so that it is made aware that multiple copies of its
peers exist, and can interact with all instances appropriately.
It is important to remark that functional replication is
not effective against logical errors, i.e., bugs in the running
applications, since the bug can be triggered at the same time
on all instances. A prominent – and frequently mentioned –
example is the failure of the Ariane 5 rocket that was caused
by a software error on its Inertial Reference Platforms (IRPs).
There were two IRP, providing hardware fault-tolerance, but
both used the same software. When the two software instances
were fed with the same (correct) input from the hardware,
the bug (an uncaught data conversion exception) caused both
programs to crash, leaving the rocket without guidance [4].
The N -version programming technique [5] can be used to
protect against software errors, and requires running several
functionally equivalent programs that have been independently
developed from the same specifications.
In this paper, we present FT-GAIA, a fault-tolerant extension of the GAIA/ARTÌS parallel and distributed simulation middleware [6], [7]. FT-GAIA is based on functional replication, and can handle crash errors and Byzantine faults, using the concept of server groups [8]: simulation entities are replicated so that the model can be executed even if some of them fail. We show how functional replication can be implemented as an additional software layer in the GAIA/ARTÌS stack; all modifications are transparent to user-level simulation models, therefore FT-GAIA can be used as a drop-in replacement for GAIA/ARTÌS when fault tolerance is the major concern.
This paper is organized as follows. In Section II we review the state of the art related to fault tolerance in PADS. The GAIA/ARTÌS parallel and distributed simulation middleware is described in Section III. Section IV is devoted to the description of FT-GAIA, a fault-tolerant extension to GAIA/ARTÌS. An empirical performance evaluation of FT-GAIA, based on a prototype implementation we have developed, is discussed in Section V. Finally, Section VI provides some concluding remarks.
II. RELATED WORK
Although fault tolerance is an important and widely discussed topic in the context of distributed systems research,
it received comparatively little attention by the PADS community. The proposed approaches for bringing fault tolerance
to PADS are either based on checkpointing or on functional
replication, with a few works considering also partially centralized architectures.
_A. Checkpointing_
In [9] the authors propose a rollback based optimistic
recovery scheme in which checkpoints are periodically saved
on stable storage. The distributed simulation uses an optimistic synchronization scheme, where out-of-order (“straggler”) events cause rollbacks that are handled according to the
Time Warp protocol [10]. The novel idea is to model failures
as straggler events with a timestamp equal to the last saved
checkpoint. In this way, the authors can leverage the Time
Warp protocol to handle failures.
In [11], [12] the authors propose a new framework called
Distributed Resource Management System (DRMS) to implement reliable IEEE 1516 federation [13]. The DRMS handles
crash failures using checkpoints saved to stable storage, that
is then used to migrate federates from a faulty host to a new
host when necessary. The simulation engine is again based on
an optimistic synchronization scheme, and the migration of
federates is implemented through Web services.
In [14] the authors propose a decoupled federate architecture in which each IEEE 1516 federate is separated into a
virtual federate process and a physical federate process. The
former executes the simulation model and the latter provides
middleware services at the backend. This solution enables
the implementation of fault-tolerant distributed simulation
schemes through migration of virtual federates.
The CUMULVS middleware [15] introduces the support for
fault tolerance and migration of simulations based on checkpointing. The middleware is not designed to support PADS
but it allows the migration of running tasks for load balancing
and to improve a task’s locality with a required resource.
A slightly different approach is proposed in [16], in which the authors introduce the Fault Tolerant Resource Sharing System (FT-RSS) framework. The goal of FT-RSS is to build fault
tolerant IEEE 1516 federations using an architecture in which
a separate FTP server is used as a persistent storage system.
The persistent storage is used to implement the migration of
federates from one node to another. The FT-RSS middleware
supports replication of federates, partial failures and fail-stop
failures.
_B. Functional Replication_
In [17] the authors propose the use of functional replication
in Time Warp simulations with the aim to increase the simulator performance and to add fault tolerance. Specifically, the
idea is to have copies of the most frequently used simulation
entities at multiple sites with the aim of reducing message
traffic and communication delay. This approach is used to
build an optimistic fault tolerance scheme in which it is
assumed that the objects are fault free most of the time. The
rollback capabilities of Time Warp are then used to correct
intermittent and permanent faults.
In [18] the authors describe DARX, an adaptive replication
mechanism for building reliable multi-agent systems. Being
targeted to multi-agent systems, rather than PADS, DARX is
mostly concerned with adaptability: agents may change their
behavior at any time, and new agents may join or leave the
system. Therefore, DARX tries to dynamically identify which
agents are more “important”, and what degree of replication
should be used for those agents in order to achieve the desired
level of fault-tolerance. It should be observed that DARX
only handles crash failures, while FT-GAIA also deals with
Byzantine faults.
III. THE GAIA/ARTÌS MIDDLEWARE
To make this paper self-contained, we provide in this section a brief introduction to the GAIA/ARTÌS parallel and distributed simulation middleware; the interested reader is referred to [6], [7], [19] and the software homepage [20].
The Advanced RTI System (ARTÌS) is a parallel and distributed simulation middleware loosely inspired by the Runtime Infrastructure described in the IEEE 1516 standard "High Level Architecture" (HLA) [21]. ARTÌS implements a parallel/distributed architecture where the simulation model is partitioned into a set of LPs [1]. As described in Section I, the execution architecture in charge of running the simulation is composed of interconnected PEs, and each PE runs one or more LPs (usually, a PE hosts one LP).
In a PADS, the interactions between the model components are driven by message exchanges. The low computation/communication ratio makes PADS communication-bound, so that the wall-clock execution time of distributed simulations is highly dependent on the performance of the communication network (i.e., latency, bandwidth and jitter). Reducing the communication overhead can be crucial to speed up the event processing rate of a PADS. This can be achieved by clustering interacting entities on the same physical host, so that communications can happen through shared memory.
Among the various services provided by ARTÌS, time management (i.e., synchronization) is fundamental for obtaining correct simulation runs that respect the causality dependencies of events. ARTÌS supports both conservative (Chandy-Misra-Bryant [22]) and optimistic (Time Warp [10]) synchronization algorithms. Moreover, a very simple time-stepped synchronization is supported.
The Generic Adaptive Interaction Architecture (GAIA) is a software layer built on top of ARTÌS [20]. In GAIA, each LP acts as the container of some SEs: the simulation model is partitioned into its basic components (the SEs), which are allocated among the LPs. The system behavior is modeled by the interactions among the SEs; such interactions take the form of timestamped messages that are exchanged among the entities. From the user's point of view, a simulation model based on GAIA/ARTÌS follows a Multi Agent System (MAS) approach. In fact, each SE is an autonomous agent that performs some actions (individual behavior) and interacts with other agents in the simulation.
In most cases, the interactions between the SEs of a PADS are not completely uniform, meaning that there are clusters of SEs where internal interactions are more frequent. The
of SEs where internal interactions are more frequent. The
structure of these clusters of highly interacting entities may
change over time, as the simulation model evolves. The
identification of such clusters is important to improve the
Fig. 3. Layered structure of the FT-GAIA simulation engine. The user-defined simulation model defines a set of entities {A, B, C, D, E, F}; FT-GAIA creates multiple (in this example, 3) instances of each entity, which are handled by GAIA.
performance of a PADS: indeed, by putting heavily-interacting
entities on as few LPs as possible, we may replace most of
the expensive LAN/WAN communications by more efficient
shared memory messages.
In GAIA, the analysis of the communication pattern is based on simple self-clustering heuristics [19]. For example, in the default heuristic, every few timesteps each SE determines which LP is the destination of the largest share of its interactions. If this is not the LP in which the SE is contained, then a migration is triggered. The migration of SEs among LPs is transparent to the simulation model developer; entity migration is useful not only to reduce the communication overhead, but also to achieve better load balancing among the LPs, especially on heterogeneous execution platforms where execution units are not identical. In these cases, GAIA can migrate entities away from less powerful PEs towards more capable processors, if available.
IV. FAULT-TOLERANT SIMULATION
FT-GAIA is a fault-tolerant extension to the GAIA/ARTÌS
distributed simulation middleware. As will be explained below,
FT-GAIA uses functional replication of simulation entities to
achieve tolerance against crashes and Byzantine failures of
the PEs.
FT-GAIA is implemented as a software layer on top of
GAIA and provides the same functionalities of GAIA with
only minor additions. Therefore, FT-GAIA is mostly transparent to the user, meaning that any simulation model built for
GAIA can be easily ported to FT-GAIA.
FT-GAIA works by replicating simulation entities (see
Fig. 3) to tolerate crash-failures and byzantine faults of
the PEs. A crash may be caused by a failure of the hardware
– including the network connection – and operating system.
A byzantine failure refers to an arbitrary behavior of a PE
that causes the LP to crash, terminate abnormally, or to send
arbitrary messages (including no messages at all) to other PEs.
Replication is based on the following principle. If a conventional, non-fault tolerant distributed simulation is composed
of N distinct simulation entities, FT-GAIA generates N × M
entities, by generating M independent instances of each simulation entity. All instances A_1, ..., A_M of the same entity A
perform the same computation: if no fault occurs, they produce
the same result.
Replication comes with a cost, both in term of additional
processing power that is needed to execute all instances, and
also in term of an increased communication load between
the LPs. Indeed, if two entities A and B communicate by
sending a message from A to B, then after replication each
instance A_i must send the same message to all instances B_j, 1 ≤ i, j ≤ M, resulting in M^2 (redundant) messages.
Therefore, the level of replication M must be chosen wisely
in order to achieve a good balance between overhead and
fault tolerance, also depending on the types of failures (crash
failures or Byzantine faults) that the user wants to address.
_Handling crash failures: A crash failure happens when_
a PE halts, but operated correctly until it halted. In this case,
all simulation entities running on that PE stop their execution
and the local state of computation is lost. From the theory
of distributed systems, it is known that in order to tolerate f
crash failures we must execute at least M = f + 1 instances
of each simulation entity. Each instance must be executed on a different PE, so that the failure of a PE only affects one instance of all entities executed there. This is equivalent to running M copies of a monolithic (sequential) simulation, with the difference that a sequential simulation does not incur communication and synchronization overhead. However,
unlike sequential simulations, FT-GAIA can take advantage
of more than M PEs, by distributing all N × M entities
on the available execution units. This reduces the workload
on the PEs, reducing the wall-clock execution time of the
simulation model.
_Handling Byzantine Failures: Byzantine failures include_
all types of abnormal behaviors of a PE. Examples are: the
crash of a component of the distributed simulator (e.g., LP
or entity); the transmission of erroneous/corrupted data from
an entity to other entities; computation errors that lead to
erroneous results. In this case M = 2f + 1 replicas of a
system are needed to tolerate up to f byzantine faults in a
distributed system using the “majority” rule: an SE instance
Bi can process an incoming message m from Aj when it
receives at least f + 1 copies of m from different instances of
the sender entity A. Again, all M instances of the same SE
must be located on different PEs.
_Allocation of Simulation Entities: Once the level of_
replication M has been set, it is necessary to decide where
to create the M instances of all SEs, so that the constraint
that each instance is located on a different PE is met. In FT-GAIA the deployment of instances is performed during the
setup of the simulation model. In the current implementation,
there is a centralized service that keeps track of the initial
location of all SE instances. When a new SE is created, the
service creates the appropriate number of instances according
to the redundancy model to be employed, and assigns them
to the LPs so that all instances are located on different LPs.
Note that all instances of the same SE receive the same initial
seed for their internal pseudo-random number generators; this
guarantees that their execution traces are the same, regardless
of the LP where execution occurs and the degree of replication.
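A minimal sketch of this placement constraint, assuming a simple round-robin policy (the actual centralized service may balance instances differently):

```python
import itertools

def allocate_replicas(entities, lps, m):
    """The M instances of each simulated entity must land on M
    distinct LPs; cycling over a list of distinct LPs satisfies
    this whenever len(lps) >= m. (Illustrative only.)"""
    assert len(lps) >= m, "need at least M LPs to separate the instances"
    placement = {}
    lp_cycle = itertools.cycle(lps)
    for entity in entities:
        placement[entity] = [next(lp_cycle) for _ in range(m)]
        assert len(set(placement[entity])) == m
    return placement

# e.g. 3 instances of each entity spread over 5 LPs
print(allocate_replicas(["A", "B", "C"], ["LP0", "LP1", "LP2", "LP3", "LP4"], 3))
```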
_Message Handling: We have already stated that fault-_
tolerance through functional replication has a cost in term of
increased message load among SEs. Indeed, for a replication
level M (i.e., there are M instances of each SE) the number
of messages exchanged between entities grows by a factor of
M^2.
A consequence of message redundancy is that message
filtering must be performed to avoid that multiple copies of the
same message are processed more than once by the same SE
instance. FT-GAIA takes care of automatically filtering the
excess messages according to the fault model adopted; filtering
is done outside of the SE, which are therefore totally unaware
of this step. In the case of crash failures, only the first copy of
each message that is received by a SE is processed; all further
copies are dropped by the receiver. In the case of Byzantine
failures with replication level M = 2f + 1, each entity must
wait for at least f + 1 copies of the same message before it
can handle it. Once a strict majority has been reached, the
message can be processed and all further copies of the same
messages that might arrive later on can be dropped.
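A hedged sketch of this receiver-side filtering logic (class and method names are ours; for the Byzantine case we assume the message identifier also captures the message content, so that only matching copies are counted):

```python
from collections import defaultdict

class MessageFilter:
    """crash mode:     deliver the first copy of a message, drop the rest.
    byzantine mode: deliver once f+1 matching copies (a strict majority
    of M = 2f+1 instances) have arrived; later copies are dropped."""
    def __init__(self, mode, f=0):
        self.mode, self.f = mode, f
        self.copies = defaultdict(int)   # message id -> copies seen
        self.delivered = set()

    def on_receive(self, msg_id):
        if msg_id in self.delivered:
            return False                 # excess copy: drop
        self.copies[msg_id] += 1
        needed = 1 if self.mode == "crash" else self.f + 1
        if self.copies[msg_id] >= needed:
            self.delivered.add(msg_id)
            return True                  # deliver to the entity
        return False
```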
_Entities Migration: PADS can benefit from migration_
of entities to balance computation/communication load and
reduce the communication cost, by placing entities that interact
frequently “next” to each other (e.g., on the same LP) [19].
In FT-GAIA, entity migration is subject to the constraint that
instances of the same SE can never reside on the same LP.
Entity migration is handled by the underlying GAIA/ARTÌS
middleware [6]: each LP runs a fully distributed “clustering
heuristic” that tries to put together (i.e., on the same LP)
the SEs that interact frequently through message exchanges.
Special care is taken to avoid putting too many entities on the
same LPs that would become a bottleneck. Once a new feasible
allocation is found, the entities are migrated by moving their
state to the new LP.
V. EXPERIMENTAL EVALUATION
[Plot: WCT with different numbers of simulation entities; curves for 3, 4 and 5 LPs, each with no fault tolerance, crash tolerance, and byzantine fault tolerance]
Fig. 4. Wall Clock Time as a function of the number of LPs, for varying
number of SEs. The number of LPs is equal to the number of PEs. Migration
is disabled. Lower is better.
In this section we evaluate a prototype implementation
of FT-GAIA by implementing a simple simulation model
of a Peer-to-Peer communication system. We execute the
simulation model with FT-GAIA under different workload
parameters (described below) and record the Wall Clock Time
(WCT) (excluding the time to setup the simulation) and other
metrics of interest. The tests were performed on a cluster of workstations, each equipped with an Intel Core i5-4590 3.30 GHz processor and 8 GB of RAM. The Operating
System was Debian Jessie. The workstations are connected
through a Fast Ethernet LAN.
_A. Simulation Model_
We simulate a simple P2P communication protocol over randomly generated directed overlay graphs. Nodes of the graphs are peers, while links represent communication connections [23], [24]. In these overlays, all nodes have the same out-degree, which has been set to 5 in our experiments. During
the simulation, each node periodically updates its neighbor
set. Latencies for message transmission over overlay links are
generated using a lognormal distribution [25].
The simulated communication protocol works as follows.
Periodically, nodes send PING messages to other nodes, that
in turn reply with a PONG message that is used by the
sender to estimate the average latencies of the links (note
that communication links are, in fact, bidirectional). The
destination of a PING is randomly selected to be a neighbor
(with probability p), or a non-neighbor (with probability 1−p).
A neighbor is a node that can be reached through an outgoing
link in the directed overlay graph.
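A small Python sketch of the simulated node behavior under our own assumptions (the value of p and the lognormal parameters are illustrative, not taken from the paper):

```python
import random

def choose_ping_target(node, neighbors, all_nodes, p=0.8):
    """With probability p, PING a neighbor; otherwise PING a
    randomly chosen non-neighbor (p is an assumed value)."""
    if random.random() < p:
        return random.choice(list(neighbors))
    others = [n for n in all_nodes if n not in neighbors and n != node]
    return random.choice(others)

def link_latency(mu=4.0, sigma=1.0):
    """Lognormal message latency over an overlay link
    (the parameters here are illustrative)."""
    return random.lognormvariate(mu, sigma)
```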
Each node of the P2P overlay is represented by a SE within
some LP. Unless stated otherwise, each LP was executed on a
different PE, so that no two LPs shared their execution node.
We consider three scenarios: a no fault scenario, where no
faults occur, a crash scenario, where crash failures occur, and a Byzantine scenario, where Byzantine faults occur.
We executed 15 independent replications of each simulation
run. In all the following charts, mean values are reported with
a 99.5% confidence interval.
_B. Impact of the number of LPs and SEs_
Figure 4 shows the WCT of the simulation that was executed
for 10000 timesteps with a varying number of SEs; recall that
the number of SEs is equal to the number of nodes in the
P2P overlay graph. The number of LPs was set to 3, 4, and
5. We show the WCT for the three failure scenarios we are
considering: no failure, a single crash, and a single Byzantine
failure. In all these cases the self-clustering (i.e. migration) is
disabled.
Results with 3 and 4 LPs are similar, with a slight improvement with 4 LPs. Conversely, higher WCT is observed
when 5 LPs are used. As expected, the higher the number
[Plot: WCT with different numbers of LPs (8000 simulation entities); curves: no fault tolerance, crash tolerance, byzantine fault tolerance]
Fig. 5. Wall Clock Time as a function of the number of LPs, with 8000 SEs.
Migration is disabled. Lower is better.
[Plot: WCT with different numbers of LPs (16000 simulation entities); curves: no fault tolerance, crash tolerance, byzantine fault tolerance]
Fig. 6. WCT as a function of the number of LPs, with 16000 SEs. Migration is disabled. Lower is better.
[Plot: WCT with different numbers of simulation entities, varying the number of LPs per host; legend includes byzantine fault tolerance with 16 LPs on 4 hosts]
Fig. 7. WCT as a function of the number of LPs, with different numbers of LPs for each PE. Migration is disabled. Lower is better.
of SEs, the higher the WCT, since the simulation incurs a higher communication overhead. Moreover, all curves have a similar trend. In particular, the increase due to the fault management schemes is mainly caused by the higher number of messages exchanged among nodes.
Figures 5 and 6 show the WCT when varying the number of
LPs, with 8000 and 16000 SEs, respectively. The two charts
emphasize the increase of the time required to terminate the simulations with 5 LPs in the presence of Byzantine faults. This is due to the increased number of messages exchanged among the LPs: each message needs to be sent to three (M = 2f + 1, with f = 1) different destinations in order to guarantee fault tolerance.
_C. Impact of the number of LPs per host_
In the previous experiments, we placed each LP on a different PE. Figure 7 shows the WCT when more than one LP is placed on a PE. In particular, we consider the following scenarios: (i) 4 LPs placed over 4 PEs (1 LP per host), (ii) 8 LPs placed over 8 PEs (1 LP per host), (iii) 8 LPs placed over 4 PEs (2 LPs per host), and (iv) 16 LPs over 4 PEs (4 LPs
per host). For each scenario, we consider the three failure
scenarios already mentioned (no failures, crash, Byzantine
failures). Also in these cases, the migration is disabled. Each
curve in the figure is related to one of those scenarios, when
varying the amount of SEs. It is worth noting that, when two
or more LPs are run on the same PE, they can communicate
using shared memory rather than by LAN.
We observe that the scenario with 4 LPs over 4 PEs is influenced by the number of SEs and the failure scenario, while in
the other cases it is the number of LPs that mainly determines
the simulator performance. When 8 LPs are present, slightly better results are obtained with 4 PEs (rather than 8). This is due to the better communication efficiency (e.g., reduced latency) provided by shared memory with respect to the LAN protocols.
The worst performance is measured when 16 LPs are
executed on 4 PEs. This is due to the fact that the amount
of computation in the simulation model is quite limited. Therefore, partitioning the SEs into 16 LPs increases the communication cost without any computational benefit (i.e., the model does not contain enough computation to be parallelized).
_D. Impact of the number of failures_
We now study the impact of the number of faults on the
simulation WCT. We consider two scenarios, one with 5 LPs
over 5 PEs (Figure 8), and one with 8 LPs over 4 PEs (Figure
9). The choice of 5 LPs is motivated by the fact that this is
the minimum number of LPs that allows us to tolerate up to
2 Byzantine faults. The scenario with 8 LPs on 4 PEs allows
[Plot: WCT with different numbers of faults (5 LPs); curves: crash and byzantine fault tolerance, with 2000 and 6000 simulation entities]
Fig. 8. WCT as a function of the number of faults; 10000 timesteps with
5 LPs. Migration is disabled. Lower is better.
_E. Impact of SEs migration_
Figure 10 shows the WCT with different failure schemes,
with SE migration enabled or disabled. In this case, the trend obtained with SE migration is similar to that obtained when no migration is performed, but the overall performance is better when migration is turned off. This is due to the overhead introduced by the self-clustering heuristics and by the SE state that is transferred between the LPs. In other words, the adaptive clustering of SEs is, in this case, unable to give a speedup.
It is worth noting that, in this prototype, we decided to use the very general clustering heuristics already implemented in GAIA/ARTÌS. We think that more specific heuristics will be able to improve the clustering performance and therefore offset the overhead introduced by the fault tolerance support.
[Plot: WCT with different numbers of simulation entities, migration on/off; curves: no fault tolerance, crash tolerance, byzantine fault tolerance, each with and without migration]
[Plot: WCT with different numbers of faults (8 LPs); curves: crash and byzantine fault tolerance, with 2000 and 6000 simulation entities]
Fig. 9. WCT as a function of the number of faults; 2000 timesteps over 8
LPs. Migration is disabled. Lower is better.
Fig. 10. WCT with SEs migration ON/OFF, as a function of the number
of SEs. Lower is better.
testing 3 Byzantine faults with 2 LPs per host, reducing the
communication overhead.
Figure 8 shows the WCTs measured with 0, 1 and 2 faults.
Each curve refers to a scenario composed of 2000 or 6000
SEs with crash or Byzantine failures. As expected, the higher
the number of faults, the higher the WCTs, especially when
Byzantine faults are considered. Indeed, in this case a higher
amount of communication messages is required among nodes
in order to properly handle the faults.
A higher WCT is measured with 8 LPs, as shown in Figure 9. In this case, the number of faults does not influence the simulation performance much. As before, the computational load of this simulation model is too low to gain from partitioning into 8 LPs. In other words, the latency introduced by the network communications is so high that both the number of SEs and the number of faults have a negligible impact.
VI. CONCLUSIONS AND FUTURE WORK
In this paper we described FT-GAIA, a software-based fault-tolerant extension of the GAIA/ARTÌS parallel and distributed simulation middleware. FT-GAIA transparently replicates simulation entities and distributes them on multiple execution nodes. In this way, the simulation can tolerate crash failures and Byzantine faults of computing nodes. FT-GAIA can benefit from the automatic load balancing facilities provided by GAIA/ARTÌS, which allow simulated entities to be migrated among execution nodes. A preliminary performance evaluation of FT-GAIA has been presented, based on a prototype implementation. Results show that a high degree of fault tolerance can be achieved, at the cost of a moderate increase in the computational load of the execution units.
As future work, we aim at improving the efficiency of FT-GAIA by leveraging ad hoc clustering heuristics. Indeed,
we believe that specifically tuned clustering and load balancing
mechanisms can significantly reduce the overhead introduced
by the replication of the simulated entities.
ACRONYMS
**DES** Discrete Event Simulation
**FEL** Future Event List
**GVT** Global Virtual Time
**IRP** Inertial Reference Platform
**LVT** Local Virtual Time
**LP** Logical Process
**MTTF Mean Time To Failure**
**PADS Parallel and Distributed Simulation**
**PE** Processing Element
**SE** Simulated Entity
**WCT** Wall Clock Time
REFERENCES
[1] R. M. Fujimoto, Parallel and distributed simulation systems, ser. Wiley
series on parallel and distributed computing. Wiley, 2000.
[2] X. Yang, Z. Wang, J. Xue, and Y. Zhou, “The reliability wall for exascale
supercomputing,” Computers, IEEE Transactions on, vol. 61, no. 6, pp.
767–779, 2012.
[3] G. Bolch, S. Greiner, H. de Meer, and K. Trivedi, Queueing Networks
_and Markov Chains: Modeling and Performance Evaluation with Com-_
_puter Science Applications._ Wiley, 1998.
[4] M. Dowson, "The Ariane 5 software failure," _SIGSOFT Softw. Eng. Notes_, vol. 22, no. 2, pp. 84–, Mar. 1997.
[5] A. Avizienis, “The N-version approach to fault-tolerant software,” IEEE
_Trans. Softw. Eng., vol. 11, no. 12, pp. 1491–1501, Dec. 1985._
[6] L. Bononi, M. Bracuto, G. D’Angelo, and L. Donatiello, “A new adaptive middleware for parallel and distributed simulation of dynamically
interacting systems,” in Proceedings of the 8th IEEE International
_Symposium on Distributed Simulation and Real-Time Applications._
Washington, DC, USA: IEEE Computer Society, 2004, pp. 178–187.
[7] ——, "ARTÌS: A parallel and distributed simulation middleware for
performance evaluation,” in ISCIS, ser. Lecture Notes in Computer
Science, C. Aykanat, T. Dayar, and I. Korpeoglu, Eds., vol. 3280.
Springer, 2004, pp. 627–637.
[8] F. Cristian, “Understanding fault-tolerant distributed systems,” Commun.
_ACM, vol. 34, no. 2, pp. 56–78, Feb. 1991._
[9] O. P. Damani and V. K. Garg, “Fault-tolerant distributed simulation,”
in Proceedings of the Twelfth Workshop on Parallel and Distributed
_Simulation, ser. PADS ’98._ Washington, DC, USA: IEEE Computer
Society, 1998, pp. 38–45.
[10] D. R. Jefferson, “Virtual time,” ACM Trans. Program. Lang. Syst., vol. 7,
no. 3, pp. 404–425, Jul. 1985.
[11] M. Eklöf, F. Moradi, and R. Ayani, "A framework for fault-tolerance in HLA-based distributed simulations," in _Proceedings of the 37th Conference on Winter Simulation_, ser. WSC '05. Winter Simulation Conference, 2005, pp. 1182–1189.
[12] M. Eklöf, R. Ayani, and F. Moradi, "Evaluation of a fault-tolerance mechanism for HLA-based distributed simulations," in _Proceedings of the 20th Workshop on Principles of Advanced and Distributed Simulation_, ser. PADS '06. Washington, DC, USA: IEEE Computer Society, 2006, pp. 175–182.
[13] “IEEE Standard for Modeling and Simulation (M&S) High Level Architecture (HLA)–Framework and Rules,” IEEE Std 1516-2010 (Revision
of IEEE Std 1516-2000), pp. 1–38, 2010.
[14] D. Chen, S. J. Turner, W. Cai, and M. Xiong, “A decoupled federate
architecture for high level architecture-based distributed simulation,”
_Journal of Parallel and Distributed Computing, vol. 68, no. 11, pp._
1487 – 1503, 2008.
[15] J. A. Kohl and P. M. Papadopoulas, “Efficient and flexible fault tolerance
and migration of scientific simulations using cumulvs,” in Proceedings
_of the SIGMETRICS Symposium on Parallel and Distributed Tools, ser._
SPDT ’98. New York, NY, USA: ACM, 1998, pp. 60–71.
[16] J. Lüthi and S. Großmann, _Computational Science - ICCS 2004: 4th International Conference, Kraków, Poland, June 6-9, 2004, Proceedings, Part III_. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004, ch. FT-RSS: A Flexible Framework for Fault Tolerant HLA Federations, pp. 865–872.
[17] D. Agrawal and J. R. Agre, “Replicated objects in time warp simulations,” in Proceedings of the 24th Conference on Winter Simulation, ser.
WSC ’92. New York, NY, USA: ACM, 1992, pp. 657–664.
[18] Z. Guessoum, J.-P. Briot, N. Faci, and O. Marin, “Towards
Reliable Multi-Agent Systems. An Adaptive Replication Mechanism,”
_International Journal of MultiAgent and Grid Systems, vol. 6, no. 1,_
[2010. [Online]. Available: http://liris.cnrs.fr/publis/?id=4840](http://liris.cnrs.fr/publis/?id=4840)
[19] G. D’Angelo and M. Marzolla, “New trends in parallel and distributed
simulation: From many-cores to cloud computing,” Simulation Mod_elling Practice and Theory (SIMPAT), 2014._
[20] “Parallel And Distributed Simulation (PADS) research group,”
[http://pads.cs.unibo.it, 2016.](http://pads.cs.unibo.it)
[21] IEEE 1516 Standard, Modeling and Simulation (M&S) High Level
Architecture (HLA), 2000.
[22] K. M. Chandy and J. Misra, “Asynchronous distributed simulation via a
sequence of parallel computations,” Commun. ACM, vol. 24, no. 4, pp.
198–206, Apr. 1981.
[23] G. D’Angelo and S. Ferretti, “Simulation of scale-free networks,” in
_Proc. of International Conference on Simulation Tools and Techniques,_
ser. Simutools ’09, 2009, pp. 20:1–20:10.
[24] ——, “LUNES: Agent-based Simulation of P2P Systems,” in Proceed_ings of the International Workshop on Modeling and Simulation of Peer-_
_to-Peer Architectures and Systems (MOSPAS 2011)._ IEEE, 2011.
[25] J. F¨arber, “Network game traffic modelling,” in Proceedings of the 1st
_Workshop on Network and System Support for Games, ser. NetGames_
’02. New York, NY, USA: ACM, 2002, pp. 53–57.
-----
# Hybrid Distributed Wind and Battery Energy Storage Systems
### Jim Reilly,[1] Ram Poudel,[2] Venkat Krishnan,[3] Ben Anderson,[1] Jayaraj Rane,[1] Ian Baring-Gould,[1] and Caitlyn Clark[1]
#### 1 National Renewable Energy Laboratory 2 Appalachian State University 3 PA Knowledge
**Suggested Citation**
Reilly, Jim, Ram Poudel, Venkat Krishnan, Ben Anderson, Jayaraj Rane, Ian Baring-Gould, and Caitlyn Clark. 2022. Hybrid Distributed Wind and Battery Energy Storage
_Systems. Golden, CO: National Renewable Energy Laboratory. NREL/TP-5000-77662._
[https://www.nrel.gov/docs/fy22osti/77662.pdf.](https://www.nrel.gov/docs/fy22osti/77662.pdf)
**NREL is a national laboratory of the U.S. Department of Energy**
**Office of Energy Efficiency & Renewable Energy**
**Operated by the Alliance for Sustainable Energy, LLC**
This report is available at no cost from the National Renewable Energy
Laboratory (NREL) at www.nrel.gov/publications.
**Technical Report**
NREL/TP-5000-77662
June 2022
National Renewable Energy Laboratory
15013 Denver West Parkway
Golden, CO 80401
**NOTICE**
This work was authored in part by the National Renewable Energy Laboratory, operated by Alliance for Sustainable
Energy, LLC, for the U.S. Department of Energy (DOE) under Contract No. DE-AC36-08GO28308. Funding
provided by U.S. Department of Energy Office of Energy Efficiency and Renewable Energy Wind Energy
Technologies Office. The views expressed herein do not necessarily represent the views of the DOE or the U.S.
Government.
This report is available at no cost from the National Renewable
[Energy Laboratory (NREL) at www.nrel.gov/publications.](http://www.nrel.gov/publications)
U.S. Department of Energy (DOE) reports produced after 1991
and a growing number of pre-1991 documents are available
free via www.OSTI.gov.
## Acknowledgments
We are thankful to all project team members from partnering laboratories on the Microgrids,
Infrastructure Resilience, and Advanced Controls Launchpad project:
- Idaho National Laboratory
- Pacific Northwest National Laboratory
- Sandia National Laboratories.
We also express our sincere gratitude to our industry advisory board members for their valuable
insights and real-world test system recommendations during the March 2020 advisory board
meeting: Venkat Banunarayanan (National Rural Electric Cooperative Association), Chris Rose
(Renewable Energy Alaska), Rob Wills (Intergrid), Paul Dockrill (Natural Resource Canada),
Jeff Pack (POWER Engineers), Arvind Tiwari (GE Global Research), Kristin Swenson
(Midcontinent Independent System Operator), Jonathon Monken (PJM), and Scott Fouts (QED
Wind Power).
The authors would also like to thank the peer reviewers Jennifer King (National Renewable
Energy Laboratory) and Jack Flicker (Sandia National Laboratories) for their thorough review.
## List of Acronyms
AC alternating current
BESS battery energy storage system
DC direct current
DER distributed energy resource
DFIG doubly-fed induction generator
HVS high voltage side
Li-ion lithium-ion
LVS low voltage side
MIRACL Microgrids, Infrastructure Resilience, and Advanced Controls Launchpad
MW megawatt
NREL National Renewable Energy Laboratory
PV photovoltaic(s)
SM synchronous motor
SOC state of charge
WTG wind turbine generator
## Executive Summary
For individuals, businesses, and communities seeking to improve system resilience, power
quality, reliability, and flexibility, distributed wind can provide an affordable, accessible, and
compatible renewable energy resource. Distributed wind assets are often installed to offset retail
power costs or secure long term power cost certainty, support grid operations and local loads,
and electrify remote locations not connected to a centralized grid. However, there are technical
barriers to fully realizing these benefits with wind alone. Many of these technical barriers can be
overcome by the hybridization of distributed wind assets, particularly with storage technologies.
Electricity storage can shift wind energy from periods of low demand to peak times, to smooth
fluctuations in output, and to provide resilience services during periods of low resource
adequacy.
Although interconnecting and coordinating wind energy and energy storage is not a new concept,
the strategy has many benefits and integration considerations that have not been well-documented in distribution applications. Thus, the goal of this report is to promote understanding
of the technologies involved in wind-storage hybrid systems and to determine the optimal
strategies for integrating these technologies into a distributed system that provides primary
energy as well as grid support services. This document achieves this goal by providing a
comprehensive overview of the state-of-the-art for wind-storage hybrid systems, particularly in
distributed wind applications, to enable distributed wind system stakeholders to realize the
maximum benefits of their system. As battery costs continue to decrease and efficiency continues
to increase, an enhanced understanding of distributed-wind-storage hybrid systems in the context
of evolving technology, regulations, and market structure can help accelerate these trends.
## Table of Contents
**1 Introduction**
1.1 Advantages of Hybrid Wind Systems
1.2 Considerations and Challenges of Hybrid Wind Systems
**2 Wind-Storage Hybrids: Possible Configurations**
2.1 AC-Coupled Wind-Storage Hybrid Systems
2.2 DC-Coupled Wind-Storage Hybrid System
2.3 Comparison of AC and DC Configurations
**3 Hybrid System Controls: Stable Integration and Maximum Utilization**
3.1 Distributed Hybrid System Controls
3.1.1 Essential Reliability Services and Stability
3.1.2 Frequency Response
3.1.3 Voltage and Reactive Power Support
3.1.4 Flexibility and Economic Grid Services
3.1.5 Enabling Fast and Accurate Response
3.2 Modeling Controls and Time Scales
**4 Operation and Dispatch of Wind-Storage Hybrids**
4.1 Wind-Storage Hybrids Optimal Dispatch
4.2 Wind-Storage Hybrids Supporting Black Start
**5 Techno-Economic Sizing of Wind-Storage Hybrids**
5.1 Storage Cost Models
5.2 Wind-Hybrid Models
**6 Conclusion**
**References**
## List of Figures
Figure 1. Possible wind-storage hybrid configurations
Figure 2. Dominant wind turbine technologies
Figure 3. Common topology of an AC-coupled wind-storage hybrid system
Figure 4. Schematics of DC-coupled wind-storage systems
Figure 5. Four-port DC/DC converter for an isolated system
Figure 6. Hierarchy of hybrid system control
Figure 7. Dispatch of photovoltaics-plus-storage system on a typical day
Figure 8. Distributed black start of wind turbines in an island mode
Figure 9. Battery cost projections for 4-hour Li-ion systems
## 1 Introduction
A distributed hybrid energy system comprises energy generation sources and energy storage
devices co-located at a point of interconnection to support local loads. Such a hybrid energy
system can have economic and operational advantages that exceed the sum of the services
provided by its individual components because of synergies that can exist between the
subsystems. The coordination between its subsystems at the component level is a defining
feature of a hybrid energy system. Recently, wind-storage hybrid energy systems have been
attracting commercial interest because of their ability to provide dispatchable energy and grid
services, even though the wind resource is variable. Building on the past report “Microgrids,
Infrastructure Resilience, and Advanced Controls Launchpad (MIRACL) Controls Research
Road Map,” which highlights the challenges and opportunities for distributed wind grid
integration and control mechanisms, this report initiates and establishes a baseline for future
research on wind-storage hybrids in distribution applications (Reilly et al. 2020).
The objective of this report is to identify research opportunities to address some of the challenges
of wind-storage hybrid systems. We achieve this aim by:
- Identifying technical benefits, considerations, and challenges for wind-storage hybrid
systems
- Proposing common configurations and definitions for distributed-wind-storage hybrids
- Summarizing hybrid energy research relevant to distributed wind systems, particularly
their control, operation, and dispatch
- Suggesting strategies for sizing wind-storage hybrids
- Identifying opportunities for future research on distributed-wind-hybrid systems.
A wide range of energy storage technologies are available, but we will focus on lithium-ion (Li-ion)-based battery energy storage systems (BESS), although other storage mechanisms follow
many of the same principles. The Li-ion technology has been at the forefront of commercial-scale storage because of its high energy density, good round-trip efficiency, fast response time,
and downward cost trends.
#### 1.1 Advantages of Hybrid Wind Systems
Co-locating energy storage with a wind power plant allows the uncertain, time-varying electric
power output from wind turbines to be smoothed out, enabling reliable, dispatchable energy for
local loads to the local microgrid or the larger grid. In addition, adding storage to a wind plant
can enable grid-forming or related ancillary grid services such as inertial support and frequency
responses during transitions between grid-connected and islanded modes. A hybrid system can
also increase revenue by storing rather than wasting energy that cannot be used because of
system rating limits or the absence of loads.
Additional benefits of hybrid energy systems can come from sharing components between other
generation sources such as inverters and optimizing electrical system ratings and interconnection
transformers. It is worth noting, however, that limiting the full system rating can result in a
decrease in revenue. For example, the use of storage during periods of high wind energy output
might be restricted because of a limit on the total power output of the combined system.
For this reason, rigorous assessments—including hybrid system modeling, planning, and sizing
of the components—are critical to maximize system benefits based on the application, expected
load, and desired grid services. An assessment should also consider the specific grid and local
weather conditions.
The following are some high-level benefits of wind-storage hybrid systems:
- **Dispatchability of variable renewable resources. A storage system, such as a Li-ion**
battery, can help maintain balance of variable wind power output within system
constraints, delivering firm power that is easy to integrate with other generators or the
grid. The size and use of storage depend on the intended application and the
configuration of the wind devices. Storage can be used to provide ramping services, as
has been done with wind installations in Kodiak and along the Alaskan Railbelt with
wind facilities in Anchorage or Fairbanks; for time-of-day shifting, as was deployed in
Kotzebue, Alaska; or to allow for transitions between sources, as has been deployed in
Tuntutuliak and other remote Alaskan communities. In larger grid-connected systems,
photovoltaics (PV) has a diurnal cycle that fits well with a 4-hour storage cycle, charging
the storage device during the day to expand energy supply to, typically, evening peak
load hours. Depending on a site’s wind profile and the driver for energy services, a wind-storage hybrid system will require different considerations for storage size. These
requirements have prompted storage asset developers and owners to look to new battery
technologies beyond the short-duration Li-ion systems deployed so far (Energy Storage
Systems, Inc. 2016). Various technologies are evolving to provide long-duration storage.
- **Economic impact. The demand for electricity varies with time, changing with time of**
day, weather, and various socioeconomic factors. Similarly, the price of electricity also
varies with system conditions, congestion, and time of day. A storage system can
leverage this varying pricing to schedule its charging and discharging to increase the
effectiveness of energy arbitrage. Research has also shown that arbitrage can be achieved
across energy and ancillary markets to improve the economics of wind-storage hybrids
(Das, Krishnan, and McCalley 2015). This economic value proposition further improves
for a hybrid resource, which can rely on low-cost renewable energy (or no-cost renewable
energy at times when curtailment requires shutting down wind turbines) to charge and
sell in the larger grid’s energy and ancillary markets. The benefits of a hybrid system
depend on the resource configuration and specific context of the project, and research is
needed to tailor hybrid solutions to specific locations and grid scenarios (a toy arbitrage sketch appears after this list).
- **System flexibility. Modern energy systems require electricity to maintain constant**
frequency and voltage. However, wind energy is a variable resource that, when combined
with a variable load, increases the overall power variability of the energy system. Hence,
maintaining a balance of supply and demand requires balancing engineering and
economics. To achieve this balance, balancing authorities use look-ahead generation
scheduling and operational planning, starting from day-ahead unit commitment and
dispatch and continuing to real-time dispatch at 5-minute intervals. This scheduling ensures that
generation resources have sufficient flexibility (e.g., headroom capacity and ramping
capabilities) to meet the energy, load following, ramping, and ancillary service
(regulation and spinning reserves) requirements for reliability. As system size decreases,
there are fewer devices on the grid available to stabilize frequency and voltage,
requiring faster system response, even below 1 hertz (Hz). Regarding flexibility, hybrid
wind systems can provide:
`o` Load leveling or energy shifting to avoid steep ramps and negative prices caused
by excess renewable generation
`o` Complementarity with solar, thereby mitigating issues such as the duck curve
(California ISO 2016), with its mismatch between generation and load, leading to
severe morning and evening net-load ramps
`o` Ramping up or down to support the increase in the frequency and severity of
ramping events in the grid related to increasing variable renewable contributions.
With improved wind forecasting and adequate energy storage, hybrid systems can
provide ramping capability, thereby avoiding generation scarcity events and real-time price spikes that would otherwise necessitate expensive gas generation starts.
- **Enhanced grid stability. In a power system, especially localized grids, generation and**
demand must remain balanced to maintain stability. This balance ensures that voltage,
frequency, and small-signal oscillations remain within acceptable North American
Electric Reliability Corporation and American National Standards Institute levels. A
storage system can function as a source as well as a consumer of electrical power. This
dual nature of storage combined with variable renewable wind power can result in a
hybrid system that improves grid stability by injecting or absorbing real and reactive
power to support frequency and voltage stability.
- **Grid reliability and resilience. A distribution hybrid system with local loads can also**
function as a microgrid, and the microgrid, with appropriate controls, can operate in both
grid-tied and islanded modes. A microgrid with on-site renewable generation and storage
can enhance grid resilience and ensure power supply to critical loads during major
physical or cyber disruptions. Additionally, a distributed wind system can support a
stable and reliable grid when hybridized with storage as well as dispatchable generation
as appropriate. Further reliability improvements can be made by adding redundancy to
the system (by physically distributing assets with parallel capabilities) or using advanced
controls to provide services (such as black start capabilities).
- **Economics with common and standardized components. Most modern utility-scale**
wind turbines have power converters to allow for variable-speed operation of the wind
generator for maximum efficiency and to convert the power to grid-standard voltage and
frequency. The power converter may include AC/DC and DC/AC conversion. A battery
storage system also requires such power converters to regulate charging/discharging.
Other relevant services that these power converters can provide include ramp rate,
decoupled control of real and reactive power for frequency and voltage support, and DC-to-AC power conversion in an AC grid-tied scenario. These services are also relevant to
many other distributed energy resources (DERs). Battery systems can utilize the existing
power converter and inverter hardware infrastructure in a wind turbine, and the
components can be optimally sized for their intended uses. The incremental cost of the
hardware, even when the component size is increased, can be an economic option for
some deployments, especially in an isolated environment or use case.
- **Other benefits from the circular economy and recycling. Small-scale wind energy**
developers are looking at the economics of employing used batteries from the
transportation industry. Bergey Windpower Co. is planning to use secondhand battery
systems from a nearby Nissan electric car factory to create a home microgrid system. The
Bergey Excel 15 Home Microgrid System uses 18-kilowatt-hour (kWh) recycled electric
vehicle battery packs (Bergey 2020). The batteries used in electric vehicles can be
evaluated for a range of options for reuse and recycling. The research at National
Renewable Energy Laboratory has revealed that the second use of electric vehicle
batteries is both viable and valuable (NREL 2020). NREL’s battery second-use calculator
can be used to explore the effects of different repurposing strategies and assumptions on
economics. Before batteries are recycled to recover critical energy materials, reusing
batteries in secondary applications, like the Excel 15 Home Microgrid System, is a
promising strategy (Ambrose 2020). The value propositions from the circular economy
can make wind-hybrid systems a cost-effective as well as an environmentally friendly
option for a reliable and resilient energy system.
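To make the arbitrage opportunity described in the "Economic impact" item concrete, here is a toy Python sketch (not from the report; the function name, price series, and parameters are all hypothetical). It greedily buys energy in the cheapest hours of a known day-ahead price curve and sells it back in the most expensive ones, netting out roundtrip losses. A real dispatcher would also model SOC dynamics, degradation, and forecast uncertainty.

```python
def greedy_arbitrage(prices_per_mwh, e_cap_kwh, p_max_kw, eta_rt=0.9, n_hours=4):
    """Toy arbitrage: charge in the n_hours cheapest hours, discharge in
    the n_hours most expensive hours, and return $ earned per cycle."""
    order = sorted(range(len(prices_per_mwh)), key=lambda h: prices_per_mwh[h])
    cheap, dear = order[:n_hours], order[-n_hours:]
    energy_kwh = min(e_cap_kwh, n_hours * p_max_kw)  # 1-hour steps
    buy = sum(prices_per_mwh[h] for h in cheap) / n_hours
    sell = sum(prices_per_mwh[h] for h in dear) / n_hours
    return energy_kwh * (sell * eta_rt - buy) / 1000.0  # kWh * $/MWh -> $

hourly_prices = [22, 20, 18, 19, 25, 35, 55, 60, 48, 40, 38, 36,
                 34, 33, 35, 45, 70, 95, 80, 60, 45, 35, 28, 24]
print(greedy_arbitrage(hourly_prices, e_cap_kwh=4000, p_max_kw=1000))
```

With these made-up prices, a 4-MWh battery clears roughly $195 per daily cycle; the same logic extends to stacking ancillary-service revenue across markets.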
#### 1.2 Considerations and Challenges of Hybrid Wind Systems
Although a hybrid wind system has many benefits, it can pose operational challenges as well.
The following are some high-level considerations and challenges when considering the
deployment of a wind-storage hybrid system or upgrade of a standalone wind power plant to
include storage:
- **Complicated dispatch and valuation of combined resources. A variable wind resource**
can cause cycling of the battery, which can affect its life cycle (Wenzl et al. 2005; Corbus
et al. 2002). How daily cycling compares with random charge/discharge is an economic
question that may be specific to the context. The hybrid system may have challenges
associated with co-location, such as transmission constraints and inverter capacity limits.
Some of these challenges can be managed with a better forecast and control/dispatch
logic. However, a detailed assessment for specific grid scenarios and weather situations is
needed to size the hybrid systems appropriately and optimize resource utilization.
Further, the economic assessment must maximize storage utilization while reducing
curtailments and battery cycling, especially for isolated power systems (Baring-Gould et
al. 2001).
- **Feasibility studies are not as defined and generic as they are for conventional**
**generators. For systems on a central grid, governing market rules and policy incentives**
can make or break the finances of a wind-hybrid project. A co-located wind-storage
system can share infrastructure to provide reliable power at a low cost. Such a system
may also qualify for incentives such as the investment tax credit, provided it complies
with terms and conditions specific to the state, region, or country. In some states, a
battery system must get 75% of its energy from renewable energy sources such as solar
and wind to qualify for the investment tax credit. Depending on policy, the hybrid system
may or may not make sense technically and/or financially (a minimal qualification check is sketched after this list).
- **The current production tax credit for wind does not consider the addition of energy**
**storage. There are also operational limits to hybridization. These will depend on the**
available resource in a region and the ability to forecast and develop appropriate resource
bids or self-schedules (if participating in markets or central dispatch and compensation
mechanisms) to enhance the value of a hybrid system. The investment tax credit for PV
was expanded to include investments in battery storage (NREL 2018b), but the
production tax credit for wind does not include such considerations.
- **Integrating multiple technologies is complex, and plug-and-play solutions are**
**needed to simplify design. The literature review conducted as part of this report is**
intended to inform the development of control solutions to maximize the benefits of
wind-hybrid system configuration and sizing. A “plug-and-play” distributed wind turbine
system is needed to enhance the market share and realize the full potential of wind to
serve the global demand for clean energy. A defining aspect of “plug and play” is
continued innovation on par with evolving grid codes and other technology solutions.
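As a minimal illustration of the 75% renewable-charging rule mentioned in the feasibility item above, the Python sketch below (the function name, threshold handling, and log format are hypothetical) checks whether a battery's logged charging energy meets such a threshold. Actual eligibility rules vary by state and program and involve far more than this single ratio.

```python
def meets_renewable_charging_rule(charge_log_kwh, threshold=0.75):
    """Return True when the share of charging energy drawn from wind and
    solar meets the qualification threshold (e.g., 75%)."""
    total = sum(charge_log_kwh.values())
    renewable = charge_log_kwh.get("wind", 0.0) + charge_log_kwh.get("solar", 0.0)
    return total > 0 and renewable / total >= threshold

# 800 kWh of 1,000 kWh charged from wind and solar -> 80% -> qualifies
print(meets_renewable_charging_rule({"wind": 700.0, "solar": 100.0, "grid": 200.0}))
```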
Considering the possible range of benefits, challenges, and opportunities, this paper will explore
how wind-hybrid systems, with a current focus on wind-storage hybrid systems, can be
efficiently configured to operate within different environments. A detailed quantitative study will
be undertaken later, and results will be reported. Taking lessons learned from other hybrid
technologies (e.g., hybrid-solar or hybrid-hydro [Poudel, Manwell, and McGowan 2020]) in the
energy industry, this literature review aims to identify the opportunities and challenges of wind-hybrid systems in various operational use cases. These use cases include isolated grids or
microgrids in island mode, grid-connected resources providing energy and ancillary services to
the grid, and the ability to transition from grid-connected to island mode.
## 2 Wind-Storage Hybrids: Possible Configurations
Increasingly, wind turbines are being coupled with batteries to mitigate variability and
uncertainty in wind energy generation at a second-by-second resolution. Storage may be
integrated with wind turbines in three ways:
1. Virtually, if the hardware is not co-located but is controlled as a single source
2. Physically co-located yet separately metered and dispatched as a separate source
3. Co-located behind the same meter, in which case the two components act as a singular
source with respect to the grid.
Within this context, wind-storage hybrids can also be coupled in two ways:
1. AC-coupled, in which wind and storage share a point of common coupling on an AC-bus
2. DC-coupled, in which wind and storage share a point of common coupling on a DC-bus.
As we discuss in Section 2.3, AC coupling can be done in all three storage integration cases, but
to date DC-coupled systems are exclusively behind-the-same-meter systems.
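The taxonomy above can be summarized in a few lines of Python (a sketch for illustration only; the class and field names are invented here). The validity check encodes the observation that AC coupling fits all three integration modes, while DC coupling has so far appeared only behind the same meter.

```python
from dataclasses import dataclass
from enum import Enum

class Integration(Enum):
    VIRTUAL = 1        # not co-located, but controlled as a single source
    CO_LOCATED = 2     # co-located, separately metered and dispatched
    BEHIND_METER = 3   # co-located behind the same meter, one source to the grid

class Coupling(Enum):
    AC = "shared point of common coupling on an AC bus"
    DC = "shared point of common coupling on a DC bus"

@dataclass
class HybridConfig:
    integration: Integration
    coupling: Coupling

    def seen_in_practice(self) -> bool:
        """AC coupling is used in all three integration modes; to date,
        DC-coupled systems are behind-the-same-meter only."""
        return (self.coupling is Coupling.AC
                or self.integration is Integration.BEHIND_METER)

print(HybridConfig(Integration.VIRTUAL, Coupling.DC).seen_in_practice())       # False
print(HybridConfig(Integration.BEHIND_METER, Coupling.DC).seen_in_practice())  # True
```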
In a wind power plant, which may contain two or more wind turbines, the storage can be sited
either at the power plant level (i.e., central storage, as shown in Figure 1a) or at the individual
wind turbine level (i.e., integrated storage, as shown in Figure 1b). Individual turbine-level
storage can either be deployed as a unit behind the dedicated turbine interconnect, typically with
a lower-voltage AC connection, or integrated behind the turbine power converter, which will
take place at a DC voltage. For example, each of the 100 GE 1.6-megawatt (MW) wind turbines
at Tehachapi has 200 kWh of integrated storage (Miller 2014) in the DC link. Unlike turbines
with integrated storage that use the turbines’ existing power conversion equipment, a wind power
plant with AC-connected individual or central storage requires additional equipment such as a
dedicated power converter, switchgear, and transformer. This is one of the trade-offs that need to
be considered when choosing a storage topology and location. A study of the GE turbines at
Tehachapi builds on a precursor study (Fingersh 2003) that explored using the turbine’s
controller and power electronics system to operate an electrolyzer to generate hydrogen from
water, thereby using a component-level strategy for a hybrid system. The GE study (Miller 2014)
does not provide many details about the sizing of integrated storage and the associated power
electronics architecture; we believe this is an opportunity for future research.
a) Central storage at the plant level b) Integrated storage at each turbine
**Figure 1. Possible wind-storage hybrid configurations**
A hybrid system can be coupled on a common DC bus, AC bus, or both, depending on the type
of wind turbine. The four main types of wind turbines are summarized in Figure 2 (Singh and
Santoso 2011). Some of these configurations are more amenable to sharing DC-to-AC conversion equipment. A review paper (Badwawi, Abusara, and Mallick 2015) presents power
electronics topologies and control for hybrid systems. A good description of AC versus DC solar
coupling, including their pros and cons with reference to the solar energy industry, is
documented in (Marsh 2019).
**Figure 2. Dominant wind turbine technologies.**
Source: Singh and Santoso (2011)
Key: DFIG – doubly-fed induction generator; IM – induction motor; SM – synchronous motor.
#### 2.1 AC-Coupled Wind-Storage Hybrid Systems
In an AC-coupled wind-storage system, the distributed wind and battery connect on an AC bus
(shown in Figure 3). Such a system normally uses an industry-standard, phase-locked loop
feedback control system to adjust the phase of generated power to match the phase of the grid
(i.e., synchronization and control). To integrate electrical power generated by DERs efficiently
and safely into the grid, grid-side inverters accurately match the voltage and phase of the
sinusoidal AC waveform of the grid (Denholm, Eichman, and Margolis 2017).
An AC-coupled wind-storage system has some advantages over DC-coupled systems. AC-coupled systems use legacy hardware and standardized equipment commonly available in the
market, making them relatively easy to install. In an AC-coupled system, energy stored by the
battery can be independent of the output of the wind turbine, allowing the combined system to be
sized and operated based on the energy and grid services that the project will provide. Two
independent units will also have a high total capacity because both units can provide full output
simultaneously. In this scenario, the battery storage can have fewer charging/discharging cycles
than it would in the DC-coupled system. However, this may not always be the case if the hybrid
system is in an isolated mode of operation.
For Type 3 and Type 4 wind turbines (see Figure 2), an AC-coupled wind-storage system would
require two inverters: one DC/AC one-way inverter for the wind (after the DC/AC converter)
and a bidirectional DC/AC inverter for the battery system for charging/discharging, as depicted
in an example system shown in Figure 3. The power conversion equipment is costly but allows
the full capacity of both generation sources to be used.
**Figure 3. Common topology of an AC-coupled wind-storage hybrid system.**
Source: Adapted from Denholm, Eichman, and Margolis (2017)
#### 2.2 DC-Coupled Wind-Storage Hybrid System
In a DC-coupled wind-storage system, the wind turbine and BESS are integrated at the DC link
behind a common inverter, as detailed for PV by Denholm, Eichman, and Margolis (2017) and
adapted for wind-plus-storage systems in Figure 4. The electricity generated by the wind turbine
is rectified and coupled with the BESS, and the battery is maintained through the DC-DC
converter. The grid-side inverter can be one-directional (i.e., DC/AC) or bidirectional, and the
battery can store energy from just the turbine or from both the turbine and the grid. This is shown
in Figure 4 and discussed in further detail for PV by Denholm, Eichman, and Margolis (2017).
**Figure 4. Schematics of DC-coupled wind-storage systems.**
Source: Adapted from Denholm, Eichman, and Margolis (2017)
In a DC-coupled system using a one-directional DC/AC inverter, the battery can only be charged
using the wind turbine. Some states and federal programs offer tax credits for such systems
(NREL 2018b). With a bidirectional inverter, the stacked value streams for the BESS may
increase because it can serve energy-shifting functions and participate in energy arbitrage. In
addition, such a system may qualify for tax credits and other incentives available to one-directional inverters.
Type 3 and Type 4 wind turbines share many of the same components as energy storage systems
and can often share a significant portion of AC/DC and DC/AC infrastructure, with a DC link
capacitor in between (Miller 2013, 2014). In this case, a battery with a DC output can be
connected directly or via its own bidirectional DC-DC converter for power regulation. This type
of storage is known as an integrated storage in the DC link of the wind turbine. A recent master’s
degree thesis at the Norwegian University of Science and Technology evaluated the modular
multilevel converter for medium-voltage integration of a battery in the DC link (Rekdal 2018). A
multilevel converter is a method of generating high-voltage waveforms from lower-voltage
components. Modular multilevel converters are considered a promising battery interface as they
have very high efficiency; excellent AC waveforms; and a scalable, modular structure, while also
allowing for the use of semiconductors with low ratings. However, there is not much research
available in the public domain about how to optimize the size of integrated storage for given
wind power plant sizes and energy resources.
For hybrid systems, there has been recent interest in revisiting multiport DC/DC converters to
share power electronics components, simplify operational logics, and develop compact/efficient
architectures. For an isolated application, Zeng et al. (2019) present a four-port DC/DC converter
that can handle wind, PV, battery storage, and loads (see Figure 5). The authors claim that their
multiport converter has the advantage of using a simple topology to interface with sources of
different voltage/current characteristics.
**Figure 5. Four-port DC/DC converter for an isolated system.**
Source: Zeng et al. (2019)
Key: WTG = wind turbine generators; LVS = low voltage side; HVS = high voltage side; BAT = battery storage; PV =
solar photovoltaic.
#### 2.3 Comparison of AC and DC Configurations
Both AC and DC wind-storage hybrids have advantages and disadvantages that depend on the
details of the specific installation. For example, direct-drive, Type 4, full-conversion wind
turbines (e.g., EWT, Enercon) are suitable for integrated AC and DC coupling, as both an AC
and DC bus exist in their typical configuration. However, conventional Type 1 turbines are more
suited for AC coupling because of the lack of a DC bus in their typical configuration. The
configuration also depends on the specifics of the project and economic factors such as market
price for energy and grid services, and tax credit policies for hybrid plants. The following is a
high-level comparison of characteristics of AC and DC hybrid configurations:
- **AC system maturity and battery independence. AC-coupled systems use standard AC**
interconnection equipment available in the market that is easy to install. This allows more
flexibility in the sizing of the wind turbine and battery, both in terms of power and
capacity. In an AC-coupled system, energy stored by the BESS can be independent of the
output of the individual wind turbines.
- **DC systems for smaller and distributed hybrids. As the size of the DER project**
increases, a clear demarcation begins to emerge between the AC and DC coupling based
on the economics of the project and other nontechnical constraints. A DC-based system is
known to interface better with other DC-based distributed generation on the system, but
currently is limited to rather small sizes. Such a system can communicate and supply
power over a single distribution line, and interconnection with other on-site DC
generation sources such as PV is simplified. Experts on the future of direct current in
buildings (Glasgo, Azevedo, and Hendrickson 2018) suggest that the two biggest barriers
for DC coupling are industry professionals unfamiliar with DC and comparatively small
markets for DC devices and components.
- **Trends in power-electronic-interfaced sources and loads favoring DC coupling.**
Recent advances achieved in power electronics—which made DC voltage regulation a
simple task—have increased the penetration of DC loads and sources and encouraged
researchers to reconsider DC distribution for portions of today’s power system to increase
overall efficiency (Elsayed, Mohamed, and Mohammed 2015). Although the
conventional rotating-electric machine-based power system predominantly operates via
AC transmission, microgrids intrinsically support DC power. Many distributed energy
systems are driven by static electronic converters (Gu, Li, and He 2014). Compared to its
AC counterpart, a DC microgrid has the potential to achieve higher efficiency, power
capacity, and controllability. Because of these advantages, a DC-based power system
with DC-coupled wind and storage is an enabling technology for microgrids, especially
in small-scale residential applications such as green buildings, sustainable homes, and
energy access applications in areas inaccessible by the national grid.
- **System efficiency and cost. An AC-coupled system will have lower roundtrip efficiency**
for battery charging than a DC-coupled system, which charges the battery directly and
does not have power flow through two inverters (one wind turbine inverter and one BESS
inverter). However, only a portion of the wind turbine power produced goes into the
storage and is thus subject to the losses. An NREL study based on a utility-scale PV
project suggests that using DC coupling rather than AC coupling results in a 1% lower
total cost (Fu, Remo, and Margolis 2018), which is the net result of cost differences
between solar inverters, the structural and electrical balance of system, labor, developer
overhead, sales tax, contingency, and profit. For an actual project, however, cost savings
may also need to account for additional factors such as retrofit considerations, system
performance, design flexibility, and operations and maintenance.
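The efficiency claim in the last bullet can be made concrete with a short Python sketch. The per-stage converter efficiencies below are hypothetical placeholders rather than measured values, and battery internal losses are ignored; the point is only that the AC-coupled charging path crosses more conversion stages than the DC-coupled one.

```python
def path_efficiency(stage_efficiencies):
    """Multiply per-stage efficiencies along a wind -> battery -> grid path."""
    eta = 1.0
    for stage in stage_efficiencies:
        eta *= stage
    return eta

ETA_INV = 0.975   # hypothetical DC/AC inverter efficiency
ETA_DCDC = 0.985  # hypothetical DC/DC converter efficiency

# AC-coupled: turbine inverter -> battery inverter (charge) -> battery
# inverter (discharge). DC-coupled: DC/DC in -> DC/DC out -> shared inverter.
ac_path = path_efficiency([ETA_INV, ETA_INV, ETA_INV])
dc_path = path_efficiency([ETA_DCDC, ETA_DCDC, ETA_INV])
print(f"AC-coupled roundtrip: {ac_path:.3f}, DC-coupled roundtrip: {dc_path:.3f}")
```

With these placeholder numbers, the DC-coupled path retains roughly 2 percentage points more of the stored energy, in line with the qualitative comparison above.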
Further design considerations for different hybrid configurations to promote reliability and
flexibility include:
- **DC systems. A DC-coupled wind-storage system requires one less inverter than an AC-**
coupled system (see Figure 3), which reduces wiring and housing costs as well as
conversion losses. Type 3 and Type 4 wind turbines also have hardware components that
can be used for DC coupling at the DC link. Because the BESS is connected directly to
the distributed wind turbine system, excess generation that might otherwise be clipped by
an AC-coupled system at the inverter level can be sent directly to the BESS, which could
improve system economics (DiOrio and Hobbs 2018).
- **AC systems. AC systems use off-the-shelf components, and they do not require**
technology-specific modification or engineering. In addition, AC system components are
modular, which reduces retrofit costs, and they stack well with each other compared to a
DC-coupled system. They require less maintenance time because, unlike a DC-coupled
system, batteries do not need to be installed next to the bidirectional inverter. AC-coupled
systems can also use larger battery racks per megawatt-hour of battery capacity and thus
reduce the number of heating, ventilating, and air-conditioning and fire-suppression
systems in the battery containers (Fu, Remo, and Margolis 2018). These systems allow
manageable battery health monitoring and state-of-charge (SOC) planning with an
independent battery management system that has its own bidirectional DC-AC inverter
and can use redundant inverters that provide increased reliability and available capacity.
- **Retrofit to add storage to existing generation. For a retrofit scenario with individual**
wind turbines (i.e., adding battery storage to existing wind turbine generators), an AC-coupled BESS may be the only practical option because of the extensive turbine-specific
modifications that would need to be implemented for a DC-coupled system.
- **Synchronization. A hybrid system coupling in a DC common bus does not require the**
synchronism an AC bus configuration requires. The voltage is fixed for all subsystems in
the hybrid system, and the current from each subsystem is controlled independently. A
battery bank connected directly or through a DC/DC link can regulate the DC bus
voltage. The subsystem can independently perform maximum power point tracking by
using an AC/DC converter for the wind turbine and DC/DC converter for the PV case. A
common DC/AC inverter maintains the voltage across the load.
The wind and solar industries have many similarities for AC- and DC-coupled systems.
Badwawi, Abusara, and Mallick (2015) present a summary of research regarding power
electronic topologies and control. Marsh (2019) also provides a good description of AC versus
DC solar coupling, including pros and cons related to the solar energy industry. A co-located
wind-storage system can share some components and help alleviate some transmission-level
constraints.
To expand on the grid support capabilities of wind-storage hybrids, GE conducted a study on
wind power plants with integrated storage on each turbine rather than central storage, along with
an extra inverter and transformer for redundancy (Miller 2014). There are always some trade-offs
involved in choosing a storage topology. The GE study does not present details about sizing
integrated storage but rather demonstrates the benefits of the technology. As part of the
MIRACL project, NREL plans to explore integrated storage sizing and configurations using
theoretical and computational approaches through desktop simulations and power-hardware-in-the-loop validation.
## 3 Hybrid System Controls: Stable Integration and Maximum Utilization
A defining feature of hybridization is the ability to coordinate generation to effectively balance
varying load or net load (load minus variable renewables), resulting in an economic dispatch of
the generation and storage assets. This is possible by controlling individual devices (e.g.,
generators, storage, load) within the hybrid system, or by controlling the hybrid system as a
single unit, providing a precise power output to benefit the overall power system. A system-level
controller utilizes algorithms to issue commands to each device within the hybrid system based
on load and variable renewable forecasts.
[Figure 6 shows a three-level control hierarchy: tertiary (supervisory), acting on ΔE(grid) and set points from the grid dispatcher; secondary (supervisory), acting on ΔV and Δf at the hybrid plant or microgrid level; and primary, acting on Δload and dynamics at the individual asset level.]

**Figure 6. Hierarchy of hybrid system control**
Typically, controls use a hierarchical architecture as well as two-way communication from
individual subsystems or devices in a hybrid system to achieve the best hybridization outcomes.
A typical hierarchical control can be classified into three levels, as shown in Figure 6. The
primary control manages the load/current sharing through droop control. The secondary control
responds to the steady-state error on the voltage and frequency, and the tertiary control maintains
coordination based on the status at the point of common coupling.
The objective of control is to maintain the electrical system parameters within acceptable limits
by balancing generation with demand at the hybrid system level, taking system constraints and
the health trajectory of subsystems and individual components into account. It should also be
recognized that these control functions are made at different time steps, with electrical system
parameter adjustments needing to happen very quickly whereas others, such as decisions based
on balancing load or varying renewable energy production, can typically be made over minutes
or hours.
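To illustrate how the three layers of Figure 6 run at different time steps, the sketch below counts layer invocations over a 10-minute window. The cadences (20 ms for primary, 1 s for secondary, 5 min for tertiary) are hypothetical round numbers chosen for the example, not values prescribed by this report.

```python
# Hypothetical control cadences, expressed in 20-ms simulation steps.
N_STEPS = 30_000          # 600 s of simulated time at 20 ms per step
SECONDARY_EVERY = 50      # 1 s:   trim steady-state delta-V, delta-f
TERTIARY_EVERY = 15_000   # 300 s: apply set points from the grid dispatcher

counts = {"primary": 0, "secondary": 0, "tertiary": 0}
for step in range(N_STEPS):
    counts["primary"] += 1                 # droop/current sharing every cycle
    if step % SECONDARY_EVERY == 0:
        counts["secondary"] += 1
    if step % TERTIARY_EVERY == 0:
        counts["tertiary"] += 1
print(counts)  # {'primary': 30000, 'secondary': 600, 'tertiary': 2}
```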
#### 3.1 Distributed Hybrid System Controls
Well-designed controls can enable several capabilities that improve hybrid system economics.
**_3.1.1 Essential Reliability Services and Stability_**
Hybrid controls should be flexible (or customizable) to accommodate various essential reliability
services that the hybrid system may provide. Control and coordination between the hybrid
technologies becomes more challenging as the contribution of variable renewables increases on
the grid and more is expected of hybrid systems to support grid stability. For example, toward
the end of rural extension lines or long transmission lines where voltage and frequency are more
sensitive to the dynamic load/generation (i.e., weakly interconnected systems), the phase-locked
loop measurement system, on which frequency and phase estimation and subsequent controls
rely, is known to have issues with frequency and phasor measurement, adversely affecting
stability. Therefore, the controls in a hybrid system should be able to ease and enhance the
stability of the services provided. The use of a battery to provide services such as inertial
response will also decrease mechanical load in the wind turbine (extending its life). Similarly,
wind turbines can provide damping control to offset oscillations (e.g., local, forced, or interarea),
which will be further enhanced with interconnected battery storage.
**_3.1.2 Frequency Response_**
In addition to the (natural or synthetic) inertial response to any generator outages causing
frequency drop, the grid typically uses three additional levels of frequency response: 1) primary
or governor response subject to frequency deviation beyond a “dead band”; 2) secondary
response that uses 4- to 6-second-level automatic generation control signals coming from a
central dispatcher (taking into account both frequency deviation as well as tie-line transactions)
into a variable called an area control error; and 3) tertiary response, typically coming from
additional reserves through market dispatch. For each of these services, there must be headroom
reserved from the maximum available power for both a wind turbine and PV plant. In a hybrid
plant, a battery can complement the variable renewable power and provide these frequency
response services, removing the need to curtail and reserve headroom in the wind turbine, unless
it becomes necessary for reliability reasons.
Droop control is a common way to control and coordinate multiple distributed resources in a
hybrid plant, allowing them to share power and support multiple grid services. A droop for a
resource with a rated power P(rated) in a power system with frequency f = 60 Hz is defined as:
$$\frac{1}{\mathrm{droop}} = \frac{\Delta P / P_{\mathrm{rated}}}{\Delta f / 60~\mathrm{Hz}} \qquad (1)$$
For example, Xcel Energy has used wind turbine droop control for years (Porter, Starr, and Mills
2015). The most common droop setting used in many power systems is 5%, but in some cases
more aggressive 3% droop is used as well (NREL 2018b). A 5% droop means that a 5% change
in frequency would result in a 100% change in power. For a BESS system operating at 5% droop
control at a nominal frequency of 60 Hz, a decrease or increase in frequency of 3 Hz (i.e., 5/100
× 60 Hz) should deliver/absorb the rated power of the battery. However, the deliverability of the
power for a BESS and any source of generators will depend on the available headroom or the
current state of the resource (i.e., maximum generation, current generation set point, or battery
SOC).
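A minimal Python sketch of this droop behavior is shown below; it applies Eq. (1) and clamps the command to the headroom actually available (the SOC- or rating-limited deliverability just described). Function and parameter names are hypothetical, and a real controller would also implement the dead band and ramp limits discussed earlier in this section.

```python
def droop_power_response(delta_f_hz, p_rated_kw, droop=0.05,
                         p_setpoint_kw=0.0, p_min_kw=0.0, p_max_kw=None,
                         f_nominal_hz=60.0):
    """Per Eq. (1), a frequency deviation of droop * f_nominal sweeps the
    resource across its full rated power; output is clamped to headroom."""
    if p_max_kw is None:
        p_max_kw = p_rated_kw
    # Under-frequency (negative delta_f) -> inject more power.
    delta_p_kw = -(delta_f_hz / (droop * f_nominal_hz)) * p_rated_kw
    return max(p_min_kw, min(p_max_kw, p_setpoint_kw + delta_p_kw))

# 5% droop at 60 Hz: a 3-Hz drop commands the full rated power.
print(droop_power_response(-3.0, p_rated_kw=100.0))        # 100.0
print(droop_power_response(-0.3, p_rated_kw=100.0))        # 10.0
print(droop_power_response(-3.0, 100.0, p_max_kw=60.0))    # clamped to 60.0
```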
As the contribution level of variable renewable energy grows in a microgrid, additional design
challenges emerge for the integration of BESS and appropriate levels of droop settings. A 2015
study (Weaver et al. 2015) looked at the energy storage requirements of DC microgrids with
high-penetration renewables under droop control. This study suggested that decentralized control
architecture is possible with a distributed or adaptive droop control that is subject to evolving
net-load disturbance or area control error fluctuations, and, consequently, the energy storage
requirements in a microgrid may be minimized with the optimal choice of droop settings.
Another study related to DC microgrids (Zhao and Dörfler 2015) demonstrated that the droop
control strategy can achieve fair and stable load sharing (even in the presence of actuation
constraints) or follow set points provided by the economic dispatch.
**_3.1.3 Voltage and Reactive Power Support_**
In addition to frequency support, voltage and reactive power control are another criterion for
hybrid plants. The control may be to maintain a specific voltage set point or power factor at the
point of interconnection or to maintain the voltage within American National Standards Institute
limits of 0.95 to 1.05 per unit (or 0.90 to 1.1 per unit) for certain locations or contingency
situations. Type 1 and Type 2 wind turbines will typically need external reactive power
resources, such as capacitor banks and static synchronous compensators (STATCOMS) to
provide reactive support and voltage control. Type 3 wind turbines come with a limited range
(+/- 30%) of reactive power control, given the size of the rotor-side converter. Type 4 turbines
come with a full range of reactive power capabilities, just like a PV inverter, and could be
operated like a STATCOM even when the turbine is not producing real power. However, given
the need to curtail real power to produce reactive power, the storage in the hybrid plant can
alleviate the issue by providing reactive power support. With the help of energy storage, the
hybrid plant’s range of reactive power control can be increased and maximized to support the
required power factor or voltage performance.
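The reactive power ranges described above follow from the inverter's apparent-power limit, Q = sqrt(S² − P²). The Python sketch below (function and parameter names are hypothetical) estimates that headroom, using an optional fractional cap as a crude stand-in for the ±30% rotor-side-converter limit of a Type 3 turbine, and shows how storage inverter capacity extends the plant's reactive range.

```python
import math

def reactive_headroom_kvar(p_out_kw, s_rating_kva, q_cap_frac=1.0):
    """Reactive power available at a given real-power output, limited by
    the apparent-power rating and an optional fractional cap."""
    if abs(p_out_kw) >= s_rating_kva:
        return 0.0
    q = math.sqrt(s_rating_kva**2 - p_out_kw**2)
    return min(q, q_cap_frac * s_rating_kva)

# Type 4 turbine run as a STATCOM at zero real power: full rating as Q.
print(reactive_headroom_kvar(0.0, 1600.0))                     # 1600.0
# Type 3 turbine modeled with a ~30% cap:
print(reactive_headroom_kvar(1000.0, 1600.0, q_cap_frac=0.3))  # 480.0
# Battery inverter capacity adds to the plant's total reactive range:
print(reactive_headroom_kvar(1500.0, 1600.0) + reactive_headroom_kvar(0.0, 500.0))
```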
**_3.1.4 Flexibility and Economic Grid Services_**
Although all the services mentioned are needed to ensure that the hybrid power plant can be
integrated into the grid and support grid reliability and stability, the most important factors from
a project developer’s perspective are the best utilization of all assets and maximizing profit. To
do that, the modelers will have to understand every possible combination of individual devices
suited for a particular location and develop optimization and management algorithms that can
harness the synergies among various components. For example, among various objective
functions of the hybrid resource optimization and control, reducing energy from a diesel genset
is a desirable outcome, especially in a remote, isolated grid scenario. Efficient use of fuel, or
hedges against winter fuel shortages, should also be accounted for when designing and operating
hybrid plants. For example, this is the case in Alaskan microgrid designs.
In a grid-tied scenario, maximizing the revenue from energy and ancillary markets will be key.
The increased use of variable renewable energy resources has also increased the necessary
reserve, regulation, and ramping capability needed in the grid. A wind-storage hybrid plant is
well-suited to provide these flexibility and ancillary services in addition to firm dispatchable
energy.
**_3.1.5 Enabling Fast and Accurate Response_**
Although energy storage can make wind turbines more versatile when hybridized, appropriate
controls and tests must be done to ensure that coordination and response times are good enough
to provide the necessary services. For example, fault ride-through and black-start capability will
need prompt response and even near-instantaneous synchronization with the grid. NREL
researchers have achieved Li-ion battery response times of less than 30‒40 milliseconds (ms)
(NREL 2018a). The response time also depends on which mode the Li-ion battery is operating
in. In a grid-following mode, the response time is about 25 ms, whereas it is about 50 ms in a
grid-forming mode.
In grid-forming mode, a hybrid resource is the primary source of the voltage and frequency
regulation. The underlying inverter of the hybrid resource consists of voltage and current
regulators working together to maintain the nominal state of the grid. The grid-forming inverter
may work as the master or work in parallel with other inverters in the microgrid. The main
challenges during grid-forming mode are to maintain the stability of operation during changing
set points and ensure black start of the microgrid (Fusero et al. 2019). During transitions, such as
connecting and disconnecting from the utility grid or energizing and de-energizing other DERs
in islanded mode, the grid-forming inverter should be able to resynchronize the system with
minimum transients. The mode requires correcting active and reactive power sharing in tandem
with other DERs. To summarize, the inverter in grid-forming mode should be able to mimic the
dynamic behavior of synchronous generators. A precise control of the virtual inertia of the
inverter is important for system stability in both grid-following and grid-forming modes.
#### 3.2 Modeling Controls and Time Scales
The wind-hybrid models used for a simulation could be discrete or continuous. Depending on the
time scales of a discrete simulation, we can capture various dynamics of the hybrid system using
simulation models ranging from electromagnetic transient to the phasor solution at a given
frequency (e.g., 60 Hz). Different resolutions and fidelities of model physics are essential to
capture events ranging from dynamics to minute and hourly deviations. The ability to visualize
and generate data demonstrating the interactions of inverters and batteries at various scales will
aid in an expanded understanding of stability. In general, the power system simulation models
for wind-hybrid systems may be classified as:
- Detailed electromagnetic transient simulation (about 1 nanosecond-microsecond, including
modeling power electronics switching).
- Average simulation (about 100 microseconds-milliseconds; good enough to capture the
electrical transients, phase imbalances, faults, and dynamics).
- Phasor solution (at 60 Hz, typically balanced modeling). Sometimes, we may be
interested in a solution at a frequency, such as 60 Hz. A phasor solution solves a much
simpler set of algebraic equations relating to the voltage and current phasors. This
method computes voltages and currents as phasors. Phasors are complex numbers
representing sinusoidal voltages and currents at a particular frequency (Mathworks 2020).
They can be expressed either in Cartesian coordinates (real and imaginary) or in polar
coordinates (amplitude and phase). Because the electrical states are ignored in the phasor
solution, the simulation is much faster to execute (a short numerical sketch of this
representation follows this list).
- Hybrid simulation and co-simulation (in which certain spatiotemporal characteristics
could be modeled with higher fidelity whereas others could use simpler models for faster
computation). Such modeling can also be done using co-simulation of several existing
tools of varying modeling fidelity to ensure scalability to larger systems and faster
computation. Hybrid simulations may combine simulations at various time scales and
model topologies. One example is combining the electromagnetic transient and transient
stability simulations (Athaide 2018). Another example is the co-simulation of bulk
transmission systems, along with market dispatch, and the individual distribution system
feeders that may connect to a hybrid distributed wind system. The bulk system and
market representation may have to be modeled at 5-minute time scales, whereas the
distribution network may have to be simulated at a higher temporal resolution to respect
voltage bounds (quasi-static steady state).
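The sketch below is the numerical illustration promised in the phasor bullet above: at a fixed frequency, a circuit reduces to one algebraic complex-number equation rather than a differential equation. The component values are arbitrary.

```python
# Minimal phasor-solution sketch at 60 Hz: sinusoidal quantities become
# complex numbers and V = Z * I is solved algebraically. Values arbitrary.
import cmath, math

f = 60.0                                  # solution frequency (Hz)
V = cmath.rect(480.0, 0.0)                # voltage phasor: 480 V at 0 rad
Z = complex(1.0, 2 * math.pi * f * 1e-3)  # 1-ohm resistor + 1-mH inductor

I = V / Z                                 # algebraic solve, no ODE
mag, phase = cmath.polar(I)               # polar form: amplitude and phase
print(f"|I| = {mag:.1f} A, angle = {math.degrees(phase):.1f} deg")
```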
## 4 Operation and Dispatch of Wind-Storage Hybrids
Operation and dispatch of wind-storage hybrids depend on the intended function as well as the
configuration of the hybrid in relation to the external power grid. For example, a hybrid system
operating in an isolated grid may differ significantly from the same hybrid system in grid-connected mode. In an isolated grid, the wind-storage hybrid system may need to operate as a
grid-forming asset, whereas in grid-connected mode it could normally operate in a grid-following mode. This is a common challenge for generation employed in microgrids, and the
complexity increases slightly for a hybrid system in a microgrid.
#### 4.1 Wind-Storage Hybrids Optimal Dispatch
Operating a wind-storage hybrid system involves uncertain intrinsic and extrinsic factors. One of
the major flaws of energy storage dispatch algorithms is that they are often based on forecasts
relying on perfect foresight and/or historical trends. These forecasts are used to optimize net
benefits of the operation and dispatch for a set of geospatial and temporal constraints.
Optimizing operation is governed by technical and economic requirements and can include
multiple time scales or multiperiod formulation of the operation and dispatch of a wind-storage
hybrid system. A margin for error must be included for a real-world system to ensure that its
technical and economic goals are met.
A hybrid system model can have different objectives than the individual subsystem models. The
model may include objective functions, such as optimizing revenue from co-optimized markets,
not just from energy, which is a departure from how energy storage and distributed wind turbines
have been traditionally modeled and dispatched. A wind-storage hybrid system mitigates
variability by injecting more firm generation into the grid. This is particularly helpful in high-contribution systems, weak grids, and behind-the-meter systems that have different market
drivers. A battery combined with a wind generator can provide a wider range of services than
either the battery or the wind generator alone.
A study conducted for an isolated system (Barley and Winn 1996) examined three dispatch
strategies. The results illustrate the nature of the optimal strategy for two simple dispatch
strategies, load following and cycle charging (HOMER Energy 2020), with a minimum run time.
The study found that the combination of a simple diesel dispatch strategy with the frugal use of
stored energy is virtually as cost-effective as the ideal predictive strategy.
An NREL study compared an independent (uncoupled) dispatch of PV and storage
for a day with a DC-coupled dispatch. As shown in Figure 7, in this case, the DC-coupled system
seems to lose revenue because the shared 50-MW inverter cannot fully utilize the storage system
(the total solar and storage power output is limited to a 50-MW inverter limit) (Denholm,
Eichman, and Margolis 2017). However, such a system (with an inverter loading ratio > 1) at times
can avoid clipped energy by forcing the storage to charge with the excess power from PV.
**Figure 7. Dispatch of photovoltaics-plus-storage system on a typical day**
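A toy dispatch step below illustrates the shared-inverter trade-off just described: exporting everything clips at the inverter limit, while charging the battery on the DC bus absorbs the excess. The 50-MW limit mirrors the study; everything else is an illustrative assumption.

```python
# Toy DC-coupled dispatch step with a shared 50-MW inverter; positive
# batt_cmd_mw discharges, negative charges. Illustrative only.

def dc_coupled_step(pv_dc_mw, batt_cmd_mw, inverter_mw=50.0):
    """Return (grid_export_mw, battery_charge_mw, clipped_mw)."""
    if batt_cmd_mw < 0:
        # Charging on the DC bus can absorb PV that would otherwise be
        # clipped at the inverter limit.
        charge = min(-batt_cmd_mw, pv_dc_mw)
        pv_dc_mw -= charge
    else:
        charge = 0.0
        pv_dc_mw += batt_cmd_mw      # discharge adds to the DC-bus power
    export = min(pv_dc_mw, inverter_mw)
    return export, charge, pv_dc_mw - export

# With 60 MW of PV on the DC bus, exporting everything clips 10 MW...
print(dc_coupled_step(60.0, 0.0))    # -> (50.0, 0.0, 10.0)
# ...while charging 10 MW into storage avoids the clipping entirely.
print(dc_coupled_step(60.0, -10.0))  # -> (50.0, 10.0, 0.0)
```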
Several considerations remain regarding operating and dispatching hybrid plants in grid-tied
mode, including:
- If the hybrid plant is self-scheduled, it needs an algorithm to use forecasts of distributed
wind and prices to dispatch the hybrid wind and storage, considering the maximal
utilization of the storage SOC for multiple look-ahead periods.
- If the hybrid plant will be dispatched by a centralized scheduler and dispatcher, then new
challenges and opportunities arise for the construction of bids and offers that will be sent
from the hybrid plants. If the plant is wind only, then forecasts with their bounds are
typically sent, and in some rare utility-scale applications, the ability of the wind plant to
provide down-regulation (by curtailment) is communicated. If the plant has energy
storage, then communication of SOC and charging and discharging schedules will be key.
For a hybrid plant, the central dispatcher may only want to know 1) the maximum and
minimum generation capability (considering forecasts, available SOC, and price forecasts
for maximizing storage arbitrage); 2) up and down ramp rates for 5- and 10-minute
intervals relevant for regulation and spinning reserve services (from storage rates and
forecasted wind ramps); and 3) operational cost, which may be a function of nominal
wind turbine and storage operational costs, including the impact of cycling on battery
life.
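The sketch below turns the first of the quantities listed above (maximum and minimum generation capability) into arithmetic, combining a wind forecast with the battery's power rating and SOC; the function and its inputs are hypothetical, not a market-interface specification.

```python
# Hedged sketch of a hybrid plant's dispatchable range over one interval,
# from a wind forecast plus battery rating and state of charge. Names and
# numbers are hypothetical.

def capability_range_mw(wind_forecast_mw, soc_mwh, batt_power_mw,
                        batt_capacity_mwh, interval_h=5 / 60):
    """Return (p_min, p_max) in MW for one dispatch interval."""
    # Discharge limited by converter rating and remaining stored energy.
    discharge = min(batt_power_mw, soc_mwh / interval_h)
    # Charge limited by rating and remaining room in the battery.
    charge = min(batt_power_mw, (batt_capacity_mwh - soc_mwh) / interval_h)
    p_max = wind_forecast_mw + discharge
    p_min = max(0.0, wind_forecast_mw - charge)  # wind can also curtail
    return p_min, p_max

# 8 MW of forecast wind with a 2-MW / 4-MWh battery at 50% SOC over a
# 5-minute interval gives a 6-10 MW dispatchable range:
print(capability_range_mw(8.0, 2.0, 2.0, 4.0))  # -> (6.0, 10.0)
```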
#### 4.2 Wind-Storage Hybrids Supporting Black Start
Black start is the procedure used to restore power when it is lost. It requires a gradual ramping up
of wind turbine power in coordination with other subsystems, including controllable loads.
Wind-storage hybrids of the correct capacity can support black starts of microgrids in island
mode and in permanently isolated grids. In grid-connected mode, the grid normally provides the
required reference voltage to start a wind turbine. Black start is an advanced operation that
requires collaboration and coordination among many subsystems, including storage, using an
advanced control algorithm.
Wind turbines can provide black start in conjunction with an inverter (grid forming) and external
auxiliary power supplies such as battery storage to maintain a minimum DC voltage to initiate
the power ramp-up operation. In the case of the SMA Solar Technology inverter at NREL’s
Flatirons Campus microgrid (SMA 2016), the black-start operation starts when, after closing the
DC load-break switch, the inverter checks for voltage at the AC terminals. If no AC voltage is
applied, the AC disconnection unit is closed, and the configured AC voltage set point is ramped
up. The AC voltage set point is usually specified via an external plant control using a Modbus
protocol. If an AC voltage already exists to the inverter terminal, the inverter can synchronize
with the external auxiliary power supply, close the AC disconnection, and support the power
grid. The start voltage must be at least 20% of the nominal AC voltage.
Wind turbines have demonstrated the ability to provide a black start in some special
circumstances. Figure 8 demonstrates a black-start operation utilizing three distributed wind
turbines in an isolated grid. This illustration (Majumder 2020) demonstrates how control systems
gradually adjust the DC voltage, AC voltage, and load to build up the voltage reference for the
second wind turbine to come online and aid the black-start process.
**Figure 8. Distributed black start of wind turbines in an island mode.**
Source: Majumder (2020)
In Figure 8, the black-start operation starts at time t1, with wind turbine generator 1 (WTG1)
energized using an external auxiliary supply to bring the bus voltage up to 40% of the reference
voltage at t2. From t2 to t3, the wind turbine attains a steady operation at 0.5 MW. At t3, WTG3
is brought into the process and the load in the bus is increased accordingly to 1.2 MW to match
the generation. The voltage ramps up linearly following an external AC reference and reaches
the reference voltage at t5. The system remains at steady state until t6, at which point WTG2 is
energized fully to deliver the rated 0.5 MW of power.
Obviously, the black-start operation of the wind turbine is contingent upon the wind resource.
Integrated storage in the DC link of the wind turbine may function as an external auxiliary source
during the operation. For a microgrid with more than one inverter, a superordinate plant control
is required to coordinate various stages of the black start among the inverters.
In the United Kingdom, National Grid ESO has started an ambitious project called Distributed
ReStart (National Grid ESO 2020), which plans to demonstrate the black-start service through
the coordinated operation of DERs.
## 5 Techno-Economic Sizing of Wind-Storage Hybrids
Techno-economic evaluation of hybrid plants depends on both the benefits and costs (e.g.,
investment, installation, balance of system, soft, life cycle, and operational costs). Benefits could
include increased revenue by utilizing otherwise curtailed variable renewable energy. Some
components could also be shared for effective cost reduction. With the added flexibility of
energy storage, a hybrid wind power plant may be able to provide—in addition to firm energy—
flexibility and ancillary services with very high dependability. However, because of the shared
inverter, the system may generate less revenue under configurations of hybrid coupling that limit
storage operation during periods of high wind output. We will review some of these trade-offs in
this section, based on the state-of-the-art sizing methods proposed for wind-storage hybrids in
the open-source literature.
The sizing of storage in a wind-storage hybrid depends on various factors, such as resource
profile, load profile, desired storage functions, pricing signals for energy and other essential
reliability services, and the time scale of the analysis. Here, our focus will be on batteries that can
capture and store excess wind turbine energy and send it to the utility grid or a local microgrid as
necessary. The batteries can be integrated with each wind turbine or installed at the wind farm
level, as shown in Figure 1.
The techno-economic sizing of wind-storage systems depends largely on cost models of storage
and wind-hybrid systems. Such sizing tools go beyond conventional decision-making based solely
on the levelized cost of energy. These computer-aided-engineering tools aim to
capture market structure more accurately, along with synergies and value streams from grid
services that may exist at different levels of the co-located subsystems. The market price signal
can make or break the viability of storage for an integrated wind hybrid project. Hence, it is very
important that the different value streams of a hybrid system be evaluated fairly. Some of the
value streams of a wind-hybrid system are not recognized (or are taken for granted) in the legacy
energy market structure that is dominant today.
#### 5.1 Storage Cost Models
In this section, we summarize storage cost models of Li-ion batteries, using data from both the
energy and vehicle industries. We anticipate that the cost models will not deviate significantly
for a hybrid wind power plant compared to a hybrid PV plant, even though a typical wind turbine output is
AC, whereas PV is DC. The analyses we include here are taken mainly from Denholm, Eichman,
and Margolis (2017); Fu, Remo, and Margolis (2018); and Cole and Frazier (2019).
An NREL study (Cole and Frazier 2019) looked at the cost projection for 4-hour Li-ion systems
in 2018 dollars. Figure 9 shows the overall capital cost for a 4-hour battery system. Regional
capital cost multipliers for battery systems range from 0.948 to 1.11, with Long Island having the
highest multiplier. This study uses a separate cost projection for the power and energy
components of Li-ion systems. Although the range is considerable, all projections show a decline
in capital costs, with cost reductions of 10%‒52% by 2025.
**Figure 9. Battery cost projections for 4-hour Li-ion systems**
Another study analyzed the total net present cost of the hybrid system and compared it with a
system without storage (Dufo-López and Bernal-Agustín 2015) to determine the cost per kilowatt-hour (cycled) of the Li-ion batteries for an economically feasible project. The techno-economic
evaluation of grid-connected storage under a time-of-day electricity tariff suggests that the Li-ion
battery cost would need to be reduced to about 0.085 $/kWh-cycled.
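A back-of-the-envelope version of that metric is sketched below: capital cost spread over lifetime energy throughput. The formula and the inputs are our illustrative assumption, not the model used by Dufo-López and Bernal-Agustín (2015).

```python
# Back-of-the-envelope cost-per-kWh-cycled: capital cost divided by the
# battery's lifetime energy throughput. Inputs are illustrative.

def cost_per_kwh_cycled(capital_per_kwh, cycle_life, depth_of_discharge):
    return capital_per_kwh / (cycle_life * depth_of_discharge)

# A 300 $/kWh battery rated for 4,000 cycles at 90% depth of discharge
# lands near the 0.085 $/kWh-cycled threshold quoted above:
print(f"{cost_per_kwh_cycled(300.0, 4000, 0.9):.3f} $/kWh-cycled")  # 0.083
```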
#### 5.2 Wind-Hybrid Models
There are a handful of first-generation tools to support techno-economic sizing of storage in
relation to wind-hybrid systems. The popular wind-hybrid models in the industry use
performance analysis at hourly or subhourly time scales. The performance-analysis-based tools
focus on energy balance at each time step of simulation for a typical year. The default time step
of many such models is an hour; hence, there will be 8,760 time steps in a typical year. The most
popular models are Hybrid Optimization of Multiple Energy Resources (Lilienthal 2005); the
Distributed Energy Resources Customer Adoption Model (Stadler et al. 2014, Stadler et al.
2016); and Hybrid2 (Manwell et al. 2006, Baring-Gould 1996), among others. There are in-house NREL models such as Renewable Energy Integration and Optimization (Cutler et al.
2017) and the System Advisor Model (Blair et al. 2018) for sizing and analyzing hybrid systems,
all of which include the value of resilience (i.e., hours of support during complete grid outage).
These tools use exhaustive performance analysis and/or some optimization techniques like
mixed-integer linear programming to determine the optimal storage size. These models help
design and optimize hybrid systems generally based on the levelized cost of energy or other
relevant objective functions under a set of constraints. They also use market price signals on a
limited basis ($/kWh) but at times miss the value streams associated with hybridization, such as
enhanced essential reliability services; spatiotemporal values of energy and ancillary services
resulting from changing conditions and transmission congestions; associated value streams; and
sharing of infrastructure at component levels. Metrics based on the levelized cost of energy do
not consider the difference in value between various distributed-wind-plus-storage
configurations. There are not many studies that compare the cost of AC-coupled
distributed wind with DC-coupled distributed-wind-hybrid systems. However, there are some
solar studies that can be used to make an educated guess. Some extra components are needed for
AC-coupled systems, and corresponding labor and balance-of-system costs may range from 1%
to 5% depending on the size and geospatial coordinates of the hybrid project.
There are other tools, such as NREL’s Hybrid Optimization Performance Platform software
(National Renewable Energy Laboratory 2021), that further consider the synergy
of wind turbine and hybrid systems at the component level and optimize their use. In addition to
quantifying value streams associated with energy and capacity services, they also provide a value
methodology to evaluate the essential reliability services that a wind-hybrid system may provide.
A Joint Institute for Strategic Energy Analysis white paper (Ericson et al., “Hybrid Storage
Market Assessment,” 2017) gives an optimistic evaluation of hybrid storage markets. The paper
evaluates which markets are best suited for battery storage and storage hybrid systems and
reviews regulations and incentives that support or impede the implementation of stand-alone
storage and battery hybrids. California is found to be the most attractive geographic market for
U.S. battery storage because of its storage mandates, high renewables penetration, and regulatory
framework conducive to battery storage projects.
Recently, the scope for adding batteries to grid-connected wind projects has been expanding around the
world (Parnel and Stromsta 2020), building on the considerable momentum that already exists
for hybrid solar-plus-storage plants. An earlier study (Ericson et al., “U.S. Energy Storage
Monitor,” 2017) forecasts a twenty-two-fold increase in battery storage and hybrid system
capacity in the United States by 2023 compared to the 2017 baseline.
## 6 Conclusion
In this report, we provide a comprehensive overview of the state-of-the-art for wind-storage
hybrid systems, particularly in distributed applications, to enable distributed wind system
stakeholders to realize the maximum benefits from their system. The goal of this report is to
promote understanding of the technologies involved in wind-storage hybrid systems and to
determine the optimal strategies for integrating these technologies into a distributed system that
provides primary energy as well as grid support services.
In our summary of technical benefits and modeling considerations, we identify the main benefit
of storage integration with wind as smoothing power output and matching energy production with
demand. In addition to smoothing output from the variable wind resource and supporting grid
stability, coupling wind energy generation with a storage system can provide quick-response
frequency and voltage support as well as active power control. Wind-storage hybrid systems can
also support black start of a power system, which can be very beneficial in bringing a power
system back online following a major grid disruption.
Our comparison of distributed-wind-storage hybrid system configurations highlights that turbine
technology and the size of the distributed system, along with non-technical factors such as market
prices for energy and grid services and tax credit policies, determine which configuration is
best suited to meet generation and load demands while keeping the grid stable. Control strategies
to enable these configurations to meet energy and service demands include baseline reliability
and grid stability control, but also frequency response, as well as voltage and reactive power
support. Additional considerations for controls include enabling flexibility for optimal and
resilient control and achieving time scales for measurement and response that enable these assets
to provide advanced services and operation.
In our assessment of optimal operation and dispatch for distributed-wind-storage hybrid systems,
we highlight the dependence of this optimal operation on the distributed system configuration.
Namely, whether the distributed system is behind or in front of the meter, and whether it is grid
connected or not dictates the optimal operation to achieve both market and grid resilience
benefits.
Similarly, our review of techno-economic feasibility models for hybrid power plant design
indicates that the techno-economic sizing of wind-storage systems depends largely on the system
configuration (whether it is grid connected or not, behind the meter or not) as well as storage
system costs. The hybrid plant design models considered in this report aim to capture market
structure accurately, along with synergies and value streams from grid services. The market price
signal determines the viability of storage in hybrid project design. Hence, it is critical to
comprehensively evaluate hybrid plant value streams, some of which are not recognized by our
current energy market participation and compensation structures.
Based on our assessment of the state-of-the-art of wind-storage hybrid energy systems,
particularly for distributed system applications, opportunities for future work include:
- Developing well-documented, publicly available models for both AC and DC systems
- Expanding on the opportunities that complementary wind and solar resources might
provide to a power system
- Evaluating systems in a simulated and power-hardware-in-the-loop environment to aid in
the development of useful case studies to support industry acceptance of distributed-wind-storage hybrid systems
- Using wind-storage hybrid simulations to assess various configurations to support the
development of advanced sizing methods for AC- and DC-coupled wind-storage hybrid
systems
- Including other distributed energy resources (such as solar) into distributed hybrid
systems research.
The opportunities for future work outlined here have directly impacted the research to be
addressed through the remainder of the MIRACL project, under which this report was written.
With the remaining life of the project, we plan to conduct research and develop further publicly
available reports that address each of these opportunities.
## References
Ambrose, Hanjiro. 2020. “The Second-Life of Used EV Batteries.” Union of Concerned
[Scientists: The Equation (blog). Accessed May 27, 2021. https://blog.ucsusa.org/hanjiro-](https://blog.ucsusa.org/hanjiro-ambrose/the-second-life-of-used-ev-batteries/)
[ambrose/the-second-life-of-used-ev-batteries/.](https://blog.ucsusa.org/hanjiro-ambrose/the-second-life-of-used-ev-batteries/)
Athaide, Denise. 2018. “Electromagnetic Transient—Transient Stability Hybrid Simulation for
Electric Power Systems with Converter Interfaced Generation.” Master’s thesis, Arizona State
University.
[https://repository.asu.edu/attachments/211375/content/Athaide_asu_0010N_18449.pdf.](https://repository.asu.edu/attachments/211375/content/Athaide_asu_0010N_18449.pdf)
Badwawi, Rashid al, Mohammed Abusara, and Tapa Mallick. 2015. “A Review of Hybrid Solar
PV and Wind Energy System.” Smart Science 3(3): 127–138.
[10.1080/23080477.2015.11665647.](http://dx.doi.org/10.1080/23080477.2015.11665647)
Baring-Gould, E. Ian. 1996. Hybrid2: The Hybrid System Simulation Model, Version 1.0, User
_Manual. Golden, CO: National Renewable Energy Laboratory. NREL/TP-440-21272._
[https://www.nrel.gov/docs/legosti/old/21272.pdf.](https://www.nrel.gov/docs/legosti/old/21272.pdf)
Baring-Gould, E. Ian, Charles Newcomb, David Corbus, and Raja Kalidas. 2001. “Field
Performance of Hybrid Power Systems.” Presented at the American Wind Energy Association's
WINDPOWER 2001 Conference, Washington, DC, June 4–7. National Renewable Energy
[Laboratory, Golden, CO, NREL/CP-500-30566. https://www.nrel.gov/docs/fy01osti/30566.pdf.](https://www.nrel.gov/docs/fy01osti/30566.pdf)
Barley, C. Dennis and C. Byron Winn. 1996. “Optimal Dispatch Strategy in Remote Hybrid
[Power Systems.” Solar Energy 58(4–6): 165–179. https://doi.org/10.1016/S0038-](https://doi.org/10.1016/S0038-092X(96)00087-4)
[092X(96)00087-4.](https://doi.org/10.1016/S0038-092X(96)00087-4)
Bergey, Michael. 2020. “Business Model for Rural Cooperative Distributed Wind Microgrids.”
Presented at Distributed Wind Energy Association Distributed Wind 2020, Arlington, VA,
February 26–27. Bergey Windpower, Norman, Oklahoma.
[https://distributedwind.org/event/distributed-wind-2020-lobby-day-business-conference/.](https://distributedwind.org/event/distributed-wind-2020-lobby-day-business-conference/)
Blair, Nate, Nick DiOrio, Janine Freeman, Paul Gilman, Steven Janzou, Ty Neises, and Michael
Wagner. 2018. System Advisor Model (SAM) General Description. Golden, CO: National
Renewable Energy Laboratory. NREL/TP-6A20-70414.
[https://www.nrel.gov/docs/fy18osti/70414.pdf.](https://www.nrel.gov/docs/fy18osti/70414.pdf)
California ISO. 2016. “Fast Fact: What the Duck Curve Tells Us About Managing a Green
[Grid.” https://www.caiso.com/Documents/FlexibleResourcesHelpRenewables_FastFacts.pdf.](https://www.caiso.com/Documents/FlexibleResourcesHelpRenewables_FastFacts.pdf)
Cole, Wesley and A. Will Frazier. 2019. Cost Projections for Utility-Scale Battery Storage.
Golden, CO: National Renewable Energy Laboratory. NREL/TP-6A20-73222.
[https://www.nrel.gov/docs/fy19osti/73222.pdf.](https://www.nrel.gov/docs/fy19osti/73222.pdf)
Corbus, David, Charles Newcomb, E. Ian Baring-Gould, and Seth Friedly. 2002. “Battery
Voltage Stability Effects on Small Wind Turbine Energy Capture: Preprint.” Presented at the
American Wind Energy Association WINDPOWER 2002 Conference, Portland, OR, June 2–5.
National Renewable Energy Laboratory, Golden, CO, NREL/CP-500-32511.
[https://www.nrel.gov/docs/fy02osti/32511.pdf.](https://www.nrel.gov/docs/fy02osti/32511.pdf)
Cutler, Dylan, Dan Olis, Emma Elgqvist, Xiangkun Li, Nick Laws, Nick DiOrio, Andy Walker,
and Kate Anderson. 2017. REopt: A Platform for Energy System Integration and Optimization.
Golden, CO: National Renewable Energy Laboratory. NREL/TP-7A40-70022.
[https://www.nrel.gov/docs/fy17osti/70022.pdf.](https://www.nrel.gov/docs/fy17osti/70022.pdf)
Das, Trishna, Venkat Krishnan, and James D. McCalley. 2015. “Assessing the Benefits and
Economics of Bulk Energy Storage Technologies in the Power Grid.” Applied Energy 139: 104–
[118. https://doi.org/10.1016/j.apenergy.2014.11.017.](https://doi.org/10.1016/j.apenergy.2014.11.017)
Denholm, Paul, Josh Eichman, and Robert Margolis. 2017. Evaluating the Technical and
_Economic Performance of PV Plus Storage Power Plants. Golden, CO: National Renewable_
[Energy Laboratory. NREL/TP-6A20-68737. https://www.nrel.gov/docs/fy17osti/68737.pdf.](https://www.nrel.gov/docs/fy17osti/68737.pdf)
DiOrio, Nicholas, and Will Hobbs. 2018. “Economic Dispatch for DC-Connected Battery
Systems on Large PV Plants.” Presented at 2018 PV Systems Symposium—Grid Integration
[Track, Albuquerque, NM, May 3. https://pvpmc.sandia.gov/download/6559/.](https://pvpmc.sandia.gov/download/6559/)
Dufo-López, Rodolfo and José L. Bernal-Agustín. 2015. “Techno-Economic Analysis of Grid-Connected Battery Storage.” Energy Conversion and Management 91: 394–404.
[10.1016/j.enconman.2014.12.038.](http://dx.doi.org/10.1016/j.enconman.2014.12.038)
Dykes, Katherine, Jennifer King, Nicholas DiOrio, Ryan King, Vahan Gevorgian, David Corbus,
Nate Blair, Kate Anderson, Greg Stark, Craig Turchi, et al. 2020. Opportunities for Research
_and Development of Hybrid Power Plants. Golden, CO: National Renewable Energy Laboratory._
[NREL/TP-5000-75026. https://www.nrel.gov/docs/fy20osti/75026.pdf.](https://www.nrel.gov/docs/fy20osti/75026.pdf)
Elsayed, Ahmed T., Ahmed A. Mohamed, and Osama A. Mohammed. 2015. “DC Microgrids
and Distribution Systems: An Overview.” Electric Power Systems Research 119: 407–417.
[https://doi.org/10.1016/j.epsr.2014.10.017.](https://doi.org/10.1016/j.epsr.2014.10.017)
Energy Storage Systems, Inc. 2016. Beyond Four Hours. Portland, OR.
[https://www.essinc.com/wp-content/uploads/2016/11/Beyond-Four-Hours_ESS-Inc-White-](https://www.essinc.com/wp-content/uploads/2016/11/Beyond-Four-Hours_ESS-Inc-White-Paper_12_2016_mr.pdf)
[Paper_12_2016_mr.pdf.](https://www.essinc.com/wp-content/uploads/2016/11/Beyond-Four-Hours_ESS-Inc-White-Paper_12_2016_mr.pdf)
Ericson, Sean, Eric Rose, Harshit Jayaswal, Wesley Cole, Jill Engel-Cox, Jeffery Logan, Joyce
McLaren, Kate Anderson, and Doug Arent. 2017. U.S. Energy Storage Monitor Q2 2017 Full
_Report. GTM Research Group. Golden, CO: National Renewable Energy Laboratory._
[NREL/MP-6A50-70237. https://www.nrel.gov/docs/fy18osti/70237.pdf.](https://www.nrel.gov/docs/fy18osti/70237.pdf)
Ericson, Sean, Eric Rose, Harshit Jayaswal, Wesley Cole, Jill Engel-Cox, Jeffery Logan, Joyce
A. McLaren, Katherine H. Anderson, J. Douglas Arent, John Glassmire, et al. 2017. Hybrid
_Storage Market Assessment. The Joint Institute for Strategic Energy Analysis (JISEA). Golden,_
CO: National Renewable Energy Laboratory. NREL/MP-6A50-70237.
[https://doi.org/10.2172/1399357.](https://doi.org/10.2172/1399357)
Fingersh, L.J. 2003. Optimized Hydrogen and Electricity Generation from Wind. Golden, CO:
National Renewable Energy Laboratory. NREL/TP-500-34364.
[https://www.energy.gov/sites/default/files/2014/03/f12/34364.pdf.](https://www.energy.gov/sites/default/files/2014/03/f12/34364.pdf)
Fu, Ran, Timothy Remo, and Robert Margolis. 2018. 2018 U.S. Utility-Scale Photovoltaics-Plus-Energy Storage System Costs Benchmark. Golden, CO: National Renewable Energy Laboratory.
[NREL/TP-6A20-71714. https://www.nrel.gov/docs/fy19osti/71714.pdf.](https://www.nrel.gov/docs/fy19osti/71714.pdf)
Fusero, Michelle, Andrew Tuckey, Alessandro Rosini, Pietro Serra, Renato Procopio, and
Andrea Bonfiglio. 2019. “A Comprehensive Inverter-BESS Primary Control for AC
[Microgrids.” Energies 12(20): 3810. https://doi.org/10.3390/en12203810.](https://doi.org/10.3390/en12203810)
Glasgo, Brock, Inês Lima Azevedo, and Chris Hendrickson. 2018. “Expert Assessments on the
Future of Direct Current in Buildings.” Environmental Research Letters 13(7): 074004.
[https://doi.org/10.1088/1748-9326/aaca42.](https://doi.org/10.1088/1748-9326/aaca42)
Gu, Yunjie, Wuhua Li, and Xiangning He. 2014. “Frequency-Coordinating Virtual Impedance
for Autonomous Power Management of DC Microgrid.” IEEE Transactions on Power
_Electronics 30(4): 2328–2337. https://doi.org/10.1109/TPEL.2014.2325856._
Hybrid Optimization of Multiple Energy Resources (HOMER) Energy. 2020. “Cycle Charging.”
[https://www.homerenergy.com/products/pro/docs/latest/cycle_charging.html.](https://www.homerenergy.com/products/pro/docs/latest/cycle_charging.html)
Lilienthal, Peter. 2005. “HOMER Micropower Optimization Model.” Presented at the 2004 DOE
Solar Energy Technologies Program Review Meeting, October 25–28, 2004, Denver, CO.
National Renewable Energy Laboratory, Golden, CO, NREL/CP-710-37606.
[https://www.nrel.gov/docs/fy05osti/37606.pdf.](https://www.nrel.gov/docs/fy05osti/37606.pdf)
Majumder, Rajat. 2020. “Weak Area Network Control of Wind Turbine Generators.” Webinar,
[April 20. https://www.esig.energy/event/webinar-weak-area-network-control-of-wind-turbine-](https://www.esig.energy/event/webinar-weak-area-network-control-of-wind-turbine-generators/)
[generators/.](https://www.esig.energy/event/webinar-weak-area-network-control-of-wind-turbine-generators/)
Manwell, J.F., A. Rogers, G. Hayman, C.T. Avelar, J.G. McGowan, U. Abdulwahid, and K. Wu.
2006. Hybrid2–A Hybrid System Simulation Model:Theory Manual. Golden, CO: National
Renewable Energy Laboratory.
Marsh, Jacob. 2019. “AC vs. DC Solar Battery Coupling: What You Need To Know.”
[EnergySage, Accessed June 18, 2020. https://news.energysage.com/ac-vs-dc-solar-battery-](https://news.energysage.com/ac-vs-dc-solar-battery-coupling-what-you-need-to-know/)
[coupling-what-you-need-to-know/.](https://news.energysage.com/ac-vs-dc-solar-battery-coupling-what-you-need-to-know/)
Mathworks. 2020. “Introducing the Phasor Simulation Method.” Accessed June 17, 2020.
[https://www.mathworks.com/help/physmod/sps/powersys/ug/introducing-the-phasor-simulation-](https://www.mathworks.com/help/physmod/sps/powersys/ug/introducing-the-phasor-simulation-method.html)
[method.html.](https://www.mathworks.com/help/physmod/sps/powersys/ug/introducing-the-phasor-simulation-method.html)
Miller, N.W. 2013. “GE Wind Plant Advanced Controls.” 1st International Workshop on Grid
Simulator Testing of Wind Turbine Drivetrains. Golden, CO: GE Energy Consulting.
[https://www.nrel.gov/grid/assets/pdfs/turbine_sim_12_advanced_wind_plant_controls.pdf.](https://www.nrel.gov/grid/assets/pdfs/turbine_sim_12_advanced_wind_plant_controls.pdf)
Miller, N.W. 2014. “GE Experience With Turbine Integrated Battery Energy Storage.” Presented
[at the 2014 IEEE PES General Meeting, July 27–31, National Harbor, MD. https://www.ieee-](https://www.ieee-pes.org/presentations/gm2014/PESGM2014P-000717.pdf)
[pes.org/presentations/gm2014/PESGM2014P-000717.pdf.](https://www.ieee-pes.org/presentations/gm2014/PESGM2014P-000717.pdf)
National Grid ESO. 2020. “Distributed ReStart: What Is the Distributed ReStart Project?”
[Accessed August 19, 2020. https://www.nationalgrideso.com/future-energy/projects/distributed-](https://www.nationalgrideso.com/future-energy/projects/distributed-restart)
[restart.](https://www.nationalgrideso.com/future-energy/projects/distributed-restart)
National Renewable Energy Laboratory (NREL). 2018a. “March Developments at NREL Shave
Response Time for Large-Scale Batteries Down to Milliseconds.” Energy Systems Integration
[Newsletter, March 2018. https://www.nrel.gov/esif/esi-news-201803.html.](https://www.nrel.gov/esif/esi-news-201803.html)
National Renewable Energy Laboratory. 2018b. Federal Tax Incentives for Energy Storage
_Systems. Golden, CO: National Renewable Energy Laboratory. NREL/FS-7A40-70384._
[https://www.nrel.gov/docs/fy18osti/70384.pdf.](https://www.nrel.gov/docs/fy18osti/70384.pdf)
National Renewable Energy Laboratory. 2020. “Battery Second-Use Repurposing Cost
[Calculator.” Accessed July 17, 2020. https://www.nrel.gov/transportation/b2u-calculator.html.](https://www.nrel.gov/transportation/b2u-calculator.html)
National Renewable Energy Laboratory. 2021. Hybrid Optimization and Performance Platform
(HOPP). Version 1.0. [https://github.com/NREL/HOPP](https://github.com/NREL/HOPP).
Parnel, John and Karl-Erik Stromsta. 2020. “Storage Hybrid Plants Becoming More Attractive in
Maturing Wind and Solar Markets.” Greentech Media, Accessed June 12, 2020.
[https://www.greentechmedia.com/articles/read/storage-co-location-getting-more-attractive-for-](https://www.greentechmedia.com/articles/read/storage-co-location-getting-more-attractive-for-maturing-wind-and-solar-markets)
[maturing-wind-and-solar-markets.](https://www.greentechmedia.com/articles/read/storage-co-location-getting-more-attractive-for-maturing-wind-and-solar-markets)
Porter, Kevin, Kevin Starr, and Andrew Mills. 2015. “Variable Generation in Electricity
Markets.” Presented at the Utility Variable Generation Integration Group Fall Technical
Workshop, October 15. Reston, VA: Utility Variable-Generation Integration Group.
[https://www.esig.energy/download/variable-generation-electricity-markets-kevin-porter-kevin-](https://www.esig.energy/download/variable-generation-electricity-markets-kevin-porter-kevin-starr-andrew-mills/)
[starr-andrew-mills/.](https://www.esig.energy/download/variable-generation-electricity-markets-kevin-porter-kevin-starr-andrew-mills/)
Poudel, R.C., J.F. Manwell, and J.G. McGowan. 2020. “Performance Analysis of Hybrid
Microhydro Power Systems.” Energy Conversion and Management 215: 112873.
[https://doi.org/10.1016/j.enconman.2020.112873.](https://doi.org/10.1016/j.enconman.2020.112873)
Reilly, J., R. Poudel, V. Krishnan, R. Preus, I. Baring-Gould, B. Anderson, B. Naughton, F.
Wilches-Bernal, and R. Darbali. 2021. Distributed Wind Controls: A Research Roadmap for
_Microgrids, Infrastructure Resilience, and Advanced Controls Launchpad (MIRACL). Golden,_
CO: National Renewable Energy Laboratory. NREL/TP-7A40-76748.
[https://www.nrel.gov/docs/fy21osti/76748.pdf.](https://www.nrel.gov/docs/fy21osti/76748.pdf)
Rekdal, Kristin. 2018. “Battery Energy Storage Integration via DC/AC Converter in Grid
Connected Wind Turbines.” Master’s thesis, Department of Electric Power Engineering,
[Norwegian University of Science and Technology. https://ntnuopen.ntnu.no/ntnu-](https://ntnuopen.ntnu.no/ntnu-xmlui/bitstream/handle/11250/2507658/18855_FULLTEXT.pdf)
[xmlui/bitstream/handle/11250/2507658/18855_FULLTEXT.pdf](https://ntnuopen.ntnu.no/ntnu-xmlui/bitstream/handle/11250/2507658/18855_FULLTEXT.pdf)
Singh, Mohit and Surya Santoso. 2011. Dynamic Models for Wind Turbines and Wind Power
_Plants. Golden, CO: National Renewable Energy Laboratory. NREL/SR-5500-52780._
[https://www.nrel.gov/docs/fy12osti/52780.pdf.](https://www.nrel.gov/docs/fy12osti/52780.pdf)
SMA. 2016. “Operating manual: SUNNY CENTRAL STORAGE.” SMA.
[https://files.sma.de/downloads/SCS-BE-E7-en-12.pdf.](https://files.sma.de/downloads/SCS-BE-E7-en-12.pdf)
Stadler, Michael, Gonçalo Cardoso, Salman Mashayekh, Thibault Forget, Nicholas DeForest,
Ankit Agarwal, and Anna Schönbein. 2016. “Value Streams in Microgrids: A Literature
[Review.” Applied Energy 162(4): 980–989. 10.1016/j.apenergy.2015.10.081.](http://dx.doi.org/10.1016/j.apenergy.2015.10.081)
Stadler, Michael, Markus Groissböck, Gonçalo Cardoso, and Chris Marnay. 2014. “Optimizing
Distributed Energy Resources and Building Retrofits With the Strategic DER-CAModel.”
_[Applied Energy 132: 557–567. 10.1016/j.apenergy.2014.07.041.](http://dx.doi.org/10.1016/j.apenergy.2014.07.041)_
Weaver, Wayne W., Rush D. Robinett III, Gordon G. Parker, and David G. Wilson. 2015.
“Energy Storage Requirements of DC Microgrids With High Penetration Renewables Under
Droop Control.” International Journal of Electrical Power & Energy Systems 68(6): 203–209.
[10.1016/j.ijepes.2014.12.070.](http://dx.doi.org/10.1016/j.ijepes.2014.12.070)
Wenzl, Heinz, E. Ian Baring-Gould, Rudi Kaiser, Bor Yann Liaw, Per Lundsager, Jim Manwell,
Alan Ruddell, and Vojtech Svoboda. 2005. “Life Prediction of Batteries for Selecting the
Technically Most Suitable and Cost Effective Battery.” Journal of Power Sources 144(2): 373–
[384. https://dx.doi.org/10.1016/j.jpowsour.2004.11.045.](https://dx.doi.org/10.1016/j.jpowsour.2004.11.045)
Zeng, Jianwu, Jiahong Ning, Xia Du, Taesic Kim, Zhaoxia Yang, and Vincent Winstead. 2019.
“A Four-Port DC-DC Converter for a Standalone Wind and Solar Energy System.” IEEE
_[Transactions on Industry Applications 56(1): 446–454. 10.1109/TIA.2019.2948125.](https://doi.org/10.1109/TIA.2019.2948125)_
Zhao, Jinxin and Florian Dörfler. 2015. “Distributed Control and Optimization in DC
[Microgrids.” Automatica 61: 18–26. https://doi.org/10.1016/j.automatica.2015.07.015.](https://doi.org/10.1016/j.automatica.2015.07.015)
| 22,016
|
{
"disclaimer": "Notice: The following paper fields have been elided by the publisher: {'references'}. Paper or abstract available at https://api.unpaywall.org/v2/10.2172/1874259?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.2172/1874259, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "public-domain",
"status": "GREEN",
"url": "https://www.nrel.gov/docs/fy22osti/77662.pdf"
}
| 2,022
|
[
"Review"
] | true
| 2022-06-22T00:00:00
|
[] | 22,016
|
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/00aa86a02e0ed382527c76d41dbeedfc8922d890
|
[
"Computer Science"
] | 0.896764
|
Empirical Studies of TESLA Protocol: Properties, Implementations, and Replacement of Public Cryptography Using Biometric Authentication
|
00aa86a02e0ed382527c76d41dbeedfc8922d890
|
IEEE Access
|
[
{
"authorId": "3331419",
"name": "K. Eledlebi"
},
{
"authorId": "2292768",
"name": "C. Yeun"
},
{
"authorId": "145522617",
"name": "E. Damiani"
},
{
"authorId": "1395903855",
"name": "Yousof Al-Hammadi"
}
] |
{
"alternate_issns": null,
"alternate_names": null,
"alternate_urls": [
"http://ieeexplore.ieee.org/servlet/opac?punumber=6287639"
],
"id": "2633f5b2-c15c-49fe-80f5-07523e770c26",
"issn": "2169-3536",
"name": "IEEE Access",
"type": "journal",
"url": "http://www.ieee.org/publications_standards/publications/ieee_access.html"
}
|
This study discusses the general overview of Timed Efficient Stream Loss-tolerant Authentication (TESLA) protocol, including its properties, key setups, and improvement protocols. The discussion includes a new proposed two-level infinite µTESLA (TLI µTESLA) protocol that solves the authentication delay and synchronization issues. We theoretically compared TLI µTESLA with the previously proposed protocols in terms of security services and showed that the new protocol prevents excessive use of the buffer in the sensor node and reduces the DoS attacks on the network. In addition, it accelerates the authentication process of the broadcasted message with less delay and assures continuous receipt of packets compared to previous TESLA Protocols. We also addressed the challenges faced during the implementation of TESLA protocol and presented the recent solutions and parameter choices for improving the efficiency of the TESLA protocol. Moreover, we focused on utilizing biometric authentication as a promising approach to replace public cryptography in the authentication process.
|
Received February 3, 2022, accepted February 14, 2022, date of publication February 18, 2022, date of current version March 3, 2022.
_Digital Object Identifier 10.1109/ACCESS.2022.3152895_
# Empirical Studies of TESLA Protocol: Properties, Implementations, and Replacement of Public Cryptography Using Biometric Authentication
KHOULOUD ELEDLEBI 1, CHAN YEOB YEUN 1,2, (Senior Member, IEEE),
ERNESTO DAMIANI 1,2, (Senior Member, IEEE), AND YOUSOF AL-HAMMADI 1,2
1Center for Cyber-Physical Systems, Khalifa University, Abu Dhabi, United Arab Emirates
2Department of Electrical Engineering and Computer Science, Khalifa University, Abu Dhabi, United Arab Emirates
Corresponding author: Chan Yeob Yeun ([email protected])
This work was supported in part by the Center for Cyber Physical Systems (C2PS), Khalifa University; and in part by the Technology
Innovation Institute (TII) under Grant 8434000386-TII-ATM-2035-2020.
**ABSTRACT** This study discusses the general overview of Timed Efficient Stream Loss-tolerant
Authentication (TESLA) protocol, including its properties, key setups, and improvement protocols. The
discussion includes a new proposed two-level infinite µTESLA (TLI µTESLA) protocol that solves
the authentication delay and synchronization issues. We theoretically compared TLI µTESLA with the
previously proposed protocols in terms of security services and showed that the new protocol prevents
excessive use of the buffer in the sensor node and reduces the DoS attacks on the network. In addition,
it accelerates the authentication process of the broadcasted message with less delay and assures continuous
receipt of packets compared to previous TESLA Protocols. We also addressed the challenges faced during the
implementation of TESLA protocol and presented the recent solutions and parameter choices for improving
the efficiency of the TESLA protocol. Moreover, we focused on utilizing biometric authentication as a
promising approach to replace public cryptography in the authentication process.
**INDEX TERMS Biometric authentication, lightweight cryptography, machine learning, TESLA protocol.**
**I. INTRODUCTION TO LIGHTWEIGHT CRYPTOGRAPHY**
Currently, the internet of things (IoT) is rapidly expanding
and being applied to several fields, such as healthcare
monitoring, environmental monitoring, smart sensing,
and vital decision-making in different professional careers.
However, the challenging features of IoT include their
involvement in constrained devices such as RFIDs, sensor
devices, and mobile phones, which have limited energy
resources, communication bandwidth, and memory storage.
With the increase in the application of these IoT devices,
they become vulnerable to malicious attacks, and thus, the
implementation of efficient yet lightweight security protocols
is urgently needed [1].
Lightweight cryptography involves simplified encryption
protocols and schemes with low computational complexity
that can be processed on such constrained devices to provide
adequate security, considering the limited energy, bandwidth,
and memory storage [1], [2]. (The associate editor coordinating the review of this manuscript and
approving it for publication was Mouloud Denai.) It implements appropriate
cryptographic functions/properties without expending the
power of their constrained devices and occupies less RAM
for the applications to enable the network to secure their
members and the data [3]–[5].
In context, confidentiality is an essential aspect for
maintaining the security services in cryptographic protocols,
where only the authorized users in a certain organization
or system should be allowed to communicate and transfer
information to one another. In addition to authenticating the
user or the device, the integrity of the message should not be
manipulated by an attacker during transmission. Moreover,
the authentication process between two parties should be
completed within a short time interval to avoid the occurrence
of a DoS attack during the process. Furthermore, the
availability of the network members is vital for ensuring the
connection and communication with the authorized parties
to prevent the connection of a malicious node pretending
to be a system component. Finally, the entire authentication
process should not exhaust the computational resources and
communication bandwidth, to avoid a high communication
and computation overhead [4].
Therefore, the maintenance of all the security services
is becoming a challenge to researchers in the design
of cryptographic protocols, and the services are required
to be prioritized by focusing on the confidentiality and
authentication of users along with providing multiple layers
of authorization [5]. However, the integrity of the message,
especially for constrained devices, is still a weak property
that needs to be maintained during the implementation
of simple lightweight cryptographic schemes, where users
should be allowed to verify whether the received data is
transmitted from a legitimate claimed source and is not being
manipulated during the transmission process [5]. All the
previous challenges have motivated us to focus on developing
a lightweight cryptographic protocol feasible for constrained
devices, aiming to achieve user/device authentication and
integrity properties, while considering their limited power
resources, limited memory space and limited computational
capabilities.
In this study, we focused our analysis on the Timed
Efficient Stream Loss-tolerant Authentication (TESLA)
protocol, a lightweight scheme capable of providing the
required security services at low cost [6]. Additionally, the protocol has the following specific
requirements:
1- Simple functions that are understandable and adaptable
to several types of IoT devices are implemented to
enable appropriate cryptographic properties.
2- The power of the constrained IoT devices is not
expended.
3- A smaller RAM size is occupied during its implementation in IoT devices.
Although the TESLA protocol provides important functionalities, it relies heavily on public key infrastructure (PKI)
for initiating the authentication channel between the network
members, which increases its vulnerability toward quantum
attacks [7].
Our contribution toward the enhancement of the TESLA
protocol initiated with the design of a new hybrid TESLA
protocol called two-level infinite µTESLA (TLI µTESLA),
where we theoretically established its ability to provide
security services within the acceptable levels of computation
and communication demands as compared to previous
TESLA protocols [8]. This study aims to further improve and
provide simulation analysis to the proposed TLI µTESLA,
considering the suitable implementation environments for
TESLA protocol, selecting parameters that provide optimum
performance, and introducing an alternative to PKI using
biometric authentication methods to establish the first line of
authentication among the IoT members. We therefore listed
our contribution as follows:
1. Establishing security analysis of TLI µTESLA protocol and time complexity comparison with variant
TESLA protocols.
2. Performing theoretical analysis on the selection of
parameters that help in achieving best performance for
TLI µTESLA protocol.
3. Introducing an alternative to PKI for initiating the
authentication channel between the network members,
using biometric authentication to generate the initial
authentication parameters in the TLI µTESLA protocol.
The remainder of this paper is organized as follows.
The fundamental properties of the TESLA protocol along
with its general functionality are presented in Section II.
In addition, the list of updates of TESLA protocol is
introduced in Section III, wherein the compatibility aspects
of the previously proposed hybrid TESLA protocols are
discussed in terms of the scalability of IoT. In Section IV,
the TESLA protocols are compared in terms of the security
services they provide, and the possible implementations of
TESLA protocol in IoT systems are summarized. Moreover,
the recent challenges faced during the implementation
of TESLA protocol along with the proposed solutions
and selection of parameters are discussed in Section V.
Subsequently, the importance of establishing Root of Trust
among IoT members to implement authentication protocols
is highlighted in Section VI. Thereafter, in Section VII, the
biometric authentication is introduced as a replacement to
the public cryptography used for sharing the commitment
key and initial security parameters among the IoT members.
Finally, a conceptual summary of the proposed methods to
secure the biometric data during the authentication process
is provided in Section VIII, and the overall discussion along
with the conclusions of the current research are presented in
Section IX.
**II. TESLA PROTOCOL: GENERAL OVERVIEW AND**
**IMPORTANT PROPERTIES**
TESLA is a broadcast authentication protocol used in
wireless sensor networks (WSNs)/IoT with a single source
of trust. In addition, it uses lightweight primitives to realize
important properties for implementing the constrained IoT
devices [6]. First, it relies on symmetric cryptography
with a symmetric key shared between two parties (e.g.,
sender and receiver). It relies on the message authentication
control (MAC) function, which is a pseudorandom function
that uses the symmetric key with the original message as
an input to generate a MAC value as an output to be used
with the original message for transmission to the receiver.
Subsequently, the receiver side uses the symmetric key with
the original message received as input to calculate its own
MAC value from the MAC function that has already been
established between the sender and receiver. The receiver
can then check whether the calculated MAC value matches
the received one, thereby authenticating the sender and the
message.
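The following minimal sketch shows the symmetric MAC step just described, instantiated with HMAC-SHA256; TESLA itself does not mandate a particular MAC function, so the primitive and the sample data here are illustrative choices.

```python
# Minimal sketch of TESLA's symmetric MAC step, instantiated with
# HMAC-SHA256 (an illustrative choice of pseudorandom MAC function).
import hmac
import hashlib

key = b"interval-key"                  # symmetric key for this interval
message = b"sensor reading: 23.5 C"

# Sender: compute the MAC over the message with the (still secret) key.
tag = hmac.new(key, message, hashlib.sha256).digest()

# Receiver (after the key is later disclosed): recompute and compare.
expected = hmac.new(key, message, hashlib.sha256).digest()
print(hmac.compare_digest(tag, expected))  # -> True if message unmodified
```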
**FIGURE 1. Establishment of loose synchronization between sender and receiver in the TESLA protocol.**

The second vital property of the TESLA protocol is
the presence of a delay interval to disclose the symmetric key
between the sender and receiver. Thus, the symmetric key will
not be disclosed during the transmission period, but a certain
delay is present during which the receiver is required to wait
until the sender reveals the key to authenticate the previous
message [6]. The delay aids in providing data authentication
and integrity review as the attacker will be unable to
accurately predict the period until the key is revealed, and
consequently, the receiver side would be secured by the time
the key is disclosed. This process reduces the probability
of the attacker sniffing the key to manipulate and force
malicious messages.
The third essential property of the TESLA protocol is
the loose synchronization established between the sender
and receiver to reduce the computational demands and the
energy drain of the constrained devices. The synchronization
between the sender and receiver is established to initiate a
communication channel, as presented in Fig.1. Generally, the
synchronization and sharing of important security properties
rely on asymmetric cryptography [9]. The receiver initiates a
request message, including the receiver time tR, and generates
a nonce—a number used only once to prevent replay
attacks. Thereafter, the sender receives the message at time
tS and replies with tS and the received nonce,
encrypted with the sender’s private key. At the receiver side,
the receiver will authenticate the message by decrypting
it using the sender’s public key and inspect the nonce in
the message. Upon authenticating the message, the receiver
records tS, tR, and the current time t to calculate the upper
bound time expressed as t − tR + tS. This represents the
maximum synchronization error for the receiver to wait until
the message is received by the sender and respond back [10].
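The following sketch captures this bound computation; the variable names mirror the text, the timing values are illustrative, and the nonce and signature checks are abstracted away:

```python
# Minimal sketch of the loose-synchronization bound t - tR + tS.
def sender_time_upper_bound(t_now: float, tR: float, tS: float) -> float:
    """Receiver's upper bound on the sender's current clock.

    The request left the receiver at local time tR, the sender stamped
    its reply tS, and the receiver reads its clock now at t_now. In the
    worst case the whole round trip was spent on the reply path, so the
    sender's clock is at most tS + (t_now - tR).
    """
    return t_now - tR + tS

# Request sent at tR = 100.0 s, reply stamped tS = 100.4 s, local clock
# now at 100.9 s -> the sender's clock currently reads at most 101.3 s.
print(sender_time_upper_bound(100.9, 100.0, 100.4))
```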
The security of TESLA protocol relies on a one-way
hash chain, which is a chain containing a sequence of keys
generated using a one-way hash function [6]. Upon establishing
the channel between the sender and receiver, the sender
will divide its lifetime into time intervals of equal duration.
The time-window duration is agreed between the sender and
receiver. Each time interval will be protected by a symmetric
key from the corresponding key chain. The sender will
randomly select a value representing the last key element
in the chain and apply it to the one-way hash function for
generating the previous key element in the chain. This process
continues until the first key element is generated in the
chain, which is called the commitment key, K0.
**FIGURE 2. Generation of keychain in the TESLA protocol.**
This keychain exhibits important properties: first, the commitment key can
verify any key element in the chain; second, an earlier key Ki can be
regenerated from a later key Kj for any i less than j, and conversely Ki can
verify Kj, because applying the one-way hash function (j − i) times to Kj
must reproduce Ki. Hence, the earlier key elements can verify the later
key elements even in case one of the intermediate
keys is lost. During the authentication between the sender
and receiver, the disclosure of the keys will be in reverse
order—initiating by disclosing the first key element, and
thereafter, the second key element, and so on, as presented
in Fig.2.
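The keychain construction and the verification rule above can be sketched as follows; SHA-256 stands in for the one-way function F, and the chain length and key size are arbitrary choices for the demo:

```python
# Illustrative generation and verification of a TESLA one-way keychain.
import hashlib
import os

def F(key: bytes) -> bytes:
    return hashlib.sha256(key).digest()

def generate_keychain(n: int) -> list[bytes]:
    """Pick a random last key K_n and hash backwards to K_0 (commitment)."""
    chain = [os.urandom(32)]            # K_n, chosen at random
    for _ in range(n):
        chain.append(F(chain[-1]))      # K_{i-1} = F(K_i)
    chain.reverse()                     # chain[0] is the commitment key K_0
    return chain

def verify_key(K_j: bytes, j: int, K_i: bytes, i: int) -> bool:
    """Check a later key K_j against an earlier, already trusted K_i."""
    assert i < j
    k = K_j
    for _ in range(j - i):
        k = F(k)
    return k == K_i

chain = generate_keychain(10)
assert verify_key(chain[7], 7, chain[0], 0)  # verify K_7 with commitment K_0
```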
**III. UPDATED TESLA PROTOCOLS**
Although the TESLA protocol exhibits symmetric properties,
it does not support the scalability of new IoT devices joining
a system or the loss of the predefined keychain packets owing
to weak communication [11]. Therefore, improvements and
updates are proposed to the original TESLA protocol to
achieve more security services and scalability.
_A. TESLA++_
TESLA++ was developed to simplify the messages
transmitted between the sender and receiver to reduce the
computation overhead and the loss of packets [12]. In the
original TESLA, the calculated MAC value and the original
message are sent to the receiver, and after a certain delay, the
key is disclosed to be used by the receiver to generate its own
MAC value and verify the sender’s message. However, once
the sender calculates the MAC value in TESLA++, it will
be transmitted only with the index of the time interval in which
the sender is communicating with the receiver, and after a certain delay,
the key and original message will be disclosed to the receiver
for generating the MAC value and verifying the message.
The advantage of this protocol is that if the packet containing
the key and message is lost, the attacker will not have prior
knowledge of the message before disclosing the key, and
therefore, the message cannot be manipulated. Moreover, this
reduces the buffering size of the messages waiting until key
disclosure.
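A rough sketch of the TESLA++ ordering — MAC and interval index first, key and message after the delay — could look as follows; the packet layouts are our assumptions, not the published wire format:

```python
# Hedged sketch of the TESLA++ two-packet ordering.
import hmac
import hashlib

def make_first_packet(key: bytes, message: bytes, interval: int):
    """Sent immediately: only the MAC and the interval index."""
    mac = hmac.new(key, message, hashlib.sha256).digest()
    return (interval, mac)               # the message itself is withheld

def make_second_packet(key: bytes, message: bytes):
    """Sent after the disclosure delay: the key and the message."""
    return (key, message)

def receiver_verify(first, second) -> bool:
    _, mac = first
    key, message = second
    return hmac.compare_digest(
        hmac.new(key, message, hashlib.sha256).digest(), mac)

k, m = b"interval-key-7", b"alert: obstacle ahead"
assert receiver_verify(make_first_packet(k, m, 7), make_second_packet(k, m))
```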
_B. STAGGERED TESLA_
Staggered TESLA is proposed to reduce the time required
to filter the packets being received by the receiver side and
reduce the probability of buffering overflow while waiting
for key disclosure [13]. This protocol aims to include several
MAC values within the transmitted packet, and these MAC
values are related to the time intervals corresponding to the
undisclosed keys to ensure that an attacker cannot manipulate
the packet. The number of MAC values included in the
message depends on the type of application and the level
of security it can manage. This protocol is advantageous
because the inclusion of the MAC values in the message can
partially authenticate the packet before disclosing the key. For
instance, once the receiver can detect a pattern from the MAC
values being received from prior authenticated packets, the
receiver can authenticate the packet arriving from a legitimate
source. In case unusual MAC numbers are received, the
receiver will immediately drop the packet without buffering
it until key disclosure, which reduces the buffer overflow in
the system.
_C. µTESLA PROTOCOL_
_µTESLA protocol aims to simplify the functionality of_
the TESLA protocol from a broadcast authentication into
a unicast authentication, where the sender (base station)
authenticates the receivers individually [1], [11], [14]. The
protocol relies on the condition that the receiver should
review a value related to the time interval of the transmitting
base station, to ensure that the key is not disclosed yet.
Otherwise, an outside attacker can manipulate the message.
This process reduces the computational power and communication
bandwidth that the receiver would otherwise spend on unnecessary
authentication packets that do not belong to it, and it can aid in
limiting the set of authenticated users.
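The interval check mentioned above can be sketched as a disclosure-schedule test; the chain start time T0, interval length, and disclosure lag d are parameters agreed during bootstrapping, and the names here are illustrative:

```python
# Hedged sketch of the "key not yet disclosed" test.
def key_still_undisclosed(sender_time_upper_bound: float, T0: float,
                          interval: float, i: int, d: int) -> bool:
    """Accept a packet from interval i only if K_i cannot be public yet."""
    # Highest interval index the sender's clock can have reached so far.
    latest_interval = int((sender_time_upper_bound - T0) // interval)
    # K_i is disclosed d intervals after i; before that it must be secret.
    return latest_interval < i + d

# Example: chain starts at T0 = 0, 1 s intervals, disclosure lag d = 2.
# A packet from interval i = 5 is safe while the sender can be at most
# in interval 6 (since 6 < 5 + 2).
print(key_still_undisclosed(6.4, 0.0, 1.0, 5, 2))  # True
```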
_D. UPDATED µTESLA PROTOCOLS_
To overcome the scalability issue in the µTESLA protocol,
researchers improved the scheme through the inclusion of a
third trusted party between the base station and receiver [15].
Instead of a single party (base station) sending the message
and symmetric key to the receiver, a third trusted party called
the key server, responsible for sending the symmetric key,
is included, whereas the base station is only required to send
the authentication message. This protocol is advantageous in
that it includes two parties transmitting key information that
cannot be easily forged by the attacker.
An additional advantage of this protocol is illustrated
through the following example: suppose an attacker succeeds in
forging its key to the receiver, so that any message or key sent
for authentication suffers from that single point of failure.
In the protocol, the receiver will initiate a threshold value
for the maximum error failures of authentication messages
arriving from the base station. Moreover, on every instance
of an authentication failure, a counter is incremented
until the threshold value is reached. Upon
reaching the threshold value, the receiver will initiate a
request to the key server to update the key. Thereafter,
the key server will review the time interval at which the
base station is communicating to that receiver and will
transmit the key corresponding to that interval. Subsequently,
the receiver will use the received key to authenticate
the message transmitted from the base station. In such
cases, the successful authentication of the message indicates
that the already saved key is malicious, and the protocol will
replace it with a new key.
An important stage is securing the communication link
between the receiver and key server. As the receiver initiates
a request to the key server, the latter will notify the base
station regarding the request for updating the key. Thereafter,
the base station will broadcast a message containing a new
symmetric key for communication between the key server and the
receiver, but this message will be encrypted with a symmetric
key that will be disclosed by the key server at a later stage.
After a certain delay, the key server will reveal the key to
allow the receiver to authenticate both parties and extract the
new key for communication with the key server.
Furthermore, an additional improvement to the µTESLA
protocol is called multilevel µTESLA that provides the
advantages of authenticating the base station and reducing
the authentication delay between the sender and receiver
to reduce the probability of DoS attack [16]. This protocol
introduces two keychain levels: a high-level keychain directly
connected to the base station, and a low-level keychain
responsible for authenticating the messages transferred
between the sender and receiver. In particular, the high-level
keychain exhibits a long-time interval to cover the entire
lifetime of the receiver without requiring an additional establishment of a new keychain, which reduces the computational
complexity and demands of the process. Moreover, each time
interval in the high-level key chain will be further divided into
short time intervals corresponding to the low-level key chain.
The use of short time intervals reduces the time required to
receive the message from the receiver and to authenticate the
message, so that the delay can be within tolerable range to
diminish the probability of a DoS attack.
A vital property of this protocol is that the high-level
keychain is connected to the low-level keychain such that
the low-level keys can be generated from the high-level keys
using the one-way hash function in case several low-level
packets are lost. The authentication message transmitted from
the base station to the receiver is called the commitment
distribution message (CDM), which contains the time interval
of communication between the receiver and base station,
the commitment key of the low-level keychain, the MAC
value for the receiver for verification, and the high-level
key for authenticating the previous message from the prior
time interval. In addition, the CDM packet is periodically
transmitted by the base station to reduce the probability
of loss, as high-level key packets require a long time to
re-establish synchronization between the sender and receiver.
Contrarily, this causes buffer overflow on the receiver,
including communication and computational overhead.
Owing to the problems discussed for multilevel µTESLA,
an improvement protocol called efficient fault-tolerant multilevel µTESLA protocol contributes toward shortening the
recovery period of lost high-level packets by acting on a single high-level time interval, which reduces the buffering time
and the risk of experiencing memory-based DoS attacks [17].
In context, another improvement to the multilevel µTESLA
is called enhanced DoS-resistant protocol that contributes
to tolerating packet loss by reducing the authentication
time of CDM packets through adding an image value to
these packets and maintaining continuity in occurrence of
a packet loss [17]. For instance, if the receiver is receiving
the CDMi at i[th] time interval, it will contain an image value
of the CDMi+1 packet. Upon receiving the second packet,
the image value will be calculated and compared with the
value transmitted in the previous packet for authentication.
In case the CDMi+1 packet is lost, the receiver will wait
for CDMi+2 and use the high-level key of the CDMi packet
to verify the key in the CDMi+2 packet, as the lower keys
from the keychain can verify the higher keys in the chain.
In case the verification is achieved, the receiver can utilize
the image of the lost CDMi+1 packet that is available in
the CDMi packet to provide continuous authentication of the
packets.
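The image-value mechanism can be sketched as follows: each CDM carries a hash (‘‘image’’) of the next CDM, so the next packet can be checked immediately on arrival. The CDM byte layout below is an assumption for illustration:

```python
# Hedged sketch of the CDM image-value continuity check.
import hashlib

def image(cdm_bytes: bytes) -> bytes:
    return hashlib.sha256(cdm_bytes).digest()

def verify_next_cdm(stored_image: bytes, next_cdm_bytes: bytes) -> bool:
    """Compare the image shipped inside CDM_i with the received CDM_{i+1}."""
    return image(next_cdm_bytes) == stored_image

cdm_next = b"CDM(i+1): interval, low-level commitment key, MAC, high key"
stored = image(cdm_next)                  # carried inside CDM_i
assert verify_next_cdm(stored, cdm_next)  # immediate authentication
```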
_E. INF-TESLA PROTOCOL_
An additional improvement to the TESLA protocol is called
the infinite-TESLA, which provides continuous
resynchronization between the sender and receiver in case
the keychain is terminated [11]. In the original TESLA
protocol, when the keychain attains the last key element,
the system needs to re-establish a new synchronization
between the same sender and receiver, as if they were
new to the connection. These unnecessary establishments
increase the computational demands and energy wastage.
Thus, the Infinite-TESLA introduced two key chains in
offset alignment between each other, which maintains the
functioning of a chain and the synchronization between the
sender and receiver in case a key chain has been terminated.
The way these two keys are included in the CDM packet
can follow either the two-key mode, where both keys are
transmitted in the CDM packet, or they can follow an
alternating mode, where a key from either of the chains is
presented alternatingly as if one key chain is corresponding
to the odd intervals and the other chain is corresponding to
the even intervals.
_F. TWO-LEVEL INFINITE µTESLA (TLI µTESLA)_
We proposed a hybrid TESLA protocol called two-level
infinite µTESLA (TLI-µTESLA), which combines the multilevel
µTESLA and the Inf-TESLA to obtain the joint
benefits of reducing the authentication delay as well as
providing continuous synchronization between the sender
and receiver [8]. The theoretical process of this protocol
relies on the hash function and the establishment of loose
synchronization between the sender and receiver. Similar to
the multilevel-µTESLA, two keychain levels are introduced,
where the high-level keychain has a long-time interval to
cover the lifetime of the receiver. This keychain will be further
divided into sub-intervals to represent the low-level keychain,
where the infinite-TESLA protocol is implemented. Additionally, the low-level keychain will contain two keychains in
offset alignment to each other; the CDM packet will contain
two commitment keys for the low-level keychain with their
MAC numbers for verification, including the high-level key
related to the previous CDM packet. Similar to the multilevel µTESLA, the low-level commitment keys in TLI-µTESLA
can be derived from the high-level commitment key through
a special one-way hash function F01.
**IV. SECURITY ANALYSIS AND SERVICES DISCUSSION**
Evaluating the computational security of TESLA protocols
relies on the security capability of their respective hash
functions: the one-way hash function used to generate the keys in
the keychain and the MAC function used to authenticate the message
with its corresponding key. The design goals of a one-way
hash function are to possess preimage resistance (inability to
reverse the output to extract the input) and collision resistance
(a low probability of generating the same output
from two different inputs).
Therefore, the best guidance toward ensuring the security
of a hash function is analyzing the complexity of attacking
the previous goals. For an n-bit hash function, an adversary
would require on the order of 2^n operations to produce a preimage
and 2^(n/2) operations to produce a collision [18].
By the time the adversary breaks the hash function, the
key would be authenticated at the receiver side and the
message is received successfully. Regarding the MAC function,
two important security properties need to be obtained: key
non-recovery and computation resistance of the MAC value.
For an adversary to determine the MAC key, an exhaustive
search is required, checking all 2^t possible values of a t-bit
key to find one that agrees with the sent value.
As for forgery, guessing the correct MAC
value of a given message succeeds with probability of only about
2^(−n) for an n-bit MAC algorithm [18].
However, this guessed value cannot be verified without
prior knowledge of either the text message or the key, which
makes the probability of forging a malicious MAC value
negligible within the given short authentication time
in the variant TESLA protocols.
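These are the standard generic bounds; a back-of-the-envelope view of them (not a protocol-specific measurement) is easy to compute:

```python
# Generic work factors for an n-bit hash/MAC output and a t-bit MAC key.
def attack_work_factors(n_bits: int, t_bits: int) -> dict:
    return {
        "hash_preimage_ops": 2 ** n_bits,           # invert the hash
        "hash_collision_ops": 2 ** (n_bits // 2),   # birthday bound
        "mac_key_search_ops": 2 ** t_bits,          # exhaust the key space
        "mac_blind_forgery_prob": 2.0 ** (-n_bits), # guess one tag blindly
    }

print(attack_work_factors(n_bits=256, t_bits=128))
```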
Let us now consider proving the position and integrity
properties of the packets delivered by the TESLA protocol.
Such discussion applies to all versions of the TESLA
protocol, including the one put forward in this paper
(TLI-µTESLA protocol discussed in [8] and In section III-F)
as they all share the same key-checking provisions. In principle, the properties can be proven by following the hash-chain
to verify the relation between the disclosed key and the
commitment key. If the relation holds, the received packet
occupies in the receiving order the same position it had in the
sending order. Also, the disclosed key is the one originally
used to encrypt the packet; as a consequence, the packet
delivered was not modified after its encryption, and integrity
is proven.[1] This proof can be formalized by modeling the
TESLA protocol as a finite state automaton where each
1For the authenticity property, the disclosed key must be signed. Upon
verification of the signature, the receiver can link the holder of the disclosed
key to an identity.
**TABLE 1. Comparison between TESLA protocols.**
step along the hash chain corresponds to a transition. The
properties can thus be proven for any fixed hash-chain, i.e.,
for any fixed distance between the delivered packet and
the initial one. In the general case, however, an infinite
state system would be needed to represent the inductive
relationship between an arbitrary i-th packet and the initial
packet. In timed automata, transitions may be subject to local timing
constraints called invariants. An automaton can pass through
an invariant transition an arbitrary number of times. For
such reasons, TAME, a proof engine for timed finite state
models, was used in [19] to model TESLA protocol as timed
automaton with an invariant, the transition modeling a step
along the hash-chain. TAME invariant analysis proves that the
TESLA protocol can guarantee the order and data integrity of
packets coming at an arbitrary distance from the initial one.
The above-mentioned proofs of correctness also apply to our
TLI-µTESLA, since the core of the authentication process is the
same and our modifications to the mechanisms did not affect
the correctness of the protocol.
Regarding the security of the disclosed key, guessing
attacks are not feasible [19], [20], as there is no strategy that
an attacker can use to guess the disclosed key that is better
than random guessing. Moreover, the generation of the keys
is done using a one-way hash function, which is computationally
infeasible to invert, as is the MAC function, which is designed
to be non-invertible. Therefore, choosing a relatively large
key size decreases the probability of a brute-force attack
disclosing the key and breaking the keychain to a significantly
low value [21]. So, by expanding the key space, the protocol
can achieve a low key-guessing probability. This proof is also
applicable to our TLI-µTESLA, which has the same key-checking provisions as the original TESLA protocol proven
in [19].
The services properties of the proposed scheme were
analyzed by discussing the essential security services and
comparing them with the limitations of the previous TESLA
protocols. The limitations of TESLA protocol include its
inability to support the scalability of IoT devices, as the one-way key chain should be predefined. However, this poses
communication and computational demands and can cause
loss of packets. Upon the termination of the key chain,
a new synchronization process is required to be established
between the sender and receiver, which does not support
immediate and continuous authentication, and thus, results in
vulnerability toward DoS attacks.
The improvement of TESLA++ over TESLA is in terms
of buffering the MAC and its index to occupy less memory
as compared to the buffering of MAC and message in the
TESLA protocol, which aims to reduce the DoS attacks.
However, the protocol does not support the scalability of
IoT network and follows the synchronization establishment
between the network members upon the termination of
the key chain, which lacks immediate and continuous
authentication.
Although the staggered TESLA improves the authentication process by including the MAC numbers and enhances
the scalability of the IoT network, it augments the buffering
issues and packet loss if an attacker floods the buffer with
replicas of MAC numbers. In addition, it does not support
continuous authentication between the network members as
the key chain terminates.
The properties of µTESLA are beneficial in saving
computation power, communication bandwidth, and memory
requirements by reducing the size of the transmitted packets.
However, unicasting the initial key and security parameters
will delay the joining of new members to the network, which
does not support scalability. Moreover, it does not resolve
the problems of the original TESLA protocol, such as the
lack of immediate and continuous authentication and the
vulnerability toward DoS attacks.
The improved µTESLA protocol improves the resistance
against DoS attacks but increases the communication overhead by requiring several exchanges of messages between the
key server and base station. Moreover, it does not support
immediate and continuous authentication as it requires
resynchronization after the termination of the key chain.
Comparatively, multilevel µTESLA introduces several
improvements including supporting scalability of IoT devices
and fault-tolerance toward the loss of packets, as the
low-level key chains can be derived from the high-level
key chains. Additionally, multilevel µTESLA provides
immediate authentication to the CDM message, as several
copies of CDM packets are frequently transmitted to reduce
the risk of losing high-level packets. However, the copy of
the subsequent CDM included in the current CDM increases
the size of the CDM as well as the buffering on the sensor
nodes, because the copy of the subsequent CDM might
be of similar length to the current CDM, which is buffer
consuming. Moreover, the inclusion of two-level key chains
increases the computation overhead in comparison to the
original µTESLA. In addition, multilevel µTESLA does
not support continuous authentication between the network
members.
In context, enhanced multilevel µTESLA aims to reduce
the computation overhead of the multilevel µTESLA by
shortening the recovery period of lost high-level packets
using a single high-level time interval. Additionally, it tolerates packet loss by reducing the authentication period of
CDM packets via adding an image value to these packets
and maintaining continuity in the occurrence of packet loss.
However, this continuity assumption was not evaluated and
analyzed to avoid any high demand of memory resource for
the long key chains.
Inf-TESLA provides continuous authentication between
the network members, as it reduces the resynchronization
process by including dual offset keychains. This reduces
the risks of man-in-the-middle attacks in case an attacker
attempts to inject the attacker key over the network key chain,
wherein the algorithm will notify the receiver regarding the
violation of the key-chain exchange procedure. However,
Inf-TESLA does not support the scalability of the network
members owing to the number of keychains required to be
specified prior to the synchronization packets.
In comparison, the proposed TLI-µTESLA protocol
enhances the original TESLA with two commitment keys in
the CDM message and two low-level key chains, and it uses the
image value of the upcoming CDM instead of a copy of
the subsequent CDM in the current CDM. This allows the
protocol to avoid increasing the size of the buffer in the sensor
node and reduce the DoS attacks on the network. The low-level key chain exhibits short time intervals to accelerate the
authentication process of the broadcasted message with less
delay. Additionally, the dual-offset key-chain mechanism is
used in the low-level key chains to assure continuous receipt
of packets from the high-level key chain. All the services are
discussed in detail as follows:
**Immediate Authentication: In addition to the symmetric**
property in TESLA protocol, the proposed protocol relies
on the two commitment keys in the low-level keychain for
authentication instead of sending a copy of the CDM packet
on every instance of transmission between the sender and
receiver, which reduces the authentication delay to a tolerable
value.
**Data Integrity: The originality of the message is main-**
tained by ensuring that it is not altered during transmission,
and a higher security level is achieved with the implementation of two keychain layers and offset alignment keychains as
compared to alternative TESLA protocols.
**Communication and computation overhead: The imple-**
mentation of two offset alignment keychains realizes the
continuous authentication instead of sending copies of CDM
packets during transmission, which considerably reduces the
communication overhead and computation complexity in
comparison to previous TESLA protocols.
**Scalability: The successful application of IoT technology**
to daily-life scenarios involves security schemes that are
required to display their ability for adapting to the variations
in the environment and the inevitable growth in the amount
of work and the number of network members [22]. The
implementation of two-level keychains in TLI-µTESLA
enhances the broadcasting of the messages to a scalable
number of devices and increases the number of messages
broadcasted between the members.
**Resistance to DoS attacks: The authentication protocols**
implemented on constrained devices are highly targeted at
increasing their immunity against various forms of DoS
attacks, including buffer overflow attacks and lack of
continuity in the authentication process [23]. In TESLA
protocols, a buffering process occurs in the CDM packets
until the subsequent packet is received to authenticate the
previous message. In particular, the authentication will not
occur if the receiver does not have adequate buffer space to
wait until key disclosure. This can create network traffic that
forces the receiver to drop the packets, thereby increasing the
vulnerability of the receiver to DoS attacks. Moreover, a high
probability of experiencing communication overhead exists
in a constrained network that can result in lost keys and lack of
continuity in the authentication process. In the proposed TLI
_µTESLA protocol, two commitment keys in the low-level_
keychain are presented to authenticate the message after
the disclosure of the high-level key instead of sending a
copy of the CDM packet, which reduces the excessive usage
of the buffer, and consequently, reduces the vulnerability
toward DoS attacks. The short interval in the low-level
keychains allows the key to be authenticated immediately
without buffering. In addition, the offset alignment of
the commitment keys in the low-level keychain allows
continuous authentication of the packets received from the
high-level keychain, as the low-level keys are used in an
alternate manner. The first keychain index covers the period
of the high-level interval, while the second keychain index
covers half between the first high-level interval and the
next high-level interval, where both commitment keys of
the low-level keychains can be derived from the high-level
commitment key.
Let us consider an example in which both the authentication delay and continuous authentication are addressed in the
TLI-µTESLA protocol. At the i-th time interval, the receiver
receives the CDMi packet containing the high-level key Ki−1.
To authenticate the CDMi packet, the receiver needs to buffer
it until receiving CDMi+1 to use the key Ki disclosed
in it. The receiver needs to authenticate Ki by applying the
one-way hash function: Ki−1 = F0(Ki). If the first condition is
satisfied, the receiver needs to authenticate the MAC number
of the CDMi packet to authenticate the commitment keys of
the low-level keychain. If the first condition is not met, the
receiver will drop the packet. On the other hand, if the CDMi+1
packet is lost, the receiver will wait until CDMi+2 is received
and use the one-way hash function F0 to authenticate the
high-level key. Consequently, the low-level keychains will be
derived from the authenticated high-level key using the one-way function F01. Using the short time intervals in the low-level keychains, the authentication process can be accelerated
with less delay, allowing the packets and their keys to be
immediately authenticated without oversizing the buffer.
Moreover, the presence of the two offset low-level keychains
instead of one keychain allows continuous initialization and
authentication of the sensor nodes. Once the first low-level
index chain is exhausted, the second low-level index chain will
continue covering half of the next high-level index chain.
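The steps of this example can be sketched in a few lines; F0 and F01 are modeled here as domain-separated SHA-256 calls, which is our own assumption, since the paper does not fix their construction:

```python
# Hedged sketch of TLI-µTESLA high-level key authentication and
# low-level commitment key derivation.
import hashlib

def F0(key: bytes) -> bytes:
    return hashlib.sha256(b"F0|" + key).digest()

def F01(key: bytes, chain_id: int) -> bytes:
    return hashlib.sha256(b"F01|" + bytes([chain_id]) + key).digest()

def authenticate_high_level_key(K_disclosed: bytes, K_trusted: bytes,
                                gap: int) -> bool:
    """gap = lost intervals + 1, e.g. 2 when CDM_{i+1} was lost."""
    k = K_disclosed
    for _ in range(gap):
        k = F0(k)
    return k == K_trusted

# Tiny demo chain: K2 -> K1 -> K0 (high-level keys).
K2 = hashlib.sha256(b"demo-seed").digest()
K1 = F0(K2)                      # its CDM is assumed lost
K0 = F0(K1)                      # already trusted by the receiver
assert authenticate_high_level_key(K2, K0, gap=2)
low_a = F01(K2, 0)               # first low-level commitment key
low_b = F01(K2, 1)               # offset low-level commitment key
```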
The security services offered by the proposed protocol and
the previous improvements to TESLA Protocol in addition
to the time complexity of each protocol are comparatively
presented in Table 1. From a theoretical perspective,
we can observe that the core of the TLI-µTESLA protocol
is not changed compared to the original TESLA protocol,
from the exchange of the commitment key and other
essential security parameters between the server and its
clients, to the usage of the one-way hash function and the
MAC function to process the security computations during
the authentication process. Furthermore, the authenticity of
the incoming packets in TESLA protocols depends on the previous packets being legitimate, as discussed in [19], [21], which
indicates a recursive authentication. Therefore, the whole
authentication scheme in TESLA must be bootstrapped by
guaranteeing that the initial packet is authentic. This is
assumed to be done by the sender using the more expensive
method of digitally signing the first packet [6].
The additional improvements proposed in the TLI-µTESLA
protocol can achieve the required services within acceptable
computation and communication overhead and with similar
time complexity as compared to the existing protocols. Thus,
our future step is to verify and prove that the proposed
protocol can achieve the security services by performing
simulation and numerical analysis. Our first step in this
paper is investigating the most suitable environments for
implementing the proposed TESLA protocol.
**V. CHALLENGES IN THE TESLA PROTOCOL AND**
**PROPOSED SOLUTIONS**
Throughout the implementation of TESLA protocol in GPS
navigation messages and VANET networks, researchers were
concerned about two critical weaknesses: the disclosure delay
of the key and the loose time synchronization between the
sender and receiver. As discussed in Section II, the disclosure
delay is used to introduce the asymmetric property in TESLA
Protocol to protect the keys used in authenticating the
communication between the network members, whereas the
loose synchronization provides simplicity and light-weighted
functionality to the protocol. Nevertheless, a long disclosure
delay and loose synchronization time error can introduce
vulnerability to the protocol by allowing attackers to use
the time gap for spoofing the messages with the previously
disclosed keys [21]–[25].
The issue of loose synchronization is a critical weakness of
the VANET network in implementing the TESLA protocol.
Therefore, researchers suggested increasing the awareness of
the loose synchronization delay at the sender side to limit
the option of sending messages to necessary neighboring
vehicles as well as prevent a probable attack [25]. Moreover,
the risks of the previous challenges can be reduced and
the most suitable performance can be achieved from the
TESLA protocol by analyzing the decisions based on certain
parametric selections [21]. For instance, the suitable hash
function (e.g., SHA-256) must be selected to provide preimage resistance for reducing the ability of reversing the
output inside the hash function and generating the input.
In addition, the hash function should permit collision
resistance to reduce the probability of generating the same
output from two distinct inputs.
Regarding the selection of the hash function, the brute-force attack should be identified; this is a scenario where
the attackers perform hash-chain computations to break the
keychain by matching their key with the latest released
key in the chain. A proposed suggestion to avoid this
precomputation and breaking of the keychain is to introduce a
type of cryptographic randomness called salt, which is added
to the key before it is hashed to generate the previous key
in the chain [21]. The salt value can be added to the key
following two major approaches: using a timestamp of the
key release, which requires a time-varying hash function to
be used in a deterministic agreement between the network
members and adding a fixed random number to the key before
being hashed. The addition of the salt value is required to be
the same for all the keys belonging to the same keychain but
is required to be altered in case the sender and receiver initiate
an additional keychain between each other.
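A hedged sketch of the fixed-salt variant follows; SHA-256 and the salt length are illustrative choices, not prescribed by the cited work:

```python
# Salted one-way keychain: the per-chain salt blocks precomputation.
import hashlib
import os

def F_salted(key: bytes, salt: bytes) -> bytes:
    return hashlib.sha256(salt + key).digest()

def generate_salted_chain(n: int, salt: bytes) -> list[bytes]:
    chain = [os.urandom(32)]                   # K_n, random last element
    for _ in range(n):
        chain.append(F_salted(chain[-1], salt))
    chain.reverse()                            # chain[0] = commitment key
    return chain

salt = os.urandom(16)   # fixed for this chain, changed for the next one
chain = generate_salted_chain(8, salt)
```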
Apart from the addition of the salt value, certain parameters
can be controlled in TESLA to reduce the brute-force attack
and the probability of success in breaking the keychain.
In context, the key length and keychain length are the most
important parameters that strongly influence the reduction
in the probability of predicting the key in the chain and
the probability of calculating the number of hash functions
that the attacker needs to perform to break the key chain.
Researchers in [21] studied the influence of various key
and keychain sizes on the probability of brute-force attack
and determined that a linear increase in the key length
yields an exponential increase in the immunity
toward the brute-force attack. Therefore, they deduced that
the keychain size does not need to be quite long if the
key length is adequately large. In particular, [26] proposed
that a minimum of 128 bits is necessary for maintaining a
secure chain. Another study in [21] reviewed the variations
in the authentication delay and computation speed upon
increasing the key size to achieve a certain level of immunity
against brute-force attacks. The results revealed that a shorter
authentication time delay allows the algorithm to use smaller
key lengths and key sizes. However, the large variations
in the authentication delay and computation speed resulted
in only small variations in the required key lengths, which
maintained the security level of the algorithm even with a long
authentication delay.
Regarding the key length size, [24] analyzed the computational load required by the user to apply a TESLA-based navigation-message authentication scheme. The TESLA
protocol was implemented in four mobile devices with
varying processing power and capability to study the effect of
the processor on the performance of the TESLA protocol and
its energy expenditure. The analysis was related to monitoring
the time required for verifying the commitment key, the time
required to process the MAC number and message, and the
time required to authenticate the last key element in the
chain using the commitment key by altering the number
of subintervals in the communication channel. The results
revealed that the time required for verifying the commitment
key or the MAC number was not significantly influenced by
the devices as compared to that resulting from variations in
the keychain length (time distance between a certain key and
the commitment key). The processing required for verifying
a key using the commitment key increases with the time
distance, which further increases the battery drainage in the
network. This indicates that there exists a tradeoff between
increasing the key length to achieve higher security levels
against brute-force attacks and increasing the computation
complexity in the network that affects the power consumption
and the lifetime. Therefore, a compromise value must be
selected for the key length size to balance the security and
energy expenditure in the network. The selection of the
parameter values that pose the most influence on TESLA
protocol and its performance are summarized in Table 2.
Recent implementations of TESLA protocols involved the
authentication of GPS navigation messages and event-driven
traffic between the VANET network members [25]–[28].
The TESLA protocol is proposed for the real-time
setting of VANETs, as it uses symmetric key encryption
schemes, which are verified by the receiver in a shorter time
as compared to using asymmetric digital signatures [25], [27].
**TABLE 2. TESLA parameter selection for better performance.**
In addition, the TESLA protocol was considered as a
favorable option to authenticate the one-way navigation messages owing to its hybrid properties (symmetric/asymmetric
functionalities), reduced authentication message size, and the
simplicity of symmetric key transfer [28], [29].
With reference to the GPS navigation system, the TESLA protocol
can also be implemented in location-based services (LBS)
to offer unconditional privacy to the user’s query and
to protect the services offered by the service provider [30]–[32],
without revealing the location of the service provider or the
user. LBS can be found in VANETs, where privacy-preserving
mechanisms are essential to avoid having a malicious vehicle
among the members causing intentional accidents [33].
Therefore, the TESLA protocol allows the vehicle to request
services from the location server without revealing the query
content to the location server.
TESLA protocol can also be used in urban aircraft mobility
(UAM) systems, which have been developed from unmanned
aircraft vehicles and have provided the opportunity for highly
automated aircraft to operate and transport passengers
or cargo at lower altitudes within urban and suburban
areas [28], [29]. Unlike conventional drones flying over
unoccupied areas, UAM members are designed to operate
over metropolitan areas with high density of population and
property. Consequently, an aircraft failure will certainly result
in substantial damage. Moreover, the design of such network
architecture, including the sensors and the autopilot systems,
are more complicated than that in drones. Thus, the UAMs
are more exposed to attacks that can target specific data and
affect the integrity and availability of the services [29]. Such
security requirements can be met by the TESLA
protocols, which offer lightweight operation and flexibility
between the network members. The implementation of the
TESLA Protocol to secure the authentication of the network
members will aid in protecting critical navigation data along
with providing command and control components with sensor
information.
**VI. ROOT OF TRUST**
During the discussion of existing TESLA protocols,
researchers assumed that the initial security parameters, e.g.,
the hash function, commitment key, and disclosure delay,
were already shared between the two parties. However,
to simulate the proposed TESLA protocol, we need to
understand the initialization process and transmission of
the initial security parameters and the initial symmetric key
between the sender and the receiver before establishing the
TESLA protocol process. Thus, the concept of the Root
of Trust (RoT) is important as it provides the foundational
security component of a connected device and is a set of
implicitly trusted functions that the remainder of the system
or device can use to ensure security [34]–[36]. As IoT is more
concerned with wireless sensor network (WSN), we need
to understand that WSN is a distributed infrastructure that
establishes a trust routine between the members to ensure the
security of the communication and integrity of the messages.
Typically, RoT exhibits multiple forms depending on the type
of the implementation network [35]. For instance, there is a
centralized node distributing various hierarchical trust values
among the members in the centralized network. Nonetheless,
this form suffers from a central point of failure, e.g.,
if an attacker manages to compromise the central node, the entire
system will become dysfunctional. An alternative form of
trust is in the distributed network, where each node monitors
the other nodes in the system and evaluates their trust based
on the performance and behavior of the network. However,
this trust value must be frequently updated, which
increases the computational demands and depletes the energy
of the network.
Another form of trust that seemed feasible to most
networks and systems is the certificate-based trust model,
wherein a trust party generates the certificates to the users
signed by the private key of this trusted party and each
node can verify the others’ certificates in the system using
the public key of the trusted party. This concept forms
the basics of PKI that creates the digital certificates to
authenticate the members in the network [37]. The types of
PKI include the RSA and elliptic curve cryptography, where
the latter demonstrated the ability to provide the same security
performance but with a shorter key size as compared to the
RSA, to enhance its feasibility in application in constrained
devices [36].
Although PKIs appear to be highly secured as they rely
on three hard mathematical problems (integer factorization
problem, discrete logarithm problem, and elliptic-curve
discrete logarithm problem), they are vulnerable to
quantum attacks, as the evolution of quantum computing
provides the processing power to solve the previous
problems efficiently [37]. Therefore, the primary objective is to replace
the PKI that is used for transmitting the initial security
parameters and reduce the risk of quantum attacks. For
instance, in the implementation of the TESLA protocol
on mobile applications, PKI can be replaced by using
the SIM platform as the trusted party for transmitting the
symmetric key and initial security parameters. However,
in sensor devices such as RFID or wireless sensor nodes,
we can replace the PKI with biometric tools and biometric
authentication schemes that will aid in sending the initial
security parameters between the two parties. The following
section contains a thorough explanation about biometric
authentication and its securing methods that are helpful to
generate the root of trust for the TLI-µTESLA protocol.
**FIGURE 3. Biometric authentication systems.**
**VII. BIOMETRIC AUTHENTICATION**
Biometric authentication is rapidly replacing traditional
authentication methods and is becoming a part of everyday
life, including accessing banking and government services.
It has shown significant advantages in the field of security
since it is difficult to lose, forget, copy, forge, and break [38].
The main objective behind using biometric authentication is
to try to generate the symmetric key between the two parties
from biometrics samples or features for a secure message
transmission without revealing sensitive information and
without using public cryptography. Examples of biometric
tools are the electrocardiogram (ECG), electroencephalogram, fingerprint, face, iris, and voice-based recognition,
as shown in Fig.3.
The most popular type used is the ECG, which allows
live monitoring of the user’s body signals during authentication
and is used for different purposes such as in hospitals, security
checks, and in wearable devices [38]. Hospitals use ECG data
to track patients’ health history by registering the patients
with their identities and the ECG signals, which need to be
sufficiently monitored to perform subsequent identifications.
Some security checkpoints are now using ECG authentication
to increase their security level. Employees usually register
their identities using their ECG that must be stabilized for
subsequent recognitions within a short period. Wearable
devices can continuously authenticate users; however, in this
case, the wearable devices must be able to differentiate
between different users’ modes such as awake, anger, and
sleep modes. All these modes have different signals and
different energy demands in addition to the noise generated
when monitoring the signal; these must be normalized
when analyzing each user to help improve the quality of
authentication.
**FIGURE 4. Traditional machine learning process.**
Biometric authentication has been combined with machine
learning techniques to train the models on biometric data,
thereby improving the accuracy and efficiency of the
authentication process [38]. Machine learning allows systems
to perform tasks without being explicitly programmed to
do so. Machine learning is therefore being widely used in
areas including image processing and biometrics, as it can
effectively analyze and interpret large datasets [39]. Machine
learning models such as regression models are being used to
predict the patterns in the data and generate output based on
the identified patterns, or to make decisions using classifiers
and pattern recognition models. Fig.4 shows a traditional
machine learning process.
Biometric authentication has been discussed in [38], [39]
in which ECG data from hospitals and security check points
were analyzed for authentication purposes. The first stage
involves feature extraction of the ECG signals to identify
which case each data sample belongs to. The next stage
involves cleaning the data before being imported to the
training model, through checking and adjusting the drift
between the different data samples, normalizing the different
amplitudes of the signal, removing the noise generated during
the monitoring process, and correcting flipped signals, if any.
The next stage involves dividing the data into subintervals
based on the peak-to-peak levels of the ECG signals with a
time window determined based on the minimum heartbeat of
a certain heart rate to ease the computational process. The
following stage involves passing the adjusted data through
the training model; in [40], [41], the decision tree was used
because of its flexibility in dealing with data of different sizes
and frequencies.
Fingerprint biometrics are also very commonly used for
authentication and have been discussed in [42] as having
two processing phases: user registration phase, which enables
the user to use his fingerprint to generate his own private
key for later use for authentication; and user authentication
phase, which enables authentication between the user and
server through the generation of a session key and a message
authenticator. A brief explanation of the two phases is
provided as follows.
_A. USER REGISTRATION_
This stage is responsible for registering the user by capturing
his fingerprint, applying feature extraction, and selecting minutiae
points from the consistent region of the print. These points are then passed
through convolutional computations to generate the private
key.
_B. USER AUTHENTICATION_
When authentication takes place between the user and server,
the fingerprint is first captured and encrypted; it is then sent to
the server for verification. The server uses another synthetic
fingerprint from its own database to extract the minutiae
points, add randomness, and generate security values to create
the session key. These values will be sent to the user to
generate a similar session from his side. To ensure that both
sides generate the same session key, the server generates
a certain value ‘‘B,’’ encrypts it as ‘‘B’’’ with the session
key, and sends both B and B’ to the user. The user then
receives the values, encrypts B using his generated session
key, and compares the result with the received B’. The
authentication using fingerprint biometrics has shown an
accuracy of approximately 95% [42].
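The B/B’ key-confirmation step can be sketched as follows; AES-GCM from the third-party `cryptography` package is our choice of cipher — the scheme in [42] does not fix one — and the shared nonce is an added assumption:

```python
# Hedged sketch of the key-confirmation handshake: the server sends B
# and B' = Enc(session_key, B); the user re-encrypts B and compares.
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
import os

session_key = AESGCM.generate_key(bit_length=128)  # both sides derived this
nonce = os.urandom(12)                             # assumed sent with B, B'
B = os.urandom(16)                                 # server's challenge value
B_prime = AESGCM(session_key).encrypt(nonce, B, None)  # server -> user

# User side: encrypt B under the locally derived session key and compare.
user_B_prime = AESGCM(session_key).encrypt(nonce, B, None)
assert user_B_prime == B_prime  # match -> both sides hold the same key
```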
A hybrid multimodal authentication protocol was presented in [43], wherein face recognition, fingerprint, and
ECG data were used to authenticate the user and achieve
gender reveal features. The proposed model uses feature
extraction for each dataset, as each set can have distinctive
characteristics and requires its own cleaning procedure.
Specifically, a deep learning model was used instead
of a classical machine learning one, to ensure that the analysis
and classification processes are robust against the noise
generated from the different and large biometric datasets.
Since these three features (face recognition, fingerprint, and
ECG) can be captured using a single device and can be
used simultaneously, the model provides high security and
immunity to attacks.
Our previous discussion showed the importance of biometric templates in declaring and authenticating the identity
of the user during real-time monitoring process. Therefore,
by extracting the minutiae points out of the fingerprints, or by
generating the cleaned sampled ECG data, we can use them
to represent the identity token of the user. The identity token
will then be applied to a cryptographic function (e.g., a one-way hash function) to produce the commitment key, which is
the essential parameter used for generating the keychain of
TESLA protocol and for authenticating the communication
channel between the network members, without relying on
PKI to transfer the commitment key. The challenging process
is protecting the biometric templates from being exposed and
from revealing the identity of the user. We therefore discuss,
in the section below, the proposed techniques used to secure
the biometric data during the authentication process.
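Conceptually, the bootstrap we propose reduces to hashing a stabilized biometric token into K0; the sketch below abstracts away the fuzzy-extraction step that real, noisy biometrics would require, and its encodings are our own assumptions:

```python
# Hedged sketch: derive the TESLA commitment key from biometric features.
import hashlib

def commitment_key_from_biometrics(minutiae_points: list[tuple[int, int]],
                                   salt: bytes) -> bytes:
    """Serialize stabilized minutiae coordinates and hash them into K_0."""
    token = salt + b"".join(x.to_bytes(2, "big") + y.to_bytes(2, "big")
                            for x, y in minutiae_points)
    return hashlib.sha256(token).digest()  # K_0 for the TESLA keychain

K0 = commitment_key_from_biometrics([(12, 40), (33, 7), (90, 121)],
                                    salt=b"per-enrolment-salt")
```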
**VIII. SECURING BIOMETRIC DATA DURING**
**AUTHENTICATION**
Biometric authentication is widely used in mobile applications to allow access to several sensitive services, including
banking and government services; hence, it is important to
consider how the biometrical datasets (biometric samples and
templates) can be protected from being spoofed by attackers
and used to relate them back to the real identity of the user.
As such, there were concerns regarding developing protocols
to reduce exposing the biometrical identities/samples when
performing authentication between the user and server.
Among the proposed protocols is the zero-knowledge proof
of knowledge protocol, which allows the user, called the
‘‘prover,’’ to prove to the server, called the ‘‘verifier,’’
that he knows a value ‘‘x’’ without revealing it, while still
providing proof that he does. The method presented in [44]
relies on a trusted party responsible for receiving the
biometric identities and safeguarding them, so that the user’s
identity and sensitive information are not revealed or
sniffed by an attacker during the process. The method consists
of two phases to provide secure biometric authentication:
**Enrolment phase: In this phase, the user receives an**
identity token from the identity provider (trusted party)
containing three secrets related to the user; one secret is
derived from his biometric identity, such as minutiae points
from his fingerprint, his ECG signal, or face
recognition; another secret is derived from the password; and
the third secret is derived from the cryptographic salt value or
artifact that will be used in case one of the previous secrets
are lost. After establishing the identity token, the biometric
templates will pass through the training classifier model to
generate the classifier parameters that will be later used to
authenticate the user with the server.
**Authentication phase: During this phase, the server needs**
to check the originality of the identity token as well as the
identity of the user. The identity token is authenticated by
checking the signature of the identity provider by decrypting
it using the identity provider public key. The server will
then challenge the user by sending a challenge value to be
used at the user side with its biometric templates extracted
from the feature extraction, his password, and the classifier
parameters to perform zero knowledge computations and
generate proof values. The proof values will be sent to the
server to perform another set of zero knowledge computations
and generate results that will determine whether the user is
legitimate or not. An additional verification step is then added
from the server side to establish a session key to perform a
handshake with the user to avoid man-in-the-middle attacks.
Random numbers are generated from the server side and sent
to the user to use them with his own secrets and establish a
session key; the server uses the random numbers generated
with the user identity token to generate the same session
key, and so, they can initiate the handshake. The primary
feature of this method is that it avoids saving the user’s
biometric templates in either the identity provider or the
server. Moreover, the identity provider is not involved in the
authentication process; this protects the sensitive information
of the user. Furthermore, the addition of the handshake helps
in reducing the possibility of a man-in-the-middle attack.
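As the works cited here do not fix a concrete ZKP scheme, the flavor of such a proof can be illustrated with one round of Schnorr identification — a classic zero-knowledge proof of knowledge of a discrete logarithm — using deliberately toy, insecure parameters:

```python
# Toy Schnorr identification round (NOT secure parameters; for intuition
# only — real deployments use a prime-order group of adequate size).
import secrets

p = 2 ** 127 - 1   # small Mersenne prime as a toy modulus
g = 3              # assumed base for the demo

x = secrets.randbelow(p - 1)   # prover's secret (e.g. biometric-derived)
y = pow(g, x, p)               # public value registered with the verifier

r = secrets.randbelow(p - 1)   # prover: fresh randomness per round
t = pow(g, r, p)               # prover -> verifier: commitment
c = secrets.randbelow(p - 1)   # verifier -> prover: random challenge
s = (r + c * x) % (p - 1)      # prover -> verifier: response

# Verifier accepts iff g^s == t * y^c (mod p); x is never revealed.
assert pow(g, s, p) == (t * pow(y, c, p)) % p
```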
Upgrading the authentication process of mobile services
is another matter, as several services must be accessed based
on a single authentication process. This concept was
introduced in [45], where mutual authentication and key
agreement were performed using a single sign-in to a trusted
party called the token service provider. In this method, the
user and the service providers are registered to the token
provider; the user uses his biometric samples and password
to generate zero knowledge proof values, which are then
sent to the token provider to register and receive a token.
The service providers also send their certificates and proof
of identities for registration and to receive the token from
the token provider. After establishing the tokens, the user
and the service providers can mutually authenticate each
other and communicate without performing an authentication
process per service. The advantages of this method are as
follows: reduction in the computation and communication
overhead through the use of a single authentication process
by the token provider; use of a centerless authentication
process where the token provider is not included during
communication with the service providers, thereby ensuring
that sensitive information of the users are well protected, and
avoiding the center point of failure on the token provider; and
provision of a remote biometric-based authentication process
between several services simultaneously, thereby increasing
the scalability and usability of the system.
Finally, another method for protecting biometric identities
and templates was proposed in [46] to provide blind
authentication to both the user and the server side. The
proposed method aims to protect the users’ biometric
identities from the servers and protects the servers’ classifiers
parameters from the users. A trusted party called the
enrolment server will be responsible for establishing the
blind authentication between the parties. The user will send
the biometric templates from his feature extraction to the
enrollment server to pass them through the training model
to generate the classification parameters, which will then
be sent to the server. During authentication between the
user and server, the user will encrypt his biometric identity
with his public key and send it to the server to compute
the products of the encrypted biometrics and the encrypted
classifier parameters and randomize the results for security
purposes. The randomized products will then be sent to
the user to unlock them and calculate the sum of the
products. The resulting sum will be resent to the server
to derandomize it and find the result to check it against a
threshold value to determine whether to accept or reject that
user. The advantage of this method lies in the ability to
keep the sensitive information (user’s identity and server’s
classification parameters) hidden from the opposite parties while
still allowing them to authenticate each other. The method does
not involve the use of the enrollment server, which contains
all the sensitive information, in the authentication process,
thereby avoiding serious losses if the server or the client are
compromised.
A detailed numerical proof of ZKP applicability is
discussed in [47], [48] to achieve confidential transactions and private smart contracts in blockchain technology.
Moreover, they emphasized the ability of ZKP to provide a
verifiable proof of the user’s identity using remote biometric
authentication, without leaking the biometric modalities to
untrusted parties. These proofs give us confidence
that the usage of ZKP during the generation of the biometric
commitment key in TLI-µTESLA can help in securing the
identity of the user.
**IX. CONCLUSION**
In summary, we discussed an important lightweight cryptographic protocol used in IoT-constrained devices: the TESLA
protocol. In addition, we presented the updates and improvements developed for it, including our proposed TLI-µTESLA,
and compared them theoretically in terms of security
services. We highlighted the important parameters of the
TESLA protocol, for example, symmetric cryptography,
the presence of the disclosure delay, reduced message size,
and loose synchronization between the network members.
Moreover, we discussed the recent implementations of
TESLA in VANET networks and in GPS navigation message authentication, and proposed a new implementation
of TESLA in UAMs. The challenges faced during the
implementation of the protocol were considered along with
the suggested solutions and parameter selections, which will
assist in the simulation stage of TLI-µTESLA. Our study
demonstrated that choosing an adequately large key length
strongly reduces the risk of a brute-force attack during the
disclosure delay or during the establishment of the loose
synchronization between the network members. The addition
of a salt value to the key chain helps reduce the probability
of attackers breaking the keychain. Furthermore, reducing the
involvement of public-key cryptography during the authentication
process is required in the TESLA protocol to avoid quantum
attacks; this can be achieved by utilizing biometric authentication
to generate the session key. Finally, the authentication schemes
using biometric templates revealed the importance of protecting
the biometric templates while authenticating other parties in
the network.
**REFERENCES**
[1] C. Li, ‘‘Security of wireless sensor networks: Current status and key
issues,’’ in Smart Wireless Sensor Networks. Rijeka, Croatia: InTech, 2010.
[2] A. Perrig, R. Szewczyk, J. D. Tygar, V. Wen, and D. E. Culler. (2002).
_SPINS: Security Protocols for Sensor Networks. Accessed: Mar. 27, 2021._
[Online]. Available: http://www.citris.berkeley.edu/
[3] W. J. Buchanan, S. Li, and R. Asif, ‘‘Lightweight cryptography methods,’’
_J. Cyber Secur. Technol., vol. 1, nos. 3–4, pp. 187–201, Sep. 2017, doi:_
[10.1080/23742917.2017.1384917.](http://dx.doi.org/10.1080/23742917.2017.1384917)
[4] S. Kim, R. Shrestha, S. Kim, and R. Shrestha, ‘‘Introduction to automotive
cybersecurity,’’ in Automotive Cyber Security. Singapore: Springer, 2020,
pp. 1–13.
[5] K. Grover and A. Lim, ‘‘A survey of broadcast authentication schemes for
wireless networks,’’ Ad Hoc Netw., vol. 24, pp. 288–316, Jan. 2015, doi:
[10.1016/j.adhoc.2014.06.008.](http://dx.doi.org/10.1016/j.adhoc.2014.06.008)
[6] A. Perrig, R. Canetti, J. D. Tygar, and D. Song, ‘‘The TESLA
broadcast authentication protocol,’’ Dept. IBM Res., Carnegie Mellon
Univ., Pittsburgh, PA, USA, Tech. Rep., 2005. [Online]. Available:
https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.869.3259&
rep=rep1&type=pdf
[7] X. Bogomolec, J. G. Underhill, and S. A. Kovac, ‘‘Towards post-quantum
secure symmetric cryptography: A mathematical perspective,’’ Cryptol.
ePrint Arch., Tech. Rep., 2019.
[8] A. Al Dhaheri, C. Y. Yeun, and E. Damiani, ‘‘New two-level µTESLA
protocol for IoT environments,’’ in Proc. IEEE World Congr. Services,
[Jul. 2019, pp. 84–91, doi: 10.1109/SERVICES.2019.00029.](http://dx.doi.org/10.1109/SERVICES.2019.00029)
[9] S. Suwannarath, The TESLA-Alpha Broadcast Authentication Protocol for
_Building Automation System. Long Beach, CA, USA: California State_
Univ., 2016.
[10] K. S1 and S. R2, Securing TESLA Broadcast Protocol With Diffie–Hellman Key Exchange. Accessed: Mar. 28, 2021. [Online]. Available: https://iaeme.com/ijcet.asp
[11] S. Câmara, D. Anand, V. Pillitteri, and L. Carmo, ‘‘Multicast delayed
authentication for streaming synchrophasor data in the smart grid,’’ in
_Proc. IFIP Adv. Inf. Commun. Technol., vol. 471, 2016, pp. 32–46, doi:_
[10.1007/978-3-319-33630-5_3.](http://dx.doi.org/10.1007/978-3-319-33630-5_3)
[12] A. Studer, F. Bai, B. Bellur, and A. Perrig, ‘‘Flexible, extensible, and
efficient VANET authentication,’’ J. Commun. Netw., vol. 11, no. 6,
[pp. 574–588, Dec. 2009, doi: 10.1109/JCN.2009.6388411.](http://dx.doi.org/10.1109/JCN.2009.6388411)
[13] Q. Li and W. Trappe, ‘‘Staggered TESLA: A multicast authentication scheme resistant to DoS attacks,’’ in Proc. IEEE Global Telecommun. Conf., vol. 3, Dec. 2005, pp. 1670–1675, doi: [10.1109/GLOCOM.2005.1577934.](http://dx.doi.org/10.1109/GLOCOM.2005.1577934)
[14] Y. Fan, I.-R. Chen, and M. Eltoweissy, ‘‘On optimal key disclosure interval
for µTESLA: Analysis of authentication delay versus network cost,’’ in
_Proc. Int. Conf. Wireless Netw., Commun. Mobile Comput., vol. 1, 2005,_
[pp. 304–309, doi: 10.1109/WIRLES.2005.1549427.](http://dx.doi.org/10.1109/WIRLES.2005.1549427)
[15] D. Ruiying and W. Song, ‘‘An improved scheme of _µTESLA_
authentication based trusted computing platform,’’ in Proc. 4th Int.
_Conf. Wireless Commun., Netw. Mobile Comput., 2008, pp. 1–4, doi:_
[10.1109/WiCom.2008.1127.](http://dx.doi.org/10.1109/WiCom.2008.1127)
[16] D. Liu and P. Ning, ‘‘Multilevel µTESLA: Broadcast authentication for
distributed sensor networks,’’ ACM Trans. Embedded Comput. Syst., vol. 3,
[no. 4, pp. 800–836, Nov. 2004, doi: 10.1145/1027794.1027800.](http://dx.doi.org/10.1145/1027794.1027800)
[17] X. Li, N. Ruan, F. Wu, J. Li, and M. Li, ‘‘Efficient and enhanced broadcast
authentication protocols based on multilevel µTESLA,’’ in Proc. IEEE
_33rd Int. Perform. Comput. Commun. Conf. (IPCCC), Dec. 2014, pp. 1–8,_
[doi: 10.1109/PCCC.2014.7017109.](http://dx.doi.org/10.1109/PCCC.2014.7017109)
[18] A. J. Menezes, P. C. van Oorschot, and S. A. Vanstone, Handbook of
_Applied Cryptography. Boca Raton, FL, USA: CRC Press, Dec. 2018, doi:_
[10.1201/9781439821916.](http://dx.doi.org/10.1201/9781439821916)
[19] M. Archer. (Jan. 1, 2002). Proving Correctness of the Basic TESLA
_Multicast Stream Authentication Protocol With TAME. Accessed: Jan. 18,_
2022. [Online]. Available: https://apps.dtic.mil/sti/citations/ADA464932
[20] L. Guo, C. Zhang, J. Sun, and Y. Fang, ‘‘A privacy-preserving attribute-based authentication system for mobile health networks,’’ IEEE Trans. Mobile Comput., vol. 13, no. 9, pp. 1927–1941, Sep. 2014, doi: [10.1109/TMC.2013.84.](http://dx.doi.org/10.1109/TMC.2013.84)
[21] A. Neish, T. Walter, and P. Enge, ‘‘Parameter selection for the Tesla
keychain,’’ in Proc. 31st Int. Tech. Meeting Satell. Division Inst. Navigat.,
[Oct. 2018, pp. 2155–2171, doi: 10.33012/2018.15852.](http://dx.doi.org/10.33012/2018.15852)
[22] A. Gupta, R. Christie, and R. Manjula, ‘‘Scalability in Internet of Things:
Features, techniques and research challenges,’’ Int. J. Comput. Intell. Res.,
vol. 13, no. 7, pp. 1617–1627, 2017, Accessed: Jul. 07, 2021. [Online].
Available: http://www.ripublication.com
[23] N. Ruan and Y. Hori, ‘‘DoS attack-tolerant TESLA-based broadcast authentication protocol in Internet of Things,’’ in Proc. Int.
_Conf. Sel. Topics Mobile Wireless Netw., Jul. 2012, pp. 60–65, doi:_
[10.1109/iCOST.2012.6271291.](http://dx.doi.org/10.1109/iCOST.2012.6271291)
[24] S. Cancela, J. D. Calle, and I. Fernández-Hernández, ‘‘CPU
consumption analysis of TESLA-based navigation message
authentication,’’ in Proc. Eur. Navigat. Conf., May 2019, pp. 1–6,
[doi: 10.1109/EURONAV.2019.8714171.](http://dx.doi.org/10.1109/EURONAV.2019.8714171)
[25] M. H. Jahanian, F. Amin, and A. H. Jahangir, ‘‘Analysis of Tesla protocol
in vehicular ad hoc networks using timed colored Petri nets,’’ in Proc.
_6th Int. Conf. Inf. Commun. Syst. (ICICS), Apr. 2015, pp. 222–227, doi:_
[10.1109/IACS.2015.7103231.](http://dx.doi.org/10.1109/IACS.2015.7103231)
[26] A. J. Kerns, K. D. Wesson, and T. E. Humphreys, ‘‘A blueprint
for civil GPS navigation message authentication,’’ in Proc. IEEE/ION
_Position, Location Navigat. Symp., May 2014, pp. 262–269, doi:_
[10.1109/PLANS.2014.6851385.](http://dx.doi.org/10.1109/PLANS.2014.6851385)
[27] S. Bao, W. Hathal, H. Cruickshank, Z. Sun, P. Asuquo, and A. Lei,
‘‘A lightweight authentication and privacy-preserving scheme for VANETs
using TESLA and Bloom filters,’’ ICT Exp., vol. 4, no. 4, pp. 221–227,
[Dec. 2018, doi: 10.1016/j.icte.2017.12.001.](http://dx.doi.org/10.1016/j.icte.2017.12.001)
-----
[28] J. A. Maxa, R. Blaize, and S. Longuy, ‘‘Security challenges of
vehicle recovery for urban air mobility contexts,’’ in Proc. IEEE/AIAA
_38th Digit. Avionics Syst. Conf. (DASC), Sep. 2019, pp. 1–9, doi:_
[10.1109/DASC43569.2019.9081808.](http://dx.doi.org/10.1109/DASC43569.2019.9081808)
[29] A. C. Tang, ‘‘A review on cybersecurity vulnerabilities for urban air
mobility,’’ in Proc. AIAA Scitech Forum, vol. 1, Jan. 2021, pp. 1–17, doi:
[10.2514/6.2021-0773.](http://dx.doi.org/10.2514/6.2021-0773)
[30] V. K. Yadav, N. Andola, S. Verma, and S. Venkatesan, ‘‘P2LBS: Privacy
provisioning in location-based services,’’ IEEE Trans. Services Comput.,
[early access, Oct. 27, 2021, doi: 10.1109/TSC.2021.3123428.](http://dx.doi.org/10.1109/TSC.2021.3123428)
[31] Y. Pu, J. Luo, Y. Wang, C. Hu, Y. Huo, and J. Zhang, ‘‘Privacy preserving
scheme for location based services using cryptographic approach,’’ in Proc.
_IEEE Symp. Privacy-Aware Comput. (PAC), Sep. 2018, pp. 125–126, doi:_
[10.1109/PAC.2018.00022.](http://dx.doi.org/10.1109/PAC.2018.00022)
[32] V. K. Yadav, S. Verma, and S. Venkatesan, ‘‘Linkable privacy-preserving
scheme for location-based services,’’ IEEE Trans. Intell. Transp. Syst.,
[early access, May 5, 2021, doi: 10.1109/TITS.2021.3074974.](http://dx.doi.org/10.1109/TITS.2021.3074974)
[33] V. K. Yadav, S. Verma, and S. Venkatesan, ‘‘Efficient and secure location-based services scheme in VANET,’’ IEEE Trans. Veh. Technol., vol. 69, no. 11, pp. 13567–13578, Nov. 2020, doi: [10.1109/TVT.2020.3031063.](http://dx.doi.org/10.1109/TVT.2020.3031063)
[34] L. H. Adnan, H. Hashim, Y. M. Yussoff, and M. U. Kamaluddin,
‘‘Root of trust for trusted node based-on ARM11 platform,’’ in
_Proc. 17th Asia–Pacific Conf. Commun., Oct. 2011, pp. 812–815, doi:_
[10.1109/APCC.2011.6152919.](http://dx.doi.org/10.1109/APCC.2011.6152919)
[35] M. Momani, ‘‘Trust models in wireless sensor networks: A survey,’’ in
_Recent Trends in Network Security and Applications (Communications in_
Computer and Information Science), vol. 89. Berlin, Germany: Springer,
[2010, pp. 37–46, doi: 10.1007/978-3-642-14478-3_4.](http://dx.doi.org/10.1007/978-3-642-14478-3_4)
[36] Z. Chen, M. He, W. Liang, and K. Chen, ‘‘Trust-aware and low
energy consumption security topology protocol of wireless sensor
[network,’’ J. Sensors, vol. 2015, pp. 1–10, Jan. 2015, doi: 10.1155/2015/](http://dx.doi.org/10.1155/2015/716468)
[716468.](http://dx.doi.org/10.1155/2015/716468)
[37] S. Y. Yan, _Quantum_ _Attacks_ _on_ _Public-Key_ _Cryptosystems,_
vol. 9781441977229. New York, NY, USA: Springer, 2013.
[38] S. K. Kim, C. Y. Yeun, E. Damiani, and N. W. Lo, ‘‘A machine
learning framework for biometric authentication using
electrocardiogram,’’ IEEE Access, vol. 7, pp. 94858–94868, 2019,
[doi: 10.1109/ACCESS.2019.2927079.](http://dx.doi.org/10.1109/ACCESS.2019.2927079)
[39] L. Chato and S. Latifi, ‘‘Application of machine learning to biometric systems—A survey,’’ J. Phys., Conf. Ser., vol. 1098, Sep. 2018,
[Art. no. 012017, doi: 10.1088/1742-6596/1098/1/012017.](http://dx.doi.org/10.1088/1742-6596/1098/1/012017)
[40] S.-K. Kim, C. Y. Yeun, and P. D. Yoo, ‘‘An enhanced machine
learning-based biometric authentication system using RR-interval framed
electrocardiograms,’’ IEEE Access, vol. 7, pp. 168669–168674, 2019, doi:
[10.1109/ACCESS.2019.2954576.](http://dx.doi.org/10.1109/ACCESS.2019.2954576)
[41] E. Al-Alkeem, S.-K. Kim, C. Y. Yeun, M. J. Zemerly, K. Poon, and
P. D. Yoo, ‘‘An enhanced electrocardiogram biometric authentication
system using machine learning,’’ IEEE Access, vol. 7, pp. 123069–123075,
[2019, doi: 10.1109/ACCESS.2019.2937357.](http://dx.doi.org/10.1109/ACCESS.2019.2937357)
[42] G. Panchal, D. Samanta, A. K. Das, N. Kumar, and K.-K.-R. Choo,
‘‘Designing secure and efficient biometric-based secure access mechanism for cloud services,’’ IEEE Trans. Cloud Comput., early access,
[Apr. 14, 2020, doi: 10.1109/tcc.2020.2987564.](http://dx.doi.org/10.1109/tcc.2020.2987564)
[43] H.-K. Song, E. Alalkeem, J. Yun, T.-H. Kim, H. Yoo, D. Heo, M. Chae,
and C. Y. Yeun, ‘‘Deep user identification model with multiple biometric
[data,’’ BMC Bioinf., vol. 21, no. 1, p. 315, Jul. 2020, doi: 10.1186/s12859-](http://dx.doi.org/10.1186/s12859-020-03613-3)
[020-03613-3.](http://dx.doi.org/10.1186/s12859-020-03613-3)
[44] H. Gunasinghe and E. Bertino, ‘‘PrivBioMTAuth: Privacy preserving
biometrics-based and user centric protocol for user authentication from
mobile phones,’’ IEEE Trans. Inf. Forensics Security, vol. 13, no. 4,
[pp. 1042–1057, Apr. 2018, doi: 10.1109/TIFS.2017.2777787.](http://dx.doi.org/10.1109/TIFS.2017.2777787)
[45] W. Liu, X. Wang, W. Peng, and Q. Xing, ‘‘Center-less single sign-on with
privacy-preserving remote biometric-based ID-MAKA scheme for mobile
cloud computing services,’’ IEEE Access, vol. 7, pp. 137770–137783,
[2019, doi: 10.1109/ACCESS.2019.2942987.](http://dx.doi.org/10.1109/ACCESS.2019.2942987)
[46] M. Upmanyu, A. M. Namboodiri, K. Srinathan, and C. V. Jawahar, ‘‘Blind
authentication: A secure crypto-biometric verification protocol,’’ IEEE
_Trans. Inf. Forensics Security, vol. 5, no. 2, pp. 255–268, Jun. 2010, doi:_
[10.1109/TIFS.2010.2043188.](http://dx.doi.org/10.1109/TIFS.2010.2043188)
[47] J. Partala, T. H. Nguyen, and S. Pirttikangas, ‘‘Non-interactive
zero-knowledge for blockchain: A survey,’’ IEEE Access, vol. 8,
[pp. 227945–227961, 2020, doi: 10.1109/ACCESS.2020.3046025.](http://dx.doi.org/10.1109/ACCESS.2020.3046025)
[48] X. Sun, F. R. Yu, P. Zhang, Z. Sun, W. Xie, and X. Peng, ‘‘A survey on zero-knowledge proof in blockchain,’’ IEEE Netw., vol. 35, no. 4, pp. 198–205, Jul. 2021, doi: [10.1109/MNET.011.2000473.](http://dx.doi.org/10.1109/MNET.011.2000473)
KHOULOUD ELEDLEBI received the B.Sc.
degree in communication engineering from KUST,
in 2013, the M.Sc. degree in electrical and
computer engineering, in 2015, and the Ph.D.
degree in electrical and computer engineering,
in 2019. She is currently a Postdoctoral Fellow
at Khalifa University and an Active Member of
Cyber Security and Physical Systems (C2PS). Her
research interests include cyber-security, AI and
ML for IoT devices, cognitive radio networking,
nanotechnology, and low-power semiconductor devices as she is trained in
the modeling of nanoscale device and wireless-sensor network optimization
and possesses expertise in several evolutionary computing methods.
CHAN YEOB YEUN (Senior Member, IEEE)
received the M.Sc. and Ph.D. degrees in information security from Royal Holloway, University of London, in 1996 and 2000, respectively.
After his Ph.D., he joined Toshiba TRL, Bristol,
U.K., and later became the Vice President at
LG Electronics, Mobile Handset Research and
Development Center, Seoul, South Korea, in 2005.
He was responsible for developing mobile TV
technologies and related security. He left LG Electronics, in 2007, and joined ICU (merged with KAIST), South Korea, until
August 2008, and then the Khalifa University of Science and Technology,
in September 2008. He is currently a Researcher in cybersecurity, including
the IoT/USN security, cyber-physical system security, cloud/fog security, and
cryptographic techniques, as an Associate Professor with the Department of
Electrical Engineering and Computer Science, and the Cybersecurity Leader
of the Center for Cyber-Physical Systems (C2PS). He also enjoys lecturing
for M.Sc. cyber security and Ph.D. engineering courses at Khalifa University.
He has published more than 140 journal articles and conference papers, nine
book chapters, and ten international patent applications. He also serves on
the editorial board of multiple international journals and on the steering
committee of international conferences.
ERNESTO DAMIANI (Senior Member, IEEE)
received the Honorary Doctorate degree from
the Institut National des Sciences Appliquées
de Lyon, France, in 2017, for his contributions
toward the research and education of big data
analytics. He is currently a full-time Professor with the Department of Computer Science,
Università degli Studi di Milano, where he
leads the Secure Service-Oriented Architectures
Research (SESAR) Laboratory. In addition, he is
also the Founding Director of the Center for Cyber-Physical Systems, Khalifa
University, United Arab Emirates. He is also the Principal Investigator of the
H2020 TOREADOR Project on big data as a service. He has published over
600 peer-reviewed articles and books. His research interests include cybersecurity, big data, and cloud/edge processing. He is a Distinguished Scientist
of ACM and was a recipient of the 2017 Stephen Yau Award.
YOUSOF AL-HAMMADI received the bachelor’s
degree in computer engineering from the Khalifa
University of Science and Technology (previously
known as the Etisalat College of Engineering),
Abu Dhabi, United Arab Emirates, in 2000, the
M.Sc. degree in telecommunications engineering
from the University of Melbourne, Australia,
in 2003, and the Ph.D. degree in computer science
and information technology from the University
of Nottingham, U.K., in 2009. He is currently the
Acting Dean of Graduate Studies and an Associate Professor with the Electrical & Computer Engineering Department, Khalifa University of Science and
Technology. His research interests include the area of information security—
intrusion detection, botnet/bots detection, viruses/worms detection, machine
learning and artificial intelligence, and RFID and mobile security.
-----
#### **Cryptoeconomic Systems**
# **A Cryptoeconomic Traffic Analysis of Bitcoin's Lightning Network**
#### **Ferenc Béres [1], István András Seres [2], András A Benczúr [3]**
**1** **Institute for Computer Science and Control (SZTAKI), Hungary; Eötvös Loránd University,**
**2** **Eötvös Loránd University,**
**3** **Institute for Computer Science and Control (SZTAKI), Hungary; Széchenyi University, Győr,**
**Hungary**
###### Published on: Dec 12, 2020 License: Creative Commons Attribution 4.0 International License (CC-BY 4.0)
#### Abstract
Lightning Network (LN) is designed to amend the scalability and privacy issues of Bitcoin. It is a
payment channel network where Bitcoin transactions are issued off the blockchain and onion routed
through a private payment path with the aim to settle transactions in a faster, cheaper, and more
private manner, as they are not recorded in a costly-to-maintain, slow, and public ledger. In this work,
we design a traffic simulator to empirically study LN’s transaction fees and privacy provisions. The
simulator relies only on publicly-available data of the network structure and capacities, and generates
transactions under assumptions that we attempt to validate based on information spread by certain
blog posts of LN node owners.
Our findings on the estimated revenue from transaction fees are in line with the widespread opinion
that participation is economically irrational for the majority of the large routing nodes who currently
hold the network together. Either traffic or transaction fees must increase by orders of magnitude to
make payment routing economically viable. We give worst-case estimates for the potential fee increase
by assuming strong price competition among the routers. We also estimate how current channel
structures and pricing policies respond to a potential increase in traffic, how reduction in locked funds
on channels would affect the network, and show examples of nodes who are estimated to operate with
economically feasible revenue.
Our second set of findings considers privacy. Even if transactions are onion routed, strong statistical
evidence on payment source and destination can be inferred, as many transaction paths only consist of
a single intermediary as a side effect of LN's small-world nature. Based on our simulation
experiments, we (1) quantitatively characterize the privacy shortcomings of current LN operation; and
(2) propose a method to inject additional hops in routing paths to demonstrate how privacy can be
strengthened with very little additional transactional cost.
### **1. Introduction**
Bitcoin is a peer-to-peer, decentralized cryptographic currency [1]. It is a censorship-resistant,
permissionless, digital payment system. Anyone can join and leave the network whenever they would
like to. Participants can issue payments, which are inserted into a distributed, replicated ledger called
blockchain. Since there is no trusted central party to issue money and guard this financial system,
payment validity is checked by all network participants. The necessity of full validation severely limits
the scalability of decentralized cryptocurrencies: Bitcoin could theoretically process 27 transactions
per second (tps) [2]; however, in practice its average transaction throughput is 7 tps [3]. This is in stark
contrast with the throughput of mainstream payment providers; for example, in peak hours Visa is
able to achieve 47 000 tps on its network [4].
To alleviate scalability issues, the cryptocurrency community is continuously inventing new protocols
and technologies. A major line of research is focused on amending existing currencies without
modifying the consensus layer by introducing a new layer, i.e., off-chain transactions [5 ] [6 ] [7]. These
proposals are called Layer-2 protocols: they allow parties to exchange transactions locally, without
broadcasting them to the blockchain network, updating a local balance sheet instead and only utilizing
the blockchain as a recourse for disputes. For an exhaustive review of off-chain protocols, refer to [8].
Among these proposals, the most prominent ones are payment channel networks (PCNs), in which
nodes have several open payment channels, being able to connect to all nodes, possibly through
multiple hops. The most popular instantiation of a PCN is Bitcoin’s Lightning Network (LN) [9], a
public, permissionless PCN, which allows anyone to issue Bitcoin transactions without the need to wait
for several blocks for payment confirmation and currently with transaction fees orders of magnitude
lower than on-chain fees. LN is suitable for several application scenarios, for instance, micropayments
or e-commerce, with the intent to make everyday Bitcoin usage more convenient and frictionless. LN’s
core value proposition is that Bitcoin users can send low-value payments instantly in a privacy-preserving manner with negligible fees, which has led to widespread adoption of LN among
Bitcoin users.
The main difficulty with analyzing how LN operates is that the exact transaction routes are
cryptographically hidden from eavesdroppers due to onion routing [10]. LN can only be observed
through public information on nodes and channel openings, closings, and capacity changes. The actual
amount of Bitcoins circulated in LN is unknown, although in blog posts, some node owners publish
high-level statistics, such as their revenue [11 ] [12], which can be used as grounds for estimation.
To analyze LN efficiency and profitability, we designed a traffic simulator for LN to analyze the
routing costs and potential revenue at different nodes. We assigned roles to nodes by collecting
external data, [1] labeling nodes as wallet services, shops, and other merchants. Using node labels, we
simulated the flow of Bitcoin transactions from ordinary users towards merchants over time, based on
the natural assumption that transactions are routed through the path that charges the minimum total
transaction fee. By taking the dynamically changing transaction fees of the LN nodes into account, we
designed a method to predict the optimal fee pricing policy for individual nodes in case of the cheapest
path routing.
To the best of our knowledge, there has been no previous empirical study on LN transaction fees.
Our traffic simulator hence opens the possibility for addressing questions of transaction routes,
amounts, fees, and other measures otherwise depending upon strictly private information, based
solely on the observable network structure. By releasing the source code of our tool, we allow node
owners to fit various parameters to their private observation(s) on LN traffic. In particular, in this
paper, the simulator enables us to draw two major conclusions:
Economic incentives: Currently, LN provides little to no financial incentive for payment routing.
Low routing fees do not sufficiently compensate the routing nodes that essentially hold the network
together. Our results show that in general, transaction fees are underpriced, since for many possible
payments there is no alternative path to execute the transaction. We also give estimates of how the
current network and fee structure responds to increase in traffic and decrease in channel capacities,
thus assessing the income potential in different strategies. We provide an open source tool for nodes
to experimentally design their channels, capacities, and fees by incorporating all possible
information that they privately infer from the traffic over their channels.
Privacy: We quantitatively analyze the privacy provisions of LN. Despite onion routing, we observe
that strong statistical evidence can be gathered about the sender and receiver of LN payments, since
a substantial portion of payments involve only a single routing intermediary, who can easily de-anonymize participants. We find that using deliberately suboptimal, longer routing paths can
potentially restore privacy while only marginally increasing the cost of an average transaction, as it
is partially already incorporated in other implementations of the Lightning protocol [13].
The rest of the paper is organized as follows. In Section 2, we review the growing body of literature on
PCNs and specifically on LN. In Section 3, we provide a brief background on LN and its fee structure. In
Section 4, our traffic simulator is presented. We discuss our experimental results in three sections. We
investigate the price competition and the potential to increase fees, under various assumptions in
Section 5. We estimate the profitability of the central router nodes under estimated current and
potentially increased future traffic in Section 6. Finally, we estimate the amount of privacy
shortcomings due to too short paths and potential mitigations in Section 7. We conclude our paper in
Section 8.
### **2. Related Works**
To the best of our knowledge, we have conducted the first empirical analysis on LN transaction fees,
similar to the way empirical and theoretical studies on on-chain transaction fees have been conducted
during the early adoption of cryptocurrencies. Möser and Böhme conducted a longitudinal study on
Bitcoin’s nascent transaction fee market [14]. Kaskaloglu asserted that near-zero transaction fees
cannot last long as block rewards diminish [15]. Easley et al. developed a game-theoretic model to
explain the factors leading to the emergence of transactions fees, and provided empirical evidence on
the model predictions [16]. Recently, BitMEX, using a single LN node, has experimented with setting
different transaction fees to measure the effect on routing revenue [12], which shows a similar pattern
to our simulation experiments.
Unlike on-chain transactions, the LN transaction fee market is not yet consolidated. Some actors
behave financially rationally, while the vast majority exhibit altruistic behavior, which parallels the
early days of Bitcoin [14]. Similarly to on-chain fees, we expect to see more maturity and a similar
evolution in the LN transaction fee market in the future.
Even before the launch of LN, many works studied the theoretical aspects of PCNs. Branzei et
al. studied the impact of LN on Bitcoin transaction costs [17]. They conjectured a lower miner income
from on-chain transaction fees as users tend to use and issue transactions on LN. In [18], the
transaction fees of various payment channels are compared, however, without reference to the
underlying network dynamics.
Depleted payment channels account for many efficiency issues in PCNs. Khalil and Gervais devised a
handy algorithm to revive imbalanced payment channels without opening new ones [19].
PCNs can also be viewed as network creation games. A user might decide to create a payment channel to
a destination node or just route the payment in the already existing PCN. The former is more
expensive; however, repeated payments can amortize the on-chain cost of opening a payment channel.
Avarikioti et al. found that given a free routing fee policy, the star graph constitutes a Nash
equilibrium [20]. In a similar game-theoretic work, the effect of routing fees was analyzed [21]. It was
again found that the star graph is a near-optimal solution to the network design problem.
Even though transactions in LN are not recorded on the blockchain, LN does not provide privacy
guarantees. As early as 2016, Herrera et al. anticipated the privacy issues emerging in a PCN [22].
Single-intermediary payments do not provide privacy, although they have higher utility. Tang et
al. assert that a PCN either operates in a low-privacy or a low-utility regime [23]. Although a recently
devised cryptographic protocol solves the privacy issues of single-intermediary routed payments [24],
the protocol is not yet in use due to its complexity of implementation.
After the launch of LN, several studies have investigated the graph properties of LN [25 ] [26 ] [27]. They
described the topology of LN at an arbitrarily chosen point in time and found that LN exhibits a hub-and-spoke topology, and its degree distribution can be well approximated with a scale-free
distribution [25 ] [26]. Furthermore, these works assessed the robustness of the network against
various types of attack strategies: they showed that LN is susceptible to both node [25 ] [27] and
channel [26] removal-based attacks. These works are restricted to a static snapshot of LN. The lack of
temporal data has largely limited the insights and results of these contributions.
In a YouTube video [28], an estimate of the routing income is given based on the assumption that the
payment probability between any node pair is the same. As it is easy to see, under this assumption the
routing income of a node is proportional to its betweenness centrality. In our simulation experiments,
we will explicitly compare our prediction with the one based on betweenness centrality and show how
the finer structure of our estimation procedure yields more plausible results.
At the time of writing, four research groups published results on payment channel network
simulators, each serving purposes very different from ours. Out of them, the simulator of Branzei et
al. [17] is the only one that has pointers to publicly available resources. Their simulator only considers
single bidirectional channels or a star topology, and its main goal is to analyze channel opening costs
and depletion. This simulator is extended in [29] to generate and analyze Barabási-Albert graphs as
underlying networks. CLoTH [30] is able to provide performance statistics (e.g., the probability of
payment failure on a given PCN graph); however, it does not analyze transaction fees, profitability,
optimal fee policy, and privacy provisions of LN. In contrast, our LN traffic simulator can produce
insights in those areas as well. Finally, the simulator in [31] is a distributed method to minimize the
transaction fee of a payment path, subject to the timeliness and feasibility constraints for the success
ratio and the average accepted value of the transactions.
### **3. Routing and Fees in Lightning Network Payment Channels**
In this section we provide a brief background on LN and how the transaction fee mechanism in LN is
structured.
#### 3.1 Notations
Throughout the paper we use the following notation. *G* = (*V*, *E*) denotes a weighted multigraph, where *V* is the set of nodes and *E* is the set of edges *e* = (*u*, *v*, *c*), with *u*, *v* being nodes and *c* the capacity of the edge between them. Let *E*(*t*) and *N*(*t*) denote the number of edges and nodes at time *t*, respectively. Sometimes we omit the time parameter. Let *d*(*i*, *j*) denote the length of the shortest path between a node *i* and another node *j*. The transitivity or global clustering coefficient of a network is the ratio of present triangles and all possible triangles. To assess centrality we calculated the central point dominance (CPD):

## CPD = (1/(N − 1)) ∑_i (B_max − B_i),

where B_max is the largest value of betweenness centrality in the network. The CPD of a complete graph is 0, while it is 1 for a star graph.
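For concreteness, a small sketch of this CPD computation (using networkx, our own tooling choice, on illustrative graphs):

```python
# Minimal sketch of the CPD definition above: the average gap between the
# maximum betweenness centrality and every node's betweenness centrality.
import networkx as nx

def central_point_dominance(G):
    B = nx.betweenness_centrality(G)          # normalized betweenness per node
    b_max = max(B.values())
    return sum(b_max - b for b in B.values()) / (len(B) - 1)

print(central_point_dominance(nx.complete_graph(10)))  # -> 0.0
print(central_point_dominance(nx.star_graph(10)))      # -> 1.0
```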
#### 3.2 Payment Channel Networks (PCNs)
A payment channel allows users to make multiple cryptocurrency transactions without committing
all of the transactions to the blockchain. In a typical payment channel, only two transactions are added
to the blockchain, but theoretically, an unlimited number of payments can be made between the
participants. Parties can open a payment channel by escrowing funds on the blockchain for subsequent
6
-----
Cryptoeconomic Systems A Cryptoeconomic Tra�c Analysis of Bitcoin’s Lightning Network
use only between those two parties. The sum of the individual balances on the two sides of the channel
is usually referred to as the capacity.
We illustrate the operation of a payment channel by an example. Let Alice and Bob escrow 1 and 2
tokens respectively, by committing a transaction to the blockchain that sets up a new channel. Once
the channel is finalized, Alice and Bob can send escrowed funds back and forth by revoking the
previous state of the channel and digitally signing the new state updated by the transacted tokens. For
example, Alice can send 0.1 of her 1 token to Bob, so that the new channel state is (Alice=0.9, Bob=2.1).
Once the parties decide to close the channel, they can commit its final state through another
blockchain transaction.
Maintaining a payment channel has an opportunity cost since users must lock up their funds while the
channel is open, and funds are not redeemable until the channel is closed. Hence, it is not practical to
expect users to maintain a channel with every individual with whom they may ever need to transact.
In a payment channel network (PCN), nodes have several open payment channels between each
other; however, not necessarily with all other nodes. The network of bidirectional payment channels
allows two parties to exchange funds even if they do not have a direct payment channel. For example,
if Alice has a balance of 1 token with Ingrid, and Ingrid has a balance of 2 tokens with Bob locked in a
payment channel, then Alice can route payments to Bob through Ingrid, up to the minimum of the balances of Alice and Ingrid. Assuming that Alice sends 0.2 tokens to Bob, after routing we have the
following channel balances: Alice=0.8, Ingrid=0.2 on the first channel and Ingrid=1.8, Bob=0.2 on the
second channel.
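The balance bookkeeping of these two examples can be expressed in a few lines; the helper below is our own illustrative sketch, not LN client code:

```python
# Toy channel states: a dictionary mapping each endpoint to its current balance.
def pay(channel, sender, receiver, amount):
    assert channel[sender] >= amount, "insufficient balance on sender's side"
    channel[sender] -= amount                 # total channel capacity stays constant
    channel[receiver] += amount

ab = {"Alice": 1.0, "Bob": 2.0}
pay(ab, "Alice", "Bob", 0.1)                  # -> {'Alice': 0.9, 'Bob': 2.1}

ai = {"Alice": 1.0, "Ingrid": 0.0}            # Alice's balance with Ingrid
ib = {"Ingrid": 2.0, "Bob": 0.0}              # Ingrid's balance with Bob
pay(ai, "Alice", "Ingrid", 0.2)               # hop 1 of the routed payment
pay(ib, "Ingrid", "Bob", 0.2)                 # hop 2; in LN both hops execute atomically
print(ai, ib)  # {'Alice': 0.8, 'Ingrid': 0.2} {'Ingrid': 1.8, 'Bob': 0.2}
```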
In a payment channel, cryptographic protections are used to ensure that channel updates in both
directions are executed atomically, i.e., either both or neither of them are performed [8]. In addition,
incentive-based protections are also implemented to prevent users from stealing funds in a channel,
e.g., by committing a revoked state. Similar techniques allow payment routing for longer paths.
Furthermore, payment router intermediaries are financially motivated to relay payments as they are
entitled to claim transaction fees after each successfully routed payment.
LN as a PCN consists of nodes representing users and undirected, weighted edges representing
payment channels. Users can open and close bidirectional payment channels between each other and
route payments through these connections. Therefore, LN can be modeled as an undirected, weighted
multigraph since nodes can have multiple channels between each other. The weights on the edges
correspond to the capacity of the payment channels.
In LN, only the capacities of payment channels are known publicly; individual balances are kept secret. If individual balances were known, balance updates would reveal successful transactions, undermining transaction privacy.
#### 3.3 Routing in LN and Fee Mechanism
LN applies source routing, meaning that it is always the sender who decides the payment route
towards the intended recipient. Packets are onion routed, which means that intermediary nodes only
know the identity of their immediate predecessor and successor in the route. Therefore, from a
privacy perspective, nodes are incentivized to avoid single-intermediary paths, as in those cases
intermediaries are potentially able to identify both the sender and the receiver.
LN provides financial incentives for intermediaries to route payments. In LN there are two types of
fees that a sender pays to the intermediaries in case the transaction involves more than one payment
channel. Nodes can set and charge the following fees after each routed payment:
Base fee: a fixed fee denoted as baseFee, charged each time a payment is routed through the
channel.
Fee rate: a percentage fee denoted as feeRate, charged on the value txValue of the payment.
Therefore, the total transaction fee to an intermediary can be obtained as:
## txFee = baseFee + feeRate ⋅ txValue. (1)
We note that the base fee and fee rate are set by individual users, thus forming a fee market for payment routing. Furthermore, we remark that Equation 1 does not hold for all routing algorithms. However, we do not consider other fee structures in our simulator, as alternative routing algorithms are currently not widely adopted throughout the network.
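As an illustration of Equation 1, the sketch below computes the fee charged by a single channel and the total fee of a multi-hop route. The parameter values are hypothetical, and for simplicity each hop charges its fee on the bare transaction value, ignoring both the compounding of downstream fees and LN's millisatoshi/millionths fee units:

```python
def channel_fee(base_fee, fee_rate, tx_value):
    # Equation 1: txFee = baseFee + feeRate * txValue.
    return base_fee + fee_rate * tx_value

def route_fee(channels, tx_value):
    # Total route fee: the sum of the fees of all intermediary channels.
    return sum(channel_fee(base, rate, tx_value) for base, rate in channels)

# Two intermediaries with (base_fee, fee_rate) of (1 SAT, 0.0001) and (2 SAT, 0.00005):
print(route_fee([(1, 1e-4), (2, 5e-5)], tx_value=60_000))   # -> 12.0 SAT
```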
#### 3.4 Data
Throughout our work, we analyze two main data sources that are both available online. [2] First, we
gathered edge stream data that describes every payment channel opening and closure from block height 501 337 (on December 28, 2017) to 576 140 (on May 15, 2019). Second, we collected snapshots of the public graph using the lnd client and utilized snapshots taken by Rohrer et al. [26] as well. We highlight
that only the latter dataset contains transaction fee information. Thus, the experiments in Section 4
through Section 7 are only based on 40 consecutive LN graph snapshots from February and March,
2019.
We note that according to some estimates, 28% of all channels are private [32], meaning that their
existence can only be recognized by the two ends. In our analysis, we have no information about
private payment channels; however, the same holds for all the other network participants as well.
Hence, we do not expect a significant bias in our results, as presumably those channels have private
use and do not participate in carrying the global network traffic.
We labeled LN nodes by relying on the tags provided by the node owners. [3] This allows us to distinguish
between ordinary users and merchants. We assume that merchants receive payments more often
than regular users. This is essential in understanding how popular payment channels are depleted
throughout LN by repeated use in one direction. The number of merchant nodes in the union of all 40
snapshots is 169.
First we describe the graphs defined based on the 40 consecutive LN graph snapshots from February and March, 2019. We consider a minimum meaningful capacity *α* = 60 000 SAT (approximately US $5) and exclude edges with capacity less than *α* in *G*, as they cannot be used in payments of value *α*. [4]
Although LN channels are bidirectional, in our experiments we consider two directed edges, so that we
can use channels in one direction if the capacity is exhausted in the other direction. We also ignore
edges in the direction where they are flagged as disabled in the data. The properties of the LN
network, averaged over the 40 daily snapshots, are as follows:
Number of nodes in the union of all snapshots: 4787;
Average number of nodes in a day: 3358;
Non-isolated nodes after filtering disabled edge directions and edges with capacity less than 60 000
SAT: 3132;
Size of the largest strongly connected component: 2206;
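A sketch of the snapshot preprocessing just described, with illustrative record field names (the real snapshot schema may differ); parallel channels between the same node pair are merged here for simplicity, although LN is in fact a multigraph:

```python
import networkx as nx

ALPHA = 60_000  # minimum meaningful capacity in SAT (approximately US $5)

def build_daily_graph(channels):
    G = nx.DiGraph()
    for ch in channels:
        if ch["capacity"] < ALPHA:              # unusable for payments of value ALPHA
            continue
        if not ch.get("disabled_src_to_dst"):   # keep only non-disabled directions
            G.add_edge(ch["src"], ch["dst"], cap=ch["capacity"])
        if not ch.get("disabled_dst_to_src"):
            G.add_edge(ch["dst"], ch["src"], cap=ch["capacity"])
    largest_scc = max(nx.strongly_connected_components(G), key=len)
    print(f"{G.number_of_nodes()} non-isolated nodes, largest SCC: {len(largest_scc)}")
    return G
```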
The degree distribution of LN follows a power law. The effect of preferential attachment, the phenomenon that new edges tend to attach to high-degree nodes, is clearly seen in Figure 3. Ever since LN was launched, its popularity has grown steadily (Figure 1). This growth in popularity has caused the average degree to increase and the diameter to decrease over time, a “densification” phenomenon observed for a wide class of general networks in [33]. The average degree steadily increases, while the effective diameter decreases only after an initial expansion phase (Figure 2), following the densification power law (Figure 4).
**Figure 1** : LN’s increasing popularity and adoption in its first 17 months.
**Figure 2** : Average degree and effective diameter in LN, as the function of time.
**Figure 3** : Preferential attachment in LN. The higher a node’s degree, the higher the
probability that it receives a payment channel.
**Figure 4** : LN follows the Densification Power Law relation with exponent *a* = 1.55634117.
Goodness-of-fit: *R*² = 0.98.
We observe that the higher its degree, the longer a node participates in LN; see Figure 5. Additionally,
the channels adjacent to merchants have a shorter average lifetime (5198 blocks) than the average
channel lifetime (5474 blocks); see the difference of the full distribution in Figure 6. We suspect that
subsequent payments deplete the channels of the merchants, who then close these channels, collect
their funds, and open new channels.
**Figure 5** : Node lifetime distribution in days, separately for four node degree groups.
**Figure 6** : Channel lifetime distribution of merchants and others (merchant average: 5198; overall
average: 5474).
We observe strong central point dominance in LN ( Figure 7), which indicates that LN is more
centralized than a Barabási-Albert or an Erdős-Rényi graph of equal size. This is in line with the predictions of [20] [21], affirming that PCNs tend to form a star graph-like topology to achieve Nash
equilibrium.
**Figure 7** : Central Point Dominance of LN as the function of time, compared to that of an Erdős-Rényi (ER) and a Barabási-Albert (BA) graph of equal size at the given time.
Counterintuitively, LN also exhibits high transitivity, also known as the global clustering coefficient; see
Figure 8. One would expect that nodes have no incentive to close triangles, as they might as well just
route payments along already existing payment channels. However, we observe that the vast majority
(68.76%) of all created payment channels connect nodes only 1 hop (distance 2) away from each other;
see Figure 9. We believe that in most cases this is caused by replacing depleted payment channels. The
high transitivity in LN is especially striking when it is compared to other social graphs. LN has roughly
the same clustering coefficient as the YouTube social network [34].
**Figure 8** : Transitivity of LN, compared to that of an Erdős-Rényi (ER) and a Barabási-Albert (BA)
graph of equal size at the given time.
**Figure 9** : The distance of LN nodes in the network at the time before a payment channel is
established between them, shown separately for all nodes and for merchants only. If nodes were
in different connected components before establishing a payment channel between them, then
we define their distance as ∞.
### **4. Lightning Network Traffic Simulator**
In this section, we introduce our main contribution, the LN Traffic Simulator, which we designed for
daily routing income and traffic estimation of network entities. Simulation is necessary to analyze the
fine-grained structure, since the key concept of LN is privacy: data will never include transaction
amounts, sources, and targets in any form, and it is very unlikely that it will give information on the
capacity distribution over the channels, since that would leak information on the actual transactions.
Hence we need a simulator to understand the capabilities and limitations of the network to route
transactions.
By simulating transactions at different traffic volumes and transaction amounts, we shed light on the
fee pricing policies of major router entities as well as on privacy considerations, as we will describe in
Section 5 through Section 7.
In our simulator, we make the assumption that the sender nodes always choose the cheapest route to
execute their transactions. Due to the source routing nature of LN, nodes are expected to possess the
knowledge of network structure and current transaction fees to make price-optimal decisions. Note
that in the LN client, [5] the source node selects the routing for their transactions. For example, the
sender node may choose the shortest instead of the cheapest path to the target if speed is more
important than the transaction cost, and our simulator can be modified accordingly.
The main goal of our traffic simulator is to generate a certain number of transactions, given as an input
parameter, by using only the information on the edges and their capacities in a given LN snapshot. To
generate transaction sources and targets, we predefine the fraction of the transactions that lead to
merchants based on the assumption that the majority of the transactions correspond to money spent at
shops and service providers. We fix the amount as constant to reduce the complexity of the simulation
model.
We acknowledge that using constant payment amounts is a strong assumption. One could consider
various distributions such as Pareto, power law, or Poisson, as in previous works [23]. However,
assumptions on the distributions as well as their parameter settings greatly increase the complexity of
the experimentation, and cannot be empirically validated, since payment values are not public. We
found the need to model correlations between payment amounts and node sizes and roles particularly
troublesome. We note that constant amounts can also capture larger values through repeated
payments from the same node. Finally, whenever an entity obtains reliable estimates of the
payment value distribution, it can conduct the corresponding experiments with our open-source
simulator.
Formally, we use the following notation:
*G*, a daily graph snapshot of the LN with channels represented by pairs of edges in both directions;
disabled directions and edges with too low capacity are excluded;
*M*, the set of merchant nodes defined in Section 3.2;
*τ*, the number of random transactions to sample;
*α*, the (constant) value of each transaction, in Satoshis; [6]
*ϵ*, the ratio of merchants in the endpoints of the random transactions.
The available data only includes the total channel capacity but not its distribution between the
endpoints. Thus, before simulation we randomly initialize the capacity between the channel
endpoints. For example, if Γ is the total capacity of the channel between nodes *u* and *v*, we let
0 ≤ *γ*(*uv*) ≤ Γ and 0 ≤ *γ*(*vu*) ≤ Γ denote the maximum value in Satoshis that can be routed from
*u* to *v* and vice versa. Both *γ*(*uv*) and *γ*(*vu*) change after each transaction that uses this channel while
maintaining *γ*(*uv*) + *γ*(*vu*) = Γ at all times.

If an edge has capacity less than *α* in a direction (that is, *γ*(*uv*) < *α*), the edge direction *uv* is depleted.
In the simulation, a depleted edge *uv* cannot be used before a payment is made in the opposite
direction *vu*, in which case *γ*(*uv*) ≥ *α* will hold again. Optionally, in Section 6, we will also investigate the
effect of removing this constraint and allowing the simulation to use an edge direction without limits.
We also note that routers can rebalance payment channels without closing and reopening existing ones
by finding cycles that contain a depleted channel and routing funds along a circular payment path [19];
however, this option is not implemented in the current version of our simulator.
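To make the capacity model above concrete, here is a minimal Python sketch; it is our own illustration with hypothetical names (`init_channel_balances`, `is_depleted`), not code taken from the released simulator:

```python
import random

def init_channel_balances(channels, seed=None):
    """Randomly split each channel's public total capacity Gamma between
    its two endpoints, so gamma[(u, v)] + gamma[(v, u)] == Gamma."""
    rng = random.Random(seed)
    gamma = {}
    for (u, v), total in channels.items():
        share = rng.uniform(0, total)
        gamma[(u, v)] = share
        gamma[(v, u)] = total - share
    return gamma

def is_depleted(gamma, u, v, alpha):
    """An edge direction uv is depleted if it cannot route a payment of alpha."""
    return gamma[(u, v)] < alpha
```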
We start the simulation by sampling *τ* transactions, each of amount *α*. First we select *τ* senders
uniformly at random from all nodes. Recipients are selected by putting emphasis on the merchants *M*: we
choose *ϵ* ⋅ *τ* merchants with probability proportional to their degree, in addition to (1 − *ϵ*) ⋅ *τ*
recipients that are selected uniformly at random from all nodes, including both merchants and non-merchants.
Finally, we randomly match senders and recipients.
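The sampling step can be sketched as follows; this is a simplified illustration under the stated assumptions (hypothetical names; `degree` maps each merchant to its node degree), not the simulator's actual code:

```python
import random

def sample_transactions(nodes, merchants, degree, tau, epsilon, rng=random):
    """Sample tau (sender, recipient) pairs: senders uniform over all nodes;
    epsilon*tau recipients are merchants drawn proportionally to degree,
    the remaining (1-epsilon)*tau recipients are uniform over all nodes."""
    senders = [rng.choice(nodes) for _ in range(tau)]
    k = int(epsilon * tau)
    recipients = rng.choices(merchants,
                             weights=[degree[m] for m in merchants], k=k)
    recipients += [rng.choice(nodes) for _ in range(tau - k)]
    rng.shuffle(recipients)  # random matching of senders to recipients
    return list(zip(senders, recipients))
```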
Given the transactions, we are ready to simulate traffic by finding the cheapest path
*P* = (*s* = *u*_0, *u*_1, *u*_2, …, *u*_k = *t*) from sender *s* to recipient *t*, subject to the capacity constraint
*γ*(*u*_i *u*_{i+1}) ≥ *α* for *i* = 0, …, *k* − 1. Then, node statistics (e.g., routing income, number of routed
transactions) are updated for each intermediary node {*u*_1, *u*_2, …, *u*_{k−1}} with respect to the latest
transaction. Finally, for *i* = 0, …, *k* − 1 the value of *γ*(*u*_i *u*_{i+1}) is decreased while *γ*(*u*_{i+1} *u*_i) is
increased by the transaction amount *α* in order to keep the available capacities up to date. As we
work with daily graph snapshots, the simulation mimics the daily traffic on LN.
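A routing step in this spirit could look like the following sketch, which assumes a networkx directed graph of channel directions and a hypothetical `fees[(u, v)] = (baseFee, feeRate)` map; it is an illustration, not the released implementation:

```python
import networkx as nx

def route_payment(G, gamma, fees, s, t, alpha):
    """Route a payment of alpha from s to t on the cheapest feasible path.
    An edge (u, v) is usable only if gamma[(u, v)] >= alpha; its cost is
    baseFee + feeRate * alpha. Returns the intermediaries, or None on failure."""
    H = nx.DiGraph()
    for u, v in G.edges():
        if gamma[(u, v)] >= alpha:
            base, rate = fees[(u, v)]
            H.add_edge(u, v, weight=base + rate * alpha)
    try:
        path = nx.shortest_path(H, s, t, weight="weight")
    except (nx.NetworkXNoPath, nx.NodeNotFound):
        return None
    for u, v in zip(path, path[1:]):   # keep directional balances up to date
        gamma[(u, v)] -= alpha
        gamma[(v, u)] += alpha
    return path[1:-1]                  # intermediary nodes
```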
The simulated routing income of a node arises as the sum of the payment costs of its inbound
channels. The cost of a payment can be obtained by substituting txValue = *α* into the transaction fee
Equation 1; we obtain the transaction fee of an edge as baseFee + feeRate ⋅ *α*. We note that in this
work we give no estimate of the cost of opening channels; instead, we stop using depleted edges
until a payment in the opposite direction reactivates them. We will assess the effect of channel
depletion on routing income in Section 6, where we will allow the simulation to use an edge direction
without capacity limits.
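Accordingly, per-node routing income can be accumulated as in this short sketch (our illustration with hypothetical names; the attribution of each fee to the inbound edge of the forwarding node follows the description above):

```python
def edge_fee(base_fee, fee_rate, alpha):
    """Transaction fee of one edge: Equation 1 with txValue = alpha."""
    return base_fee + fee_rate * alpha

def routing_income(paths, fees, alpha):
    """Sum, per node, the fees earned on inbound channels as an intermediary."""
    income = {}
    for path in paths:
        # path = [s, u_1, ..., u_{k-1}, t]; u_1, ..., u_{k-1} are intermediaries
        for u, v in zip(path[:-1], path[1:-1]):
            base, rate = fees[(u, v)]
            income[v] = income.get(v, 0.0) + edge_fee(base, rate, alpha)
    return income
```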
Due to several random factors in the simulation, including source and target sampling and capacity
distribution initialization, we run the traffic simulator ten times. We use 40 consecutive daily
snapshots in our data. We always report the mean node statistics (e.g., node routing income, daily
traffic) of LN entities over our sets of 400 simulations for each parameter setting.
#### 4.1 Feasibility Validation and Choice of Parameters
We validate our simulation model by comparing published information with our estimates for the
income and traffic of the most relevant LN router entities. These nodes are responsible for keeping the
network operational by routing most of the transactions. Our key source of information is the blog
post [11] on [LNBIG](https://lnbig.com/#/), the most relevant routing entity, which owns several nodes on LN as well as
approximately half of the total network capacity:

In a typical day, LNBIG serves 200–300 transactions through all of its nodes, rarely exceeding 600 in
a single day.
On routing commissions, LNBIG earns 5000–10 000 Satoshis per day.

We managed to reproduce daily traffic and routing income similar to LNBIG's by sampling *τ* = 5 000
transactions with *α* = 60 000 Satoshis (approximately US $5) and merchant ratio *ϵ* = 0.8. The
estimated revenue, as a function of the parameters, is shown in Figure 10, together with the target
daily income and traffic ranges stated by [LNBIG](https://lnbig.com/#/) [11].
**Figure 10** : Mean estimated routing income and number of routed payments of the [LNBIG](https://lnbig.com/#/) entity with
respect to traffic simulator parameters. The default parameter setting (daily transaction count *τ* = 5000,
single transaction amount *α* = 60 000 Satoshis, and merchant endpoint ratio *ϵ* = 0.8) is marked by vertical
black dotted lines. The daily income and traffic ranges stated by LNBIG [11] are marked by horizontal
red dashed lines.
To summarize, simulating a few thousand micro-payments with mostly merchant recipients resulted
in traffic and revenue similar to those described for the nodes of [LNBIG](https://lnbig.com/#/). We choose *τ* = 5000, *α* = 60 000,
and *ϵ* = 0.8 as the default parameters of our traffic simulator in order to draw conclusions on LN
node profitability and transaction privacy in Section 5 through Section 7.
#### 4.2 Traffic Simulator Response to Parameter Changes
Next we examine the stability of our traffic simulator for different ratios *ϵ* of merchant endpoints. We
note that the set of transaction recipients can be sampled uniformly at random by choosing *ϵ* = 0.0,
while in the case *ϵ* = 1.0, every sampled transaction has a merchant endpoint. Thus, by increasing the
value of *ϵ*, the traffic can be centralized towards LN service providers. As determined in the previous
subsection, we set the remaining parameters *τ* = 5 000 and *α* = 60 000.
Our goal is to observe stable traffic characteristics throughout a sequence of days, measured as the
correlation of node statistics across days. Towards this end, we measure the following node level
summaries of the simulated traffic every day:
Routing traffic: the number of transactions that are forwarded by a given node;
Routing income: the sum of all transaction fees that a given node charges for payment routing;
Sender traffic: the number of transactions that are initiated by a given node;
Sender fee: the sum of all transaction fees that a given node has to pay for its transactions to be
forwarded by intermediary nodes.
In Figure 11, the Spearman, Kendall, and unweighted and weighted Kendall-tau correlations of routing
traffic and income are shown for *ϵ* = 0.0,0.2,0.5,0.8, and 1.0. For the definitions, see [35].
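Assuming SciPy's rank-correlation routines (`spearmanr`, `kendalltau`, and `weightedtau`; the paper's exact weighted variant follows [35]), such day-to-day correlations can be computed along these lines; a sketch with hypothetical names, not the evaluation code used for the figures:

```python
import numpy as np
from scipy import stats

def cross_day_correlations(stat_day1, stat_day2):
    """Correlate one node statistic (e.g., routing income) across two days.
    Inputs map node -> value; nodes missing on a day count as zero."""
    nodes = sorted(set(stat_day1) | set(stat_day2))
    x = np.array([stat_day1.get(n, 0.0) for n in nodes])
    y = np.array([stat_day2.get(n, 0.0) for n in nodes])
    return (stats.spearmanr(x, y).correlation,
            stats.kendalltau(x, y).correlation,
            stats.weightedtau(x, y).correlation)  # emphasizes high-rank nodes
```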
**Figure 11** : Correlation of simulated daily node routing traffic ( **top three** ) and income ( **bottom three** ) with respect to different
ratios *ϵ* of merchants among transaction endpoints.

We observe high weighted Kendall-tau correlation, which means that the set of nodes with the highest
routing income and traffic is very similar regardless of the ratio *ϵ* of merchants among transaction
recipients.
By contrast, we observe low values of (unweighted) Kendall-tau. Since the set of nodes is dominated
by low-traffic ones, the Kendall-tau value also depends mostly on the simulated traffic amount of
these nodes. Hence, low Kendall-tau implies that nodes with low traffic and income fluctuate as
transaction endpoints are selected at random. Most of these nodes probably have no traffic when
transactions are centralized towards service providers ( *ϵ* = 1.0).
In Figure 12, we assess the stability of the simulation by showing the mean correlation of four different
node statistics over 10 independent simulations for each snapshot. Two of the statistics, routing
income and routing traffic, show high correlation for all values of *ϵ*, which means that nodes with high
daily routing income and traffic are stable across independent experiments. By contrast, sender
transaction fees and especially sender traffic vary highly, which is a natural consequence of the uniform
random sampling for source selection. By our measurements, the ratio *ϵ* only affects the sender
transaction fee. By increasing the value of *ϵ*, more and more transactions are centralized towards
merchants. Thus, sender nodes pay the transaction fees to more or less the same set of intermediary
nodes, which results in higher sender transaction fee correlations.
**Figure 12** : Mean Spearman, unweighted and weighted Kendall-tau cross correlation of node statistics over the 10 independent
simulations with respect to the ratio of merchants as transaction endpoints ( *ϵ* ∈ {0.0, 0.5, 0.8, 1.0}).
Finally, we compare our simulated routing income with simple estimates based on the properties of
the nodes in LN as a graph. In a YouTube video, Pickhardt [28] shows that the routing income of a node is
proportional to its betweenness centrality in the case where the payment probability between any node pair is
the same. In Figure 13, we observe that our simulated routing income with parameters *α* = 60 000,
*τ* = 5000, *ϵ* ∈ {0.0, 0.2, 0.4, 0.6, 0.8, 1.0} is well correlated with the betweenness centrality of a
node. However, the Spearman correlation decreases for larger *ϵ*, which means that when payment
endpoints are biased towards merchants, we need a more accurate estimation method. In Figure 14,
we show two more node statistics, degree and total node capacity, both correlating much more weakly with
the simulated routing income than betweenness centrality.
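The comparison in Figure 13 can be reproduced in spirit with a few lines; this sketch uses networkx's (optionally sampled) betweenness centrality and SciPy's Spearman correlation, and is our own illustration rather than the original analysis script:

```python
import networkx as nx
from scipy import stats

def betweenness_income_correlation(G, income):
    """Spearman correlation between node betweenness centrality and the
    simulated daily routing income (k pivots approximate betweenness)."""
    bc = nx.betweenness_centrality(G, k=min(500, G.number_of_nodes()))
    nodes = list(G.nodes())
    return stats.spearmanr([bc[n] for n in nodes],
                           [income.get(n, 0.0) for n in nodes]).correlation
```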
**Figure 13** : Spearman correlation of predicted daily routing income (or traffic) and betweenness
centrality of LN nodes. The correlation decreases for high simulated merchant ratio *ϵ* .

**Figure 14** : Spearman correlation of predicted daily routing income and graph centrality measures with regard to the
merchant ratio *ϵ* among payment endpoints.
In summary, the set of nodes with high routing income and traffic is consistent across independent
simulations regardless of the ratio of merchants among sampled transaction endpoints, while
randomization naturally has a big influence on the low-traffic end of the network. The low-traffic end
can be estimated by incorporating the role of a node in the simulation, as we do in a very simple way
by controlling traffic towards merchants with the parameter *ϵ* .
### **5. Transaction Fee Competition**
Our first analysis addresses the observed and potential profitability of LN, which is questioned in
several blog posts [12], [11]. A core value proposition of LN is that Bitcoin users can execute payments
with negligible transaction fees. This feature may be cherished by payment initiators, but in the case of
insufficient network traffic, it can be unprofitable for router entities.

Our goal is to assess how transaction costs depend on topology and to what extent they are subject to
competition. To measure transaction fee price competition, we use our traffic simulator to estimate
daily node routing income and traffic volume for the 40 consecutive LN snapshots in our data. Our
findings on how routing revenue depends on transaction fees show a shape similar to that measured
for BitMEX, a single LN node [12].
We use the parameters of the simulator that we calibrated based on published information on the
income of certain nodes [11] in Section 4.1. Our analysis in this section confirms that transaction fees
are indeed very low, and they are potentially underpriced for relevant router nodes.
To analyze the competition that a node *x* faces in the network, we compare the simulated traffic in a
daily LN snapshot *G* and in the graph *G*_*x* that we obtain by removing node *x* from *G*. By attempting to
route the same set of *τ* transactions on *G* and *G*_*x*, we first measure the number of failed
payments *φ*(*x*) that were originally routed through *x* but are incapable of reaching their destination
when *x* is out of service. For each node *x*, the failure ratio of individual node traffic is *φ*(*x*)/*τ*(*x*), where *τ*(*x*)
denotes the number of transactions through *x* in the original simulation.
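This removal experiment can be sketched as follows; a simplified illustration (hypothetical `route(G, s, t)` helper returning a node list or None, Python 3.8+, and capacity updates between payments are ignored here), not the simulator's actual code:

```python
def failure_ratio(G, transactions, x, route):
    """phi(x) / tau(x): the fraction of x's forwarded traffic that cannot
    reach its destination once node x is removed from the snapshot."""
    through = [(s, t) for (s, t) in transactions
               if (p := route(G, s, t)) is not None and x in p[1:-1]]
    if not through:
        return 0.0
    H = G.copy()
    H.remove_node(x)
    failed = sum(1 for (s, t) in through if route(H, s, t) is None)
    return failed / len(through)
```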
In Figure 15, we show the average ratio of the traffic of a node that has no alternate routing path, for
five income groups defined as the top 1–10, 11–20, 21–50, 50–100, and 101– router nodes with the highest
simulated income. For each group, the average is taken over its nodes *x*, considering the fraction of
transactions *φ*(*x*)/*τ*(*x*) that cannot be routed anymore after removing *x*. It is interesting to observe that for
the first three groups, the average ratio of traffic with no alternate path is at least 0.3. This means that
even if the 100 routers with the highest simulated traffic increased their transaction fees close to on-chain
fees, the majority of payment sources would have no less expensive option to route their payments.
**Figure 15** : The average failure ratio of individual node traffic for five income groups
defined as the top 1−10, 11−20, 21−50, 50−100, and 101− router nodes with
highest simulated income.
In the next experiment, we estimate the extent to which transaction prices are potentially limited by
the competition among alternate routes in LN. We take a highly pessimistic view by assuming that a
transaction that can only be routed by relying on an intermediary node *x* will select a payment method
outside LN immediately if *x* increases its transaction fees. For other transactions, we search for the
next cheapest route that avoids *x* and assume that *x* could increase its fees to match the second
cheapest option. In other words, our analysis ignores the failed transactions *φ*(*x*) and is based on the
remaining *τ*(*x*) − *φ*(*x*) transactions for which payment routing avoiding node *x* is available. For each of these
transactions, the difference *δ* of the total fee can be calculated from the fees of the original path in *G*
and the alternative route in *G*_*x*.

Our assumption is that if node *x* increases its base fee by *β*, transactions with *δ* ≥ *β* are still willing to
pay the additional cost, while for *δ* < *β*, payments will be routed on the cheaper alternative path,
where *δ* is the fee difference to the cheapest path avoiding *x*. Thus, by observing the outcome at different
thresholds *β* ≥ 0, we propose an optimal base fee increment *β*∗ for each router node.
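Under these assumptions, the optimal increment maximizes *β* times the number of transactions that tolerate it; a minimal sketch (hypothetical names; `deltas` holds the per-transaction fee differences *δ* for one router):

```python
def optimal_base_fee_increment(deltas):
    """Pick beta* maximizing beta * #{transactions with delta >= beta};
    transactions with delta < beta defect to the cheaper alternate path."""
    best_beta, best_gain = 0, 0
    for beta in sorted(set(deltas)):
        gain = beta * sum(1 for d in deltas if d >= beta)
        if gain > best_gain:
            best_beta, best_gain = beta, gain
    return best_beta, best_gain
```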
We estimate the optimal fee increase *β*∗ for each node over multiple snapshots and independent
simulations. For the five node income groups that we previously defined in Figure 15, we show the
average optimal base fee increment as well as the corresponding routing income gain in Figure 16.
**Figure 16** : The maximal possible base fee increment (*β*∗, **left** ), and the corresponding income gain ( **right** ) in Satoshis, given the
price competition assumptions in Section 3.3. Income groups are defined as the top 1−10, 11−20, 21−50, 50−100, and 101−
router nodes with highest simulated income.
The transaction fee data shows that the current LN fee market is still immature, as the majority of all
channels apply the default base fee (1 SAT) and fee rate (10^−6 SAT), while the capacities are usually set
higher than the default value (100 000 SAT) of the lnd client, see Figure 17.

**Figure 17** : Distribution of channel capacities ( **left** ), base fees ( **center** ), and fee rates ( **right** ) with regard to their default values in
the lnd client (100 000 SAT, 1 SAT, and 10^−6 SAT), respectively.
In our measurements, we find that nodes with high routing income could still increase their base fee
by a few hundred Satoshis, thus generating an average gain of more than 10 000 Satoshis in their daily
income. Despite the low gain, we expect that it could become orders of magnitude higher if router
nodes increased their base fees in succession, which could have a major impact on the competition for
transaction costs.
### **6. Profitability Estimation of Central Routers**
Router entities are an essential part of LN. They are responsible for keeping the network operational
by forwarding payments. In this section, we estimate the current routing revenue of these central
nodes, and give predictions of how their income will change if the traffic over the current network
increases. Note that our technique can also be used by node owners to predict the effect of opening
and closing channels as well as of changing capacities and transaction fees.
Central routing nodes bind a huge amount of financial resources in the form of channel
capacity, which enables them to serve high volumes of traffic. In general, a router entity consists of a
single node, but some entities operate multiple LN nodes. For example, [LNBIG](https://lnbig.com/#/) owns 25 nodes in our
dataset. One of our main motivations was to estimate the annual return on investment (RoI) for
entities by simulating daily traffic over several snapshots. In our measurements we calculate annual
RoI as follows:
RoI = (estimated daily routing income in Satoshis × 365) / (total amount of Satoshis bound by channel capacities). (2)
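In code form, with the "economical fee" of Table 1 computed under our reading that the per-payment fee scales proportionally while daily traffic stays constant (an assumption on our part, not a formula stated by the data source):

```python
def annual_roi(daily_income_sat, total_capacity_sat):
    """Annual return on investment, Equation 2."""
    return daily_income_sat * 365 / total_capacity_sat

def economical_fee(avg_fee_sat, current_roi, target_roi=0.05):
    """Average per-payment fee needed to reach the target annual RoI,
    assuming the daily number of routed payments stays constant."""
    return avg_fee_sat * target_roi / current_roi
```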
By simulating traffic with parameters *τ* = 5000, *α* = 60 000, and *ϵ* = 0.8, we estimated the daily
average income and traffic for each router. From these statistics and additional entity capacity data
[downloaded from 1ML, we estimate annual RoI in Table 1. We present all router entities with at least](https://1ml.com/)
50 Satoshis of simulated income and 10 forwarded transactions per day on average. For each of these
nodes, the following statistics are presented:
[Entity capacity as downloaded from 1ML. Capacity fraction is the fraction of entity capacity and total](https://1ml.com/)
[network capacity. Remarkably, half of the total network capacity is bound by the nodes of LNBIG.](https://lnbig.com/#/)
Average transaction fee, daily income, and daily traffic, based on the simulated mean cost in Satoshis
that a given entity charges for each payment routed over its channels during the observed 40
snapshots, in ten random simulations, as explained in Section 3.3.
Annual RoI calculated from simulated daily income and entity capacity by Equation 2.
Economical fee in Satoshis is the amount required on average to reach an annual 5% RoI. Fee ratio is
the ratio of the economical and the actual transaction fees. Higher values mean lower profitability.
Three columns show the rank of the nodes in decreasing order of annual RoI, total fee, and traffic.
**Table 1** : Estimated daily income, traffic, and annual RoI for relevant router entities. Columns are explained in Section 6. Note that
on-chain transaction fees for a regular transaction (2 inputs, 2 outputs) are currently in the range of 1000–2000 Satoshis.
Based on our findings, the annual RoI is way below 5% for almost all relevant entities. Only
[rompert.com](https://rompert.com/) achieved a comparable annual RoI (3.45%), and it indeed applies orders of
magnitude higher fees than the others. It is interesting to see that despite its high transaction fees, it has
the highest daily traffic in the simulation. Note that rompert.com applies base fees close to on-chain
fees, which may invalidate the assumptions of our simulator if participants fall back to on-chain payments rather
than paying rompert.com routing fees.
Compared to the most profitable node [rompert.com](https://rompert.com/), the total estimated traffic of [LNBIG](https://lnbig.com/#/) through its 25
nodes is only one third. The main reason behind the low annual RoI is low transaction fees. Table 1 shows
that for forwarding *α* = 60 000 Satoshis, most of these entities ask for less than 100 Satoshis, which is
less than 0.2% of the payment value. Very low fees may uphold LN's core value proposition, but they
are economically irrational for the central routers holding the network together. Based on our
simulations, for several routers (e.g., [LNBIG](https://lnbig.com/#/), Y'alls, ln1.satoshilabs.com, etc.), fees should be in the
range of a few thousand Satoshis to reach a 5% annual RoI, which is approximately the magnitude of
on-chain transaction fees (1000–2000 Satoshis). [7]
[Capacity overprovisioning also causes low RoI. For example, extremely large LNBIG capacities result in](https://lnbig.com/#/)
low RoI, despite the reasonable daily income reported. By using our traffic simulator, we observed that
the router entities of Table 1 can increase their RoI by reducing their channel capacities. For each of
these routers, we estimated the changes in revenue ( Figure 18) and RoI ( Figure 19), after reducing all
of its edge capacities to 50, 10, 5, 1, 0.5, 0.1% of the original value, with the assumption that all other
routers keep their capacities. In our measurements, [LNBIG](https://lnbig.com/#/) can significantly improve its RoI by
binding only 1% of its original capacity values. In Table 2, we compute the estimated optimal RoI for
the central routers.
**Table 2** : Estimated optimal channel capacity reduction for maximal RoI of the routers of Table 1. Capacity fraction is the estimated
optimal fraction of the original channel capacities, and income fraction is the estimated fraction of the original income when using
the reduced channel capacities.
**Figure 18** : The remaining fraction of the original estimated daily routing income, after reducing node capacities to the given
fractions.
**Figure 19** : RoI gain after reducing node capacities to the given fractions.
To estimate whether routers can be more profitable with an increase in traffic volume or transaction
values, we ran simulations with different values of *τ* and *α*, and measured the fraction of unsuccessful
payments as well as the average length of completed payment paths.
First we vary the transaction value *α* with a fixed number of daily transactions *τ* = 5000. In Figure 20
and Figure 21, we present statistics for ten central entities based on their service profiles. For example,
[ZigZag is a cryptocurrency exchange service, while ACINQ provides solutions for Bitcoin scalability.](https://zigzag.io/#/)
Additional entity profiles can be found in Table 3. In Figure 20, the income for most of the nodes
[significantly increases with transaction value, while this effect is almost negligible for rompert.com,](https://rompert.com/)
[LightningPowerUsers.com, and 1ML node ALPHA, whose behavior can be explained by charging](https://1ml.com/)
almost only a base fee and applying a fee rate close to zero.
**Table 3** : LN network entities with related service profiles.
**Figure 20** : Average simulated daily routing income of some LN router entities as the function of the transaction value *α* .
The simulated amount of daily traffic for the ten central nodes is shown in Figure 21. We observe that
the scalability and capacity providers [LightningTo.Me](https://lightningto.me/), LightningPowerUsers.com, and [1ML](https://1ml.com/) node ALPHA
are responsible for forwarding a significant number of payments irrespective of *α*. Probably due to the
lack of high-capacity channels, the traffic of [rompert.com](https://rompert.com/) and 1ML node ALPHA drops at *α* = 500 000
Satoshis (≈ US $41). By contrast, the number of payments routed by [LNBIG](https://lnbig.com/#/) increases with the payment
value, owing to the fact that this entity owns approximately half of all network capacity, as seen in Table 1.
In Figure 22, we provide an efficiency metric for each entity by dividing estimated income by traffic
volume. The efficiency of [rompert.com](https://rompert.com/) and LNBIG is surpassed by ZigZag and Y'alls for *α* ≥ 60 000
Satoshis, as these service providers have reasonable routing income relative to the number of daily
forwarded transactions. On the other hand, [LightningPowerUsers.com](http://lightningpowerusers.com/), 1ML node ALPHA, and
[LightningTo.Me](https://lightningto.me/) have orders of magnitude lower efficiency than other relevant entities. They are likely
not considering routing profitability, as their transaction fees are negligible.
**Figure 21** : Average simulated daily routing traffic of some LN router entities as the function of the transaction value *α* .
**Figure 22** : Average simulated daily routing income per transaction for some LN router entities as the function of the transaction
value *α* .
Next, we estimate the effect of channel depletion, which can be a side effect of increasing the traffic
without increasing channel capacities. In a highly simplistic experiment, we compare traffic with
simulated channel depletion to the case when we allow the simulator to use channel directions
without limits. We take depletion into account by suspending depleted channels until a reverse
payment reopens them. At the top of Figure 23, we show the routing income estimate with depletion
taken into account for the top ten router nodes, as a function of *τ*. At the bottom of Figure 23,
we show the ratio of the routing income with and without depletion taken into account. At first glance,
it is surprising that the fraction is above 1 for most of the router nodes. To explain this, observe that
channels with low routing fees are used and depleted first, and these channels lose revenue
compared to the optimistic case. However, if there is an alternate routing path with more expensive
transaction fees, the owners of these channels observe an increase in revenue due to the depletion
of the low-cost channels.
**Figure 23** : Average simulated daily routing income ( **top** ) and the income divided by the optimistic income when channel depletion
is ignored ( **bottom** ) for some LN router entities as the function of the simulated transaction count *τ* . Note that the ratio is above 1
for most nodes as they can take over routing for depleted channels.
As we simulate more traffic or execute more expensive payments, both the fraction of unsuccessful
payments and the average length of completed payment paths increase, as we show in Figure 24.
Transactions can fail in the simulation when there is no path from the source to the recipient such that
all channels along it have at least *α* available capacity. If *α* is too high, then only a fraction of all channels can
be used for payment routing, while in the case of an extremely large number of transactions, the
available capacity of several channel directions becomes depleted. For example, channels leading to
popular merchants could become blocked in case of heavy one-directional traffic. The growth in
completed payment path length is in agreement with this scenario.
In Figure 24, we also observe that lower payment amounts do not significantly decrease the probability
of a payment being successfully routed. Hence, we do not expect that Atomic Multi-path payments
(AMP), [8] which allow a sender to atomically split a payment flow amongst several individual payment
flows, can significantly increase the success rate of the transactions.
**Figure 24** : Fraction of failed transactions **(left)** and average length of completed payment paths **(right)** with respect to the
simulated transaction value α and the number of sampled transactions *τ* .
A final relevant metric is the number of payments that fail if the given entity becomes unavailable. In
Figure 25, we show the fraction of unsuccessful payments after removing the given entity. For
[example, after removing the 25 nodes of LNBIG from LN, the rate of failed transactions increases to](https://lnbig.com/#/)
0.417 from the original level of 0.382. Recall from Section 3.2 that a large fraction of the payments
cannot be routed, since several nodes have only disabled or no outbound channels with capacity over
the simulated payment value *α* .
**Figure 25** : The fraction of incomplete payments, out of the simulated *τ* = 5000 transactions, after removing the given entity from
LN. The original fraction of failed transactions 0.3823 is marked by the dashed line.
In this section, we estimated the income of the central router nodes under various settings. Although
our experiments confirm that, at the present structure and level of usage, participation is not
economical for most routing nodes, we also foresee the potential for LN to make routing profitable with small
adjustments in pricing and capacity policies if the traffic volume increases.
### **7. Payment Privacy**
While LN is often considered a privacy solution for Bitcoin, as it does not record every transaction in a
public ledger, the fundamentally different privacy implications of LN are often misunderstood [8], [22].
LN provides little to no privacy for single-hop payments, since the single intermediary can de-anonymize
both sender and receiver. In this sense, the privacy guarantees of LN payment routing are
quite similar in spirit to those of Tor.

Although the intermediary knows the sender and receiver if it knows that the payment is single-hop,
the onion routing technique [10] used in LN provides a weaker notion of privacy called plausible
deniability. By onion routing, an intermediary has no information on its position in the path and the
sender node can claim that the payment was routed from one of its neighbors.
We remark that plausible deniability is also achieved for on-chain transactions by coin mixing
techniques. In wallets supporting coin mixing, one can regularly observe privacy-enhanced
transactions with large anonymity sets, where the identity of a sender is hidden by mixing with as
many as 100 other transaction senders [36]. Hence, for LN to provide privacy guarantees stronger than
those of on-chain transactions, offering plausible deniability in itself can be insufficient.
Next we assess the strength of privacy for simulated LN payments. By our discussion, high node
degrees and long payment paths are essential for privacy. First, payments from low-degree nodes
are vulnerable, as the set of immediate predecessors or successors is too small and can allow privacy
attacks, for example, by investigating possible channel balances. Second, the majority of payments
should be long, otherwise an intermediary has strong statistical evidence for the source or the
destination of a large number of its routed payments.
In Figure 26, we plot the fraction of nodes with sufficiently high degree to plausibly hide their payments
as originating from one of their neighbors. We observe that half of the nodes have five or fewer
neighbors, which makes their transactions vulnerable to attacks based on information either directly
obtained from their neighbors or inferred by investigating channel capacities. Furthermore,
privacy guarantees worsen as the value of the payment increases, since payment channels with
capacity less than the payment value can be excluded from the set of payment source candidates.
**Figure 26** : The probability that a node has more channels with at least the given capacity than the degree threshold. Observe that
larger payment amounts increase the risk of yielding more statistical evidence for tracing the source or destination of a payment.
**Figure 27** : Plausible deniability in LN. Alice can plausibly deny being the source of a payment. Similarly, the router cannot be sure
whether Bob is the recipient of the payment or one of Bob's neighbors.
Next, we investigate the possible length of payment paths and the tradeoff between length and cost.
Note that the source has control over the payment path; hence it can deliberately select long paths to
maintain its privacy, although this can result in increased costs.
The topological properties of LN, namely, its small-world nature, allow for very short payment path
lengths. The average shortest path length of LN is around 2.8 [25], meaning that most payment routes
involve one or two intermediaries. This phenomenon is further exacerbated by the client software,
which prefers choosing shortest paths, [9] resulting in a considerable fraction of single-hop transactions.
However, we note that newer advancements in LN client software, e.g., c-lightning, incorporate
solutions to decrease the portion of single-hop payments. [10]
Loosely connecting to merchants and paying them only via routing facilitated by intermediaries is
advantageous not just for privacy considerations but also for reducing the required number of
payment channels, and thus limiting the amount that needs to be committed. By contrast, our
measurements in Figure 9 show that nodes seem to prefer opening direct links to other nodes and
especially to merchant nodes. The figure is obtained by computing the shortest path length between *u*
and for each new edge *v* ( *u*, *v* ) immediately before the new edge was created. If there is no such path,
i.e., and lie in different connected components, we assign *u* *v* ∞ to the edge.
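A sketch of that computation over consecutive snapshots, assuming undirected networkx graphs (our own illustration with hypothetical names):

```python
import networkx as nx

def distances_before_new_edges(snapshots):
    """For every edge that appears between two consecutive snapshots, the
    shortest path length between its endpoints in the earlier snapshot;
    infinity when the endpoints were in different components (Figure 9)."""
    dists = []
    for prev, curr in zip(snapshots, snapshots[1:]):
        old = {frozenset(e) for e in prev.edges()}
        for u, v in curr.edges():
            if frozenset((u, v)) in old:
                continue
            try:
                dists.append(nx.shortest_path_length(prev, u, v))
            except (nx.NetworkXNoPath, nx.NodeNotFound):
                dists.append(float("inf"))
    return dists
```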
Simulations reveal that on average 16% of the payments are single-hop payments, see Figure 28. By
increasing the fraction of merchants among receivers, this fraction increases to 34%, meaning that
strong statistical evidence can be gathered on the payment source and destination through the router
node for more than one-third of the LN payments. We note that in practice, the ratio of
de-anonymizable transactions might be even larger, since payments with longer routes can also be
de-anonymized if all the router nodes belong to the same company.
**Figure 28** : Distribution of simulated path length with respect to the ratio of merchants as transaction endpoints
( *ϵ* ∈ {0.0, 0.5, 0.8, 1.0}).
In our final experiment, we estimate the payment fee increase by using longer paths in the existing
network, based on the assumption that privacy-enhanced routed payments could be achieved by
deliberately selecting longer payment routes. While paths of length more than a predefined number
can be found in polynomial time [37], the algorithm is quite complex and, in our case, needs
enhancements to use the edge costs. Hence, to simplify the experiment, we implemented a genetic
algorithm that injects additional hops into initial lowest-cost paths generated by our simulator, and
finally selects the lowest-cost path it finds for a prescribed length. In Figure 29, we observe that we can
find routing paths that only marginally increase the median cost of the transactions by selecting paths
of length up to six.
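As a simple stand-in for the genetic algorithm (which we do not reproduce here), one step of path lengthening can be sketched as a greedy cheapest-hop injection; hypothetical names, a networkx-style directed graph, and a `cost(u, v)` fee function are assumed:

```python
def inject_hop(G, path, cost, gamma, alpha):
    """Insert the cheapest extra intermediary w between some consecutive
    hops (u, v) of `path`, keeping the path simple and capacity-feasible.
    Returns the cheapest lengthened path, or None if no hop fits."""
    best, best_extra = None, float("inf")
    for i in range(len(path) - 1):
        u, v = path[i], path[i + 1]
        for w in G.successors(u):
            if w in path or not G.has_edge(w, v):
                continue
            if gamma[(u, w)] < alpha or gamma[(w, v)] < alpha:
                continue
            extra = cost(u, w) + cost(w, v) - cost(u, v)
            if extra < best_extra:
                best, best_extra = path[:i + 1] + [w] + path[i + 1:], extra
    return best
```

Applying `inject_hop` repeatedly lengthens the route one hop at a time, mirroring the idea of trading a marginal fee increase for a longer, more private path.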
**Figure 29** : Median sender costs in satoshis for fixed path length routing.
In summary, we observed the very small-world nature of LN, which is at odds with the fact that
privacy-aware payment routing requires deliberately selecting longer payment routes.
The fact that many channel openings close triangles could suggest the unreliability of payment
routing in LN. Another reason for the creation of triangle-closing payment channels can be the
possibility of injecting additional hops to preserve transaction privacy, which, by our simulation, is a
low-additional-cost solution for enhancing privacy.

Overall, we raised questions about the popular belief of the LN community that LN payments provide
better privacy than on-chain transactions. We believe that deliberately longer payment paths are
required to maintain payment privacy, and these do not drastically increase costs at the current level of
transaction fees.
### **8. Conclusion**
In this work, we analyzed the Lightning Network, Bitcoin's payment channel network, from a
network-science and cryptoeconomic point of view. Past results on the Lightning Network were unable to
analyze the fee and revenue structure, as the data on the actual payments and amounts is strictly
private. Our main contribution is an open-source LN traffic simulator that enables research on the
cryptoeconomic consequences of the network topology without requiring information on the actual
financial flow over the network. The simulator can incorporate the assumption that the payments are
mostly targeted towards the merchants identified by using the tags provided by node owners. We
validated some key parameters of the simulator, such as traffic volume and amount, by simulating the
revenue of central router nodes and comparing the results with information published by certain node
owners. By using our open source tool, we encourage node owners to build more accurate estimates of
LN properties by incorporating their private knowledge on usage patterns.
Our simulator provided us with two main insights. First, the participation of most router nodes in LN
is economically irrational with the present fee structure; however, signs of sustainability are seen with
increased overall traffic volume over the network. By contrast, at the present level of usage, if routers
start acting rationally, payment fees will rise significantly, which might harm one of LN’s core value
propositions—namely, negligible fees. Second, the topological properties of LN make a considerable
fraction of payments easily de-anonymizable. However, with the present fee structure, paths can be
obfuscated by injecting extra hops with low cost to enhance payment privacy.
We release the source code of our simulator for further research at
[https://github.com/ferencberes/LNTrafficSimulator.](https://github.com/ferencberes/LNTrafficSimulator)
#### Acknowledgements
We thank Antoine Le Calvez (Coinmetrics) and Altangent Labs for kindly providing their edge stream data
and daily graph snapshots, and Domokos M. Kelen and Rene Pickhardt for insightful discussions. We also
thank our reviewers, Christian Decker, Cyril Grunspan, and our anonymous reviewer, for their invaluable
comments. This work was supported by Project 2018-1.2.1-NKP-00008: Exploring the Mathematical Foundations of
Artificial Intelligence and the “Big Data—Momentum” grant of the Hungarian Academy of Sciences.
### **Footnotes**
1. Source: https://1ml.com
2. See https://github.com/ferencberes/LNTrafficSimulator
3. Source: https://1ml.com
4. Note that at the time of writing, atomic multipath payments (AMPs) are not implemented. AMPs would allow one to split a payment value into multiple smaller amounts and subsequently send those payments to the receiver via multiple payment paths through different intermediaries. The AMP protocol will guarantee that either all sub-payments are executed or none of them.
5. See https://github.com/lightningnetwork/lnd and https://github.com/ElementsProject/lightning
6. Each Bitcoin (BTC) is divisible to the 8th decimal place, so each BTC can be split into 100 000 000 units. Each unit of Bitcoin, or 0.00000001 Bitcoin, is called a Satoshi. A Satoshi is the smallest unit of Bitcoin, see https://satoshitobitcoin.co/
7. See https://bitcoinfees.info/
8. See https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-February/000993.html
9. Source: https://github.com/lightningnetwork/lnd/blob/40d63d5b4e317a4acca2818f4d5257271d4ac2c7/routing/pathfind.go
10. Source: https://github.com/ElementsProject/lightning/commit/d23650d2edbfe16a21d0e637e507531a60dd2ddd
### **Citations**
1. Nakamoto, S. (2008). Bitcoin: A peer-to-peer electronic cash system. Retrieved from http://bitcoin.org/bitcoin.pdf
2. Georgiadis, E. (2019). How many transactions per second can bitcoin really handle? Theoretically. IACR Cryptology ePrint Archive, 2019, 416.
3. Croman, K., Decker, C., Eyal, I., Gencer, A. E., Juels, A., Kosba, A., … Wattenhofer, R. (2016). On scaling decentralized blockchains (a position paper). In 3rd Workshop on Bitcoin and Blockchain Research.
4. Trillo, M. (2013). Stress test prepares VisaNet for the most wonderful time of the year. Retrieved from http://www.visa.com/blogarchives/us/2013/10/10/stress-testprepares-visanet-for-the-most-wonderful-time-of-the-year/index.html
5. McCorry, P., Möser, M., Shahandasti, S. F., & Hao, F. (2016). Towards bitcoin payment networks. In Australasian Conference on Information Security and Privacy (pp. 57–76). Springer.
6. Miller, A., Bentov, I., Kumaresan, R., Cordi, C., & McCorry, P. (2017). Sprites and state channels: Payment networks that go faster than lightning. arXiv preprint arXiv:1702.05812.
7. Dziembowski, S., Eckey, L., Faust, S., & Malinowski, D. (2017). PERUN: Virtual payment channels over cryptographic currencies. IACR Cryptology ePrint Archive, 2017, 635.
8. Gudgeon, L., Moreno-Sanchez, P., Roos, S., McCorry, P., & Gervais, A. (2019). SoK: Off the chain transactions. IACR Cryptology ePrint Archive, 2019, 360.
9. Poon, J., & Dryja, T. (2016). The bitcoin lightning network: Scalable off-chain instant payments. Retrieved from https://lightning.network/lightning-network-paper.pdf
10. Kate, A., & Goldberg, I. (2010). Using Sphinx to improve onion routing circuit construction. In International Conference on Financial Cryptography and Data Security (pp. 359–366). Springer.
11. Guy makes $20 a month from locking $5 million bitcoin on the lightning network. (n.d.). Retrieved from https://www.trustnodes.com/2019/08/20/guy-makes-20-a-month-for-locking-5-million-worth-of-bitcoin-on-the-lightning-network
12. BitMEX. (n.d.). The Lightning Network (Part 2) – Routing Fee Economics. Retrieved from https://blog.bitmex.com/the-lightning-network-part-2-routing-fee-economics/
13. Grunspan, C., & Pérez-Marco, R. (2018). Ant routing algorithm for the lightning network. arXiv preprint arXiv:1807.00151.
14. Möser, M., & Böhme, R. (2015). Trends, tips, tolls: A longitudinal study of bitcoin transaction fees. In International Conference on Financial Cryptography and Data Security (pp. 19–33). Springer.
15. Kaskaloglu, K. (2014). Near zero bitcoin transaction fees cannot last forever. In Proceedings of the International Conference on Digital Security and Forensics (DigitalSec2014).
16. Easley, D., O’Hara, M., & Basu, S. (2019). From mining to markets: The evolution of bitcoin transaction fees. Journal of Financial Economics.
17. Brânzei, S., Segal-Halevi, E., & Zohar, A. (2017). How to charge lightning. arXiv preprint arXiv:1712.10222.
18. Khan, N., et al. (2019). Lightning network: A comparative review of transaction fees and data analysis. In International Congress on Blockchain and Applications (pp. 11–18). Springer.
19. Khalil, R., & Gervais, A. (2017). Revive: Rebalancing off-blockchain payment networks. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security (pp. 439–453). ACM.
20. Avarikioti, G., Scheuner, R., & Wattenhofer, R. (2019). Payment networks as creation games. Retrieved from http://arxiv.org/abs/1908.00436
21. Avarikioti, G., Janssen, G., Wang, Y., & Wattenhofer, R. (2018). Payment network design with fees. In Data Privacy Management, Cryptocurrencies and Blockchain Technology (pp. 76–84). Springer.
22. Herrera-Joancomartí, J., & Pérez-Solà, C. (2016). Privacy in bitcoin transactions: New challenges from blockchain scalability solutions. In International Conference on Modeling Decisions for Artificial Intelligence (pp. 26–44). Springer.
23. Tang, W., Wang, W., Fanti, G., & Oh, S. (2019). Privacy-utility tradeoffs in routing cryptocurrency over payment channel networks. Retrieved from http://arxiv.org/abs/1909.02717
24. Tairi, E., Moreno-Sanchez, P., & Maffei, M. (2019). A2L: Anonymous atomic locks for scalability and interoperability in payment channel hubs. IACR Cryptology ePrint Archive.
25. Seres, I. A., Gulyás, L., Nagy, D. A., & Burcsi, P. (2019). Topological analysis of bitcoin’s lightning network. arXiv preprint arXiv:1901.04972.
26. Rohrer, E., Malliaris, J., & Tschorsch, F. (2019). Discharged payment channels: Quantifying the lightning network’s resilience to topology-based attacks. arXiv preprint arXiv:1904.10253.
27. Martinazzi, S. (2019). The evolution of lightning network’s topology during its first year and the influence over its core values. arXiv preprint arXiv:1902.07307.
28. Pickhardt, R. (n.d.). Earn Bitcoin with Lightning Network routing fees and a little data science. Retrieved from https://www.youtube.com/watch?v=L39IvFqTZk8
29. Engelmann, F., Kopp, H., Kargl, F., Glaser, F., & Weinhardt, C. (2017). Towards an economic analysis of routing in payment channel networks. In Proceedings of the 1st Workshop on Scalable and Resilient Infrastructures for Distributed Ledgers (p. 2). ACM.
30. Conoscenti, M., Vetrò, A., De Martin, J., & Spini, F. (2018). The CLoTH simulator for HTLC payment networks with introductory lightning network performance results. Information, 9(9), 223.
31. Zhang, Y., Yang, D., & Xue, G. (2019). CheaPay: An optimal algorithm for fee minimization in blockchain-based payment channel networks. In ICC 2019 – 2019 IEEE International Conference on Communications (ICC) (pp. 1–6). IEEE.
32. BitMEX Research. (n.d.). Lightning network (part 7) – proportion of public vs private channels. Retrieved from https://blog.bitmex.com/lightning-network-part-7-proportion-of-public-vs-private-channels/
33. Leskovec, J., Kleinberg, J., & Faloutsos, C. (2005). Graphs over time: Densification laws, shrinking diameters and possible explanations. In Proceedings of the Eleventh ACM SIGKDD International Conference on Knowledge Discovery in Data Mining (pp. 177–187). ACM.
34. Mislove, A., Marcon, M., Gummadi, K. P., Druschel, P., & Bhattacharjee, B. (2007). Measurement and analysis of online social networks. In Proceedings of the 7th ACM SIGCOMM Conference on Internet Measurement (pp. 29–42). ACM.
35. Vigna, S. (2015). A weighted correlation index for rankings with ties. In Proceedings of the 24th International Conference on World Wide Web (pp. 1166–1176). International World Wide Web Conferences Steering Committee.
36. ltcadmin. (n.d.). 100 bitcoin (btc) community members of wasabi wallet make the biggest coinjoin payment ever. Retrieved from https://icowarz.com/100-bitcoin-btc-community-members-of-wasabi-wallet-make-the-biggest-coinjoin-payment-ever/
37. Bodlaender, H. L. (1993). On linear time minor tests with depth-first search. Journal of Algorithms, 14(1), 1–23.
| 19,751
|
{
"disclaimer": "Notice: Paper or abstract available at https://arxiv.org/abs/1911.09432, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "CCBY",
"status": "HYBRID",
"url": "https://cryptoeconomicsystems.pubpub.org/pub/beres-lightning-traffic/download/pdf"
}
| 2,019
|
[
"JournalArticle"
] | true
| 2019-11-21T00:00:00
|
[
{
"paperId": "50d5e5dfdbe607eff9029c4adcc64ef6cc265ac5",
"title": "Privacy-Utility Tradeoffs in Routing Cryptocurrency over Payment Channel Networks"
},
{
"paperId": "5af59913e07d5b53a880483b92f7f1e3a0b73e59",
"title": "Payment Networks as Creation Games"
},
{
"paperId": "a46ad421bb5aff382bff7800791bc2430c8a33b2",
"title": "Practical Cryptanalysis of k-ary C"
},
{
"paperId": "4486db64821c02e1cc7ed42a66f2c0b327720b8b",
"title": "Lightning Network: A Comparative Review of Transaction Fees and Data Analysis"
},
{
"paperId": "8453b75f8bf79d203b7acab699ce9895fa9f4419",
"title": "CheaPay: An Optimal Algorithm for Fee Minimization in Blockchain-Based Payment Channel Networks"
},
{
"paperId": "99242f01915abcb2560d07e8aa641fba16a65d82",
"title": "Discharged Payment Channels: Quantifying the Lightning Network's Resilience to Topology-Based Attacks"
},
{
"paperId": "66af9a3c56e3ac836edab57f6358665dd4de1344",
"title": "The evolution of Lightning Network's Topology during its first year and the influence over its core values"
},
{
"paperId": "58e7f831582967848a7910c0ef12dae12d18ae75",
"title": "Topological Analysis of Bitcoin's Lightning Network"
},
{
"paperId": "a420a269739d301647f5f08f6cf01d5d98e436ba",
"title": "The most wonderful time of the year."
},
{
"paperId": "34ab5a15978a2675ce8d0f2533489f084eb0a47b",
"title": "Payment Network Design with Fees"
},
{
"paperId": "da8077489201733e474f62ece89a3c74566d9a3c",
"title": "The CLoTH Simulator for HTLC Payment Networks with Introductory Lightning Network Performance Results"
},
{
"paperId": "bb7f399a23e14af49a8a78ececb75ba82eadb34e",
"title": "Ant routing algorithm for the Lightning Network"
},
{
"paperId": "40b8dd8c6d9926a3b7e0becd134b38b69b386ae6",
"title": "From Mining to Markets: The Evolution of Bitcoin Transaction Fees"
},
{
"paperId": "9380785ab861f80656dff8f738f19ecdcaaf9baf",
"title": "How to Charge Lightning: The Economics of Bitcoin Transaction Channels"
},
{
"paperId": "6b2d41ed1b3a299a9495a254bb02be79745b2b6d",
"title": "Towards an economic analysis of routing in payment channel networks"
},
{
"paperId": "7a2918f9f0192e9a83c46c1ee58742dd6bd98b87",
"title": "Revive: Rebalancing Off-Blockchain Payment Networks"
},
{
"paperId": "4dce5b72e1f205e11dcdcc8db68ea1cb9a68bbc5",
"title": "Sprites and State Channels: Payment Networks that Go Faster Than Lightning"
},
{
"paperId": "15ed54dcf10510c077ba76581fd28c69f2c0968f",
"title": "Privacy in Bitcoin Transactions: New Challenges from Blockchain Scalability Solutions"
},
{
"paperId": "c676783f72402fe700f5a1151a1f74145478ce19",
"title": "Towards Bitcoin Payment Networks"
},
{
"paperId": "75d83792b880757a09e9a72978cc29beb57c4ad5",
"title": "On Scaling Decentralized Blockchains - (A Position Paper)"
},
{
"paperId": "3b7dcca3f2dfbf342af28249318ae4c7cbc641aa",
"title": "Trends, Tips, Tolls: A Longitudinal Study of Bitcoin Transaction Fees"
},
{
"paperId": "df55c55ee097163ad4c10e6a8a76ea2778f7f250",
"title": "A Weighted Correlation Index for Rankings with Ties"
},
{
"paperId": "daa6201036fe4e585799afb6e8ccd7d0766ad134",
"title": "Using Sphinx to Improve Onion Routing Circuit Construction"
},
{
"paperId": "7631c91b69c6ec58a352bf7c3121282770fdbe20",
"title": "Measurement and analysis of online social networks"
},
{
"paperId": "788b6f36a2b7cab86a5a29000e8b7cde25b85e73",
"title": "Graphs over time: densification laws, shrinking diameters and possible explanations"
},
{
"paperId": "cd19b9bd1c726df7c6e9d14d9bcfd5104b599a96",
"title": "How many transactions per second can bitcoin really handle ? Theoretically"
},
{
"paperId": "56a12be3b3406b2f9906cb9eb25e77c1d4a9fcd9",
"title": "A2L: Anonymous Atomic Locks for Scalability and Interoperability in Payment Channel Hubs"
},
{
"paperId": "4d5b9fb1c4205b61060117e3c71b04464c2a1c77",
"title": "SoK: Off The Chain Transactions"
},
{
"paperId": null,
"title": "Privacy-utility tradeoffs"
},
{
"paperId": "0aacfddf9cb22e661302ec77cc251ccafd5f8c71",
"title": "PERUN: Virtual Payment Channels over Cryptographic Currencies"
},
{
"paperId": null,
"title": "The bitcoin lightning network: Scalable off-chain instant payments"
},
{
"paperId": "0639fd390943131b6fecad75f93ea4d71996fcbe",
"title": "Near Zero Bitcoin Transaction Fees Cannot Last Forever"
},
{
"paperId": "4e9ec92a90c5d571d2f1d496f8df01f0a8f38596",
"title": "Bitcoin: A Peer-to-Peer Electronic Cash System"
},
{
"paperId": "cd1eb8704016b311fd19cda266e5b221d0e00615",
"title": "On Linear Time Minor Tests with Depth-First Search"
},
{
"paperId": null,
"title": "Guy makes $20 a month from locking $5 million bit-coin on the lightning network"
},
{
"paperId": null,
"title": "BitMEX"
},
{
"paperId": null,
"title": "100 bitcoin (btc) community members of wasabi wal-let make the biggest coinjoin payment ever"
},
{
"paperId": null,
"title": "Earn bitcoin with lightning network routing fees and a little data science"
},
{
"paperId": null,
"title": "Lightning network (part 7) -proportion of public vs private channels"
},
{
"paperId": null,
"title": "The lightning network (part 2) -routing fee economics"
}
] | 19,751
Received January 4, 2019, accepted March 6, 2019, date of publication March 21, 2019, date of current version April 8, 2019.
*Digital Object Identifier 10.1109/ACCESS.2019.2906637*
# DLattice: A Permission-Less Blockchain Based on DPoS-BA-DAG Consensus for Data Tokenization
TONG ZHOU 1,2, XIAOFENG LI 1, AND HE ZHAO 1
1 Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei 230031, China
2 University of Science and Technology of China, Hefei 230026, China
Corresponding author: He Zhao ([email protected])
This work was supported in part by the National Natural Science Foundation of China under Grant 61602435, in part by the Natural
Science Foundation of Anhui Province under Grant 1708085QF153, and in part by the Major Project of Science and Technology of Anhui
Province under Grant 16030901057.
**ABSTRACT** In today’s digital information age, the conflict between the public’s growing awareness of
their own data protection and the data owners’ inability to obtain data ownership has become increasingly
prominent. The emergence of blockchain provides a new direction for data protection and data tokenization.
Nonetheless, existing cryptocurrencies such as Bitcoin using Proof-of-Work are particularly energy intensive. On the other hand, classical protocols such as Byzantine agreement do not work efficiently in an open
environment. Therefore, in this paper, we propose a permission-less blockchain with a novel double-DAG
(directed acyclic graph) architecture called DLattice, where each account has its own Account-DAG and
all accounts make up a greater Node-DAG structure. DLattice parallelizes the growth of each account’s
Account-DAG, each of which is not influenced by other accounts’ irrelevant transactions. DLattice uses a
new DPoS-BA-DAG (PANDA) protocol to reach consensus among users only when forks are observed.
Based on the proposed DLattice, we introduce a process of data tokenization, including data assembling, data
anchoring, and data authorization. We implement DLattice and evaluate its performance on 25 ECS virtual
machines, simulating up to 500 nodes. The experimental results show that DLattice reaches consensus within
10 seconds, achieves the desired throughput, and incurs almost no penalty for scaling to more users.
**INDEX TERMS** Blockchain, data tokenization, consensus algorithm, byzantine agreement protocols,
directed acyclic graph.
**I. INTRODUCTION**
In this digital information age, people generate a
variety of data in their daily lives. On the Internet, they leave
browsing records and social data. In the Internet of Things,
users' health data is collected by wearable devices, and
usage data is acquired by smart home applications. Massive
amounts of data are used to analyze behavioral and health
conditions without the user's knowledge. To make matters
worse, criminals use private data for blackmail and extortion.
The US technology giant Facebook leaked the personal data of more than
50 million users, which was exploited for huge profits and even affected
the US election [1]. Similarly, it has been reported that hackers stole the
private data of more than 100 million users of Huazhu, a large Chinese
hotel group, and used it for online sale and blackmail [2].
In the scientific community, Piero Anversa, a well-known
professor and leading expert in the cardiovascular field, was
found by Harvard Medical School and Brigham and
Women's Hospital to have 31 papers suspected of data
fraud, and major medical journals were asked to retract the
published papers [3]. From these events we can see:
1) The control over data is difficult to determine. Data
is not controlled by its real owner (e.g. the user entity) but
by the data producer (such as the equipment manufacturer or
the service provider). The real owner of the data lacks the
ability to consent to and know about the use of the data, so
privacy is not guaranteed [4].
2) Data reliability is poor and data can be falsified. Data
producers have the ability to tamper with data or even fabricate
false data in a centralized database, making it difficult for data
collectors (e.g. research institutions), data producers, and the
real owners of data to establish trust relationships around data.
3) Data cannot be shared efficiently. Because the ownership
of the data does not rest with its real owner, the data cannot
be conveniently shared with or sold to other data collectors [5].
The new blockchain technology has shown promising
properties for solving these issues. Blockchain is a cryptographically
secure transactional singleton machine with shared state [6], which
contains an ordered list of records linked together through chains,
trees, DAGs, etc. Blocks are generated by distributed nodes through
a consensus mechanism and spread and verified across the whole
network [7]. The distributed nature of blockchain implies that no
single entity controls the ledger; rather, the participating peers
together validate the authenticity of records. Precisely because of
its features such as decentralization, tamper-resistance and
network-wide data sharing, blockchain has tremendous potential in
the fields of data protection and tokenization [8], [9].
Unlike cryptocurrencies, which are created on and derive
their value directly from blockchains, digital assets are
often issued by real-world entities, and blockchains are merely
a medium to record their existence and exchange [10].
Multichain [11] offers ledgers for storing and tracking asset
history. IOTA [12] issues its own token and offers its public ledger
as a platform for micro-payments, allowing data to be
exchanged among IoT devices. Previously, we proposed a
method of data assetization based on Ethereum smart contracts,
which may help promote the transfer of data value and data
sharing in the Internet of Things [5].
Resolution of forks is the core problem faced by any
cryptocurrency. Bitcoin [13] and most other cryptocurrencies [6], [14]
address the fork problem using Proof-of-Work
(PoW), where users must repeatedly compute hashes to grow
the blockchain, and the longest chain is considered authoritative [15].
The process is particularly energy intensive and
time consuming. Proof-of-Stake (PoS) [16] avoids the computational
overhead of proof-of-work and therefore allows
reduced transaction confirmation times. On the other hand,
although the advantage of the original PBFT [17] is finality,
meaning that once a block is appended it is final and cannot be
replaced, Byzantine Agreement protocols do not work efficiently
in an open environment because of bandwidth limitations
and the lack of a trusted public-key infrastructure [18].
The main contributions of this paper are as follows:
1) We propose a permission-less blockchain, called DLattice, with a novel Double-DAG architecture where each
account has its own Account-DAG and all accounts
make up a greater Node-DAG structure. DLattice parallelizes the growth of each account’s Account-DAG,
each of which is not influenced by other accounts’
irrelevant transactions. The use of a Red-Black Merkle
Tree in each account's D-Tree speeds up querying
and insertion of data assets.
2) We design a new DPoS-BA-DAG (PANDA) protocol
to reach consensus among users only when the forks
are observed instead of executing consensus at a fixed
interval. Experimental results show that the protocol
can reach consensus within 10 seconds while
scaling to more users.
3) We introduce a process of data tokenization based
on proposed DLattice structure, including data assembling, data anchoring and data authorization.
The rest of the paper is organized as follows: Section II covers related work, and Section III reviews the preliminaries
used throughout this paper. In Section IV, the blockchain
model is described in detail. A series of methods for data
tokenization is presented in Section V. Our PANDA consensus is elaborated in Section VI, followed by the attack vectors
and security analysis in Section VII. Section VIII presents the
implementation and evaluation. Finally, the conclusion and
future directions are presented.
**II. RELATED WORKS**
*A. PROOF OF WORK (POW) VARIANTS*
PoW protocols require miners to use their own computing power
to solve complex cryptographic puzzles whose solutions are easy
to verify using cryptographic hashes. Specifically,
the solution is a random nonce *n* such that

H(n || H(b)) ≤ M/D,

for a cryptographic hash function H with a variable number
of arguments and range [0, M], a target difficulty D and the
current block content *b* [10], [19]. The faster a miner solves
the puzzle, the higher the probability that it creates a block.
A new block is generated every 10 minutes on average in
Bitcoin [20].
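As a concrete illustration, the following is a minimal Go sketch of this hash-target check (Go being the language of the DLattice prototype in Section VIII); the difficulty value and the brute-force loop are assumptions for demonstration, not any particular miner's implementation.

```go
// pow_sketch.go: check the PoW condition H(n || H(b)) <= M/D with SHA-256,
// where M = 2^256 - 1 is the maximum hash value.
package main

import (
	"crypto/sha256"
	"fmt"
	"math/big"
)

// meetsPoW reports whether nonce n solves the puzzle for block content b
// at target difficulty D.
func meetsPoW(n, b []byte, D int64) bool {
	inner := sha256.Sum256(b)
	outer := sha256.Sum256(append(append([]byte{}, n...), inner[:]...))

	// target = M / D, with M = 2^256 - 1.
	M := new(big.Int).Sub(new(big.Int).Lsh(big.NewInt(1), 256), big.NewInt(1))
	target := new(big.Int).Div(M, big.NewInt(D))

	return new(big.Int).SetBytes(outer[:]).Cmp(target) <= 0
}

func main() {
	// "Miners must repeatedly compute hashes": brute-force a nonce.
	block := []byte("current block content b")
	for n := int64(0); ; n++ {
		nonce := big.NewInt(n).Bytes()
		if meetsPoW(nonce, block, 1<<16) { // hypothetical difficulty D = 65536
			fmt.Println("found nonce:", n)
			break
		}
	}
}
```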
*B. PROOF OF STAKE (POS) VARIANTS*
PoS protocols make the puzzle's difficulty inversely
proportional to the miner's stake in the network [10], [19]. Let
bal() be the function that returns the stake; then a miner *S* can
generate a new block by solving a puzzle of the following
form:

H(n || H(b)) ≤ bal(S) · M/D.

Casper [21] is Ethereum's upcoming PoS protocol based
on a smart contract. It allows miners to become validators by
depositing Ethers to the Casper account. The contract then
picks a validator to propose the next block according to
the deposit amount. If the block is confirmed, the validator
gets a small reward; if it is not, the validator loses its
deposit [10].
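A sketch of the corresponding PoS check, dropping into the same file as the PoW sketch above (same imports): the only change is that the target is scaled by the miner's stake bal(S). The integer stake encoding is an assumption of this sketch.

```go
// meetsPoS checks H(n || H(b)) <= bal(S) * M / D; larger stakes make the
// target proportionally easier to hit.
func meetsPoS(n, b []byte, D, stake int64) bool {
	inner := sha256.Sum256(b)
	outer := sha256.Sum256(append(append([]byte{}, n...), inner[:]...))

	M := new(big.Int).Sub(new(big.Int).Lsh(big.NewInt(1), 256), big.NewInt(1))
	target := new(big.Int).Mul(M, big.NewInt(stake)) // bal(S) * M
	target.Div(target, big.NewInt(D))                // ... / D

	return new(big.Int).SetBytes(outer[:]).Cmp(target) <= 0
}
```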
*C. BYZANTINE CONSENSUS VARIANTS*
Byzantine Agreement (BA) protocols have been used to replicate a
service across a small group of servers [22], [23],
and are therefore suitable for permissioned blockchains.
PBFT [17] is deterministic and incurs O(N^2) network messages
for each round of agreement, where N is the number
of nodes in the network. Tendermint [24] proposes a small
modification on top of PBFT: instead of having an equal vote,
each node in Tendermint may have different voting power,
proportional to its stake in the network.
Algorand [15] uses a BA∗ protocol to reach consensus
among users on the next set of transactions, based on a Verifiable
Random Function that allows users to privately check
whether they are selected to participate in the BA and to
include a proof of their selection in their network messages.
*D. DIRECTED ACYCLIC GRAPHS (DAGS) VARIANTS*
Nano and Hashgraph are recent proposals for increasing Bitcoin's
throughput by replacing the underlying chain-structured ledger
with a DAG structure. Hashgraph [25]
is proposed for replicated state machines with guaranteed
Byzantine fault tolerance, and achieves fast, fair and secure
transactions based on gossip-about-gossip and virtual voting
techniques.
Nano [26] proposes a novel block-lattice architecture
where each account has its own blockchain, delivering
near-instantaneous transaction speed and unlimited scalability, and
allowing each account to update its chain asynchronously with
respect to the rest of the network, resulting in fast transactions
with minimal overhead.
**III. PRELIMINARIES**
*A. SYMMETRIC AND ASYMMETRIC CRYPTOGRAPHY*
Symmetric cryptography uses the same cryptographic keys
for both encryption of plaintext and decryption of ciphertext;
the keys may be identical or related by a simple
transformation. Asymmetric cryptography, also known as
public-key cryptography, uses public and private keys to encrypt
and decrypt data. The keys are simply large numbers that have
been paired together but are not identical. One key in the pair,
called the public key, can be shared with everyone. The other
key in the pair, called the private key, is kept secret.
*B. CRYPTOGRAPHIC HASH FUNCTION*
A cryptographic hash function maps arbitrarily long strings to binary strings of fixed length.
It should be hard to find two different strings x and y such
that H(x) = H(y), where *H*() denotes the hash function.
*C. DIGITAL SIGNATURES*
Digital signatures allow users to authenticate information to each other without
sharing any secret keys, based on public-key cryptography.
To create a digital signature, the hash of the message is
computed first; the private key is then used to sign the
hash, and the result is the digital signature.
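As a minimal illustration of this hash-then-sign flow, here is a Go sketch using ECDSA, the signature scheme DLattice adopts in Section VIII; the curve choice and message are placeholders, and error handling is elided.

```go
// sign_sketch.go: hash a message, sign the digest with the private key,
// and verify the signature with the matching public key.
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/sha256"
	"fmt"
)

func main() {
	priv, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)

	msg := []byte("transaction block payload")
	digest := sha256.Sum256(msg) // the hash of the message is created first

	sig, _ := ecdsa.SignASN1(rand.Reader, priv, digest[:]) // the signature

	// Anyone holding the public key can authenticate the message.
	fmt.Println("valid:", ecdsa.VerifyASN1(&priv.PublicKey, digest[:], sig))
}
```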
**IV. MODEL DESCRIPTION**
*A. DEFINITIONS*
1) CONSENSUS-PARTICIPATING NODE
A consensus-participating node, *CNode_i* ∈ {*CNode_1*, ..., *CNode_N*}, where N represents the number of nodes in the
system, is a piece of software running on a computer that conforms
to the system protocols and joins the system network. The nodes
communicate with each other through the gossip protocol, and
their distribution is shown in Fig. 1.

**FIGURE 1.** Overall structure of DLattice. The nodes (**CNode**), consisting of
normal accounts (**NorAC**) and consensus accounts (**ConAC**), communicate
with each other through the gossip protocol.

The *CNode* is responsible for recording
the asset ledger and the data ledger, and these ledgers are
identical across all *CNode*s. When initialized, each *CNode*
creates one unique consensus account *ConAC_i*. The account
consists of a public-private key pair <*Pk_i*, *Sk_i*>. The
public key *Pk* is called the account address; it identifies
*CNode_i* and is exposed to the whole network. The consensus
account must reserve a certain amount of consensus deposit.
While enjoying the consensus right to obtain other accounts'
fork penalties, the node also bears the risk of forfeiting its
consensus deposit for malicious behavior. Notably, the node
that owns all tokens of the system in the initial state is called
the Genesis Node, and it is responsible for booting the system.
2) ACCOUNT
An account, *Account_k* ∈ {*Account_1*, ..., *Account_M*}, where M
represents the number of accounts (which has no theoretical upper
limit), is the main vehicle of an actual user's participation in
the system. It is composed of a public-private key pair
<*Pk_k*, *Sk_k*>, where the public key *Pk_k* is used to identify
the account and is exposed to the entire network. A user can
control multiple accounts, but each account corresponds to exactly
one public key. The private key *Sk_k* is analogous to a password
in an ordinary system: the user holding the private key has
actual control of the account. The *Sk_k* is used by the
account to sign transaction blocks or messages to establish
their origin. Accounts include normal accounts
*NorAC* and consensus accounts *ConAC*. A *NorAC* consists
of a currency ledger and a data ledger, which can be used
to send and receive currency assets and data assets and to
assign access control over the data assets. The structure of
an account is also shown in Fig. 1. A *ConAC* has the same
functions as a *NorAC*, in addition to those described in
Definition 1. Each account has its own DAG structure called
an Account-DAG, and together these make up the Node-DAG.
3) TRANSACTION AND BLOCK
A transaction is an agreement or a certain behavior between
the sender and the receiver [27]. In this system, constructing a
transaction requires the account owner to sign it with its private
key, and a block contains only one transaction, so it is called a
Transaction Block in this paper, denoted *TB*. The transaction
blocks include the Creating Transaction Block *TB_create*, the
Delegating Transaction Block *TB_delegate*, the Sending and
Receiving Transaction Blocks <*TB_send*, *TB_receive* | *TB_deal*>,
and the Authorization Transaction Block *TB_auth*, etc., as shown
in Fig. 2. Both the transfer of currency assets and of data assets
require the confirmation of two transaction blocks. A *TB* can be
in one of three states, σ(State_TB) ∈ {S_sending, S_pending,
S_received}, namely the Sending State S_sending, the Pending
State S_pending and the Received State S_received. In a
transaction, the *TB_send* is constructed and broadcast by the
sender; at this point, the state of the *TB_send* is S_sending.
After all nodes have received it (or consensus is completed),
the state becomes S_pending. When the receiver comes online,
the currency assets or data assets are received according to
*TB_send*, the corresponding *TB_receive* or *TB_deal* is
constructed, the *TB_send*'s state becomes S_received, and the
entire transaction process is completed.
**FIGURE 2.** Anatomy of transaction blocks.
The Creating Transaction Block *TB_create* is used to create user
accounts, both normal accounts and consensus accounts. The initial
*DLT*s of an account come from system allocation or currency
transferred from other accounts. *TB_create* = (H_PRE, H_source,
H_account, PoW, Sig), where H_PRE represents the hash value of
the previous transaction block (here, the hash value of the Genesis
Header); H_source indicates the account address of the sender;
H_account stands for the account address (public key) that created
this transaction; *PoW* is the proof of work required to generate
this transaction block; and *Sig* records the signature of this
transaction block under the account's private key.
The Sending and Receiving Transaction Blocks *TB_send*,
*TB_receive* and *TB_deal* are used to send or receive currency
assets or data assets. The Sending Transaction Block
*TB_send* = (H_PRE, H_owner, H_dfp, Dsp, Value, Token, FP, PoW, Sig),
where H_owner represents the address of the account receiving the
block; H_dfp is the digital fingerprint of the data; *Dsp* briefly
describes the data asset being sent; *Value* stands for the price
of the data; and *Token* represents the amount of currency sent.
If *TB_send* contains only *Token* and no *Value*, it simply denotes
a currency transfer. The *TB_receive* = (H_PRE, H_source, PoW, Sig),
where H_source indicates the hash value of the corresponding
*TB_send*. If *Value* is included in *TB_send*, it indicates a
transfer of data assets. The *TB_deal* = (H_PRE, H_source,
H_RBMerkle, Work, Sig), where H_RBMerkle stores the root of the
D-Tree of the Account-DAG. When <*TB_send*, *TB_receive*> or
<*TB_send*, *TB_deal*> appear in pairs, the transfer of the currency
assets or data assets has been confirmed by the system. The
*TB_data* is the representation of *TB_deal* on the D-Tree of the
Account-DAG, denoted *TB_data* = (H_source, H_auth, PoW, Sig),
where H_source represents the hash value of the corresponding
*TB_deal* and H_auth is the hash value of the *TB_auth*.
The Authorization Transaction Block *TB_auth* is used by an
account to determine which accounts have access control over
its data assets. It is denoted *TB_auth* = (H_PRE, H_source,
H_RBMerkle, Pld, FP, PoW, Sig), where *Pld* records the list of
access permissions generated by the account through a hybrid
cryptographic scheme (see Section V for details). It is worth
noting that these transactions are sent and received by the same
account.
The Delegating Transaction Block *TB_delegate* is used to assign
a consensus node to wield voting power on the account's behalf.
It is denoted *TB_delegate* = (H_PRE, H_DLG, PoW, Sig), where
H_DLG represents the public key of the delegate node. Note that
*TB_delegate* only indicates that the node is delegated to wield
voting power; the actual currency assets in the account are not
transferred.
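For concreteness, here is a Go sketch of two of the blocks above as structs. The field names follow the tuples in the text; the concrete types and the Hash alias are assumptions of this sketch rather than DLattice's actual wire format.

```go
// tb_sketch.go: illustrative struct layouts for TB_send and TB_receive.
type Hash [32]byte

type TBSend struct {
	HPre   Hash   // H_PRE: previous block on the sender's T-Chain
	HOwner Hash   // H_owner: address of the receiving account
	HDfp   Hash   // H_dfp: digital fingerprint of the data asset
	Dsp    string // short description of the data asset
	Value  uint64 // asking price of the data (zero for a pure currency transfer)
	Token  uint64 // amount of currency (DLT) sent
	FP     uint64 // fork penalty reserved at creation
	PoW    uint64 // anti-spam proof of work for this block
	Sig    []byte // sender's signature over the block
}

type TBReceive struct {
	HPre    Hash   // previous block on the receiver's T-Chain
	HSource Hash   // H_source: hash of the matching TB_send
	PoW     uint64
	Sig     []byte
}
```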
4) DIGITAL ASSET
Digital assets are assets in the form of electronic data which
are owned or controlled by enterprises, organizations or
individuals and are held for sale or used in production [27].
In the proposed system, digital assets are categorized as
Currency Assets (CA) and Data Assets (DA). A CA is the token
issued by the system, denoted *DLT*, which is consumed as the
medium of exchange in the process of data transfer; it is an
important part of data tokenization and a representation of data
value. A DA is the result of data tokenization: by assembling
the raw data, storing it in a distributed database, and anchoring
the corresponding data fingerprint on the chain, the raw data is
tokenized into an on-chain asset for sale and transaction.
5) DLATTICE
As shown in Fig. 3(a), DLattice is a DAG structure called
Node-DAG, which consists of a Genesis Header and the
Account-DAG of accounts. All accounts are organized in
the form of Merkle Patricia Tree (MPT) [28] by the Genesis Header. The public key of the consensus account is
used as the Key, and the hash value of *TB_create*, which is
created as an Account Root Block (ARB), is used as the
Value to jointly build the MPT.
**FIGURE 3.** (a) Overall structure of Node-DAG; (b) Anatomy of Account-DAG. An older Red-Black Merkle Tree is on the left; after receiving a
new DA (hash 0x23), the newer Red-Black Merkle Tree is on the right. The *TB_deal* records the Red-Black Merkle Tree root before
updating.

The Account-DAG structure of each account is derived sequentially
from its ARB and is composed of the Token-Chain (T-Chain) and the
Data-Tree (D-Tree). The income and expenditure records of the data
assets and currency assets sent by the account are recorded on the
T-Chain in the form of a unidirectional chain.
The D-Tree is a Red-Black Merkle Tree [29] combined with the
T-Chain, which stores the digital fingerprints of the data assets
and the corresponding access control permissions, as shown
in Fig. 3(b). The digital fingerprint of the data in *TB_send*
is taken as the Key, while *TB_data* is used as the node, to
jointly build the Red-Black Tree. The Merkle Root of the
Red-Black Tree is recorded in H_RBMerkle of *TB_deal*. The
complete DLattice structure is formed by the Account-DAGs of all
accounts.
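A Go sketch of the Merkle-hashing side of the D-Tree: H_RBMerkle is a root digest computed over the leaf fingerprints. The red-black balancing and the exact leaf ordering are omitted here, and the pairwise-hashing scheme is an assumption of this sketch (it reuses the Hash type from the struct sketch above and assumes crypto/sha256 is imported).

```go
// merkleRoot computes a root over leaf fingerprints by pairwise SHA-256
// hashing; an odd leaf at the end of a level is carried up unchanged.
func merkleRoot(leaves []Hash) Hash {
	if len(leaves) == 0 {
		return Hash{}
	}
	level := append([]Hash(nil), leaves...)
	for len(level) > 1 {
		var next []Hash
		for i := 0; i < len(level); i += 2 {
			if i+1 == len(level) {
				next = append(next, level[i])
				continue
			}
			pair := append(append([]byte{}, level[i][:]...), level[i+1][:]...)
			next = append(next, sha256.Sum256(pair))
		}
		level = next
	}
	return level[0] // recorded as H_RBMerkle in TB_deal
}
```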
*B. ASSUMPTIONS*
*Assumption 1:* DLattice makes standard cryptographic
assumptions such as public-key signatures and hash
functions.
*Assumption 2:* DLattice assumes that honest users run bug-free
software and that the fraction of money held by honest users
is above some threshold *h* (a constant greater than 2/3), but
that an adversary can participate in DLattice and own some
money.
*Assumption 3:* DLattice makes a ‘‘strong synchrony’’
assumption: most honest users can send messages that will
be received by most other honest users within a known time
bound *δ* *term* . And this assumption does not allow network
partitions.
*Assumption 4:* DLattice also makes a ‘‘weak synchrony’’
assumption: the network can be asynchronous for a long
but bounded period of time. After an asynchrony period,
the network must be strongly synchronous for a reasonably
long period again.
*Assumption 5:* DLattice assumes that if some probability *p*
is negligible, it happens with probability at most O(1/2^λ)
for some security parameter λ. Similarly, if some event happens
with high probability, it happens with probability at least
1 − O(1/2^λ).
*C. NOTIONS*
Throughout this paper, we use the notions shown in Table 1.
**TABLE 1.** Notions and detailed description.
**V. DATA TOKENIZATION**
*A. DATA ASSEMBLING*
Data assembling assembles the raw data *D_raw* into a data
structure that can be used by DLattice at the data's generation
source. This data structure is denoted D = (Pk_P, Pk_O,
E_K(D_raw), E_EK, T, Sig_SK(D_raw)), where Pk_P represents the
public key of the data producer and Pk_O the public key of the
data owner. The sources of *D_raw* are rich and varied: it can be
continuous data generated by devices in the Internet of Things,
a digital file, a medical record, or data recorded from
communication between people (e.g. cases and prescriptions
between doctors and patients). The types of *D_raw* include
binary streams (images, documents, videos, etc.), URLs, etc.
E_K(D_raw) indicates that the data producer uses a random AES
key to symmetrically
encrypt the raw data; the AES key is asymmetrically encrypted
with the data owner's *Pk_O* and stored in E_EK. Only the data
owner, who holds the corresponding private key *Sk_O*, can decrypt
the ciphertext to obtain *D_raw*. Sig_SK(D_raw) indicates that the
data producer signs *D_raw* with its private key *Sk_P*; the
signature can be verified with the *Pk_P* of the data producer.
*T* represents the timestamp of the data's generation. The data
structure *D* can be stored in a distributed database (currently
IPFS [30]) to obtain a digital fingerprint. The digital fingerprint
is stored in the H_dfp of *TB_send*.
*B. DATA AUTHORIZATION*
If the data owner wants to authorize other accounts to access
the data, the public key *Pk_other* is needed for asymmetric
encryption of <key, iv>. As shown in (1), *edk_k* is stored in
the *Pld* of the *TB_auth*. If the user obtains *edk_k* from the
*Pld* of the *TB_auth* and recovers the symmetric encryption key
using his private key *Sk_other* and (2), the data can be decrypted
using (3), thereby realizing access control over the data asset
via a hybrid encryption mechanism.
edk_k ← E_ECC(<key, iv> || Pk_other)   (1)
<key, iv> ← D_ECC(edk_k || Sk_other)   (2)
D_raw ← D_AES(Cipher || <key, iv>)     (3)
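The paper leaves the concrete ciphers E_ECC/D_ECC unspecified beyond "ECC"; below is one possible Go instantiation of equations (1)-(3), assuming an ephemeral ECDH (P-256) exchange with a SHA-256 KDF to wrap the AES data key, and AES-GCM for the symmetric layer. This is a hedged sketch, not DLattice's actual scheme; error handling is elided.

```go
// hybrid_sketch.go: envelope encryption in the spirit of equations (1)-(3).
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/ecdh"
	"crypto/rand"
	"crypto/sha256"
	"fmt"
)

// seal encrypts plaintext under a 32-byte key with AES-GCM (nonce prepended).
func seal(key, plaintext []byte) []byte {
	block, _ := aes.NewCipher(key)
	gcm, _ := cipher.NewGCM(block)
	nonce := make([]byte, gcm.NonceSize())
	rand.Read(nonce)
	return gcm.Seal(nonce, nonce, plaintext, nil)
}

// open reverses seal.
func open(key, sealed []byte) []byte {
	block, _ := aes.NewCipher(key)
	gcm, _ := cipher.NewGCM(block)
	n := gcm.NonceSize()
	pt, _ := gcm.Open(nil, sealed[:n], sealed[n:], nil)
	return pt
}

func main() {
	curve := ecdh.P256()
	owner, _ := curve.GenerateKey(rand.Reader) // <Pk_other, Sk_other>

	// Data producer side: random data key, symmetric encryption of D_raw.
	dataKey := make([]byte, 32)
	rand.Read(dataKey)
	cipherData := seal(dataKey, []byte("raw data D_raw"))

	// (1) edk_k <- E_ECC(<key,iv> || Pk_other): wrap dataKey for the owner
	// via an ephemeral ECDH exchange and a SHA-256 KDF.
	eph, _ := curve.GenerateKey(rand.Reader)
	shared, _ := eph.ECDH(owner.PublicKey())
	wrap := sha256.Sum256(shared)
	edk := seal(wrap[:], dataKey) // stored in the Pld of TB_auth

	// (2) <key,iv> <- D_ECC(edk_k || Sk_other): the owner re-derives the
	// shared secret from the ephemeral public key and unwraps dataKey.
	shared2, _ := owner.ECDH(eph.PublicKey())
	wrap2 := sha256.Sum256(shared2)
	recovered := open(wrap2[:], edk)

	// (3) D_raw <- D_AES(Cipher || <key,iv>).
	fmt.Printf("%s\n", open(recovered, cipherData))
}
```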
*C. DATA ANCHORING*
Data anchoring anchors the digital fingerprint of a data asset
on the blockchain after the data asset is placed in distributed
storage, and is the core of data tokenization. The digital
fingerprint obtained from data storage is used to construct a
*TB_send*, which is broadcast to the entire network. Once
*TB_send* is received by all the consensus nodes in the system,
it is added to the T-Chain of the corresponding account; if
there is a fork, it may be appended after consensus is reached
(see Section VI for details). When the receiver comes online,
the data asset is first checked to see whether it meets the
demands of the receiving account, and then the *TB_deal* (the
current Red-Black Merkle Tree root is saved in H_RBMerkle) is
created to receive the digital asset and pay for it. Finally,
the *TB_data* is created, and the Red-Black Merkle Tree on the
D-Tree is updated to complete the transfer of the data asset.
**VI. DPoS-BA-DAG(PANDA)**
*A. NODE BOOTSTRAPPING*
The development of the system is divided into the Boot Epoch
and the Freedom Epoch as the number of consensus nodes
increases, denoted σ(Epoch_node) ∈ (E_boot, E_freedom). In the
initial Boot Epoch, the Genesis node reviews the online and
storage capabilities of new nodes (these nodes may be trusted
large medical institutions, companies or government research
institutions, etc., which are endorsed by their social
credibility), and assigns a certain initial *DLT_init* to the
consensus accounts of these nodes to complete the joining of
the boot nodes. The committee consisting of boot nodes is called
the *BootCommittee*, and its size lies in [4, C_B]. Fewer nodes
than the threshold τ_good^boot are allowed to be accidentally
offline, and the *DLT_total* satisfies

DLT_total = C_B × DLT_init,

where C_B represents the number of boot nodes. In the Boot
Epoch, each node knows the exact number of nodes in the current
system. When the allocation of *DLT_total* is completed (the
number of nodes N > C_B at this moment), the system enters the
Freedom Epoch, and newly joining nodes can be added to the system
at will by purchasing *DLT* from other accounts in the system.
It is noteworthy that common users can create a normal account
at any time by purchasing *DLT* from a node that has already
joined the system.
*B. FORK OBSERVATION*
It can be seen from Definition 3 that in DLattice the transaction
block *TB* can only be constructed by its sender, so it cannot be
forged by a third party. This means that a malicious account can
generate a fork by constructing different *TB_send*s with an
identical previous hash H_PRE on its own T-Chain.
Assume that an account has constructed multiple transaction
blocks with identical H_PRE, as shown in Fig. 4, recorded as
List_TB = {TB_send, TB'_send, TB''_send, ...}, and broadcast
them to the entire network. A node will observe a set
{TB_send, ..., TB'_send} with identical H_PRE, thus forming a
fork. Because the T-Chain is a unidirectional chain, all nodes
in the network must pick a single *TB* from List_TB and add it
to their Account-DAG via the consensus algorithm.
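A small Go sketch of fork observation, reusing the TBSend/Hash types from the earlier struct sketch: blocks are indexed by H_PRE, and two or more distinct blocks under one previous hash form the fork set List_TB. Deduplication of re-gossiped identical blocks is assumed to happen upstream.

```go
// ForkObserver indexes incoming TB_send blocks by their previous hash.
type ForkObserver struct {
	byPrev map[Hash][]TBSend
}

// Observe records tb and returns the fork set once a fork becomes visible;
// the observing node then acts as Candidate_seed with seed = tb.HPre.
func (o *ForkObserver) Observe(tb TBSend) (fork []TBSend, forked bool) {
	o.byPrev[tb.HPre] = append(o.byPrev[tb.HPre], tb)
	if list := o.byPrev[tb.HPre]; len(list) > 1 {
		return list, true
	}
	return nil, false
}
```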
**FIGURE 4.** A fork occurs when two (or more) signed transaction blocks
reference the same previous block. Older transaction blocks are on the
left; newer blocks are on the right.
If a node does not observe any fork, the *TB* is added to its
Account-DAG directly. When a fork is observed by a node (the
node is then called a Candidate Consensus Node, denoted
*Candidate_seed*, where *seed* is the corresponding H_PRE),
the incentive of the Fork Penalty drives the *Candidate_seed*
to actively participate in the consensus, following the steps
in Fig. 5.
**FIGURE 5.** Flow chart of PANDA consensus.
*C. CONSENSUS IDENTITY SETUP*
When a fork is observed, each *Candidate_seed* begins to
calculate its own consensus identities and participates in the
consensus to resolve this fork.
In the E_boot, each *Candidate_seed* has a unique consensus
identity

ID_seed ← <Pk, Seed, hash, proof, message, Sig_sk>

at each phase of each term of the consensus. The generation of
*hash*, *proof* and *message* is detailed in Part D.
In the E_freedom, a hash value satisfying the PoS condition

hash ≤ w_i / W

is calculated locally and secretly by the *Candidate_seed* to
constitute a consensus identity

ID_seed ← <Pk, Seed, ID_pos<hash>, Sig_sk>,

together with information such as its public key, to participate
in this consensus. Here w_i is the sum of the voting power the
node holds and represents, and *W* is the total voting power;
these parameters together determine the computational difficulty
of generating a consensus identity. It is worth noting that in
the E_freedom, nodes can secretly generate multiple local
identities for consensus at each phase, in proportion to their
voting power. It follows from Lemma 1 that the larger the
voting power of a node, the more consensus identities it
generates in the same number of attempts, and the greater its
probability of obtaining the Fork Penalty.
*D. COMMITTEE FORMATION*
Each *Candidate_seed* secretly generates its consensus identities
locally, and the consensus committee that resolves the fork is
formed at the same time. The consensus committee is denoted
*Committee_seed*. In the E_boot, each *Candidate_seed* generates
a unique consensus identity to participate in *Committee_seed*.
In the E_freedom, each *Candidate_seed* generates multiple
consensus identities secretly and locally based on its voting
power to form the *Committee_seed*, as shown in Algorithm 1.
The *Committee_seed* at P_vote and at P_commit are different,
denoted *Committee_seed*(vote) and *Committee_seed*(commit)
respectively.
ConsensusIDGeneration() (Algorithm 1) is used to generate the
consensus identities for phase P_consensus in term *e*.
According to Lemma 2, each node computes the consensus identity
C_E times, secretly and locally, weighted by its voting power.
If an identity conforms to the PoS condition, it can vote in
the corresponding consensus phase.
A Verifiable Random Function (VRF) [31] is used to calculate a
hash value secretly and locally, and the consensus identity
satisfying the PoS condition is derived from that hash value.
By construction, each node can only calculate its own consensus
identities, which cannot be predicted in advance by other nodes,
while any node can verify an identity once it has been broadcast.
**Algorithm 1** ConsensusIDGeneration(): Generation of Consensus Identity
**Input:** ctx, Seed, P_consensus
1: if ctx.E_boot then
2:   <hash, proof> ← VRF_Sk(Seed || P_consensus || e)
3:   ID_seed ← <Pk, Seed, hash, proof, Sig_sk>
4:   if e % δ_MaxTerm == 1 then
5:     ctx.List_ID[Seed][e][P_consensus].append(ID_seed)
6: else if ctx.E_freedom then
7:   for index = 0; index < ctx.C_E; index++
8:     <hash, proof> ← VRF_Sk(Seed || P_consensus || e || index)
9:     message ← <Seed, P_consensus, e, index>
10:    ID_pos ← <hash, proof, message>
11:    ID_seed ← <Pk, Seed, ID_pos, Sig_sk>
12:    if e % δ_MaxTerm == 1 && hash ≤ ctx.w_i / ctx.W then
13:      ctx.List_ID[Seed][e][P_consensus].append(ID_seed)
14:  end for
15: end if
In order to simplify the expression, in this paper, the private
key *Sk* *i* and the public key *Pk* *i* of each consensus account
*ConAC* *i*, the sum of the voting power *w* *i* the node owns and
represents, the total voting power *W* of the system, and other
information such as system configuration are collectively
referred to as the context information of *ConAC* *i*, denoted
as *ctx* .
VerifyID() (Algorithm 2) is used to verify whether a consensus
identity *ID_seed* belongs to the consensus committee
*Committee_seed*(P_consensus) at phase P_consensus.
**Algorithm 2** VerifyID(): Verifying Whether a Consensus Identity Is in the Consensus Committee
**Input:** ID_seed, P_consensus
**Output:** True or False
1: <Pk, Seed, ID_pos, Sig> ← ProcessID(ID_seed)
2: <hash, proof, message> ← ID_pos
3: <P_consensus, e, index> ← message
4: if !VerifySignature(Pk, Sig) then return False
5: if ctx.E_boot then
6:   if VerifyVRF_Pk(hash, proof, Seed || P_consensus || e) then
7:     return True
8: else if ctx.E_freedom then
9:   if VerifyVRF_Pk(hash, proof, Seed || P_consensus || e || index)
10:    && (hash ≤ ctx.w_i / ctx.W) then
11:    return True
12: end if
13: return False
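The PoS condition hash ≤ w_i/W in step 10 can be checked exactly by reading the 256-bit VRF output as the fraction hash/2^256, so that the comparison becomes hash × W ≤ w_i × 2^256 in integer arithmetic. A Go sketch (function name and 32-byte hash size are assumptions, with math/big imported):

```go
// satisfiesPoS reports whether a VRF output passes the weighted lottery
// hash/2^256 <= w_i/W, using exact big-integer arithmetic.
func satisfiesPoS(vrfHash [32]byte, wi, W uint64) bool {
	lhs := new(big.Int).Mul(new(big.Int).SetBytes(vrfHash[:]),
		new(big.Int).SetUint64(W))
	rhs := new(big.Int).Lsh(new(big.Int).SetUint64(wi), 256) // w_i * 2^256
	return lhs.Cmp(rhs) <= 0
}
```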
*E. CONSENSUS*
As the consensus committee forms, the consensus begins
accordingly. The consensus is divided into two phases,
σ(Phase_consensus) ∈ (P_vote, P_commit). At P_vote, the
selected members of *Committee_seed*(vote) each select a *TB*
to vote for, and all consensus nodes collect the vote results.
At P_commit, the members of *Committee_seed*(commit) start
*commit* voting based on the collected vote results, and all
nodes collect the *commit* voting results. If the *commit*
votes counted by a node exceed the threshold τ_good, consensus
is reached at that node.
In a strongly synchronous network, consensus is optimally
reached within a time of 2 × δ_term. The message propagation
complexity is O(C_P^2), where C_P is the actual size of the
consensus committee. It is worth noting that in the E_boot,
each node has a unique identity for consensus, so C_P = C_B;
in the E_freedom, however, each node has multiple consensus
identities based on its voting power, and these consensus
identities of a node are combined and then broadcast. In that
case, C_P ≈ C_E.
In a weakly synchronous network environment, if the members of
*Committee_seed*(vote) do not receive enough votes, the
*Committee_seed*(vote) continues to vote, as shown in
Algorithms 3-5, and a consensus can be reached according to
Lemmas 3 and 4. If consensus has not been reached within the
limited term δ_MaxTerm, it is suspended; after a certain period
of time, the consensus restarts. Once strong synchrony returns,
reaching consensus is guaranteed.
**Algorithm 3** PANDA_CONSENSUS()
**Input:** ctx, Seed
**Output:** M_type, e, H_TB
1: e ← 1; H_selectedTB ← Empty; H_TB ← Empty; List_TB ← {}
2: while e ≤ δ_MaxTerm do
3:   List_TB ← CollectTBlocks(H_PRE)
4:   H_selectedTB ← ForkSelection(ctx, List_TB)
5:   CommitteeMsg(ctx, H_PRE, H_selectedTB, P_vote, e)
6:   H_TB ← CountMsg(ctx, H_PRE, P_vote)
7:   if H_selectedTB == H_TB then
8:     CommitteeMsg(ctx, H_PRE, H_TB, P_commit, e)
9:     H_TB ← CountMsg(ctx, H_PRE, P_commit)
10:    if H_TB ≠ TIMEOUT then
11:      return <COMMIT, e, H_TB>
12:  e++
13: end while
Consensus nodes monitor for *Spy* identities (defined in
Section VII-B) during the counting process, and evidence *Evd*
is collected and broadcast to the entire network as soon as a
*Spy* identity is discovered. For example, if an identity ID_i
discovers that an identity ID_j has voted for both H_a and H_b
in a certain term of the consensus voting, the evidence
Evd = <Pk_i, Pk_j, {H_a, Sig_a}, {H_b, Sig_b}, Sig> is saved
and broadcast. Once other nodes learn of and verify the evidence,
the node behind the *Spy* identity is blacklisted, and its votes
are ignored in subsequent consensus rounds. The consensus deposit
of the node is deducted at the end of the consensus. Therefore,
the best choice for a malicious node is to select only one *TB*
to vote for and to try to delay the consensus as long as
possible, or not to vote at all.
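A Go sketch of this equivocation check during counting: each identity's first vote per term and phase is remembered, and a later conflicting vote yields the evidence pair. The key and evidence encodings are assumptions of the sketch (Hash as in the struct sketch above).

```go
// voteKey identifies one identity's vote slot in a given term and phase.
type voteKey struct {
	term  int
	phase string // "vote" or "commit"
	id    string // serialized consensus identity / public key
}

// SpyDetector remembers first votes and flags conflicting second votes.
type SpyDetector struct{ seen map[voteKey]Hash }

// Record returns the earlier conflicting hash if id equivocates; the pair
// (prev, h) then forms the evidence Evd to broadcast and blacklist on.
func (d *SpyDetector) Record(term int, phase, id string, h Hash) (prev Hash, spy bool) {
	k := voteKey{term, phase, id}
	if old, ok := d.seen[k]; ok && old != h {
		return old, true
	}
	d.seen[k] = h
	return Hash{}, false
}
```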
CommitteeMsg() (Algorithm 4) is the algorithm used by members
of the *Committee_seed* to send messages. The type of message
sent, σ(M_type), is either M_vote or M_commit according to the
phase the consensus is in: M_vote is the message used in phase
P_vote to vote for the selected *TB*, while M_commit is the
message used in phase P_commit to commit the *TB* based on the
collected vote results.
**Algorithm 4** CommitteeMsg(): Broadcasting Messages by Committee Members
**Input:** ctx, Seed, H_TB, P_consensus, e
1: M_type ← GetMsgType(P_consensus)
2: index ← e / δ_MaxTerm + 1
3: List_ID_seed ← ctx.List_ID[Seed][P_consensus][index]
4: if !isEmpty(List_ID_seed) then
5:   SendMsg(M_type, H_TB, List_ID_seed, Sig_ctx.sk)
6: end if
CountMsg() (Algorithm 5) is used by the consensus nodes to
collect and count messages. If the number of received messages
exceeds the threshold τ_good, the hash of the corresponding
transaction block and its term are returned; if the threshold is
not exceeded within δ_term, TIMEOUT is returned.
**Algorithm 5** CountMsg(): Counting Messages
**Input:** ctx, Seed, M_type
**Output:** H_TB or TIMEOUT
1: start ← Time(); counts ← {}; voters ← {}
2: msgs ← CollectGlobalMsgs(M_type).iterator()
3: while True do
4:   if Time() > start + ctx.δ_Term then return TIMEOUT
5:   m ← msgs.next()
6:   P_consensus ← GetPhase(M_type)
7:   <ID_seed, H_TB> ← ProcessMsg(m)
8:   if !VerifyID(ID_seed, P_consensus) then continue
9:   if ID_seed ∈ voters[ID_seed.e][M_type] then continue
10:  counts[ID_seed.e][M_type][H_TB]++
11:  if counts[ID_seed.e][M_type][H_TB] ≥ τ_good then
12:    return H_TB
13: end while
**VII. ATTACK VECTORS AND SECURITY ANALYSIS**
*A. ATTACK VECTORS*
1) DOUBLE SPENDING ATTACK
Double-spending is the core problem faced by any cryptocurrency,
where an adversary holding $1 gives his $1 to two different
users [15]. DLattice prevents double-spending by the Fork Penalty
and the PANDA consensus.

**TABLE 2.** Potential problems in each step of the PANDA protocol and the
corresponding lemmas which resolve them.
First, each transaction block *TB* is required to reserve the
Fork Penalty (FP) at its creation. When a fork occurs, an honest
node selects a *TB* from the fork set and then resolves the fork
via the PANDA consensus. All identities participating in the
consensus have the opportunity to obtain the FP of the *TB*.
The algorithm for allocating the fork penalty is shown as
Algorithm 6. It is worth mentioning that the FP is obtained only
if the XOR distance between the consensus identity ID_seed and
the previous hash H_PRE is less than the threshold τ_penalty,
so the more *DLT* a consensus account holds, the greater its
probability of obtaining the penalty.
**Algorithm 6** ForkPenaltyAllocationCheck(): Allocation of Fork Penalty
**Input:** ctx, Seed, e
**Output:** True or False
1: flag ← False
2: List_ID_seed ← ctx.List_ID[Seed][e]
3: for i = 0; i < Len(List_ID_seed); i++
4:   dist ← List_ID_seed[i] ⊕ Seed
5:   if dist ≤ ctx.τ_penalty then flag ← True
6: end for
7: return flag
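A Go sketch of the eligibility test in steps 4-5 of Algorithm 6: the XOR distance between a 256-bit consensus identity and the seed (H_PRE) is compared against τ_penalty, both read as big integers. Type choices are assumptions of the sketch.

```go
// eligibleForPenalty reports whether XOR(id, seed) <= tau_penalty.
func eligibleForPenalty(id, seed Hash, tauPenalty *big.Int) bool {
	var dist Hash
	for i := range dist {
		dist[i] = id[i] ^ seed[i] // byte-wise XOR distance
	}
	return new(big.Int).SetBytes(dist[:]).Cmp(tauPenalty) <= 0
}
```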
2) SYBIL ATTACK
If there is no trusted public-key infrastructure in a system,
a malicious node can simulate many virtual nodes, thereby
creating a large set of Sybils; an entity could create hundreds
of nodes on a single machine [32].
However, since the identities used in the consensus process are
created in proportion to account balances, adding extra nodes to
the network gains an attacker no extra votes. A Sybil attack
therefore brings no advantage.
3) DDOS ATTACK
A distributed denial-of-service (DDoS) attack is a malicious
attempt to disrupt normal traffic of a targeted server, service
or network by overwhelming the target or its surrounding
infrastructure with a flood of Internet traffic [33].
According to the previous analysis, in a strongly synchronous
network, consensus is reached within the first term. In that
case, the members of *Committee_seed*(vote) and
*Committee_seed*(commit) for each consensus phase are selected
non-interactively based on the VRF; membership is revealed only
a posteriori, which prevents DDoS attacks and collusion among
committee members. If consensus is not reached in the first term
(possibly due to the randomness of committee generation or a
weakly synchronous network environment), the
*Committee_seed*(vote) may indeed suffer DDoS attacks due to its
exposure. However, first, such an attack does not prevent other
consensus committees from resolving other forks. Second, the
progress of the overall system is not affected, because only the
consensus committee resolving the fork generated by malicious
users suffers from the DDoS attack. Finally, thanks to the FP,
committee members under DDoS attack have an incentive to
overcome the attack as soon as possible and reach consensus,
so as to obtain the reward.
4) FLUCTUATION OF NODES
Generally, the number of consensus nodes shows a growing trend
over time, as shown by the green line in Fig. 6. Before time
t_1, the system is in the E_boot, and each *Candidate_seed* has
only one consensus identity; at time t_2, the system enters the
E_freedom from the E_boot, and each *Candidate_seed* may have
multiple consensus identities. If at time t_3 (on the blue curve)
the number of nodes in the system drops below C_B for various
reasons, the system remains in the E_freedom (it does not go
back to the E_boot). If the remaining active honest nodes still
hold a voting-power fraction of *h*, then although the actual
number of nodes is less than the size C_B of the
*BootCommittee*, the actual number of generated identities still
satisfies C_P ≈ C_E. According to Lemma 2, at most
C_P/3 − 1 consensus identities are controlled by Byzantine
nodes, so an effective consensus can still be reached.
**FIGURE 6.** Schematic diagram of fluctuation of Nodes.
*B. SECURITY ANALYSIS*
In this section, we provide a security analysis of how DLattice
prevents potential threats and works securely under the
assumptions clarified in Section IV. We also discuss why a
Byzantine adversary gains no significant advantage.
***Definition (Spy).*** *If the behavior of an identity is dishonest
and is discovered, we call this identity a Spy, and we can obtain
evidence of the dishonest behavior, which we call Evd, for example
voting for H_a at the same time as voting for H_b at the phase
P_vote, or other behaviors like that.*
***Definition (Ballot).*** *If a node receives at least τ_good votes
at the phase P_vote, we call this the observation of a Ballot.
The node can then only vote for or commit this Ballot at the phase
P_vote or P_commit in a later term [34].*
*Lemma 1 (Consensus Identity Generation):* In the Consensus
Identity Setup phase of the E_freedom, each consensus identity
is generated based on the voting power held by the node. The
number of calculation attempts required to generate a consensus
identity obeys an exponential distribution.
*Proof:* Let β denote the number of attempts required to generate
a consensus identity, and θ the probability of generating a
consensus identity in one attempt. The probability of generating
a legal consensus identity within β attempts is

P{x ≤ β} = 1 − P{x > β} = 1 − (1 − θ)^β = 1 − e^{β log(1−θ)}.

Since in general θ ≪ 1, we have log(1 − θ) ≈ −θ, and hence

P{x ≤ β} ≈ 1 − e^{−θβ}.

Therefore, the number of attempts required to generate a
consensus identity obeys an exponential distribution. Setting
θ = w_i/W, where w_i is the voting power of the node and W the
total voting power, we get

P{x ≤ β} = 1 − e^{−(w_i/W)β}.
When the total voting power is W = 12000 and the voting powers
of two consensus nodes are w_i = 10 and w_j = 20 respectively,
the probability of generating a legal consensus identity within
β = 400 calculations is shown in Fig. 7. The figure shows that
for a fixed β, the greater the voting power, the greater the
probability of generating a legal identity.
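Plugging the paper's numbers into the approximation, a quick Go check:

```go
// lemma1_check.go: P{x <= beta} ≈ 1 - e^{-(w/W)beta} with W = 12000,
// beta = 400, and voting powers w = 10 and w = 20.
package main

import (
	"fmt"
	"math"
)

func main() {
	const W, beta = 12000.0, 400.0
	for _, w := range []float64{10, 20} {
		p := 1 - math.Exp(-w/W*beta)
		fmt.Printf("w = %2.0f DLT: P ≈ %.3f\n", w, p)
	}
	// Prints P ≈ 0.283 for w = 10 and P ≈ 0.487 for w = 20: doubling the
	// voting power markedly raises the chance of producing an identity.
}
```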
**FIGURE 7.** Generation of consensus identities and the number of
attempts obey an exponential probability distribution, where the orange
curve represents *w* *i* **=** 20 *DLT*, and blue curve illustrates *w* *i* **=** 10 *DLT* .
*Lemma 2 (Number of Consensus Identities):* In the Committee
Formation phase of the E_freedom, the candidate consensus nodes
generate consensus identities based on their voting power and
establish a consensus committee *Committee_seed*(P_consensus).
The system guarantees that no multiple consensus results will be
reached in the consensus committee.
*Proof Sketch:* According to Lemma 1, the candidate consensus
nodes generate consensus identities based on their voting power.
The probability that a consensus account with voting power w_i
generates k consensus identities within β calculations is

P{x = k} = C(β, k) (w_i/W)^k (1 − w_i/W)^{β−k},

where C(β, k) denotes the binomial coefficient, with expectation
E = βw_i/W. According to Assumption 2, the DLT_good held by
honest nodes and the DLT_total of the system always satisfy

h = DLT_good / DLT_total.

If the total voting power is W = DLT_total, the voting power of
the honest nodes is w_honest = DLT_good and the voting power of
the malicious nodes is w_adversary = DLT_bad. The expected number
of consensus identities generated by the honest nodes is
E_honest = βw_honest/W, and by the malicious nodes
E_bad = βw_adversary/W, so the number of consensus identities
the system expects to generate is C_E ≈ β, of which the honest
identities account for roughly a fraction h of the total.
Assume that the maximum and minimum numbers of identities
generated by all consensus nodes, by the honest nodes and by the
malicious nodes are all_max, all_min, h_max, h_min, a_max and
a_min, respectively. With

P_unit = 1/W,  P_honest = w_honest × P_unit,  P_adversary = w_adversary × P_unit,

we can list the following equations:

P{x < h_max} = Σ_{k=0}^{h_max} C(β/P_honest, k) (P_unit)^k (1 − P_unit)^{β/P_honest − k}

P{x < a_max} = Σ_{k=0}^{a_max} C(β/P_adversary, k) (P_unit)^k (1 − P_unit)^{β/P_adversary − k}

P{x < all_max} = Σ_{k=0}^{all_max} C(β/P_unit, k) (P_unit)^k (1 − P_unit)^{β/P_unit − k}

P{h_min < x ≤ β/P_honest} = Σ_{k=h_min}^{β/P_honest} C(β/P_honest, k) (P_unit)^k (1 − P_unit)^{β/P_honest − k}

P{a_min < x ≤ β/P_adversary} = Σ_{k=a_min}^{β/P_adversary} C(β/P_adversary, k) (P_unit)^k (1 − P_unit)^{β/P_adversary − k}

P{all_min < x ≤ β/P_unit} = Σ_{k=all_min}^{β/P_unit} C(β/P_unit, k) (P_unit)^k (1 − P_unit)^{β/P_unit − k}
We then require each of these probabilities to be overwhelming:

P{x < h_max} ≥ 1 − 2^{−λ}
P{x < a_max} ≥ 1 − 2^{−λ}
P{h_min < x ≤ β/P_honest} ≥ 1 − 2^{−λ}
P{a_min < x ≤ β/P_adversary} ≥ 1 − 2^{−λ}
P{x < all_max} ≥ 1 − 2^{−λ}
P{all_min < x ≤ β/P_unit} ≥ 1 − 2^{−λ}
**FIGURE 8.** The maximum and minimum number of all identities *all*, honest identities *h*, and malicious identities *a* that the
system may generate with different security parameters.
When β is equal to 100, 200, 300, 400 and 500, respectively, the
calculated results for different security parameters λ are shown
in Fig. 8. At the same time, the relationships among all_max,
h_min and a_max should satisfy:

ID_good > 2 × a_max
ID_good ≤ h_min
2 × ID_good > all_max

The system requires at least ID_good honest identities to avoid
multiple consensus results being reached during the consensus
process. The value of ID_good depends on the security parameter.
When the security parameter is λ = 20, the most suitable result
is β = 500 and ID_good = 306; when λ = 15, the result is β = 400
and ID_good = 250; when λ = 10, β = 200 and ID_good = 123. These
values ensure that no multiple consensus results will be reached
at the phase P_commit.
*Lemma 3 (Proof of Safety):* In the consensus process, if the
consensus identities reach a consensus on a TB_send in the fork
set {TB_send, ..., TB'_send}, no consensus will be reached on
any other TB'_send.
*Proof:* C_P represents the actual size of the consensus
committee. X indicates the number of honest identities that have
committed TB_send in term e, where 1 ≤ X ≤ ID_good; at the phase
P_vote in term e + 1, these X honest identities must continue to
vote as in term e due to the existence of the *Ballot*. Y stands
for the number of malicious identities, which can do anything,
with Y < ID_good/2. Z indicates the number of remaining
identities. These parameters satisfy X + Y + Z = C_P. Z_0
indicates the number of the remaining identities that have voted
at the phase P_vote in term e, so Z − Z_0 is the number of
identities that have not yet voted.
Once X identities have committed TB_send, the remaining Y + Z
identities must not be able to reach a consensus on a TB'_send;
that is, we must prove

Y + Z − Z_0 < ID_good.

We prove this by contradiction. Assume that

Y + Z − Z_0 ≥ ID_good ⇒ Y + Z − Z_0 + X ≥ ID_good + X
⇒ C_P − Z_0 ≥ ID_good + X ⇒ X + Z_0 ≤ C_P − ID_good.

Since C_P = Y + ID_good < 3 × ID_good/2, it follows that
X + Z_0 < ID_good/2.
Now assume that all malicious identities have voted for TB_send
in term e; then X + Y + Z_0 > ID_good, and because Y < ID_good/2,
we get X + Z_0 > ID_good/2, which is inconsistent with the bound
above. If instead only some malicious identities Y' have voted
for TB_send in term e, then X + Y' + Z_0 ≥ ID_good, and because
Y' < Y < ID_good/2, again X + Z_0 > ID_good/2, which is also
inconsistent with the bound above.
*Lemma 4 (Proof of Liveness):* In the consensus phase, if a
consensus identity is locked on *Ballot* of a *TB* in the *e* term,
′ ′
when term *e* *> e*, if the node find a new *Ballot*, and the
*Ballot* in the *e* term is unlocked while the new *Ballot* ′ is
locked, thus ensuring the continuation of the consensus.
*Proof Sketch:* At phase P_commit, because of the Ballot lock, some nodes may commit to Ballot while other nodes commit to Ballot′; if the votes of the two sides are exactly balanced, no consensus is reached. At phase P_vote, if a node finds a Ballot′ of a higher term, this indicates that the system as a whole is now more inclined to reach consensus on that Ballot′, so the node unlocks the Ballot of the lower term and locks the Ballot′ of the higher term. At the next P_commit, the nodes then commit to the newly locked Ballot′.
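A minimal sketch of this lock/unlock rule follows; the types and field names are our own invention rather than DLattice's actual code:

```go
package main

import "fmt"

// Ballot records the term in which a node locked a candidate block.
type Ballot struct {
	Term    uint64
	BlockID string
}

// Node keeps at most one locked ballot at a time.
type Node struct {
	locked *Ballot
}

// OnBallot applies the liveness rule of Lemma 4: a ballot from a higher
// term replaces (unlocks) any ballot locked in a lower term.
func (n *Node) OnBallot(b Ballot) {
	if n.locked == nil || b.Term > n.locked.Term {
		n.locked = &b // unlock the old ballot, lock the new one
	}
}

func main() {
	n := &Node{}
	n.OnBallot(Ballot{Term: 3, BlockID: "TB_send"})
	n.OnBallot(Ballot{Term: 5, BlockID: "TB'_send"}) // higher term wins
	fmt.Println(n.locked.Term, n.locked.BlockID)     // 5 TB'_send
}
```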
**VIII. IMPLEMENTATION AND EVALUATION**
We implement DLattice, and the goals of our evaluation are twofold: first, to measure the latency and throughput of DLattice as the network size increases; second, to compare DLattice with related consensus protocols, including Bitcoin, Ethereum, and PBFT.
*A. IMPLEMENTATION*
We implement a prototype of DLattice in Golang [35], consisting of approximately 4,000 lines of code. The gossip network is built on the go-libp2p library (go-libp2p-pubsub) [36], in which each user selects a small random set of peers to gossip messages to. Elliptic Curve Cryptography (ECC) is used for asymmetric encryption, the AES algorithm for symmetric encryption, and SHA-256 as the cryptographic hash function. We use the VRF implementation outlined by Goldberg [37], and signatures are produced with the Elliptic Curve Digital Signature Algorithm (ECDSA).
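For illustration, hashing and signing a serialized transaction block with Go's standard library looks roughly as follows; the block encoding is a placeholder, since the paper does not specify DLattice's wire format:

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/sha256"
	"fmt"
)

func main() {
	// Generate an ECDSA key pair on the P-256 curve.
	priv, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}

	// SHA-256 digest of a (placeholder) serialized TB_send block.
	block := []byte("previous-hash|account|amount|timestamp")
	digest := sha256.Sum256(block)

	// Sign the digest and verify it with the public key.
	sig, err := ecdsa.SignASN1(rand.Reader, priv, digest[:])
	if err != nil {
		panic(err)
	}
	fmt.Println("signature valid:", ecdsa.VerifyASN1(&priv.PublicKey, digest[:], sig))
}
```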
**FIGURE 9.** Latency to reach consensus using PANDA (a) and PBFT (b), respectively, with 50 to 500 consensus nodes. The broken line in (a) represents the number of identities participating in the PANDA consensus.
**TABLE 3.** Implementation Parameters.
Table 3 shows the parameters used in the DLattice prototype; we mathematically validate these parameters (λ, C_E, τ_good^freedom, etc.) in Section VII. h = 4/5 means that an adversary would need to control 20% of DLattice's currency in order to create a fork. δ_term should be high enough to allow users to receive messages from committee members, but low enough to allow DLattice to make progress if it does not hear from sufficiently many committee members. We conservatively set δ_term to 20 seconds.
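The role of δ_term can be sketched as a per-term timeout; the channel-based interface below is our simplification, not DLattice's actual API:

```go
package main

import (
	"fmt"
	"time"
)

// collectVotes waits up to deltaTerm for enough committee messages; if the
// deadline passes first, the node gives up on this term and moves on.
func collectVotes(votes <-chan string, need int, deltaTerm time.Duration) bool {
	deadline := time.After(deltaTerm)
	got := 0
	for {
		select {
		case <-votes:
			got++
			if got >= need {
				return true // heard from sufficiently many members
			}
		case <-deadline:
			return false // advance to the next term
		}
	}
}

func main() {
	votes := make(chan string)
	// In production deltaTerm would be 20s; a short demo value is used here.
	fmt.Println(collectVotes(votes, 306, 100*time.Millisecond)) // times out: false
}
```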
*B. EVALUATION*
We run several experiments with different settings on AliCloud ECS servers to measure the latency and throughput of DLattice, varying the number of consensus nodes in the network from 50 to 500 and using up to 25 ECS instances. Each AliCloud ECS instance is shared by at most twenty nodes and has 4 AliCloud vCPUs and 8 GB of memory.
1) LATENCY
Latency is the amount of time from the creation of a transaction until the initial confirmation of its acceptance by the network [38]. In the livenet, the latency of Bitcoin is 576.4 seconds (from Block Height 556800 to 556810) [39], and that of Ethereum is 12 seconds (from Block Height 7002602 to 7002612) [40].
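Measured this way, latency is simply the wall-clock interval between creating a transaction and observing its first confirmation; a minimal sketch with a hypothetical confirmation hook:

```go
package main

import (
	"fmt"
	"time"
)

// submitAndConfirm is a stand-in for broadcasting a transaction and blocking
// until the network first confirms it; the delay here is illustrative.
func submitAndConfirm() { time.Sleep(12 * time.Millisecond) }

func main() {
	start := time.Now()
	submitAndConfirm()
	fmt.Printf("latency: %v\n", time.Since(start))
}
```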
Transaction latency in DLattice is instantaneous, so we consider only the consensus latency in this section. For the Boot Epoch of DLattice, we implement a consensus algorithm similar to PBFT. Its consensus latency is shown in Figure 9(b), where the reported latency includes both the time to generate the consensus identity and the time to reach consensus; as the number of consensus nodes increases from 50 to 500, this time continues to rise. Similarly, PANDA is implemented for the Freedom Epoch of DLattice. In the experiment, as the number of nodes increases from 50 to 500 and the corresponding voting weight decreases from 240 to 24, the consensus latency behaves as shown in Figure 9(a).
As shown in Figure 10(a), because a PANDA consensus message (about 0.86 kb) is larger than a PBFT message (about 0.4 kb), the latency of PBFT is smaller than that of PANDA when there are fewer than 250 consensus nodes. As the number of nodes increases, the number of identities participating in the PANDA consensus oscillates around the expected number of consensus identities, as shown by the broken line in Figure 9(a), while the number of consensus identities in PBFT grows with the number of nodes, so the gap between the two latencies widens. Figure 10(a) compares the latency of DLattice with both a Boot Epoch and a Freedom Epoch (PBFT is used in the Boot Epoch when there are fewer than 200 consensus nodes, and PANDA is used in the Freedom Epoch when there are more than 200 nodes), the latency of DLattice-PBFT (PBFT only), and that of DLattice-PANDA (PANDA only).
**FIGURE 10.** (a) Latency to reach consensus in DLattice with 50 to 500 consensus nodes; the blue curve represents DLattice, the red curve DLattice-PBFT, and the green curve DLattice-PANDA. (b) Throughput of DLattice with 50 to 500 consensus nodes.
**TABLE 4.** Comparison between DLattice and existing blockchain protocols in academia and industry.
2) THROUGHPUT
One of the end goals of blockchain is to replace current infrastructure (such as the financial back ends of many institutions around the world, which handle thousands of transactions per second (TPS)); to prove its viability, it will need to scale to meet or exceed that rate. A higher throughput would also open the door to more interesting and intensive applications of blockchain technology [41]. In the livenet, the Bitcoin network processes up to 3 TPS (from Block Height 556800 to 556810) [39] and Ethereum up to 127 TPS (from Block Height 7002602 to 7002612) [40]. Nano claims a theoretical 7000 TPS [42] and achieved an experimental 756 TPS in a beta network stress test [43].
As described above, the transaction block types of DLattice include TB_send, TB_receive, TB_auth, etc. A TB_receive has an average size of about 0.5 kb, the size of a TB_auth varies with the number of authorizations, and a TB_send averages about 0.7 kb. During the experiment, we therefore measure the time required to receive 25,000 sending transaction blocks; all numbers are averaged over 10 runs. We start with a network of 50 consensus nodes and scale up to 500 nodes in the final setting. The throughput of DLattice is shown in Figure 10(b). As the number of DLattice nodes deployed per ECS instance increases, the throughput of DLattice decreases due to hardware constraints. However, because each account sends its transaction blocks asynchronously with respect to other accounts, there is no need to wait for miners to pack transactions as in traditional blockchains. Although DLattice does not reach Nano's 7000 TPS (Nano's blocks are smaller, and DLattice records more information for data tokenization), it is still close to 1200 TPS when fewer than four nodes are deployed per ECS instance (in a real environment, each machine is likely to run only one DLattice node). We will further optimize throughput in future experiments and practical scenarios.
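The throughput figure then follows directly from the counted blocks; a small sketch with an illustrative elapsed time:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// 25,000 TB_send blocks received over a measured interval, averaged
	// over 10 runs; the elapsed time here is an illustrative value.
	const blocks = 25000
	elapsed := 21 * time.Second // ~20.8s would correspond to ~1200 TPS
	tps := float64(blocks) / elapsed.Seconds()
	fmt.Printf("throughput: %.0f TPS\n", tps)
}
```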
3) COMPARISON TO RELATED SYSTEMS
The comparison between DLattice and existing blockchain consensus protocols is shown in Table 4. Compared with the traditional Nakamoto consensus algorithm, DLattice with the PANDA consensus avoids the problem of high energy consumption; compared with traditional BFT consensus, the consensus identity is a posteriori, that is, a randomly elected consensus committee member can prove its identity without revealing it in advance. In addition, as the number of nodes increases, there is no significant change in network bandwidth consumption. The round-based Algorand lacks economic incentives, and its signature data is large, which places strict demands on network bandwidth.
The chain-based Ouroboros [44] is established in a strong
synchrony network, and its slot leader is selected in advance (these drawbacks have been addressed by [45]). Inspired by Nano, DLattice builds a Double-DAG architecture dedicated to the tokenization of data. Compared with Nano, DLattice slightly sacrifices transaction processing speed and throughput, but the random selection of consensus identities reduces the risk of DDoS attacks and the possibility of collusion among nodes. Moreover, if the consensus process runs in a strongly synchronous network, consensus can be reached within the first term.
**IX. CONCLUSION AND FUTURE WORK**
In this paper, we propose a new permission-less blockchain,
called DLattice, with a Double-DAG architecture where each
account has its own Account-DAG and all accounts make
up a greater Node-DAG structure. DLattice parallelizes the
growth of each account’s Account-DAG, each of which is not
influenced by other accounts’ irrelevant transactions, resulting in fast transactions with minimal overhead. The core of
DLattice is the DPoS-BA-DAG (PANDA) protocol, which helps users reach consensus with low latency and only when forks are observed. Based on the proposed DLattice structure, we introduce a series of methods for data tokenization, including data assembling, data anchoring, and data authorization. DLattice tokenizes data as on-chain assets for sale and transaction, enabling them to circulate and transfer securely and efficiently. Through security analysis, we demonstrate that DLattice can prevent attack vectors such as double-spending and Sybil attacks. Experimental results show that DLattice reaches consensus in about 10 seconds, achieves the desired throughput, and incurs almost no penalty for scaling to more users.
The shortcomings of this paper are as follows: i) the current DLattice prototype anchors only the digital fingerprint of a data asset, while the raw data is stored in IPFS; due to the lack of incentives, the data may be lost if IPFS nodes accidentally go offline. The function of the consensus node shall be optimized in a follow-up study so that it not only participates in consensus but is also able to store data assets; ii) smart contracts are not currently supported; iii) in the consensus process, if consensus is not reached in the first term, the identities of the corresponding consensus committee members will be exposed and vulnerable to DDoS attacks. These issues are the focus of our follow-up research.
In future studies, we will consider introducing DLattice to healthcare and the Internet of Things to achieve the tokenization of medical and IoT data. One possible application is managing chronic diseases by tokenizing medical examination data, health data collected by wearable devices, and exercise prescriptions issued by doctors, building on our previous experience in the field of health informatics. In this way, the physiological and exercise data of users are effectively protected while being asserted, and the data can be efficiently shared and transferred among scientific research institutions, hospitals, health equipment manufacturers, and even insurance companies.
**REFERENCES**
[1] E. Zhou. *China's Biggest Hotel Operator Leaks 500m Customer Records in Data Breach*. Accessed: Aug. 12, 2018. [Online]. Available: https://www.mingtiandi.com/real-estate/finance-real-estate/huazhu-hotels-leaks-500m-customer-records-in-data-breach/
[2] D. Ingram. *Facebook Says Data Leak Hits 87 Million Users, Widening Privacy Scandal*. Accessed: Aug. 15, 2018. [Online]. Available: https://www.reuters.com/article/us-facebook-privacy/facebook-says-data-leak-hits-87-million-users-widening-privacy-scandal-idUSKCN1HB2CM
[3] *Braggadocio, Information Control, and Fear: Life Inside a Brigham Stem Cell Lab Under Investigation*. Accessed: Oct. 20, 2018. [Online]. Available: https://retractionwatch.com/2014/05/30/braggadacio-information-control-and-fear-life-inside-a-brigham-stem-cell-lab-under-investigation/
[4] T. Zhou, X. Li, and H. Zhao, ‘‘EverSSDI: Blockchain-Based Framework
for Verification, Authorization and Recovery of Self-Sovereign Identity
using Smart Contracts,’’ *Int. J. Comput. Appl. Technol.*, to be published.
[5] N. Sheng *et al.*, ‘‘Data capitalization method based on blockchain smart
contract for Internet of Things,’’ *J. Zhejiang Univ. (Engineering Science)*,
vol. 52, no. 11, pp. 2150–2153, Nov. 2018.
[6] G. Wood. *Ethereum: A Secure Decentralized Generalized Transaction Ledger*. Accessed: Sep. 20, 2018. [Online]. Available: https://ethereum.github.io/yellowpaper/paper.pdf
[7] T.-T. Kuo, H.-E. Kim, and L. Ohno-Machado, ‘‘Blockchain distributed
ledger technologies for biomedical and health care applications,’’ *J. Amer.*
*Med. Inform. Assoc.*, vol. 24, no. 6, pp. 1211–1220, 2017.
[8] A. Dubovitskaya, Z. Xu, S. Ryu, M. Schumacher, and F. Wang, ‘‘Secure
and trustable electronic medical records sharing using blockchain,’’ in
*Proc. AMIA Annu. Symp.*, 2017, pp. 650–659.
[9] S. Wang, Y. Zhang, and Y. Zhang, ‘‘A blockchain-based framework for data sharing with fine-grained access control in decentralized storage systems,’’ *IEEE Access*, vol. 6, pp. 38437–38450, 2018.
doi: 10.1109/ACCESS.2018.2851611.
[10] T. T. A. Dinh, R. Liu, M. Zhang, G. Chen, B. C. Ooi, and J. Wang,
‘‘Untangling blockchain: A data processing view of blockchain systems,’’
*IEEE Trans. Knowl. Data Eng.*, vol. 30, no. 7, pp. 1366–1385, Jul. 2018.
[11] *Multichain: Open Platform for Blockchain Applications* . Accessed:
Sep. 11, 2018. [Online]. Available: https://www.multichain.com/
[12] S. Popov. *The Tangle*. Accessed: Sep. 2018. [Online]. Available: https://assets.ctfassets.net/r1dr6vzfxhev/2t4uxvsIqk0EUau6g2sw0g/45eae33637ca92f85dd9f4a3a218e1ec/iota1_4_3.pdf
[13] S. Nakamoto. *Bitcoin: A Peer-to-Peer Electronic Cash System*. Accessed: Sep. 20, 2018. [Online]. Available: https://bitco.in/pdf/bitcoin.pdf
[14] I. Eyal, A. E. Gencer, E. G. Sirer, and R. van Renesse, ‘‘Bitcoin-NG: A scalable blockchain protocol,’’ in *Proc. 13th USENIX Conf. Netw. Syst. Design Implement.* Berkeley, CA, USA: USENIX Association, 2016, pp. 45–59.
[15] Y. Gilad, R. Hemo, S. Micali, G. Vlachos, and N. Zeldovich, ‘‘Algorand:
Scaling byzantine agreements for cryptocurrencies,’’ in *Proc. 26th Symp.*
*Oper Syst. Princ.*, 2017, pp. 51–68.
[16] S. King and S. Nadal. (2012). *PPCoin: Peer-to-Peer Crypto-Currency With Proof-of-Stake*. Accessed: Sep. 20, 2018. [Online]. Available: https://peercoin.net/assets/paper/peercoin-paper.pdf
[17] M. Castro and B. Liskov, ‘‘Practical Byzantine fault tolerance,’’ in *Proc. 3rd Symp. Oper. Syst. Design Implement.* Berkeley, CA, USA: USENIX Association, 1999, pp. 173–186.
[18] L. Luu, V. Narayanan, C. Zheng, K. Baweja, S. Gilbert, and P. Saxena,
‘‘A secure sharding protocol for open blockchains,’’ in *Proc. ACM SIGSAC*
*Conf. Comput. Commun. Secur.*, 2016, pp. 17–30.
[19] BitFury Group. *Proof of Stake Versus Proof of Work White Paper*. Accessed: Sep. 25, 2018. [Online]. Available: https://bitfury.com/content/downloads/pos-vs-pow-1.0.2.pdf
[20] *Bitcoin Average Confirmation Time* . Accessed: Mar. 25, 2018. [Online].
Available: https://blockchain.info/charts/avg-confirmation-time
[21] V. Zamfir. *Introducing Casper ‘the Friendly Ghost’*. Accessed: Nov. 2018. [Online]. Available: https://ethereum.github.io/blog/2015/08/01/introducing-casper-friendly-ghost/
[22] M. Pease, R. Shostak, and L. Lamport, ‘‘Reaching agreement in the presence of faults,’’ *J. ACM*, vol. 27, no. 2, pp. 228–234, Apr. 1980.
[23] L. Lamport, R. Shostak, and M. Pease, ‘‘The Byzantine generals problem,’’
*ACM Trans. Program. Lang. Syst.*, vol. 4, no. 3, pp. 382–401, Jul. 1982.
[24] E. Buchman, J. Kwon, and Z. Milosevic. (2018). ‘‘The latest gossip on BFT
consensus.’’ [Online]. Available: https://arxiv.org/abs/1807.04938
39286 VOLUME 7, 2019
-----
T. Zhou *et al.* : DLattice: Permission-Less Blockchain Based on DPoS-BA-DAG Consensus for Data Tokenization
[25] L. Baird. *The Swirlds Hashgraph Consensus Algorithm: Fair, Fast,*
*Byzantine Fault Tolerance* . Accessed: Sep. 05, 2018. [Online]. Available:
http://www.swirlds.com/downloads/SWIRLDS-TR-2016-01.pdf
[26] C. LeMahieu. *Nano: A Feeless Distributed Cryptocurrency Network* .
Accessed: Aug. 6, 2018. [Online]. Available: https://nano.org/en/
whitepaper
[27] X.-P. Min, Q.-Z. Li, L.-J. Kong, S.-D. Zhang, Y.-Q. Zheng, and
Z.-S. Xiao, ‘‘Permissioned blockchain dynamic consensus mechanism
based multi-centers,’’ *Chin. J. Comput.*, vol. 41, no. 5, pp. 1005–1020,
2018. doi: 10.11897/SP.J.1016.2018.01005.
[28] C. Wong. *Patricia Tree* . Accessed: Mar. 25, 2018. [Online]. Available:
https://github.com/ethereum/wiki/wiki/Patricia-Tree
[29] *Red-Black Merkle Tree* . Accessed: Nov. 17, 2018. [Online]. Available:
https://github.com/amiller/redblackmerkle.
[30] J. Benet. *IPFS - Content Addressed, Versioned, P2P File System*. Accessed: Oct. 14, 2018. [Online]. Available: https://ipfs.io/ipfs/QmR7GSQM93Cx5eAg6a6yRzNde1FQv7uL6X1o4k7zrJa3LX/ipfs.draft3.pdf
[31] S. Micali, M. Rabin, and S. Vadhan, ‘‘Verifiable random functions,’’ in
*Proc. 40th Annu. IEEE Symp. Found. Comput. Sci. (FOCS)*, New York,
NY, USA, Oct. 1999, pp. 120–130.
[32] J. R. Douceur, ‘‘The Sybil attack,’’ in *Proc. 1st Int. Workshop Peer-Peer*
*Syst. (IPTPS)*, Cambridge, MA, USA, Mar. 2002, pp. 251—260.
[33] *DDOS* . Accessed: Oct. 20, 2018. [Online]. Available:
https://en.wikipedia.org/wiki/Denial-of-service_attack
[34] D. Ojha. *Byzantine Consensus Algorithm* . Accessed: Sep. 11, 2018.
[Online]. Available: https://github.com/tendermint/tendermint/wiki/
Byzantine-Consensus-Algorithm
[35] *Golang* . Accessed: Jan. 3, 2019. [Online]. Available: https://golang.org/
[36] *Libp2p*. Accessed: Dec. 15, 2018. [Online]. Available: https://github.com/libp2p
[37] *A VRF implementation in golang* . Accessed: Dec. 22, 2018. [Online].
Available: https://github.com/r2ishiguro/vrf/
[38] A. Grigorean. *Latency and Finality in Different Cryptocurrencies* .
Accessed: Jan. 4, 2019. [Online]. Available: https://hackernoon.com/
latency-and-finality-in-different-cryptocurrencies-a7182a06d07a
[39] *Bitcoin* *Explorer* . Accessed: Jan. 3, 2019. [Online]. Available:
https://btc.com/
[40] *Ethereum Explorer* . Accessed: Jan. 3, 2018. [Online]. Available:
https://etherscan.io/
[41] *Zilliqa: A High Throughput Scalable Blockchain?* Accessed: Jan. 4, 2019. [Online]. Available: https://medium.com/@curiousinvestor/zilliqa-a-high-throughput-scalable-blockchain-60e355d873c5
[42] A. Anand. *Nano Embraces Speed, Sees Transaction Rate Jump to 750 TPS*. Accessed: Jan. 4, 2019. [Online]. Available: https://ambcrypto.com/nano-embraces-speed-sees-transaction-rate-jump-to-750-tps/
[43] *Nano*. Accessed: Jan. 4, 2019. [Online]. Available: https://nano.org/
[44] A. Kiayias, A. Russell, B. David, and R. Oliynykov, ‘‘Ouroboros: A provably secure proof-of-stake blockchain protocol,’’ in *Proc. Annu. Int. Cryptol. Conf.* Cham, Switzerland: Springer, 2017, pp. 357–388.
[45] B. David *et al.* *Ouroboros Praos: An Adaptively-Secure, Semi-Synchronous Proof-of-Stake Blockchain*. Accessed: Mar. 25, 2018. [Online]. Available: https://link.springer.com/chapter/10.1007/978-3-319-78375-8_3
[46] T. Hanke, M. Movahedi, and D. Williams. *Dfinity Whitepaper*. Accessed: Nov. 13, 2018. [Online]. Available: https://dfinity.org/static/dfinity-consensus-0325c35128c72b42df7dd30c22c41208.pdf
[47] I. Grigg. *EOS Whitepaper* . Accessed: Oct. 15, 2018. [Online]. Available:
https://eos.io/documents/EOS_An_Introduction.pdf
[48] *The Bitshares Blockchain* . Accessed: Oct. 25, 2018. [Online]. Available:
https://www.bitshares.foundation/download/articles/BitSharesBlockchain.
pdf
[49] IBM Research. *Blockchain/DLT: A Game-Changer in Managing MNCs Intercompany Transactions*. Accessed: Oct. 28, 2018. [Online]. Available: https://www.ibm.com/think/fintech/wp-content/uploads/2018/03/IBM_Research_MNC_ICA_Whitepaper.pdf
TONG ZHOU received the B.S. degree in software engineering from Hubei University, Wuhan,
China, in 2013, and the M.S. degree in information systems and signal processing from Anhui
University, Anhui, China, in 2016. He is currently
pursuing the Ph.D. degree in computer applied
technology with the University of Science and
Technology of China, Hefei, China. His research
interests include blockchain technology, consensus algorithm, and health informatics.
XIAOFENG LI received the B.S. degree from
Tianjin University, in 1987. He is currently a
Research Professor with the Hefei Institutes of
Physical Science, Chinese Academy of Sciences
(CASHIPS), and a Doctoral Supervisor with the
University of Science and Technology of China.
He is also the Director of the Internet Network
Information Center, CASHIPS, the Vice Chairman
of the Hefei Branch of Association for Computing
Machinery (ACM), and the Vice Chairman of the
Anhui Radio Technology Association. His current research interests include blockchain technology, computer applied technology, measurement and control technology, and automation instruments.
HE ZHAO received the B.S. and M.S. degrees
from the Nanjing University of Posts and Telecommunications, in 2007 and 2010, respectively, and
the Ph.D. degree from the University of Science
and Technology of China, in 2016. He was with Huawei Technologies from 2010 to 2011.
He is currently a Senior Engineer with the Hefei
Institutes of Physical Science, Chinese Academy
of Sciences. His research interests include com
puter networking, health informatics, blockchain
technology, and software architecture.
| ERROR: type should be string, got "https://doi.org/10.1007/s10796 023 10411 8\n\n# Examining the Acceptance of Blockchain by Real Estate Buyers and Sellers\n\n**William Yeoh[1]** **· Angela Siew Hoong Lee[2] · Claudia Ng[2] · Ales Popovic[3] · Yue Han[4]**\n\nAccepted: 24 May 2023\n© The Author(s) 2023\n\n**Abstract**\nBuying and selling real estate is time consuming and labor intensive, requires many intermediaries, and incurs high fees.\nBlockchain technology provides the real estate industry with a reliable means of tracking transactions and increases trust\nbetween the parties involved. Despite the benefits of blockchain, its adoption in the real estate industry is still in its\ninfancy. Therefore, we investigate the factors that influence the acceptance of blockchain technology by buyers and sell\ners of real estate. A research model was designed based on the combined strengths of the unified theory of technology\nacceptance and use model and the technology readiness index model. Data were collected from 301 real estate buyers and\nsellers and analyzed using the partial least squares method. The study found that real estate stakeholders should focus on\npsychological factors rather than technological factors when adopting blockchain. This study adds to the existing body of\nknowledge and provides valuable insights to real estate stakeholders on how to implement blockchain technology.\n\n**Keywords Blockchain · Real estate · Adoption · Factors · Partial least squares method**\n\n\n### 1 Introduction\n\nReal estate is very different from other assets due to high\ntransaction costs, long-term commitment, regulations,\nand other constraints (Dijkstra, 2017). Buying or selling\nreal estate is often time consuming and labor intensive,\nrequires multiple intermediaries, and incurs high fees.\nHigh expenses include costs associated with time delays,\n\nWilliam Yeoh\n\[email protected]\n\nAngela Siew Hoong Lee\[email protected]\n\nClaudia Ng\[email protected]\n\nAles Popovic\[email protected]\n\nYue Han\[email protected]\n\n1 Deakin University, Geelong, Australia\n\n2 Sunway University, Sunway City, Malaysia\n\n3 NEOMA Business School, Mont-Saint-Aignan, France\n\n4 Le Moyne College, Syracuse, USA\n\n\noutdated technologies, and complex data-sharing mecha\nnisms (Latifi et al., 2019). In addition, the real estate indus\ntry faces information costs, such as the cost of coordinating\ntrusted information between dispersed parties in relation\nto contract enforcement information (Sinclair et al., 2022).\nBlockchain technology could help the real estate industry\neliminate inefficiencies and inaccuracies (Deloitte, 2019).\nAccording to transaction cost theory, adopting blockchain\ntechnology has the potential to lower real estate transaction\ncosts and enable lower ex-post transaction costs by reducing\nverification time (Dijkstra, 2017). Combining transparent\nreal estate markets with more effective real estate transac\ntion processes and lower transaction costs could create more\nliquid real estate markets (Dijkstra, 2017).\n\nBlockchain is a decentralized network that provides a\n\nhigh level of transparency and trust without the need for\na central authority to vouch for accuracy (Akram et al.,\n2020; Kamble et al., 2019). The risk of fraud is mitigated\nby cryptographic signatures that make it virtually impos\nsible to alter or forge anything registered on the blockchain\n(Mansfield-Devine, 2017). 
Blockchain can reduce effort\nwhile increasing the efficiency and effectiveness of real\nestate transactions. It provides the real estate industry with a\nreliable and transparent means to seamlessly track and trace\nprocesses (Compton & Schottenstein, 2017). Karamitsos\n\n\n-----\n\net al. (2018) concluded that blockchain for the real estate\nindustry could increase trust between companies involved\nin the real estate ecosystem and eliminates the need for\nintermediaries because transactions are automatically veri\nfied and validated.\n\nExisting literature explores the benefits and applica\n\ntions of blockchain for the real estate industry (e.g., Kona\nshevych, 2020; Latifi et al., 2019; Sinclair et al., 2022;\nWouda & Opdenakker 2019; Yapa et al., 2018). However,\ndespite numerous studies examining the benefits of block\nchain, there is little research on how buyers and sellers per\nceive and accept blockchain technology in the real estate\nindustry. Given that blockchain is an emerging technology\n(Akram et al., 2020), the real estate industry is still in the\nearly stages of its adoption. More targeted studies need\nto be conducted on the adoption of blockchain in the real\nestate industry (Saari et al., 2022) because understanding\nblockchain adoption can help alleviate the concerns of real\nestate buyers and sellers, leading to broader adoption in the\nindustry. In addition, this understanding can help real estate\nstakeholders and policymakers make informed decisions\nabout how to allocate scarce resources and create relevant\npolicies to enable blockchain implementation (Alalwan et\nal., 2017; Martins et al., 2014). To address this gap in the\nliterature, we aim to investigate the factors that influence\nthe behavioral intentions of real estate buyers and sellers in\nrelation to the use of blockchain technology. We synergisti\ncally combine the unified theory of acceptance and use of\ntechnology (UTAUT) model and the technology readiness\nindex (TRI) model to develop a research model and test it\nwith real estate buyers and sellers through an online survey.\n\nThis work provides both theoretical and practical contri\n\nbutions. It is one of the first studies to investigate the adop\ntion of blockchain technology in the real estate industry. It\nfills a gap in the literature by providing a comprehensive\nunderstanding of new technology adoption by integrating\nthe UTAUT and TRI models. The model presented in this\npaper demonstrates the importance of psychological fac\ntors in technology acceptance studies and provides a new\nresearch stream for future studies. The implications for\npractitioners are threefold. First, a greater focus on psycho\nlogical factors positively influences technology acceptance.\nSecond, emphasizing the holistic benefits of technology in\nan ecosystem promotes technology acceptance. Third, form\ning a consortium to facilitate the technology implementa\ntion environment is beneficial when stakeholders consider\nnew technologies.\n\nThe remainder of this paper is organized as follows. Sec\n\ntion 2 provides an overview of blockchain for real estate and\nintroduces the theoretical basis of this research. Section 3\nprovides the research model that connects the two theories\nand the hypotheses. The research method is then described\n\n\nin Sect. 4, followed by the analysis of the results in Sect. 5.\nSection 6 discusses the main findings of the study, the con\ntributions of these findings to the literature, and the practical\nimplications of the findings. 
Section 7 concludes the paper\nand suggests avenues for future research.\n\n### 2 Background\n\n#### 2.1 Blockchain Technology and the real Estate Industry\n\nUnlike traditional databases that are stored in a single loca\ntion and controlled by a single party, blockchain is a distrib\nuted database that can store any information (e.g., records,\nevents, or transactions) (Mougayar, 2016). Blockchain can\nbe referred to as a metatechnology because it integrates\nseveral other technologies, such as software development,\ncryptographic technology, and database technology (Mou\ngayar, 2016). Zyskind and Nathan (2015) revealed that the\ncurrent practice of collecting private information by third\nparties poses the risk of security breaches. The main advan\ntage of blockchain is that it can protect permanent records\nfrom data manipulation and infiltration. It also partially\nguarantees anonymity, transparency, transactions, and data\nauthentication (Mougayar, 2016).\n\nIn recent years, the real estate industry has considered\n\nusing blockchain technology for registering, managing,\nand transferring property rights (Crosby et al., 2016; Swan,\n2015). Real estate industry players have recognized that\nblockchain-based smart contracts can help them reap the\nbenefits of operational efficiency, automation, and transpar\nency. Smart contracts are decentralized agreements driven\nby programming codes that are automatically executed\nwhen certain conditions are met (Swan, 2015). For exam\nple, if an apartment sale is handled through a smart contract,\nthe seller gives the buyer the door code for the apartment\nonce payment is received. The smart contract is executed\nand automatically releases the door code on settlement day.\nBy using smart contracts, not only are these agreements\nautomatically enforced, but they are also legally binding.\nIn addition, the blockchain ensures that all actions and cor\nrespondence between buyers and sellers are recorded immu\ntably, providing all parties with an indisputable record of\npayments and records (Liebkind, 2020).\n\nAccording to transaction cost theory, smart contracts\n\nexpedite the registration, administration, and transfer of\nproperty rights while reducing ex-ante and ex-post transac\ntion costs (Crosby et al., 2016; Kosba et al., 2016; Swan,\n2015). Smart contracts have recently become more popu\nlar because they can replace lawyers and banks involved in\nasset transactions according to predefined aspects (Fairfield,\n\n\n-----\n\n2014). The use of blockchain in real estate transactions\ncould make the transfer of money between parties faster,\neasier, and more efficient (Compton & Schottenstein, 2017).\nBlockchain application in the form of cryptocurrencies has\nemerged as a medium of exchange for real estate transac\ntions, with examples in Tukwila (United States), Essex\n(United Kingdom), and Sabah (Malaysia) (Vanar, 2018).\n\nBlockchain technology can transform key real estate\n\ntransactions such as buying, selling, financing, leasing, and\nmanagement transactions. Karamitsos et al. (2018) found\nthat the benefits of using blockchain for real estate are that it\nincreases trust between entities involved in real estate devel\nopment and eliminates the need for intermediaries because\ntransactions are automatically verified and validated.\nAccording to Deloitte (2019), most executives consider cost\nefficiency the biggest benefit of blockchain use. Table 1 pro\nvides a summary of the benefits of blockchain for the real\n\n\nestate industry. 
The table demonstrates that blockchain can\nreduce transaction complexity, increase security, and mini\nmize opportunism in real estate transactions.\n\n#### 2.2 UTAUT\n\nThe UTAUT model suggests that four constructs—perfor\nmance expectancy, effort expectancy, social influence, and\nfacilitating conditions—are the most important determi\nnants of intention to use information technology (Venkatesh,\n2003). These constructs comprise the most influential con\nstructs derived from eight models: the technology accep\ntance model (TAM); the theory of reasoned action (TRA);\nthe motivational model (MM); the theory of planned behav\nior (TPB); the combined TAM + TPB (CTT); the model of\npersonal computer utilization (MPCU); innovation diffusion\ntheory (IDT); and social cognitive theory (SCT) (Venkatesh,\n\n\n**Table 1 Advantages of block**\nchain for the real estate industry\n\n\nSecuring digital prop\nerty records and rights\nsystem\n(Altynpara, 2023;\nLiebkind, 2020; Latifi\net al., 2019; Sinclair\net al., 2022; Wouda &\nOpdenakker 2019; Yapa\net al., 2018)\n\nProcessing real estate\ntransactions and smart\ncontracts\n(Latifi et al., 2019; Sin\nclair et al., 2022; Wouda\n& Opdenakker. 2019;\nYapa et al., 2018)\n\nImproving pre-purchase\ndue diligence\n(Altynpara, 2023;\nWouda & Opdenakker,\n2019; Yapa et al., 2018)\n\nRemoving\nintermediaries\n(Yapa et al., 2018;\nAltynpara 2023; Latifi\net al., 2019)\n\nEnabling real estate\ninvestments to\nbecome liquid through\ntokenization\n(Altynpara, 2023; Latifi\net al., 2019)\n\n\nAdvantages Descriptions\n\n\n\n- Blockchain ledger entries can record any data structure, including property\ntitles, identity, and certification, and allow their digital transfer via smart\ncontracts.\n\n- Blockchain can establish transparent and clear timelines for property owners.\n\n- Blockchain can automatically guarantee the legitimacy of the transfer of title.\n\n- Owners can trust that their deed is accurate and permanently recorded if prop\nerty ownership is stored and verified on the blockchain because the verifiable\ntransactional history guarantees transparency.\n\n- Blockchain serves as a single irrefutable point of truth, which can greatly ben\nefit fraud detection and prevention, regulatory compliance, and due diligence.\n\n- Blockchain’s trustless nature allows for direct transactions between buyers\nand sellers, eliminating the need for external supervision of transactions.\n\n- The process can be further bolstered by implementing smart contracts that\nensure a buyer–seller transaction will occur only if certain conditions are met.\n\n- Smart contracts enable the real estate to reap the benefits of deal automation\nand transparency.\n\n- With blockchain, trust will be in a decentralized network of actors rather than\nin individual actors.\n\n- Property documents can be kept digitally in blockchain-based platforms.\n\n- These digital documents can contain all the required property data and easily\nbe searched anytime.\n\n- The required data concerning the desired property is always accessible to\nevery purchaser or property owner, or others involved.\n\n- Blockchain allows all paperwork to be completed automatically and can mini\nmize the possibility of annoying paper errors and inaccuracies.\n\n- Blockchain enables realty data to be shared among a peer-to-peer network.\n\n- Blockchain enables real estate brokers to receive additional monitoring of this\ndata and reduce their fees because data can be accessed easily.\n\n- Blockchain eliminates the need for 
intermediaries (e.g., title companies, attor\nneys, assessment experts, realtors/real estate agents, and escrow companies) by\nharnessing smart contracts.\n\n- Blockchain can become an absolute realty mediator because it can perform\ntasks from managing a highly secure database of property records to automati\ncally conducting every payment.\n\n- Blockchain enables real estate investments to become liquid because it\nprovides transparent records for the desired property, secure multisignature\ncontracts, and eliminates the need to perform tedious paperwork tasks.\n\n- Tokenization refers to the issuance of blockchain tokens acting as the digital\nrepresentation of an asset or a fraction of an asset.\n\n- Tokenizing properties can bring greater liquidity to the sector, increase trans\nparency, and make the investment in real estate more accessible.\n\n\n-----\n\n2003). Performance expectancy refers to the extent to which\nusers expect that using the system will help them improve\ntheir job performance. This construct has four root con\nstructs: perceived usefulness (from TAM/TAM2 and CTT);\nextrinsic motivation (from MM); relative advantage (from\nIDT); and outcome expectancy (from SCT). Effort expec\ntancy refers to the degree of ease associated with using the\nsystem. This construct is derived from perceived ease of use\n(TAM/TAM2); complexity (MPCU); and ease of use (IDT).\nFinally, social influence indicates how significant the indi\nvidual considers the use of the new system to be. This con\nstruct is represented in the UTAUT model as a “subjective\nnorm” in TRA, TAM2, TPB, and CTT, as “social factors”\nin MPCU, and as an “image” in IDT. The UTAUT model is\nvaluable in various research areas, such as continuous use of\ncloud services (Wang et al., 2017) and behavioral intention\nand use in social networking apps (Ying, 2018). In addi\ntion, the UTAUT model is more successful than the previ\nous eight models in explaining up to 70% of use variations\n(Venkatesh, 2003).\n\n#### 2.3 TRI\n\nThe TRI refers to the propensity of people to adopt and use\nnew technologies to achieve their goals. The TRI can be\nused to gain a deeper understanding of people’s willingness\nto adopt and interact with technology, particularly com\nputer and internet-based technology. Parasuraman (2000)\nnoted that TRI can be viewed as a general state of mind\nthat results from a gestalt of mental promoters and inhibitors\nthat combine to determine a person’s propensity to use new\ntechnologies. The TRI has four dimensions: optimism, inno\nvativeness, discomfort, and insecurity. Optimism is consid\nered an indicator of a positive attitude toward technology and\nrepresents the belief that technology can bring efficiency,\nbetter control, and flexibility. Innovativeness refers to users’\ninclination to pioneer technology. Discomfort describes a\nlack of power and a feeling of being overwhelmed when\nusing technology. Insecurity refers to worries or distrust of\nthe technology and its capabilities. In the four dimensions,\nthe technology motivators are optimism and innovativeness,\nwhile the technology barriers are insecurity and discomfort.\nPattansheti et al. (2016) combined TRI with TPB and TAM\nto explain the adoption behavior of Indian mobile banking\nusers, and the results suggested that the integrated constructs\nwere useful indicators. 
Larasati and Widyawan (2017) used\nTRI in conjunction with TAM to analyze enterprise resource\nplanning implementation in small- and medium-sized enter\nprises and found that the combined constructs in TAM and\nTRI provided a better understanding of enterprise resource\nplanning implementation.\n\n\n### 3 Research Model and Hypotheses\n\nThis study builds a research model based on UTAUT and\nTRI to investigate how real estate buyers and sellers per\nceive the use of blockchain technology. The UTAUT model\npresents four primary constructs that influence final inten\ntion: performance expectancy, effort expectancy, social\ninfluence, and facilitating conditions; these four constructs\nwere included in the proposed model. Given that blockchain\nis still a relatively new technology that is not yet widely\nused in the real estate industry, the four constructs of TRI\nwere adopted (innovativeness, optimism, discomfort, and\ninsecurity) to explain the willingness of real estate buyers\nand sellers to use this technology.\n\nUsing the UTAUT model alone has the disadvantage of\n\nneglecting the psychological aspects of the user (Napitupulu\net al., 2020). Previous research has demonstrated that user\nreadiness based on personality traits is critical in driving\ntechnology acceptance (Parasuraman, 2000). The TRI is\nincluded in our study to consider characteristics that explain\na person’s willingness to use technology. However, some\nresearchers believe that TRI alone does not adequately\nexplain why certain individuals adopt new technologies\nbecause individuals with high technology readiness do\nnot always adopt new technologies (Basgoze, 2015; Tsi\nkriktsis, 2004). Some previous studies have integrated the\nTAM model with the TRI model to combine variables on\ncognitive aspects and psychological traits of technology\nuse (Adiyarta et al., 2018). However, there are few studies\nthat examine two perspectives (technology readiness and\ntechnology acceptance) simultaneously. Examining both\ntheories of technology readiness and acceptance simultane\nously can provide a deeper description of technology adop\ntion (Rinjany, 2020). Therefore, this study integrates the\nUTAUT with the TRI to complement the strengths of the\ntwo models and compensate for the weaknesses of the mod\nels. The TRI examines user readiness, while the UTAUT\nmodel examines technology acceptance factors.\n\nSince 2020, the COVID-19 pandemic has affected the\n\nway organizations operate and accelerated the adoption of\ndigital technologies by several years (LaBerge et al., 2020).\nBecause many of these changes that occurred during the\npandemic (e.g., social distancing and contactless transac\ntions) could be long term, we also include the influence of\nthe pandemic (PAND) in the research model to test whether\nthe pandemic influences respondents’ behavioral intentions\nto adopt blockchain. We define pandemic influence as the\ninfluence of an epidemic that occurs in a large area and\naffects most people. For example, physical distancing is\npracticed to suppress disease transmission, which leads to\na contactless, paperless approach to conducting real estate\ntransactions that do not require physical contact between\n\n\n-----\n\nreal estate stakeholders becoming a priority. The research\nmodel proposed in this study is presented in Fig. 
1.\n\n#### 3.1 Performance Expectancy\n\nPerformance expectancy (PEXP) is the extent to which a\nperson believes that the use of technology will help them\nimprove their job performance (Venkatesh, 2003). This\nmeans that the more a user believes that a technology will\nimprove their job performance, the greater the intention\nto use it (Williams et al., 2015). A person’s motivation to\naccept and use a new technology depends on whether they\nperceive certain benefits will arise from use of the technol\nogy in their daily lives (Davis, 1989). Blockchain has been\nshown to create high expectations for improvements in real\nestate transactions, such as promoting process integrity, net\nwork reliability, faster transactions, and lower costs (Latifi\net al., 2019). In addition, blockchain provides liquidity in\nthe real estate market and eliminates intermediaries through\nsmart contracts. Previous studies have reported that the\nintention of individuals to accept a technology depends sig\nnificantly on the expectation of performance (Alalwan et al.,\n2017; Riffai et al., 2012; Weerakkody et al., 2013). In this\nstudy, PEXP refers to the perception of a real estate buyer or\nseller that using blockchain would improve overall perfor\nmance, including speeding up the registration and transfer\nof property rights, reducing the complexity of transactions\nwith multiple parties, and eliminating the need for interme\ndiaries in real estate transactions. Therefore, we hypothesize\nthe following:\n\n_H1: Performance expectancy positively affects the inten_\n\n_tion to use blockchain technology in the real estate industry._\n\n**Fig. 1 Research model**\n\n\n#### 3.2 Effort Expectancy\n\nEffort expectancy (EEXP) refers to the ease of using a tech\nnology (Venkatesh, 2003). Individuals are less likely to use\na technology if they perceive it to be difficult or if it requires\nmore effort than to use than existing methods. Effort expec\ntancy is closely related to performance expectancy, with\nthe former being closer to efficiency expectancy and the\nlatter being closer to effectiveness expectancy (Brown et\nal., 2010). In this study, the ease of use and complexity of\nblockchain can also be conveyed by the amount of time and\neffort required by the buyer and seller. That is, individuals\nwill be satisfied with their experience with the technology\nif they perceive that it requires little effort and is low in\ncomplexity. Previous studies have demonstrated the impact\nof effort expectancy on the adoption of new technologies,\nincluding the blockchain (Kamble et al., 2019; Pattansheti\net al., 2016). Previous research has also demonstrated that\nsmart contracts in blockchain can minimize human effort\nby using predefined rules (Francisco & Swanson, 2018). In\nthis study, EEXP refers to the extent to which the real estate\nbuyer or seller feels that the blockchain is easy to use in\nreal estate transactions. Users need to understand that the\nblockchain is a distributed ledger and that the smart contract\nis simply a program stored on the blockchain that automati\ncally executes transactions when certain conditions are met,\nand they need to learn to connect the computer system to the\nblockchain network. 
Therefore, we propose the following\nhypothesis:\n\n_H2: Effort expectancy positively affects the intention to_\n\n_use blockchain technology in the real estate industry._\n\n\n-----\n\n#### 3.3 Social Influence\n\nSocial influence (SINF) is the extent to which an individual\nperceives how significant others consider using the new\nsystem (Venkatesh, 2003). Previous research has found that\nsocial influence is exerted through the opinions of family,\nfriends, and colleagues (Irani et al., 2009; Venkatesh &\nBrown, 2001). Other studies have also demonstrated that\nsocial influence factor can lead to higher intention to use\nwhen users have higher normative pressure and volume\n(Granovetter, 1978; Markus, 1987). The importance of\nsocial influence in accepting new technologies has also been\nhighlighted in studies focusing on areas such as adopting\nmobile government services (Zamberi & Khalizani, 2017)\nand internet-based banking (Martins et al., 2014). In our\nstudy, _SINF refers to how much an individual values the_\nopinions of people around them regarding the use of block\nchain in real estate transactions. Therefore, we hypothesize\nthe following:\n\n_H3: Social influence positively affects the intention to use_\n\n_blockchain technology in the real estate industry._\n\n#### 3.4 Facilitating Conditions\n\nFacilitating conditions (FCON) are defined as the extent to\nwhich an individual believes that an organizational and tech\nnical infrastructure is in place to support the use of a system\n(Venkatesh, 2003). Facilitating conditions, such as network\nconnectivity, hardware, and user support, have a significant\nimpact on technology adoption and use (Queiroz & Wamba,\n2019; Tran & Nguyen, 2021). Because blockchain is highly\ninterconnected, it requires technical resources to enable its\nuse. Insufficient resources negatively impact blockchain\nusage (Francisco & Swanson, 2018). For example, if there\nis a lack of support from the blockchain organization, users\nmight opt for other supported systems. In contrast, if users\nfeel that the blockchain organization provides sufficient\ntechnical support and resources, they are more likely to\nadopt blockchain effortlessly. From the perspective of this\nstudy, facilitating conditions emphasize the availability of\nthe technical infrastructure and the awareness of real estate\nbuyers and sellers about the resources available to support\nthe use of blockchain technology in the real estate industry.\nTherefore, we hypothesize the following:\n\n_H4: Facilitating conditions positively affect the intention_\n\n_to use blockchain technology in the real estate industry._\n\n#### 3.5 Innovativeness Users\n\nInnovativeness (INNO) refers to the user’s propensity to\nbe a pioneer in the field of technology. This factor helps\nto increase individuals’ willingness to accept and use\n\n\ntechnology (Parasuraman, 2000). Individuals with high lev\nels of innovativeness are eager to try new technologies to\nunderstand new features and uses. Therefore, they are more\nmotivated to adopt new technologies and enjoy the experi\nence of learning them (Kuo et al., 2013). Their willingness\nto learn, understand, and use new technologies increases\ntheir adoption of technology (Turan et al., 2015). 
In addi\ntion, innovative individuals tend to be more open to new\nideas and creations in general (Kwang & Rodrigues, 2002).\nThis is also confirmed by the fact that innovativeness has\nbeen found to be a major factor influencing the intention\nto use technology (e.g., Buyle et al., 2018; Qasem, 2020;\nZmud, 1990). In our study, INNO refers to the motivation\nand interest of real estate buyers and sellers to use block\nchain for real estate transactions. Therefore, we propose the\nfollowing hypothesis:\n\n_H5: Innovativeness positively affects the intention to use_\n\n_blockchain technology in the real estate industry._\n\n#### 3.6 Optimism\n\nOptimism (OPTI) is considered an indicator of a positive\nattitude toward technology. Parasuraman (2000) found that\nindividuals who are optimistic about technology can achieve\nmore benefits from technology in relation to control over\nlife, flexibility, and efficiency. Scheier (1985) also found\nthat confident and optimistic people are usually more likely\nto believe that good things will happen than bad things. The\nmindset of such people influences their attitude toward tech\nnology acceptance and risk perception (Costa-Font, 2009).\nThese individuals have positive strategies that directly affect\ntheir technology acceptance (Walczuch et al., 2007). That is,\noptimistic people tend to focus less on negative things and\naccept technologies more readily. In this study, OPTI refers\nto the beliefs and positive attitudes of real estate buyers and\nsellers toward blockchain in real estate transactions. There\nfore, we propose the following hypothesis.\n\n_H6: Optimism positively affects the intention to use_\n\n_blockchain technology in the real estate industry._\n\n#### 3.7 Discomfort\n\nDiscomfort (DISC) describes feelings of lack of control and\nbeing overwhelmed when using technology. It is a barrier\nthat lowers individuals’ willingness to use and accept tech\nnology (Parasuraman, 2000). Individuals who have high\nlevels of discomfort with new technology are more likely to\nfind the technology difficult to use (Walczuch et al., 2007).\nDiscomfort indicates a low level of technological mastery,\nwhich leads to a reluctance to use the technology, ultimately\nmaking the individual uncomfortable with the technol\nogy (Rinjany, 2020). As a result, they may continue to use\n\n\n-----\n\ntraditional methods to accomplish their daily tasks. Previous\nstudies (Kuo et al., 2013; Rahman et al., 2017) have found\nthat discomfort affects an individual’s perceived ease of use\nand directly influences their intention to use the technology.\nGiven that blockchain is a new and disruptive technology,\nit is reasonable to assume that some discomfort will arise\namong individuals in relation to adopting this technology.\nIn our research, DISC refers to the uneasiness of real estate\nbuyers and sellers toward the use of blockchain in real estate\ntransactions. Therefore, we hypothesize:\n\n_H7: Discomfort negatively affects the intention to use_\n\n_blockchain technology in the real estate industry._\n\n#### 3.8 Insecurity\n\nInsecurity (ISEC) refers to concern about or distrust of tech\nnology and distrust of its capabilities. Similar to discomfort,\nit is a barrier that lowers a person’s willingness to use and\naccept technology (Parasuraman, 2000). Individuals who\nfeel less secure about technology tend to have little confi\ndence in the security of newer technologies. Therefore, they\nmay require more security to use new technology (Parasura\nman & Colby, 2015). 
#### 3.8 Insecurity

Insecurity (ISEC) refers to concern about and distrust of technology and its capabilities. Like discomfort, it is a barrier that lowers a person's willingness to use and accept technology (Parasuraman, 2000). Individuals who feel less secure about technology tend to have little confidence in the security of newer technologies and may therefore require more assurance before using them (Parasuraman & Colby, 2015). Distrust and pessimism about new technology and its performance can make an individual skeptical and uncertain about how it performs (Rinjany, 2020). Individuals with higher levels of insecurity are more likely to be skeptical of new technologies and may not even be motivated to try them, even when they would benefit from doing so (Kamble et al., 2019). Because blockchain is considered a new technology, some individuals can be expected to be skeptical about it. In this study, ISEC refers to the distrust and uncertainty of real estate buyers and sellers about using blockchain in real estate transactions. Therefore, we hypothesize the following:

_H8: Insecurity negatively affects the intention to use blockchain technology in the real estate industry._

#### 3.9 Pandemic Influence

The COVID-19 virus triggered a global pandemic that has affected all aspects of daily life and the economy. We consider that the pandemic influence (PAND) has positively affected the use of technology in the real estate industry. According to Deloitte (2019), processes in the real estate industry are currently mainly paper based, and due diligence generally occurs offline. Many real estate transactions (e.g., signing the letter of intent to purchase, the purchase agreement, and the land title registration) require face-to-face contact with stakeholders such as the buyer or seller, attorneys, and real estate agents, as well as ink signatures passed back and forth on paper, with numerous intermediaries involved. Kalla et al. (2020) demonstrated that blockchain-based smart contracts could streamline complex application and approval processes for loans and insurance. Other benefits include eliminating processing delays caused by traditional paper-based policies and removing intermediaries, who typically require the physical presence of a person. As social distancing and the digitization of various aspects of business become the norm to contain the spread of the virus (De et al., 2020), we hypothesize the following:

_H9: The impact of the pandemic positively affects the intention to use blockchain technology in the real estate industry._
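Although this study does not implement any contract itself, the following minimal Python sketch illustrates the kind of predefined-condition settlement logic that the smart contracts discussed above could encode for a property sale, replacing the back-and-forth with intermediaries. All class names, conditions, and values are hypothetical illustrations, not a real contract.

```python
# Hypothetical sketch: escrow-style settlement logic a real estate smart
# contract might encode, written in plain Python purely for illustration.
from dataclasses import dataclass


@dataclass
class PropertySaleEscrow:
    price: float
    title_verified: bool = False   # e.g., a land-registry check confirms title
    deposit_received: float = 0.0  # buyer funds held in escrow
    completed: bool = False

    def deposit(self, amount: float) -> None:
        self.deposit_received += amount

    def verify_title(self) -> None:
        self.title_verified = True

    def settle(self) -> str:
        # Settlement executes automatically once the predefined conditions
        # hold; no attorney or bank needs to intervene in person.
        if self.title_verified and self.deposit_received >= self.price:
            self.completed = True
            return "funds released to seller; title transferred to buyer"
        return "conditions not yet met; funds remain in escrow"


sale = PropertySaleEscrow(price=500_000)
sale.deposit(500_000)
sale.verify_title()
print(sale.settle())  # -> funds released to seller; title transferred to buyer
```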
### 4 Research Method

We developed a questionnaire based on previous literature to test the research model. The questionnaire was created using Google Forms, and the participants were buyers and sellers of real estate in Malaysia. A five-point Likert scale was used, ranging from "strongly disagree" to "strongly agree". Respondents were told that they were not required to participate in the survey and that they could withdraw at any time without penalty. Participants were also assured that all their data would be kept confidential.

To promote content validity, an information sheet at the beginning of the questionnaire provided guidelines and asked participants to submit their responses only if they were buyers or sellers of real estate. The online questionnaire was sent to 1,000 individuals, and a total of 301 valid responses were collected, giving a response rate of 30.1%. Table 2 provides the details of the measurement items, which were adapted from previous literature.

### 5 Results

Table 3 provides the demographics of the survey participants. The gender distribution among the respondents was roughly equal, and half of the respondents were younger than 35 years of age. Notably, more than half of the respondents owned one or two properties (56.1%), followed by 17.6% who owned three or four properties, while only 4% owned five or more properties.

**Table 3 Respondent demographics**

| Category | Item | Frequency | Percentage |
|---|---|---|---|
| Gender | Male | 156 | 51.8 |
| | Female | 145 | 48.2 |
| Age | < 26 | 76 | 25.2 |
| | 26–35 | 75 | 24.9 |
| | 36–45 | 56 | 18.6 |
| | 46–55 | 61 | 20.3 |
| | > 55 | 33 | 11.0 |
| Number of real estate properties owned | 0 | 25 | 8.3 |
| | 0 (to purchase within the next two years) | 42 | 14.0 |
| | 1 or 2 | 169 | 56.1 |
| | 3 or 4 | 53 | 17.6 |
| | ≥ 5 | 12 | 4.0 |

#### 5.1 Measurement Model

Measurement models indicate the relationships between constructs and their corresponding indicator variables, and the distinction between reflective and formative measures is crucial in assigning meaningful relationships in the structural model (Anderson & Gerbing, 1988). In this research, all ten constructs are reflective. The quality of a reflective measurement model is determined by the following factors: (1) internal consistency; (2) convergent validity; (3) indicator reliability; and (4) discriminant validity.

**Table 2 Details of measurement items**

| Construct | Item | Indicator |
|---|---|---|
| Performance Expectancy (PEXP) | PE01 | I would find blockchain technologies useful in real estate processes. |
| | PE02 | Using blockchain technologies accomplishes real estate processes more quickly. |
| | PE03 | Using blockchain technologies increases productivity in real estate processes. |
| | PE04 | Using blockchain would improve performance in real estate processes. |
| | PE05 | Using blockchain will help minimize transaction delays. |
| Effort Expectancy (EEXP) | EE01 | I feel that blockchain would be easy to use. |
| | EE02 | I think blockchain is clear and understandable. |
| | EE03 | I think it will be easy for me to remember and perform tasks using blockchain. |
| | EE04 | I feel blockchain will be easier to use compared to the conventional practices of managing real estate processes. |
| | EE05 | I would find blockchain flexible to interact with. |
| Social Influence (SINF) | SI01 | People around me believe using blockchain in real estate processes is a wise decision. |
| | SI02 | I am more likely to use blockchain in real estate processes if people around me are using it. |
| | SI03 | If people around me are exploring the use of blockchain, it puts pressure on me to use it. |
| Facilitating Conditions (FCON) | FC01 | I know how blockchain works. |
| | FC03 | I have the knowledge necessary to use blockchain. |
| Innovativeness (INNO) | IN01 | I am open to learning new technology such as blockchain. |
| | IN02 | I believe that it would be beneficial to replace conventional practices with blockchain. |
| Optimism (OPTI) | OP01 | Blockchain would give me more control over certain aspects of real estate processes. |
| | OP02 | Blockchain can transform the real estate industry for the better. |
| | OP03 | Blockchain can solve current issues faced in the real estate industry. |
| Discomfort (DISC) | DI01 | It will be difficult to understand and apply the concept of blockchain in real estate. |
| | DI02 | I think blockchain is too complex. |
| | DI03 | There should be caution in replacing important people-tasks with blockchain technology. |
| | DI04 | Blockchain is too complicated to be useful. |
| Insecurity (ISEC) | IS01 | I consider blockchain safe to be applied in real estate. |
| | IS02 | I am confident that sending information over blockchain is secure. |
| | IS03 | I feel confident storing and accessing data on blockchain. |
| Behavioral Intention (BINT) | BI01 | I predict that I will use blockchain in real estate processes in the future. |
| | BI02 | I intend to use blockchain in real estate processes in the future. |
| | BI03 | I will continuously see blockchain being used in real estate processes in the future. |
| | BI04 | If available, I prefer blockchain to be used in real estate processes. |
| Pandemic Influence (PAND) | PAN01 | I feel that blockchain could help minimize real estate sales procedures that require human contact (e.g., smart contracts). |
| | PAN02 | If blockchain was implemented, it would help reduce the possible negative effects that the pandemic may have caused on the real estate economy. |
| | PAN03 | During a pandemic, real estate sales processes would be more efficient with blockchain because it could substitute attorneys and banks involved based on predefined aspects. |
| | PAN04 | I would feel more comfortable proceeding with selling/buying a property if blockchain was integrated in real estate processes. |
The traditional criterion for measuring internal consistency is Cronbach's alpha (Hair et al., 2010). However, this measure is sensitive to the number of items on a scale and tends to underestimate internal consistency reliability, so it may be regarded as a relatively conservative measure. Because of these limitations, it can be technically more appropriate to also report composite reliability, which takes the different outer loadings of the indicator variables into account (Hair et al., 2017). Its interpretation is the same as that of Cronbach's alpha, and the composite reliability of a construct should lie between 0.70 and 0.95 (Grefen et al., 2000).

Given that Cronbach's alpha is a conservative measure of reliability, while composite reliability tends to overestimate internal consistency and can therefore yield relatively high reliability estimates, both criteria should be considered and reported (Hair et al., 2017). Table 4 presents the Cronbach's alpha, composite reliability, and average variance extracted (AVE) values of all ten constructs. All Cronbach's alpha and composite reliability values fall within the threshold range of 0.70–0.95.

**Table 4 Cronbach's alpha, composite reliability, and AVE values**

| Construct | Cronbach's alpha | Composite reliability (CR) | Average variance extracted (AVE) |
|---|---|---|---|
| BINT | 0.911 | 0.938 | 0.790 |
| DISC | 0.821 | 0.881 | 0.651 |
| EEXP | 0.919 | 0.939 | 0.756 |
| FCON | 0.853 | 0.931 | 0.872 |
| INNO | 0.729 | 0.878 | 0.783 |
| ISEC | 0.886 | 0.930 | 0.815 |
| OPTI | 0.834 | 0.901 | 0.751 |
| PAND | 0.845 | 0.895 | 0.682 |
| PEXP | 0.899 | 0.926 | 0.714 |
| SINF | 0.734 | 0.848 | 0.650 |

Note: BINT refers to behavioral intention.

Convergent validity is the extent to which a measure correlates positively with alternative measures of the same construct. The common measure for establishing convergent validity at the construct level is the AVE, and the guideline is that the AVE of a construct should be higher than 0.50. As presented in Table 4, the AVE values of all ten constructs exceed this threshold.
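To make the link between these statistics and the reported loadings concrete, the sketch below computes composite reliability and AVE from a construct's standardized outer loadings, and Cronbach's alpha from raw item scores. It is a minimal illustration of the standard formulas, not the PLS software output behind Table 4; the BINT loadings used in the example are taken from Table 5.

```python
# Minimal sketch (illustrative, not the software used for Table 4):
# reliability and convergent-validity statistics from standardized loadings.
import numpy as np


def composite_reliability(loadings):
    lam = np.asarray(loadings)
    err = 1.0 - lam**2                      # error variance of each indicator
    return lam.sum()**2 / (lam.sum()**2 + err.sum())


def average_variance_extracted(loadings):
    lam = np.asarray(loadings)
    return np.mean(lam**2)                  # mean variance captured per item


def cronbach_alpha(items):
    X = np.asarray(items)                   # respondents x items matrix
    k = X.shape[1]
    item_var = X.var(axis=0, ddof=1).sum()
    total_var = X.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)


bint = [0.867, 0.928, 0.849, 0.909]         # BINT loadings from Table 5
print(round(composite_reliability(bint), 3))       # ~0.938, matching Table 4
print(round(average_variance_extracted(bint), 3))  # ~0.790, matching Table 4
```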
Indicator reliability represents how much of the variation in an item is explained by the construct and is referred to as the variance extracted from the item. To assess indicator reliability, the following guidelines apply: (1) an indicator's outer loading should be higher than 0.70 (Hair et al., 2010); and (2) indicators with outer loadings between 0.40 and 0.70 should be considered for removal only if deletion increases composite reliability and AVE above the suggested threshold values (Hair et al., 2017). Table 5 presents the outer loadings of all constructs. All values are higher than the suggested threshold of 0.70; hence, no indicators needed to be removed.

**Table 5 Outer loadings**

| Construct | Item | Loading |
|---|---|---|
| BINT | BI01 | 0.867 |
| | BI02 | 0.928 |
| | BI03 | 0.849 |
| | BI04 | 0.909 |
| DISC | DI01 | 0.753 |
| | DI02 | 0.877 |
| | DI03 | 0.715 |
| | DI04 | 0.869 |
| EEXP | EE01 | 0.886 |
| | EE02 | 0.875 |
| | EE03 | 0.876 |
| | EE04 | 0.846 |
| | EE05 | 0.866 |
| FCON | FC01 | 0.932 |
| | FC03 | 0.936 |
| INNO | IN01 | 0.848 |
| | IN02 | 0.921 |
| ISEC | IS01 | 0.859 |
| | IS02 | 0.919 |
| | IS03 | 0.928 |
| OPTI | OP01 | 0.837 |
| | OP02 | 0.913 |
| | OP03 | 0.848 |
| PAND | PAN01 | 0.815 |
| | PAN02 | 0.796 |
| | PAN03 | 0.855 |
| | PAN04 | 0.836 |
| PEXP | PE01 | 0.877 |
| | PE02 | 0.870 |
| | PE03 | 0.880 |
| | PE04 | 0.854 |
| | PE05 | 0.737 |
| SINF | SI01 | 0.779 |
| | SI02 | 0.845 |
| | SI03 | 0.793 |

Discriminant validity refers to the extent to which a construct is genuinely distinct from other constructs by empirical standards. To check discriminant validity, the square root of each construct's AVE was compared with its correlations with the other constructs. The common guideline is that the square root of a construct's AVE should be higher than the correlations between that construct and all other constructs in the model (Zmud, 1990).

**Table 6 Discriminant validity**

| Construct | BINT | DISC | EEXP | FCON | INNO | ISEC | OPTI | PAND | PEXP | SINF |
|---|---|---|---|---|---|---|---|---|---|---|
| BINT | **0.889** | | | | | | | | | |
| DISC | −0.291 | **0.807** | | | | | | | | |
| EEXP | 0.538 | −0.346 | **0.870** | | | | | | | |
| FCON | 0.449 | −0.258 | 0.497 | **0.934** | | | | | | |
| INNO | 0.590 | −0.142 | 0.387 | 0.330 | **0.885** | | | | | |
| ISEC | −0.692 | 0.300 | −0.466 | −0.430 | −0.536 | **0.903** | | | | |
| OPTI | 0.673 | −0.175 | 0.569 | 0.442 | 0.569 | −0.561 | **0.867** | | | |
| PAND | 0.647 | −0.156 | 0.465 | 0.281 | 0.558 | −0.607 | 0.604 | **0.826** | | |
| PEXP | 0.605 | −0.208 | 0.584 | 0.356 | 0.543 | −0.522 | 0.695 | 0.533 | **0.845** | |
| SINF | 0.508 | −0.104 | 0.404 | 0.329 | 0.439 | −0.446 | 0.485 | 0.457 | 0.582 | **0.806** |

**Fig. 2 Structural model**

Table 6 presents the discriminant validity results. The diagonal entries (in bold) are the square roots of the AVEs, a measure of the variance shared between each construct and its indicators, while the off-diagonal entries are the correlations between constructs. As presented in Table 6, all the square roots of the AVEs are higher than the correlations between the constructs, indicating that all constructs satisfy discriminant validity and can be used to test the structural model.
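The Fornell-Larcker comparison just described is mechanical enough to sketch in a few lines. The toy example below uses only two constructs (BINT and ISEC, with their AVEs from Table 4 and their correlation from Table 6); a full check would pass the complete ten-construct matrix.

```python
# Sketch of the Fornell-Larcker check: the square root of each construct's
# AVE (the diagonal of Table 6) must exceed that construct's correlations
# with every other construct. Two-construct toy example, not the full study.
import numpy as np


def fornell_larcker_ok(ave, corr):
    corr = np.asarray(corr, dtype=float)
    sqrt_ave = np.sqrt(np.asarray(ave))
    off_diag = corr - np.diag(np.diag(corr))   # zero out the diagonal
    # each sqrt(AVE) must beat the largest |correlation| in its row/column
    return bool(np.all(sqrt_ave > np.abs(off_diag).max(axis=0)))


ave = [0.790, 0.815]                 # BINT and ISEC AVEs from Table 4
corr = [[1.0, -0.692],               # BINT-ISEC correlation from Table 6
        [-0.692, 1.0]]
print(fornell_larcker_ok(ave, corr))  # True: sqrt(0.790) = 0.889 > 0.692
```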
#### 5.2 Common Method Bias

Because of the self-report nature of the data collection method used in this study, common method bias may be an issue, so its potential was assessed and managed using the following measures. First, Pavlou and El Sawy (2006) noted that common method bias manifests in very high correlations (i.e., r > 0.90); the highest correlation among the constructs in this study (0.695; see Table 6) did not exceed 0.90. Second, the Harman one-factor test was performed, in which all the variables were loaded into an exploratory factor analysis. Harman's one-factor test reveals problematic common method bias if the first factor accounts for more than 50% of the variance among the variables; in this study, the highest factor explained only 27.9% of the variance, which is acceptable according to Podsakoff and Organ's (1986) criterion. Finally, following Liang et al. (2007), we included a common method factor in the model; the coefficients of the measurement and structural models did not change significantly after controlling for this factor. We therefore conclude that common method bias does not pose a significant threat to the results of this study.
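As a rough illustration of the Harman procedure, the sketch below approximates the unrotated single-factor extraction with the first eigenvalue of the item correlation matrix and reports the share of variance it explains. Here `responses` is a hypothetical respondents-by-items matrix of Likert scores; the 27.9% figure above is the study's reported value, not an output of this sketch.

```python
# Hypothetical sketch of Harman's single-factor test using a
# principal-component approximation of the unrotated extraction.
import numpy as np
import pandas as pd


def harman_first_factor_share(responses: pd.DataFrame) -> float:
    # standardize items, then take eigenvalues of the correlation matrix
    X = (responses - responses.mean()) / responses.std(ddof=1)
    corr = np.corrcoef(X.values, rowvar=False)
    eigenvalues = np.linalg.eigvalsh(corr)[::-1]   # descending order
    return eigenvalues[0] / eigenvalues.sum()      # first-factor share


# share = harman_first_factor_share(responses)
# print(f"first factor explains {share:.1%} of the variance")  # flag if > 50%
```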
#### 5.3 Structural Model

The structural model represents the underlying structural theories of the path model, and its assessment involves examining the model's predictive capabilities and the relationships between the constructs. Figure 2 above illustrates the structural model proposed in this study. The steps of the structural model assessment are as follows: (1) examine the structural model for collinearity; (2) assess the significance of the path coefficients; (3) assess the level of R²; (4) assess the f² effect size; and (5) assess the predictive relevance Q².

The first step is to assess the collinearity between the constructs. Variance inflation factor (VIF) values of 5 or above indicate collinearity (Hair et al., 2017). Table 7 shows that all VIF values are below 5, so collinearity is not an issue in our study.

**Table 7 VIF values**

| Construct | VIF |
|---|---|
| DISC | 1.340 |
| EEXP | 2.555 |
| FCON | 1.779 |
| INNO | 1.572 |
| ISEC | 1.695 |
| OPTI | 2.457 |
| PAND | 1.845 |
| PEXP | 2.139 |
| SINF | 1.538 |

The significance of a coefficient ultimately depends on its standard error, obtained here through the bootstrapping procedure. Bootstrapping computes empirical t-values and p-values for all structural path coefficients. Given that our study is exploratory, the significance level was set at 10%, and the bootstrapping analysis was run as a two-tailed test; the critical value is therefore 1.65 for the t-statistic and 0.1 for the p-value (Hair et al., 2010). To assess the significance of the path coefficients, the guidelines are as follows: (1) the t-value should be higher than the critical value; and (2) the p-value should be lower than 0.1 (significance level = 10%).

As presented in Table 8, PEXP has a nonsignificant positive effect on BINT (β = 0.052, t = 0.750, p = 0.454). Similarly, EEXP has a nonsignificant positive effect on BINT (β = 0.046, t = 0.971, p = 0.332). Therefore, neither H1 nor H2 is supported.

SINF has a somewhat larger, but still nonsignificant, positive effect on BINT (β = 0.076, t = 1.460, p = 0.145) that does not meet the minimum threshold. The same holds for FCON, with a stronger but nonsignificant positive effect on BINT (β = 0.067, t = 1.450, p = 0.148). Hence, neither H3 nor H4 is supported.

The effect of INNO on BINT (β = 0.115, t = 2.618, p = 0.009) is significantly positive. In addition, OPTI has a significant positive effect on BINT (β = 0.203, t = 3.431, p = 0.001). Therefore, both H5 and H6 are supported.

In contrast, DISC has a significant negative effect on BINT (β = −0.078, t = 2.251, p = 0.025). Likewise, the effect of ISEC on BINT is significantly negative (β = −0.273, t = 5.050, p = 0.000). Thus, H7 and H8 are both supported.

Finally, PAND has a significant positive effect on BINT (β = 0.179, t = 3.389, p = 0.001). Hence, H9 is supported.

**Table 8 Path coefficients**

| Hypothesis | Path | Path coefficient (β) | t-statistic | p-value | Hypothesis supported |
|---|---|---|---|---|---|
| H1 | PEXP -> BINT | 0.052 | 0.750 | 0.454 | No |
| H2 | EEXP -> BINT | 0.046 | 0.971 | 0.332 | No |
| H3 | SINF -> BINT | 0.076 | 1.460 | 0.145 | No |
| H4 | FCON -> BINT | 0.067 | 1.450 | 0.148 | No |
| H5 | INNO -> BINT | 0.115 | 2.618 | 0.009 | Yes |
| H6 | OPTI -> BINT | 0.203 | 3.431 | 0.001 | Yes |
| H7 | DISC -> BINT | −0.078 | 2.251 | 0.025 | Yes |
| H8 | ISEC -> BINT | −0.273 | 5.050 | 0.000 | Yes |
| H9 | PAND -> BINT | 0.179 | 3.389 | 0.001 | Yes |
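To make the bootstrapping step concrete, the following simplified sketch resamples respondents with replacement and forms an empirical t-statistic for a single standardized slope. It is a one-predictor stand-in for the multi-construct PLS estimation actually used for Table 8, and the data are synthetic.

```python
# Minimal bootstrap sketch (illustrative, not the software behind Table 8):
# resample respondents, re-estimate a coefficient, and form an empirical
# t-statistic as estimate / bootstrap standard error.
import numpy as np

rng = np.random.default_rng(0)


def bootstrap_t(x, y, n_boot=5000):
    """t-value for the standardized slope of y on x."""
    x, y = np.asarray(x, float), np.asarray(y, float)

    def beta(xs, ys):
        xs = (xs - xs.mean()) / xs.std()
        ys = (ys - ys.mean()) / ys.std()
        return (xs * ys).mean()            # standardized slope = correlation

    est = beta(x, y)
    n = len(x)
    boots = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)        # resample respondents with replacement
        boots.append(beta(x[idx], y[idx]))
    se = np.std(boots, ddof=1)
    return est, est / se                   # compare |t| to 1.65 at the 10% level


# synthetic scores standing in for a predictor and behavioral intention
x = rng.normal(size=301)
y = -0.3 * x + rng.normal(size=301)
est, t = bootstrap_t(x, y)
print(round(est, 3), round(t, 2))
```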
Higher R² values indicate higher levels of predictive accuracy. Table 9 shows that the proposed model accounts for 65.7% of the variance in behavioral intention.

**Table 9 R² value for behavioral intention**

| Dependent construct | R² |
|---|---|
| BINT | 0.657 |

Beyond evaluating the R² values, the change in R² when a specified exogenous construct is excluded from the model can be used to assess whether the excluded construct has a substantial influence on the endogenous construct. This measure is referred to as the f² effect size. The guidelines for interpreting f² are that values of 0.02, 0.15, and 0.35 represent small, medium, and large effects of the exogenous latent variable, respectively (Cohen, 1988); values below 0.02 indicate no effect. Table 10 presents the f² value for each variable; the values range from 0.003 to 0.105. EEXP, PEXP, FCON, SINF, and DISC have f² values below 0.02, indicating no effect, whereas INNO, PAND, OPTI, and ISEC have f² values between 0.02 and 0.15, indicating small effects.

**Table 10 Effect size f² values**

| Construct | f² |
|---|---|
| DISC | 0.015 |
| EEXP | 0.003 |
| FCON | 0.009 |
| INNO | 0.021 |
| ISEC | 0.105 |
| OPTI | 0.046 |
| PAND | 0.045 |
| PEXP | 0.003 |
| SINF | 0.010 |

The predictive relevance Q² indicates the model's out-of-sample predictive power (Geisser, 1975; Stone, 1974): a path model that exhibits predictive relevance accurately predicts data not used in the model estimation. Q² values greater than 0 suggest that the model has predictive relevance for a specific endogenous construct, whereas values of 0 or below indicate a lack of predictive relevance. As shown in Table 11, the Q² value is 0.507, exceeding the minimum threshold of zero, which means the model has predictive relevance for the construct.

**Table 11 Predictive relevance coefficient Q²**

| Construct | Q² |
|---|---|
| BINT | 0.507 |
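The f² definition above reduces to a single formula: f² = (R²_included − R²_excluded) / (1 − R²_included). The sketch below applies it. The full-model R² of 0.657 comes from Table 9, while the reduced-model value is a hypothetical number chosen so that the example reproduces ISEC's f² of 0.105 from Table 10.

```python
# Sketch of the f-squared effect size: the drop in R-squared when a
# predictor is excluded, scaled by the unexplained variance of the full model.
def f_squared(r2_included: float, r2_excluded: float) -> float:
    return (r2_included - r2_excluded) / (1 - r2_included)


r2_full = 0.657          # full-model R² from Table 9
r2_without_isec = 0.621  # hypothetical reduced-model R² after dropping ISEC
print(round(f_squared(r2_full, r2_without_isec), 3))  # ~0.105, ISEC's f² in Table 10
```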
### 6 Discussion

This study combined the UTAUT and TRI to develop a research model with nine hypotheses for understanding the factors influencing blockchain acceptance in the real estate industry. Given that user readiness factors are explained by the TRI and technology adoption factors by the UTAUT, we integrated the two models to complement each other's strengths and compensate for their respective weaknesses. Data were collected from real estate buyers and sellers, the people most involved in and affected by buying or selling real estate. To the best of our knowledge, this study is one of the first to address the acceptance of blockchain by real estate buyers and sellers. Previous studies have examined either the technological aspects or the applications of blockchain in real estate, with few specifically examining the adoption of blockchain in the real estate industry (Konashevych, 2020; Wouda & Opdenakker, 2019).

#### 6.1 Findings

This study revealed several interesting findings. Four measures from the TRI model (innovativeness, optimism, discomfort, and insecurity), together with the additional measure of pandemic influence, emerged as the most important factors affecting blockchain acceptance in the real estate industry. In contrast, the four measures from the UTAUT model (performance expectancy, effort expectancy, social influence, and facilitating conditions) did not significantly influence the intentions of real estate buyers and sellers to use blockchain technology.

The results indicate that innovativeness positively influences the intention to use blockchain technology. This is consistent with previous studies (Buyle et al., 2018; Qasem, 2020; Rahman et al., 2017) demonstrating that innovativeness strongly influences technology use intention, and can be explained by innovative individuals generally being more open to new ideas (Kwang & Rodrigues, 2002). Innovativeness promotes eagerness to learn, understand, and use new technologies, thus increasing technology acceptance (Turan et al., 2015). Optimism also has a positive influence on the intention to use blockchain, consistent with recent studies (Koloseni & Mandari, 2017; Qasem, 2020; Rahman et al., 2017). Optimistic individuals tend to have positive perceptions of technology (Napitupulu et al., 2020). Our findings suggest that optimism increases the likelihood that individuals perceive blockchain as a technology that will improve the real estate industry.

The present study shows that discomfort hinders the intention to use blockchain technology, in contrast to some previous studies that found discomfort insignificant in influencing blockchain adoption (Kamble et al., 2019; Pattansheti et al., 2016). However, our finding is consistent with other studies observing that discomfort negatively affects perceived ease of use, which directly affects technology adoption intentions (Kuo et al., 2013; Rahman et al., 2017). Given that blockchain is known as a disruptive technology, some respondents reported feeling uncomfortable that they cannot use the technology properly. Our study also suggests that insecurity affects the intention to use blockchain. This contrasts with a previous study of blockchain adoption, which found that insecurity had an insignificant effect on perceived ease of use or usefulness, and hence on the intention to use blockchain, as most subjects did not consider the use of blockchain doubtful (Kamble et al., 2019). However, blockchain is seen as a new, emerging technology, particularly with respect to its implementation in sectors such as real estate; as a result, uncertainty and doubt were widespread among our respondents.

The results suggest that the influence of the pandemic has a positive effect on individuals' intentions to use blockchain technology. During the COVID-19 pandemic, blockchain with smart contracts was able to simplify complicated application and approval processes for loans and insurance that were affected and extended during lockdown periods (Pérez-Sánchez et al., 2021). That is, blockchain can mitigate the adverse effects of a pandemic on the real estate industry by enabling smart contracts for real estate (Redolfi, 2021). Our study further suggests that performance expectancy does not influence the intention to use blockchain. Similarly, and in line with previous studies, effort expectancy has no influence on intention to use, implying that it is insignificant in determining the intention to use blockchain technology (Batara et al., 2017; Eckhardt et al., 2009). Effort expectancy and performance expectancy are closely related, with the former more associated with efficiency expectancies and the latter with effectiveness expectancies (Brown et al., 2010).

This study also found that social influence does not affect the intention to use blockchain, confirming a recent study that found social influence has no significant effect on blockchain adoption intention (Alazab et al., 2021). This result suggests that others' experiences with blockchain do not influence real estate buyers and sellers. Moreover, we found that facilitating conditions do not significantly influence behavioral intention. Previous research has found that facilitating conditions influence blockchain adoption in supply chains in the United States but not in India (Queiroz & Wamba, 2019). Our study similarly suggests that facilitating conditions do not play an important role in determining blockchain adoption in other developing countries such as Malaysia.
Our research suggests that blockchain adoption by real estate buyers and sellers is determined mainly by the psychological aspects and personality traits measured by the TRI rather than by the system- or technology-related aspects measured by the UTAUT.

#### 6.2 Implications for Theory

This study provides a broader view of new technology adoption and highlights the importance of integrating the UTAUT and TRI models. Although the UTAUT is a valuable model in various research areas (Venkatesh, 2003; Wang et al., 2017; Ying, 2018), it does not consider the psychological aspects of the user (Napitupulu et al., 2020). Our analysis demonstrates that it can be beneficial to theorize about effects that are missing from the original UTAUT model. Integrating the constructs of the TRI with those of the UTAUT not only enables technology readiness and acceptance to be examined simultaneously but also stimulates further research to improve existing models and deepen the study of technology adoption.

Prior studies have not attached significant importance to individual factors and major global events in influencing technology adoption and have neglected psychological factors as antecedents of the intention to use information technology and systems (Adiyarta et al., 2018; Napitupulu et al., 2020). This study provides evidence that the four psychological measures of the TRI model (innovativeness, optimism, discomfort, and insecurity) all significantly affect blockchain adoption in the real estate industry. In addition, it shows that major global events, such as the COVID-19 pandemic, influence real estate buyers' and sellers' behavioral intentions to use blockchain technology. These findings provide new directions for future research, not only on blockchain adoption in the real estate industry but also on technology adoption in general.

#### 6.3 Implications for Practice

This paper also has important implications for practitioners. The first implication is that blockchain and real estate stakeholders would benefit from focusing more on psychological factors than on technological factors when implementing blockchain. They can conduct pre-implementation studies, such as surveys or focus groups, to understand personal characteristics and address potential psychological concerns, which will improve the efficiency of technology adoption when implementing a revolutionary technology such as blockchain.

The second implication for real estate stakeholders is that emphasizing the holistic benefits of blockchain technology to the real estate ecosystem, including buyers and sellers, is more likely to drive adoption than outlining blockchain's features. As our study shows, people in today's internet age are experienced in using a variety of new technologies; accordingly, performance expectancy and effort expectancy were not found to be critical in influencing users' intentions to use blockchain.
In contrast, knowledge of the holistic benefits may strengthen the psychological factors that positively affect technology adoption, such as innovativeness and optimism, and mitigate the negative ones, such as discomfort and insecurity.

The third implication is that stakeholders in the real estate industry, such as professional associations, government agencies, financial institutions, brokers, and lawyers, should collaborate to establish a blockchain network so that real estate settlements can be conducted online with smart contracts and blockchain-based streamlined processes. These three implications can also offer stakeholders in sectors other than real estate insights into adopting new technologies.

#### 6.4 Limitations and Future Research

Like any other study, this study has limitations that provide further research opportunities. First, our model was tested in Malaysia, a developing country; future studies can apply a comparative research approach and test our model in developed countries. Second, our study is limited to the real estate industry; researchers can further investigate the acceptance of blockchain technology by applying our research model to other sectors or industries.

### 7 Conclusion

Based on the UTAUT and TRI models, this paper conceptualized and empirically examined the factors that influence intentions to use blockchain technology in the real estate industry. Data were collected from 301 real estate buyers and sellers and analyzed using the partial least squares method. The results showed high internal consistency and reliability, indicating that the study has high predictive accuracy. The study concluded that the intention of real estate actors to use blockchain is significantly influenced by the following factors: innovativeness, optimism, discomfort, insecurity, and pandemic influence. Our empirical investigation thus shows that the proposed model, which reformulates the theses of the original UTAUT model, can provide a useful alternative for understanding blockchain acceptance and use.

**Acknowledgements** This material is based upon work supported by the National Natural Science Foundation of China under Grant 72172163.

**Funding** Open Access funding enabled and organized by CAUL and its Member Institutions.

#### Declarations

**Declaration of interest** The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

**Open Access** This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

### References

Adiyarta, K., Napitupulu, D., Nurdianto, H., Rahim, R., & Ahmar, A. (2018, May). User acceptance of e-government services based on TRAM model. In IOP Conference Series: Materials Science and Engineering (Vol. 352, p. 012057). IOP Publishing. https://doi.org/10.1088/1757-899X/352/1/012057

Akram, S. V., Malik, P. K., Singh, R., Anita, G., & Tanwar, S. (2020). Adoption of blockchain technology in various realms: Opportunities and challenges. Security and Privacy, 3(5), 1–17. https://doi.org/10.1002/spy2.109

Alalwan, A. A., Dwivedi, Y. K., & Rana, N. P. (2017). Factors influencing adoption of mobile banking by Jordanian bank customers: Extending UTAUT2 with trust. International Journal of Information Management, 37(3), 99–110. https://doi.org/10.1016/j.ijinfomgt.2017.01.002

Alazab, M., Alhyari, S., Awajan, A., & Abdallah, A. B. (2021). Blockchain technology in supply chain management: An empirical study of the factors affecting user adoption/acceptance. Cluster Computing, 24, 83–101. https://doi.org/10.1007/s10586-020-03200-4

Altynpara, E. (2023, January 14). Blockchain in real estate: 7 ways it can revolutionize the industry. Cleveroad. Retrieved March 14, 2023, from https://www.cleveroad.com/blog/how-blockchain-in-real-estate-can-dramatically-transform-the-industry/

Anderson, J. C., & Gerbing, D. W. (1988). Structural equation modeling in practice: A review and recommended two-step approach. Psychological Bulletin, 103(3), 411–423. https://doi.org/10.1037/0033-2909.103.3.411

Basgoze, P. (2015). Integration of technology readiness (TR) into the technology acceptance model (TAM) for m-shopping. Journal of Scientific Research and Innovative Technology, 2(3), 26–35.

Batara, E., Nurmandi, A., Warsito, T., & Pribadi, U. (2017). Are government employees adopting local e-government transformation? Transforming Government: People, Process and Policy, 11(4), 612–638. https://doi.org/10.1108/TG-09-2017-0056

Brown, S. A., Dennis, A. R., & Venkatesh, V. (2010). Predicting collaboration technology use: Integrating technology adoption and collaboration research. Journal of Management Information Systems, 27(2), 9–54. https://doi.org/10.2753/MIS0742-1222270201

Buyle, R., Compernolle, M. V., Vlassenroot, E., Vanlishout, Z., Mechant, P., & Mannens, E. (2018). "Technology readiness and acceptance model" as a predictor for the use intention of data standards in smart cities. Media and Communication, 6(4), 127–139. https://doi.org/10.17645/mac.v6i4.1679
Cohen, J. (1988). Set correlation and contingency tables. Applied Psychological Measurement, 12(4), 425–434. https://doi.org/10.1177/014662168801200410

Compton, S. S., & Schottenstein, D. (2017). Questions and answers about using blockchain technology in real estate practice. The Practical Real Estate Lawyer, 33(5), 5–9.

Costa-Font, J. M. (2009). Optimism and the perceptions of new risks. Journal of Risk Research, 12(1), 27–41. https://doi.org/10.1080/13669870802445800

Crosby, M., Pattanayak, P., Verma, S., & Kalyanaraman, V. (2016). Blockchain technology: Beyond bitcoin. Applied Innovation Review, 2(2), 6–19. https://scet.berkeley.edu/wp-content/uploads/AIR-2016-Blockchain.pdf

Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 319–340. https://doi.org/10.2307/249008

De, R., Pandey, N., & Pal, A. (2020). Impact of digital surge during Covid-19 pandemic: A viewpoint on research and practice. International Journal of Information Management, 55, 102171. https://doi.org/10.1016/j.ijinfomgt.2020.102171

Deloitte (2019). Blockchain in commercial real estate. Deloitte Center for Financial Services. Retrieved March 14, 2023, from https://www2.deloitte.com/us/en/pages/financial-services/articles/blockchain-in-commercial-real-estate.html

Dijkstra, M. (2017). Blockchain: Towards disruption in the real estate sector [Unpublished master's thesis]. Delft University of Technology.

Eckhardt, A., Laumer, S., & Weitzel, T. (2009). Who influences whom? Analyzing workplace referents' social influence on IT adoption and non-adoption. Journal of Information Technology, 24, 11–24. https://doi.org/10.1057/jit.2008.31

Fairfield, J. A. (2014). Smart contracts, bitcoin bots, and consumer protection. Washington and Lee Law Review Online, 71(2), 35–50.

Francisco, K., & Swanson, D. (2018). The supply chain has no clothes: Technology adoption of blockchain for supply chain transparency. Logistics, 2(1), 1–13. https://doi.org/10.3390/logistics2010002

Geisser, S. (1975). The predictive sample reuse method with applications. Journal of the American Statistical Association, 70(350), 320–328. https://doi.org/10.1080/01621459.1975.10479865

Granovetter, M. (1978). Threshold models of collective behavior. American Journal of Sociology, 83(6), 1420–1443. https://doi.org/10.1086/226707

Grefen, D., Straub, D., & Boudreau, M. C. (2000). Structural equation modeling and regression: Guidelines for research practice. Communications of the Association for Information Systems, 4(7), 1–78. https://doi.org/10.17705/1CAIS.00407

Hair, J. F., Black, B., Babin, B. J., & Anderson, R. E. (2010). Multivariate data analysis: Global edition (7th ed.). Pearson Prentice Hall.

Hair, J. F., Hult, G. T., Ringle, C. M., & Sarstedt, M. (2017). A primer on partial least squares structural equation modeling (PLS-SEM) (2nd ed.). SAGE Publications.
Irani, Z., Dwivedi, Y. K., & Williams, M. D. (2009). Understanding consumer adoption of broadband: An extension of the technology acceptance model. Journal of the Operational Research Society, 60(10), 1322–1334. https://doi.org/10.1057/jors.2008.100

Kalla, A., Hewa, T., Mishra, R. A., Ylianttila, M., & Liyanage, M. (2020). The role of blockchain to fight against COVID-19. IEEE Engineering Management Review, 48(3), 85–96. https://doi.org/10.1109/EMR.2020.3014052

Kamble, S., Gunasekaran, A., & Arha, H. (2019). Understanding the blockchain technology adoption in supply chains: Indian context. International Journal of Production Research, 57(7), 2009–2033. https://doi.org/10.1080/00207543.2018.1518610

Karamitsos, I., Papadaki, M., & Barghuthi, N. (2018). Design of the blockchain smart contract: A use case for real estate. Journal of Information Security, 9, 177–190. https://doi.org/10.4236/jis.2018.93013

Koloseni, D. N., & Mandari, H. (2017). The role of personal traits and learner's perceptions on the adoption of e-learning systems in higher learning institutions. African Journal of Finance and Management, 26, 61–75.

Konashevych, O. (2020). Constraints and benefits of the blockchain use for real estate and property rights. Journal of Property, Planning and Environmental Law, 12(2), 109–127. https://doi.org/10.1108/JPPEL-12-2019-0061

Kosba, A., Miller, A., Shi, E., Wen, Z., & Papamanthou, C. (2016). The blockchain model of cryptography and privacy-preserving smart contracts. In Proceedings of the 2016 IEEE Symposium on Security and Privacy (pp. 839–858). IEEE. https://doi.org/10.1109/SP.2016.55

Kuo, K. M., Liu, C. F., & Ma, C. C. (2013). An investigation of the effect of nurses' technology readiness on the acceptance of mobile electronic medical record systems. BMC Medical Informatics and Decision Making, 13(88), 1–14. https://doi.org/10.1186/1472-6947-13-88

Kwang, N. A., & Rodrigues, D. (2002). A big-five personality profile of the adaptor and innovator. The Journal of Creative Behavior, 36(4), 254–268. https://doi.org/10.1002/j.2162-6057.2002.tb01068.x

LaBerge, L., O'Toole, C., Schneider, J., & Smaje, K. (2020, October 5). How COVID-19 has pushed companies over the technology tipping point - and transformed business forever. Retrieved March 14, 2023, from https://www.mckinsey.com/business-functions/strategy-and-corporate-finance/our-insights/how-covid-19-has-pushed-companies-over-the-technology-tipping-point-and-transformed-business-forever
Larasati, N., & Widyawan, S. P. (2017). Technology readiness and technology acceptance model in new technology implementation process in low technology SMEs. International Journal of Innovation, Management and Technology, 8(2), 113–117. https://doi.org/10.18178/ijimt.2017.8.2.713

Latifi, S., Zhang, Y., & Cheng, L. C. (2019, July). Blockchain-based real estate market: One method for applying blockchain technology in commercial real estate market. In 2019 IEEE International Conference on Blockchain (Blockchain) (pp. 528–535). IEEE. https://doi.org/10.1109/Blockchain.2019.00002

Liang, H., Saraf, N., & Xue, Y. (2007). Assimilation of enterprise systems: The effect of institutional pressures and the mediating role of top management. MIS Quarterly, 31(1), 59–87. https://doi.org/10.2307/25148781

Liebkind, J. (2020, March 22). How blockchain technology is changing real estate. Investopedia. Retrieved March 14, 2023, from https://www.investopedia.com/news/how-blockchain-technology-changing-real-estate/

Mansfield-Devine, S. (2017). Beyond bitcoin: Using blockchain technology to provide assurance in the commercial world. Computer Fraud & Security, 2017(5), 14–18. https://doi.org/10.1016/S1361-3723(17)30042-8

Markus, M. L. (1987). Toward a "critical mass" theory of interactive media: Universal access, interdependence and diffusion. Communication Research, 14, 491–511. https://doi.org/10.1177/009365087014005003

Martins, C., Oliveira, T., & Popovič, A. (2014). Understanding the internet banking adoption: A unified theory of acceptance and use of technology and perceived risk application. International Journal of Information Management, 34(1), 1–13. https://doi.org/10.1016/j.ijinfomgt.2013.06.002
Mougayar, W. (2016). The business blockchain: Promise, practice, and application of the next internet technology. John Wiley & Sons.

Napitupulu, D., Pamungkas, P. D., Sudarsono, B. G., Lestari, S. P., & Bani, A. U. (2020). Proposed TRUTAUT model of technology adoption for LAPOR. IOP Conference Series: Materials Science and Engineering. IOP Publishing. https://doi.org/10.1088/1757-899X/725/1/012120

Parasuraman, A. (2000). Technology readiness index (TRI): A multiple-item scale to measure readiness to embrace new technologies. Journal of Service Research, 2(4), 307–320. https://doi.org/10.1177/109467050024001

Parasuraman, A., & Colby, C. L. (2015). An updated and streamlined technology readiness index: TRI 2.0. Journal of Service Research, 18(1), 59–74. https://doi.org/10.1177/1094670514539730

Pattansheti, M., Kamble, S. S., Dhume, S. M., & Raut, R. D. (2016). Development, measurement and validation of an integrated technology readiness acceptance and planned behaviour model for the Indian mobile banking industry. International Journal of Business Information Systems, 22(3), 316–342.

Pavlou, P. A., & El Sawy, O. A. (2006). From IT leveraging competence to competitive advantage in turbulent environments: The case of new product development. Information Systems Research, 17(3), 198–227. https://doi.org/10.1287/isre.1060.0094

Pérez-Sánchez, Z., Barrientos-Báez, A., Gómez-Galán, J., & Li, H. (2021). Blockchain technology for winning consumer loyalty: Social norm analysis using structural equation modeling. Mathematics, 9(532), 1–18. https://doi.org/10.3390/math9050532

Podsakoff, P. M., & Organ, D. W. (1986). Self-reports in organizational research: Problems and prospects. Journal of Management, 12(4), 531–544. https://doi.org/10.1177/014920638601200408

Qasem, Z. (2020). The effect of positive TRI traits on centennials adoption of try-on technology in the context of e-fashion retailing. International Journal of Information Management, 56, 1–11. https://doi.org/10.1016/j.ijinfomgt.2020.102254

Queiroz, M. M., & Wamba, S. F. (2019). Blockchain adoption challenges in supply chain: An empirical investigation of the main drivers in India and the USA. International Journal of Information Management, 46, 70–82. https://doi.org/10.1016/j.ijinfomgt.2018.11.021

Rahman, S. A., Taghizadeh, S. K., Ramayah, T., & Alam, M. M. (2017). Technology acceptance among micro-entrepreneurs in marginalized social strata: The case of social innovation in Bangladesh. Technological Forecasting & Social Change, 118, 236–245. https://doi.org/10.1016/j.techfore.2017.01.027

Redolfi, A. (2021, October 27). The future of real estate transactions on the blockchain. Forbes. Retrieved March 14, 2023, from https://www.forbes.com/sites/forbesbizcouncil/2021/10/27/the-future-of-real-estate-transactions-on-the-blockchain/?sh=7c5ae8849387
Riffai, M., Grant, K., & Edgar, D. (2012). Big TAM in Oman: Exploring the promise of on-line banking, its adoption by customers and the challenges of banking in Oman. International Journal of Information Management, 32(3), 239–250. https://doi.org/10.1016/j.ijinfomgt.2011.11.007

Rinjany, D. K. (2020). Does technology readiness and acceptance induce more adoption of e-government? Applying the UTAUT and TRI on an Indonesian complaint-based application. International Journal on Advanced Science, Engineering and Information Technology, 4(1), 68–86. https://doi.org/10.30589/pgr.v4i1.157

Saari, A., Vimpari, J., & Junnila, S. (2022). Blockchain in real estate: Recent developments and empirical applications. Land Use Policy, 121, 1–11. https://doi.org/10.1016/j.landusepol.2022.106334

Scheier, M. F. (1985). Optimism, coping, and health: Assessment and implications of generalized outcome expectancies. Health Psychology, 4(3), 219–247. https://doi.org/10.1037/0278-6133.4.3.219

Sinclair, S., Potts, J., Berg, C., Leshinsky, R., & Kearney, T. (2022, June 27). Blockchain: Opportunities and disruptions for real estate. RMIT University. Retrieved March 14, 2023, from http://reia.com.au/wp-content/uploads/2022/06/Blockchain_Real_Estate_Report_FINAL_LR.pdf?mc_cid=14265b99c4&mc_eid=UNIQID

Stone, M. (1974). Cross validatory choice and assessment of statistical predictions. Journal of the Royal Statistical Society, 36(2), 111–147. https://doi.org/10.1111/j.2517-6161.1974.tb00994.x

Swan, M. (2015). Blockchain: Blueprint for a new economy. O'Reilly Media.

Tran, L. T. T., & Nguyen, P. T. (2021). Co-creating blockchain adoption: Theory, practice and impact on usage behavior. Asia Pacific Journal of Marketing and Logistics, 33(7), 1667–1684. https://doi.org/10.1108/APJML-08-2020-0609

Tsikriktsis, N. (2004). A technology readiness-based taxonomy of customers. Journal of Service Research, 7(1), 42–52. https://doi.org/10.1177/1094670504266132
Turan, A., Tunc, A. O., & Zehir, C. (2015). A theoretical model proposal: Personal innovativeness and user involvement as antecedents of unified theory of acceptance and use of technology. Procedia - Social and Behavioral Sciences, 210, 43–51. https://doi.org/10.1016/j.sbspro.2015.11.327

Vanar, M. (2018, January 9). Land deal sealed using bitcoin. The Star. https://www.thestar.com.my/news/nation/2018/01/09/land-deal-sealed-using-bitcoin-its-a-new-way-of-transferring-money-says-sabah-businessman/

Venkatesh, V. (2003). User acceptance of information technology: Toward a unified view. MIS Quarterly, 27(3), 425–478. https://doi.org/10.2307/30036540

Venkatesh, V., & Brown, S. A. (2001). A longitudinal investigation of personal computers in homes: Adoption determinants and emerging challenges. MIS Quarterly, 25(1), 71–102. https://doi.org/10.2307/3250959

Walczuch, R., Lemmink, J., & Streukens, S. (2007). The effect of service employees' technology readiness on technology acceptance. Information & Management, 44(2), 206–215. https://doi.org/10.1016/j.im.2006.12.005

Wang, C. S., Jeng, Y. L., & Huang, Y. M. (2017). What influences teachers to continue using cloud services? The role of facilitating conditions and social influence. The Electronic Library, 35(3), 520–533. https://doi.org/10.1108/EL-02-2016-0046

Weerakkody, V., El-Haddadeh, R., Al-Sobhi, F., Shareef, M. A., & Dwivedi, Y. K. (2013). Examining the influence of intermediaries in facilitating e-government adoption: An empirical investigation. International Journal of Information Management, 33(5), 716–725. https://doi.org/10.1016/j.ijinfomgt.2013.05.001

Williams, M. D., Rana, N. P., & Dwivedi, Y. K. (2015). The unified theory of acceptance and use of technology (UTAUT): A literature review. Journal of Enterprise Information Management, 28(3), 443–488. https://doi.org/10.1108/JEIM-09-2014-0088

Wouda, H. P., & Opdenakker, R. (2019). Blockchain technology in commercial real estate transactions. Journal of Property Investment & Finance, 37(6), 570–579. https://doi.org/10.1108/JPIF-06-2019-0085

Yapa, I., Heanthenna, S., Bandara, N., Prasad, I., & Mallawarachchi, Y. (2018, December). Decentralized ledger for land and property transactions in Sri Lanka: Acresense. In 2018 IEEE Region 10 Humanitarian Technology Conference (R10-HTC) (pp. 1–6). IEEE. https://doi.org/10.1109/R10-HTC.2018.8629811
Ying, C. P. (2018). Elucidating social networking apps decisions. Nankai Business Review International, 9(2), 118–142. https://doi.org/10.1108/NBRI-01-2017-0003

Zamberi, A. S., & Khalizani, K. (2017). The adoption of m-government services from the user's perspectives: Empirical evidence from the United Arab Emirates. International Journal of Information Management, 37(5), 367–379. https://doi.org/10.1016/j.ijinfomgt.2017.03.008

Zmud, R. B. (1990). Information technology implementation research: A technological diffusion approach. Management Science, 36(2), 123–139. https://doi.org/10.1287/mnsc.36.2.123

Zyskind, G., & Nathan, O. (2015, May). Decentralizing privacy: Using blockchain to protect personal data. In 2015 IEEE Security and Privacy Workshops (pp. 180–184). IEEE. https://doi.org/10.1109/SPW.2015.27

**Publisher's Note** Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

**William Yeoh** is an Associate Professor at Deakin Business School, Deakin University. His scholarship has been published in leading journals, including 7 A* and 24 A Australian Business Deans Council (ABDC) ranked journal publications, and in all top information systems conference proceedings (i.e., ICIS, HICSS, ECIS, PACIS, AMCIS, ACIS), and has been supported by AUD 1.2 million from various funding bodies and industries. He has been recognised for excellence in teaching, research, and service, receiving the Educator of the Year Gold Award (a national award from the Australian Computer Society, Australia's peak ICT professional association), the Deakin Vice-Chancellor's Award for Value Innovation, the Deakin Faculty Research Excellence Award, and two internationally competitive IBM Faculty Awards.
Prof\nAngela Lee has been developing data science curriculum for more than\n10 years and she is the key person to introduce Data Science degree at\nSunway University. She was recently awarded the SAS Global Forum\nDistinguished Educator Award 2021. She regularly speaks at data sci\nence conferences. Angela has developed many innovative ways to\nuse analytics and data science tools from the most elementary level\nto advanced analytics. She teaches Social Media Analytics, Visual\nAnalytics, Advanced Analytics and Business Intelligence and has pub\nlished many international journal papers in the area of churn analytics,\nsentiment analysis and predictive analytics.\n\n**Claudia Ng received her Bachelor of Data Analytics and Master of**\nScience by Research from Sunway University. She is a data analyst at\na Malaysian bank.\n\n**Aleš Popovič is a Full Professor of Information Systems at NEOMA**\nBusiness School in France. He seeks to find research that is relevant\nand useful to both the academic and practitioner communities. His\nareas of research interest are focused on the study of how ISs provide\nvalue for people, organisations, and markets. He studies IS value in\norganisations, IS success, behavioural and organizational issues in IS,\nand IT in inter-organizational relationships. Dr. Popovič has published\nhis research in a variety of academic journals, such as Journal of the\nAssociation for Information Systems, European Journal of Information\nSystems, Journal of Strategic Information Systems, Decision Support\nSystems, Information & Management, Information Systems Frontiers,\nGovernment Information Quarterly, and Journal of Business Research.\n\n**Yue Han is an Associate Professor of Information Systems in the**\nMadden School of Business at Le Moyne College. Her main research\nareas include crowdsourcing, collective intelligence, knowledge reuse\nfor innovation, and information diffusion in social media. She also\nstudies the implementation of business intelligence and artificial intel\nligence. She has published papers in various information systems jour\nnals and conferences such as Information Systems Research, Journal\nof the Association for Information Systems, International Conference\non Information Systems, and ACM SIGCHI Conference on ComputerSupported Cooperative Work & Social Computing.\n\n\n-----\n\n"
| 25,224
|
{
"disclaimer": "Notice: Paper or abstract available at https://pmc.ncbi.nlm.nih.gov/articles/PMC10233539, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "CCBY",
"status": "HYBRID",
"url": "https://link.springer.com/content/pdf/10.1007/s10796-023-10411-8.pdf"
}
| 2,023
|
[
"JournalArticle"
] | true
| 2023-06-01T00:00:00
|
[
{
"paperId": "1b9225210a6ee4e0d8f8e2798e04930ac5e48e1f",
"title": "Blockchain in real estate: Recent developments and empirical applications"
},
{
"paperId": "2c97ababaa155c25422cca9880cc287430dfb502",
"title": "Blockchain Technology for Winning Consumer Loyalty: Social Norm Analysis Using Structural Equation Modeling"
},
{
"paperId": "67cd69b6fe424cd17390113bc0aa398a683d39b5",
"title": "Co-creating blockchain adoption: theory, practice and impact on usage behavior"
},
{
"paperId": "33aab0582f47dd0258b328dcb759a68946257078",
"title": "Blockchain technology in supply chain management: an empirical study of the factors affecting user adoption/acceptance"
},
{
"paperId": "784031522ebb4ceb80703fb47a8f4dc907aac920",
"title": "The effect of positive TRI traits on centennials adoption of try-on technology in the context of E-fashion retailing"
},
{
"paperId": "ad79f1ca2fd3d16b9a33afae402a8cd96d647f0a",
"title": "The Role of Blockchain to Fight Against COVID-19"
},
{
"paperId": "4718845c462bd06de3419a9b418dcbe0c913efd9",
"title": "Impact of digital surge during Covid-19 pandemic: A viewpoint on research and practice"
},
{
"paperId": "e589891c18b1b7303de69e8bbbba2e043a8c4705",
"title": "Adoption of blockchain technology in various realms: Opportunities and challenges"
},
{
"paperId": "847e9d82dce09f7978ddfe198915b27dd7d76386",
"title": "Does Technology Readiness and Acceptance Induce more Adoption of E-Government? Applying the UTAUT and TRI on an Indonesian Complaint-Based Application"
},
{
"paperId": "31735699359db919e6315fa2dd85e263e6587704",
"title": "Proposed TRUTAUT model of technology ddoption for LAPOR!"
},
{
"paperId": "1b90d2a8c1affe6b4c957b20d92d0f2654df44b1",
"title": "Constraints and Benefits of the Blockchain Use for Real Estate and Property Rights"
},
{
"paperId": "0d25d455c03b4d0787df8254696bd1130e68ba64",
"title": "Blockchain technology in commercial real estate transactions"
},
{
"paperId": "10f317e7ded5abc0063cd12850df96e8b4b50cf4",
"title": "Blockchain-Based Real Estate Market: One Method for Applying Blockchain Technology in Commercial Real Estate Market"
},
{
"paperId": "664ad6a548821db18dd0efeb6bb6c5decda00e49",
"title": "Blockchain adoption challenges in supply chain: An empirical investigation of the main drivers in India and the USA"
},
{
"paperId": "debd9d9601940077ab2847daee96899746bd770d",
"title": "“Technology Readiness and Acceptance Model” as a Predictor for the Use Intention of Data Standards in Smart Cities"
},
{
"paperId": "cd36d930a854435592807238ad59ecb42d6adbfd",
"title": "Decentralized Ledger for Land and Property Transactions in Sri Lanka Acresense"
},
{
"paperId": "7637f6d9a4646943b46c4a37045472f93a03dad9",
"title": "Understanding the Blockchain technology adoption in supply chains-Indian context"
},
{
"paperId": "a857db244890540325950efe1f15e3772c76c50b",
"title": "Design of the Blockchain Smart Contract: A Use Case for Real Estate"
},
{
"paperId": "625d0e4502a4fd991d7bdf8d3cf0fbea814a0ec6",
"title": "Elucidating social networking apps decisions"
},
{
"paperId": "b88b1abc849ef783739251d59ce1b076beb0d751",
"title": "User acceptance of E-Government Services Based on TRAM model"
},
{
"paperId": "b0058dba99b5f1a02d8ed29e2922b376d005f3b0",
"title": "The Supply Chain Has No Clothes: Technology Adoption of Blockchain for Supply Chain Transparency"
},
{
"paperId": "f6ee93d2695c37c244a37f423f51cb3e9ecded02",
"title": "Are government employees adopting local e-government transformation?: The need for having the right attitude, facilitating conditions and performance expectations"
},
{
"paperId": "44acea29884eb8e77247b68063e418bdae643bcf",
"title": "The adoption of M-government services from the user's perspectives: Empirical evidence from the United Arab Emirates"
},
{
"paperId": "9018bff90e47281bde04e0a919f1d55b9d488f81",
"title": "Factors influencing adoption of mobile banking by Jordanian bank customers: Extending UTAUT2 with trust"
},
{
"paperId": "3a68b2f4e1a9679acbd38c0f84c46a4c9299fb70",
"title": "What influences teachers to continue using cloud services?: The role of facilitating conditions and social influence"
},
{
"paperId": "45cb5f4555ce6dc7e4ca4d896436d5d7def533ef",
"title": "Beyond Bitcoin: using blockchain technology to provide assurance in the commercial world"
},
{
"paperId": "85acda62d7faaaa61af40839a4e0937f2da81f25",
"title": "Technology acceptance among micro-entrepreneurs in marginalized social strata: The case of social innovation in Bangladesh"
},
{
"paperId": "f26556a71ef731faf8cdf114458def380f0b9003",
"title": "Blockchain for Commercial Real Estate"
},
{
"paperId": "bbff38c90b337d3018a9e5d5adb1901fee5c0e3a",
"title": "Hawk: The Blockchain Model of Cryptography and Privacy-Preserving Smart Contracts"
},
{
"paperId": "76bcf76946928858f8370cd44d7e0ecb72632c30",
"title": "A Theoretical Model Proposal: Personal Innovativeness and User Involvement as Antecedents of Unified Theory of Acceptance and Use of Technology☆"
},
{
"paperId": "4b9184937da308914b9e13c43bfd75845eaf910b",
"title": "Decentralizing Privacy: Using Blockchain to Protect Personal Data"
},
{
"paperId": "47e91abbc1bb385248d62a38c924c3cf5d9b1857",
"title": "The unified theory of acceptance and use of technology (UTAUT): a literature review"
},
{
"paperId": "33b192ee41885a6654fd3c537c56658be60f4c3e",
"title": "An Updated and Streamlined Technology Readiness Index"
},
{
"paperId": "97fddbbfd681bce9eeb8e0a013353b4d5b2ba0db",
"title": "Blockchain: Blueprint for a New Economy"
},
{
"paperId": "b34c2974777b1069eab944b38e9bb3292a19e445",
"title": "Examining the influence of intermediaries in facilitating e-government adoption: An empirical investigation"
},
{
"paperId": "5a13031d75b98affe9d24884c085f48eb3d52e3e",
"title": "An investigation of the effect of nurses’ technology readiness on the acceptance of mobile electronic medical record systems"
},
{
"paperId": "0023318fc6f4533280d1277314a24fbe851b6bdb",
"title": "A Primer on Partial Least Squares Structural Equation Modeling (PLS-SEM)"
},
{
"paperId": "c1c7159a9f54e71dfce282196de93293edf1fe70",
"title": "A Primer on Partial Least Squares Structural Equation Modeling"
},
{
"paperId": "0b3ec3d6e2794309cb9d04f45ab0d3add07f527e",
"title": "Big TAM in Oman: Exploring the promise of on-line banking, its adoption by customers and the challenges of banking in Oman"
},
{
"paperId": "d9f6b31bdc398a3b2e12c3deb2a394068192fb4a",
"title": "Predicting Collaboration Technology Use: Integrating Technology Adoption and Collaboration Research"
},
{
"paperId": "61bb7f5d035e387174240237a02d362476fa848c",
"title": "Optimism, coping, and health: assessment and implications of generalized outcome expectancies."
},
{
"paperId": "da2e29039a8cc777e1030f74832b86ee3e8f7205",
"title": "Understanding consumer adoption of broadband: an extension of the technology acceptance model"
},
{
"paperId": "ae9d53471aadf66c962b3c9d3e1acbe2eb5e96ec",
"title": "Who influences whom? Analyzing workplace referents' social influence on IT adoption and non-adoption"
},
{
"paperId": "4875a3eb5d95cc24fe65ef765da4c9073eb15ea1",
"title": "Optimism and the perceptions of new risks"
},
{
"paperId": "ad2690274d0c1efa85069f97dd8101b0f28aee8e",
"title": "Assimilation of Enterprise Systems: The Effect of Institutional Pressures and the Mediating Role of Top Management"
},
{
"paperId": "113f1d3b48d44954027ad1776c071a7c79fed399",
"title": "The effect of service employees' technology readiness on technology acceptance"
},
{
"paperId": "a292206bcdeda0f19cb6e238007ab793f9fda848",
"title": "From IT Leveraging Competence to Competitive Advantage in Turbulent Environments: The Case of New Product Development"
},
{
"paperId": "b278e3e48ca3f50a4323d521246b6eb99fcc0dbb",
"title": "A Technology Readiness-Based Taxonomy of Customers"
},
{
"paperId": "f444aecb9a6cc1219d6baf81c55f23dfce3d9788",
"title": "User Acceptance of Information Technology: Toward a Unified View"
},
{
"paperId": "5a85e313388df3317deec05b21a73f901c1eeedf",
"title": "A Big‐Five Personality Profile of the Adaptor and Innovator"
},
{
"paperId": "c3736d31ea5e0bd8bd71b06d6e921f4ae59ac558",
"title": "A Longitudinal Investigation of Personal Computers in Homes: Adoption Determinants and Emerging Challenges"
},
{
"paperId": "0728964ec29130d59096460904223b442781227c",
"title": "Technology Readiness Index (Tri)"
},
{
"paperId": "11bebab9e623723fa510d89a7f743e211e0b286d",
"title": "Information technology implementation research: a technological diffusion approach"
},
{
"paperId": "99ab229b83321f21547fd2f6ad03a25d9b07bebd",
"title": "Information Technology Implementation Research"
},
{
"paperId": "ea349162d97873d4493502e205968ffccb23fcf2",
"title": "Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology"
},
{
"paperId": "59fe643d3413c89a09b36af2f6a575712b8f81fa",
"title": "Set Correlation and Contingency Tables"
},
{
"paperId": "2265a198e55665ed3df0f19688350567dab593de",
"title": "STRUCTURAL EQUATION MODELING IN PRACTICE: A REVIEW AND RECOMMENDED TWO-STEP APPROACH"
},
{
"paperId": "a52bfe99cf4fd555b1ae1083e410fba0ba8a16e8",
"title": "Toward a “Critical Mass” Theory of Interactive Media"
},
{
"paperId": "63b63272f3174d2608fc0a7943d3c73910d67baa",
"title": "Self-Reports in Organizational Research: Problems and Prospects"
},
{
"paperId": "92b6aeb4e4f8e0c170f33ba274ce1830db9c9496",
"title": "Threshold Models of Collective Behavior"
},
{
"paperId": "7b28610d2d681a11398eb614de0d70d7de41c20c",
"title": "Cross‐Validatory Choice and Assessment of Statistical Predictions"
},
{
"paperId": "e6307e1cc2f358f6260aaa8f3ab36d05a22933c8",
"title": "The Predictive Sample Reuse Method with Applications"
},
{
"paperId": "4ead8ab51ce82e11606544f42dfa4905381d80c9",
"title": "Smart Contracts, Bitcoin Bots, and Consumer Protection"
},
{
"paperId": null,
"title": "Blockchain in Real Estate: 7 Ways It Can Revolutionize the Industry"
},
{
"paperId": "b87275cef8b1b3ae48d01481444caeecd1deaade",
"title": "Angela"
},
{
"paperId": null,
"title": "The future of real estate transactions on the blockchain"
},
{
"paperId": null,
"title": "How COVID-19 has pushed companies over the technology tipping point—and transformed"
},
{
"paperId": null,
"title": "The role of personal traits and learner’s perceptions on the adoption of e-learning systems in higher learning institutions"
},
{
"paperId": "77b0f47971059f33f79afce9e4e050505eb325bb",
"title": "Blockchain: Towards Disruption in the Real Estate Sector: An exploration on the impact of blockchain technology in the real estate management process."
},
{
"paperId": "edb23d19e4eb5705aa612e163416533dea1ec37e",
"title": "Technology Readiness and Technology Acceptance Model in New Technology Implementation Process in Low Technology SMEs"
},
{
"paperId": "81d7cef664d43eeb4fb2c64515791c6bfc3b95e5",
"title": "Transforming Government : People , Process and Policy"
},
{
"paperId": null,
"title": "Questions and answers about using blockchain technology in real estate practice"
},
{
"paperId": "86853b90aa84889491c16ed5c3a35c6d6cc33828",
"title": "Development, measurement and validation of an integrated technology readiness acceptance and planned behaviour model for Indian mobile banking industry"
},
{
"paperId": null,
"title": "The business blockchain: Promise, practice, and application of the next internet technology"
},
{
"paperId": null,
"title": "Blockchain technology: Beyond bitcoin"
},
{
"paperId": "3fc06a9772f6344637ae41afea6fbb6c1e3d7230",
"title": "INTEGRATION OF TECHNOLOGY READINESS (TR) INTO THE TECHNOLOGY ACCEPTANCE MODEL (TAM) FOR M-SHOPPING"
},
{
"paperId": "849adc4acb9ea017fc17a1d3e70fa6edeb10662d",
"title": "Understanding the Internet banking adoption: A unified theory of acceptance and use of technology and perceived risk application"
},
{
"paperId": null,
"title": "Multi - variate data analysis: Global edition (7th ed.)"
},
{
"paperId": "3787715114e286042aac4fd9b612114c226c6fe9",
"title": "Structural Equation Modeling and Regression: Guidelines for Research Practice"
},
{
"paperId": null,
"title": "Blockchain can transform the real estate industry for the better"
},
{
"paperId": null,
"title": "Claudia Ng received her Bachelor of Data Analytics"
},
{
"paperId": null,
"title": "Science by Research from Sunway University. She is a data a Malaysian bank. Aleš Popovič is a Full Professor of Information Systems at Business School in France"
},
{
"paperId": null,
"title": "Land deal sealed using bitcoin. The Star"
}
] | 25,224
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Mathematics",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/00ae3f736b28e2050e23acc65fcac1a516635425
|
[
"Computer Science",
"Mathematics"
] | 0.87555
|
Collaborative deep learning across multiple data centers
|
00ae3f736b28e2050e23acc65fcac1a516635425
|
Science China Information Sciences
|
[
{
"authorId": null,
"name": "Kele Xu"
},
{
"authorId": "40565983",
"name": "Haibo Mi"
},
{
"authorId": "49732389",
"name": "Dawei Feng"
},
{
"authorId": "143969934",
"name": "Huaimin Wang"
},
{
"authorId": "50434146",
"name": "Chuan Chen"
},
{
"authorId": "144291579",
"name": "Zibin Zheng"
},
{
"authorId": "143866730",
"name": "Xu Lan"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"Sci China Inf Sci"
],
"alternate_urls": null,
"id": "0534c8a0-1226-4f5b-bcf6-a13a8dd1825e",
"issn": "1869-1919",
"name": "Science China Information Sciences",
"type": null,
"url": "http://info.scichina.com/"
}
|
Valuable training data is often owned by independent organizations and located in multiple data centers. Most deep learning approaches require centralizing the multi-datacenter data for performance reasons. In practice, however, it is often infeasible to transfer all data of different organizations to a centralized data center owing to the constraints of privacy regulations. It is very challenging to conduct geo-distributed deep learning among data centers without privacy leaks. Model averaging is a conventional choice for data-parallelized training and can reduce the risk of privacy leaks, but previous studies have claimed it is ineffective because deep neural networks are often non-convex. In this paper, we argue that model averaging can be effective in the decentralized environment by using two strategies, namely, the cyclical learning rate (CLR) and an increased number of epochs for local model training. With the two strategies, we show that model averaging can provide competitive performance in the decentralized mode compared to the data-centralized one. In a practical environment with multiple data centers, we conduct extensive experiments using state-of-the-art deep network architectures on different types of data. Results demonstrate the effectiveness and robustness of the proposed method.
|
## Collaborative Deep Learning Across Multiple Data Centers
### Kele Xu^{1,2}, Haibo Mi^{1,2}, Dawei Feng^{1,2}, Huaimin Wang^{1,2}, Chuan Chen^3, Zibin Zheng^3, Xu Lan^4
1 National Key Laboratory of Parallel and Distributed Processing, Changsha, China
2 College of Computer, National University of Defense Technology, Changsha, China
3 School of Data and Computer Science, Sun Yat-Sen University, Guangzhou, China
4 Queen Mary University of London, London, UK
**Abstract**
Valuable training data is often owned by independent organizations and located in multiple data centers. Most deep learning approaches require centralizing the multi-datacenter data for performance reasons. In practice, however, it is often infeasible to transfer all data to a centralized data center, due not only to bandwidth limitations but also to the constraints of privacy regulations. Model averaging is a conventional choice for data-parallelized training, but previous studies have claimed it is ineffective because deep neural networks are often non-convex. In this paper, we argue that model averaging can be effective in the decentralized environment by using two strategies, namely, the cyclical learning rate and an increased number of epochs for local model training. With the two strategies, we show that model averaging can provide competitive performance in the decentralized mode compared to the data-centralized one. In a practical environment with multiple data centers, we conduct extensive experiments using state-of-the-art deep network architectures on different types of data. Results demonstrate the effectiveness and robustness of the proposed method.
### Introduction
Sensitive data, such as medical imaging data, genetic sequences, financial records and other personal information, is often managed by independent organizations like hospitals and companies (Tian et al. 2016). Many deep learning (DL) algorithms prefer to use as much data as possible, distributed across different organizations, for training, because the performance of these DL algorithms directly depends on the amount of high-quality data, not only for rarely occurring patterns but also for robustness to outliers (Amir-Khalili et al. 2017). In practice, however, directly sharing data between different organizations is very difficult for many reasons, including privacy protection, legal risk and conflicts of interest. It has therefore become an important research topic for both academia and industry to fully employ the data of different organizations for training DL models without centralizing the data, while achieving performance similar to centralized training after moving all data together.
Recently, there has been a trend to use collaborative solvers to train a global model on geo-distributed, multi-datacenter data without directly sharing data between different data centers (Cano et al. 2016; Hsieh et al. 2017). Specifically, several participants independently train DL models for a while, and periodically aggregate their local updates to construct a shared model. Only parameters are exchanged, and all the training data is kept in its original place (McMahan et al. 2016). However, there are several challenges for this approach:
- Large performance gap compared to the centralized mode: When training on disjoint multi-party data, traditional deep models trained with Stochastic Gradient Descent (SGD) struggle to match the performance of their centralized counterparts. Further, with limited local data, each local learner is vulnerable to falling into local optima, as deep models are generally non-convex.

- High communication cost: Different datasets are stored in different data centers (on private or public clouds). DL algorithms typically require frequent communication to exchange parameter updates so that the shared deep model achieves superior performance. However, current parameter servers are designed for high-speed local area networks (LANs). Due to the limited bandwidth of wide-area networks (WANs), parameters of the global model cannot be exchanged frequently in the multi-datacenter environment. It is therefore necessary to decrease the communication cost of parameter exchange between data centers while retaining the accuracy of the shared model.

- High model aggregation complexity: The update strategy to aggregate the local models is complicated. As each participant has its own training setting, the approach to aggregating local learners should be simple. In addition, the aggregation method should support learning with different deep neural network architectures.
In this work, we propose a multi-datacenter based collaborative deep learning method (denoted as co-learning), which (1) minimizes the performance gap between the centralized and decentralized modes, (2) minimizes the inter-datacenter communication cost during the co-training procedure over WANs, and (3) is applicable to a wide variety of deep network architectures without any change.
The co-learning approach adopts two strategies to improve the performance of a shared model in distributed learning, on top of the conventional model averaging method. First, we adopt a modified cyclical learning rate (Izmailov et al. 2018), so as to avoid falling into local optima during local training. Second, we enlarge the number of local epochs once the difference between two consecutive shared models falls below a threshold, so as to increase the diversity between local models and reduce the inter-datacenter communication cost. The synchronization period is thereby extended from milliseconds or seconds to tens of minutes or even hours.
Surprisingly, despite the claims from previous studies
(Povey, Zhang, and Khudanpur 2014; McMahan et al.
2016), we find that model averaging in the decentralized
mode can provide competitive performance compared to
the traditional centralized mode. Extensive experiments are
conducted on three different tasks: image classification,
text classification and audio classification. Using the co-learning method, we have tested various state-of-the-art neural network architectures including VGGNet (Simonyan and
Zisserman 2014), ResNet (He et al. 2016), DenseNet (Huang
et al. 2017) and Capsule architectures (Sabour, Frosst, and
Hinton 2017). All the experiments reveal that the proposed
co-learning approach can provide superior performance in
the decentralized mode. In summary, the main contributions
include:
- We propose a collaborative deep learning approach using model averaging. With two simple strategies (a cyclical learning rate and an increased number of local training epochs), we show that model averaging can provide competitive performance compared to the centralized mode.

- Our approach enables the training of collaborative deep learning in a practical WAN environment.

- The proposed co-learning is flexible enough to be applied to a wide range of deep learning architectures without any change.
The remainder of this paper is organized as follows.
Section 2 describes the related work, while Section 3 presents the details of our co-learning approach. Section 4 describes the experimental results; the discussion and conclusion are given in Section 5.
### Related Work
With the increase of data size and model complexity, training a deep neural network can take a long time. An increasingly common way to scale deep learning is to partition the training dataset and concurrently train separate models on the disjoint subsets. By aggregating the updates of the local models' parameters via a parameter server, a shared model can be constructed. In this paper, we refer to this method as collaborative deep learning, which can be applied in practical situations where each participant wants to hide its own training data from the others.
#### Parallelized Stochastic Gradient Descent
Many recent attempts have been made to parallelize SGD-based learning schemes across multiple data centers (Hsieh et al. 2017; Zhang et al. 2017). Nevertheless, the geo-distributed nature of data prevents its widespread utilization between organizations, owing to the aforementioned reasons such as limitations in cross-data-center connectivity or data sovereignty regulations. To break through these restrictions, increasing effort has been made. (Shokri and Shmatikov 2015) uses a parallel stochastic gradient descent algorithm to train the model with privacy preservation in mind; however, the communication cost between client and server is prohibitively high, so the approach can seldom be deployed in WAN scenarios due to bandwidth limits. (Tian et al. 2016) proposed a secure multi-party computation (MPC) approach for simple and effective computations, yet its overhead for complex computations and model training is nontrivial. Consequently, this approach is more suitable for shallow ML models and difficult to apply to deep learning models (Zinkevich et al. 2010).

Furthermore, to reduce the communication cost, many compression approaches have been explored, such as gradient quantization (Alistarh et al. 2017), network pruning (Lin et al. 2017), and knowledge distillation (Anil et al. 2018; Hinton, Vinyals, and Dean 2015).
#### Model averaging
For collaborative deep learning, model averaging is an alternative to parallelized SGD (Su and Chen 2015; Povey, Zhang, and Khudanpur 2014). However, most previous studies (Sun et al. 2017; Goodfellow, Vinyals, and Saxe 2014) claimed that traditional model averaging cannot provide satisfactory performance in the distributed setting, as a deep neural network is a highly non-convex model. For example, (Povey, Zhang, and Khudanpur 2014) claimed that the model averaging algorithm did not work well for speech recognition models. The main reason supporting these claims was that when the data available for training a local model is limited, the local models may fall into different local optima. The shared model obtained by averaging the local models' parameters might then perform even worse than any local model. Moreover, in the follow-up step, the shared model would be used as the new starting point for the successive iterations of local training, and its poor performance would drastically slow down the convergence of the training process and further decrease the performance of the shared model (Sun et al. 2017). To avoid falling into local optima, many regularization methods have been proposed (Srivastava et al. 2014; Ioffe and Szegedy 2015). In (Izmailov et al. 2018), it was found that using a cyclical learning rate can lead to better generalization than conventional training.

A federated learning approach (McMahan et al. 2016) was proposed for data parallelization in the context of deep learning. It targeted model training on massive numbers of mobile devices, employing a fixed number of epochs for local model training on the devices. In contrast, we utilize a modified cyclical learning rate and an increasing number of epochs for local model training to obtain competitive performance in the decentralized mode in comparison to the centralized one.

Figure 1: Workflow of co-learning. Assume that the participants are different data centers. Each participant holds an amount of private data and uses this disjoint data to train a local classifier. The local model parameters are averaged by the global server to form the new shared model, which in turn is used as the starting point for the next round of local training. Besides the new shared model, the global server also updates the number of local training epochs and the learning rate.
### Methodology
#### Notation and problem formulation
A typical process of parallel training for deep models is illustrated in Figure 1. Participants train their local models with their individual deep learning platforms in their private data centers (in private clouds or trusted public clouds). These local data centers communicate over WANs. In the practical situation, due to the limitation of WAN bandwidth, participants cannot exchange updates frequently.

In the following, we denote a deep neural network as $f(w)$, where $w$ represents the parameters of the neural network model. In addition, we denote the output of the model on input $x$ as $f(w, x)$. In the parallel training of deep models, suppose there are $K$ participants and each of them holds a local dataset $D_k = \{(x_{k,1}, y_{k,1}), \ldots, (x_{k,m_k}, y_{k,m_k})\}$ of size $m_k$, $k \in \{1, \ldots, K\}$. Denote the weights of the neural network model at iteration $t$ of the $i$-th round (with $T_i$ epochs performed) on participant $k$ as $w_k^{i,t}$. A typical parallel training procedure for a neural network then implements the following two steps:
- Local training for the participants: At the $t$-th iteration of round $i$, participant $k$ updates its local model using SGD. We refer to one full pass over all local training data as an epoch. The local model is communicated and aggregated to form a shared model after $T_i$ epochs, where $T_i$ is decided dynamically by the global server. Each participant then initializes its local parameters for the following round of local training by downloading the latest values of the shared model from the global server. During local training, a participant does not need to exchange data with other participants. At iteration $t$ of the $i$-th round, the empirical loss of the $k$-th local model is defined as

$$L(f(w_k^{i,t}, x_k), y_k) = \sum_{m=1}^{m_k} L(f(w_k^{i,t}, x_{k,m}), y_{k,m}). \qquad (1)$$

Specifically, participant $k$ updates its local model from $w_k^{i,t}$ to $w_k^{i,t+1}$ by minimizing the training loss using SGD.
- Model aggregation for the global server: Firstly, the global server initializes the shared model parameters and pushes them to all participants. The local training of each participant follows the aforementioned procedures. If one participant $k$ fails to upload its parameters due to network errors or other failures, the global server will restart the local training process of participant $k$. After all $K$ participants finish their updates in the $i$-th round and obtain the parameters $w_k^i$, the global deep neural network model is updated by taking the average of the $K$ sets of parameters, i.e.,

$$\bar{w}^i = \frac{1}{K} \sum_{k=1}^{K} w_k^i, \qquad (2)$$

which is further sent back to the local participants and set as the initial parameters for the following training. Further, the number of epochs $T_i$ is reset according to the conditions defined in Equation (4) below. The parameters of the shared model, as well as $T_i$ and $\eta^i$, are sent back to the local participants and used as the starting point for the next round of local training (as can be seen in Figure 1).

#### Cyclical learning rate and increasing local epochs

To avoid falling into local optima, we employ a cyclical learning rate (CLR) schedule in the training phase of the local participants. Specifically, within the $i$-th communication round, we decay the learning rate with an exponential annealing for each epoch $j$ as follows:

$$\eta_j^i = \eta^i \times r^{\,j/T_i}, \qquad (3)$$

where $r$ is the decay rate (in our experiments, $r$ is set to 1/4) and $\eta^i$ is the shared learning rate in the $i$-th round, used as the initial value for each participant's local learning rate. It can be updated as $i$ grows; for simplicity, we set $\eta^i$ to a constant value (0.01) in this paper. As mentioned above, the global server has to decide the number of epochs for the local participants dynamically, since these values have a significant impact on the accuracy of the shared model. The number of local epochs in the $i$-th round, $T_i$, is updated based on the following rules:

$$T_i = \begin{cases} T_0, & \text{if } i = 0,\\ 2\,T_{i-1}, & \text{if } i > 0 \text{ and } \dfrac{|\bar{w}^i - \bar{w}^{i-1}|}{|\bar{w}^{i-1}|} \le \epsilon,\\ T_{i-1}, & \text{if } i > 0 \text{ and } \dfrac{|\bar{w}^i - \bar{w}^{i-1}|}{|\bar{w}^{i-1}|} > \epsilon, \end{cases} \qquad (4)$$

where $\epsilon$ controls the convergence precision of the shared model parameters. In other words, the number of epochs per round is doubled once the relative change of the shared model parameters between consecutive rounds drops below $\epsilon$. The pseudocode of the proposed co-learning is given in Algorithm 1.
**Algorithm 1 co-learning**

```
initialize w^0, η^0 and T_0
for each round i = 0, 1, 2, ..., N do
    reset T_i according to Equation (4)
    send w^i, η^i and T_i to the participants
    for each participant k ∈ {1, ..., K} in parallel do
        for local epoch j from 1 to T_i do
            update η_j^i according to Equation (3)
            w_k^i ← localSGD(w^i, η_j^i)
        upload w_k^i to the server
    w^{i+1} ← (1/K) Σ_{k=1}^{K} w_k^i
```
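To make the scheduling and aggregation rules above concrete, the following is a minimal NumPy sketch of the server-side co-learning loop, assuming model parameters are flattened into a single vector. The `participants` list and its `local_sgd` callables are placeholders for each data center's private training routine; they are not part of any released code from the paper.

```python
import numpy as np

def cyclical_lr(eta, j, T, r=0.25):
    """Eq. (3): within a round, anneal the rate from eta toward eta * r;
    the schedule restarts (cycles) at the beginning of every round."""
    return eta * r ** (j / T)

def next_num_epochs(T_prev, w_bar, w_bar_prev, eps):
    """Eq. (4): double the local epoch budget once the relative change of
    the shared parameters between consecutive rounds drops below eps."""
    rel = np.linalg.norm(w_bar - w_bar_prev) / np.linalg.norm(w_bar_prev)
    return 2 * T_prev if rel <= eps else T_prev

def co_learning(participants, w0, eta=0.01, T0=5, rounds=100, eps=1e-3):
    """Algorithm 1: broadcast the shared weights, run local SGD at each
    site, then average the returned parameter vectors (Eq. (2))."""
    w_bar, w_prev, T = np.asarray(w0, dtype=float), None, T0
    for _ in range(rounds):
        if w_prev is not None:
            T = next_num_epochs(T, w_bar, w_prev, eps)
        lr_schedule = [cyclical_lr(eta, j, T) for j in range(1, T + 1)]
        # Only parameters travel over the WAN; raw data never leaves a site.
        local_weights = [local_sgd(w_bar.copy(), lr_schedule)
                         for local_sgd in participants]
        w_prev, w_bar = w_bar, np.mean(local_weights, axis=0)
    return w_bar
```

Note that the learning-rate cycle restarts from $\eta^i$ at every round, which is what lets each local optimizer escape sharp basins before annealing again.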
#### Ablation study on CLR and ILE
In this part, we perform a thorough ablation study to
highlight the benefits of cyclical learning rate (CLR) and
increasing local epochs (ILE) on model averaging. We
also employ the exponential learning rate (ELR, i.e. non-cyclical learning rate) and fixed local epochs (FLE) for the
quantitative comparison.
We run experiments on the CIFAR-10 dataset, which consists of 32 × 32 three-channel images in 10 classes. The 50,000 training images are partitioned into five disjoint subsets, stored in five different data centers, each containing 10,000 samples. The 10,000 test images are used for evaluation. The initial values of T0 for the DenseNet-40, ResNet-152, Inception-V4 and Inception-ResNet-V2 models are 5, 5, 20 and 5, respectively. The batch size in these experiments was set to 32.
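For reference, the disjoint five-way split used here can be produced with a single shuffle; this is a sketch under the assumption that each shard is then shipped once to its data center:

```python
import numpy as np

def partition_disjoint(x, y, num_parts=5, seed=0):
    """Split the 50,000 CIFAR-10 training examples into five disjoint
    shards of 10,000, one per participating data center."""
    idx = np.random.default_rng(seed).permutation(len(x))
    return [(x[s], y[s]) for s in np.array_split(idx, num_parts)]
```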
Using the pairwise combinations of (cyclical learning rate (CLR), exponential learning rate (ELR)) and (increasing local epochs (ILE), fixed local epochs (FLE)), Figure 2 shows the accuracy of the model averaging method for training DenseNet-40, ResNet-152, Inception-V4 and Inception-ResNet-V2. As can be seen from the figure:
- The combination of CLR and ILE achieves the highest accuracy on all four network architectures. The results demonstrate that co-learning (CLR+ILE, the red line) tends to generalize better, which indicates the benefit of both the cyclical learning rate and the increasing local epochs. The reason might be that co-learning converges to flat local optima rather than sharp, isolated optima. Such flat regions are robust to perturbations of the data as well as of the parameters, both of which are crucial factors for good generalization.

- Similar to previous studies using model averaging, the combination of ELR and FLE (the green line) cannot effectively improve the performance of collaborative learning, and tends to over-fit during training. In other words, the performance of the shared model cannot be improved by model averaging alone, without any optimization strategy.

- Further, ELR+ILE leads to a converged result, whereas CLR+FLE is prone to over-fitting. This indicates that ILE may bring larger performance gains than CLR on the CIFAR-10 dataset, and that ILE increases the diversity between local models, which in turn yields a better shared model.

Figure 2: Accuracy on the CIFAR-10 dataset using different strategies. The employed neural network architectures are Inception-V4, ResNet, Inception-ResNet and DenseNet. Using the proposed ILE strategy, DenseNet-40, ResNet-152 and Inception-V4 enlarge T0 at the 250th, 175th and 340th epoch respectively, while Inception-ResNet-V2 increases T0, T1, T2 at the 15th, 105th and 265th epoch, respectively. After each adjustment, the performance of the shared model sees a significant improvement in the following rounds. The FLE strategy in the bottom-right panel (the blue and green lines) experiences an early stop, as it does not boost performance in the preceding rounds.

Table 1: Stats for using CLR+ILE on different models in a communication round.

| Models | Comm. interval (min. / T0) | Comm. volume (MB) |
| --- | --- | --- |
| DenseNet-40 | 4.5 / 5 | 13 |
| ResNet-152 | 30 / 5 | 223 |
| Inception-V4 | 60 / 20 | 168 |
| Inception-ResNet-V2 | 27.5 / 5 | 218 |
#### Communication cost
We briefly summarize the communication cost of the proposed co-learning approach. Table 1 lists the communication interval and the transferred volume of one model per round. The second column gives the communication interval between a local participant and the global server in a communication round before T0 is increased (i.e. the time elapsed between two consecutive model synchronizations). Using the CLR+ILE strategy, the communication intervals for the different models range from minutes to hours, e.g. 60 minutes for Inception-V4 and 27.5 minutes for Inception-ResNet-V2. Moreover, if T is enlarged later in training, the communication interval is further extended. Taking Inception-V4 as an example, at the 340th epoch the number of local epochs T is increased from 20 to 40; consequently, the communication interval grows from 60 minutes to 120 minutes, which greatly alleviates the dependence on WAN bandwidth.

In short, by combining CLR and ILE, the performance of the shared model can be increased while the communication cost is reduced. It is also worth noting that we do not employ compression techniques, with which the communication cost could be decreased further.
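The per-round volumes in Table 1 are essentially the size of one serialized set of model weights. A rough sanity check, assuming 32-bit floats and ignoring protocol overhead (the 60M parameter count used below for ResNet-152 is an approximation, not a figure from the paper):

```python
def weights_volume_mb(num_params, bytes_per_param=4):
    """Approximate one-way transfer size of a full parameter set."""
    return num_params * bytes_per_param / 2**20

# ResNet-152 has roughly 60 million parameters:
print(f"{weights_volume_mb(60e6):.0f} MB")  # ~229 MB, close to Table 1's 223
```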
### Experiments
#### Experimental Settings
To demonstrate the effectiveness of co-learning, empirical experiments were conducted on three different tasks: image classification, text classification and audio classification. For image classification, both CIFAR-10 and ImageNet-2014 (Russakovsky et al. 2015) were used; for text classification, the Toxic Comment Classification dataset was used; for audio classification, the Google speech commands data (Sainath and Parada 2015) and Audio Set (Gemmeke et al. 2017) were employed. Using the proposed co-learning method, different neural network architectures were tested, including state-of-the-art architectures. We conducted experiments across five geo-distributed data centers in a public cloud, each equipped with a GPU server with four Tesla P40s. Each dataset was randomly allocated to 5 participants in an equally distributed manner. All our experiments were implemented in TensorFlow-Slim, and all results are the average of five repeated trials. The following two groups of experiments were conducted.
- It is a common strategy to integrate the training results of the participants using ensemble learning: each participant independently trains its own model, without interacting with other participants during training, and the average output of the participants' models is used as the final prediction (see the sketch after this list). On the CIFAR-10 dataset, accuracy comparisons between ensemble-learning and co-learning were carried out on different network architectures. In addition, training a deep model on the entire dataset in a single data center (denoted vanilla-learning below) is introduced as a reference for comparison. Except for the two proposed co-learning strategies, the configuration settings for vanilla-learning are kept the same as those of co-learning.

- Moreover, to make a quantitative comparison between the data-centralized training method and the decentralized one, we conducted comprehensive experiments using vanilla-learning and the proposed co-learning on different deep network architectures and various types of datasets.
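As noted in the first item above, the ensemble baseline simply averages the class-probability outputs of the five independently trained local models. A minimal sketch, assuming each model is a callable that returns softmax probabilities for a batch:

```python
import numpy as np

def ensemble_predict(models, x):
    """Average the softmax outputs of the local models and pick the
    arg-max class (the ensemble-learning baseline)."""
    probs = np.mean([m(x) for m in models], axis=0)
    return probs.argmax(axis=-1)
```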
#### Ensemble-learning, vanilla-learning and co-learning
In the following experiment, using the CIFAR-10 dataset, we compare ensemble-learning, vanilla-learning and co-learning on five models (VGG-19, ResNet-152, Inception-V4, Inception-ResNet-V2 and DenseNet-40). For vanilla-learning, the exponential learning rate (ELR) is employed. Table 2 shows the results. It can be observed that with ensemble-learning, model accuracy declines significantly, i.e. by nearly 10% compared with vanilla-learning. As each participant has only 1/5 of the training data (disjoint), the accuracy of each local model is poor; consequently, averaging the outputs of the models after independent local training cannot reach the performance of vanilla-learning. On the contrary, the accuracy obtained by co-learning is competitive with vanilla-learning. Surprisingly, co-learning on four of the models (VGG-19, ResNet-152, Inception-V4 and DenseNet-40) even achieves better performance than vanilla-learning. These results again demonstrate the effectiveness of the cyclical learning rate (CLR) and increasing local epochs (ILE) for model averaging.

Table 2: CIFAR-10 accuracy comparison between ensemble-learning, vanilla-learning and co-learning.

| Model | vanilla (%) | ensemble (%) | co-learning (%) |
| --- | --- | --- | --- |
| VGG-19 | 89.44 | 80.39 | **89.64** |
| ResNet-152 | 92.64 | 85.4 | **93.51** |
| Inception-V4 | 91.34 | 83.83 | **92.07** |
| Inception-ResNet-V2 | **92.86** | 84.7 | 92.83 |
| DenseNet-40 | 91.35 | 81.24 | **91.43** |
#### Comparison between co-learning and vanilla-learning
**Image Classification.** We conduct a further image classification experiment on ImageNet-2014 to evaluate the generalization accuracy of co-learning, since classification error on ImageNet is particularly important: many state-of-the-art computer vision systems derive image features or architectures from ImageNet classification models.

In the training phase, we follow standard data augmentation practices: scale and aspect ratio distortions, random crops, and horizontal flips. The batch size is set to 256. Three state-of-the-art models (VGG-19, Inception-V4 and ResNet-V2-101) are trained using both the co-learning and the vanilla-learning approach. Top-1 and Top-5 accuracy rates are reported in Table 3. We find that co-learning improves accuracy over vanilla-learning under the same network architecture settings, which illustrates the promising potential of co-learning and indicates that the approach can be applied generically to large-scale image classification.

Table 3: Test accuracy on ImageNet-2014 using different models.

| Model | Method | Top-1 (%) | Top-5 (%) |
| --- | --- | --- | --- |
| VGG-19 | vanilla | 70.41 | 88.12 |
| VGG-19 | co-learning | **70.62** | **88.7** |
| Inception-V4 | vanilla | 79.16 | 93.82 |
| Inception-V4 | co-learning | **79.35** | **94.28** |
| ResNet-V2-101 | vanilla | 75.66 | 92.28 |
| ResNet-V2-101 | co-learning | **75.85** | **92.39** |
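The augmentation recipe described above (scale and aspect-ratio distortions, random crops, horizontal flips) corresponds to standard Inception-style preprocessing. A TensorFlow sketch of one plausible implementation follows; the exact crop ranges are assumptions, not values reported in the paper:

```python
import tensorflow as tf

def augment(image):
    """Scale/aspect-ratio distorted random crop plus horizontal flip."""
    # Use the whole image as the reference bounding box.
    whole = tf.reshape(tf.constant([0.0, 0.0, 1.0, 1.0]), [1, 1, 4])
    begin, size, _ = tf.image.sample_distorted_bounding_box(
        tf.shape(image), bounding_boxes=whole,
        min_object_covered=0.1,
        aspect_ratio_range=(0.75, 1.33),
        area_range=(0.08, 1.0))
    image = tf.slice(image, begin, size)
    image = tf.image.resize(image, [224, 224])
    return tf.image.random_flip_left_right(image)
```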
**Text Classification.** We also run experiments on a large-scale toxic comment classification task to demonstrate the effectiveness of co-learning on a natural language processing problem. The training dataset consists of 159,571 Wikipedia comments which have been labeled by human raters for toxic behavior, while 153,164 records are used for evaluation. The types of toxicity are: toxic, severe toxic, obscene, threat, insult, and identity hate. In the training stage, the dataset is randomly partitioned across 5 participants, each holding equal-size disjoint examples stored in a different data center.
For the classification, the employed models are an LSTM (Greff et al. 2017) and a Capsule network (Hinton, Frosst, and Sabour 2018). The input embedding for each word has dimension 300 (pre-trained fastText word vectors (Bojanowski et al. 2017) are employed). For the LSTM model, we use a bidirectional GRU, with the batch size set to 128. For the Capsule model, the input is the reshaped embedding vectors, and the second layer is a primary capsule layer with strides of 1; this layer consists of 32 "Component Capsules" with a dimension of 8. The final capsule layer includes 6 capsules, referred to as "Class Capsules", one for each type of toxicity; the dimension of these capsules is 16.
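A minimal Keras sketch of the recurrent baseline just described; the 128-unit GRU width and the single dense head are assumptions, since the text only fixes the 300-d embeddings, the bidirectional GRU, and the six toxicity outputs:

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_toxic_gru(vocab_size, max_len, embedding_matrix):
    """Bidirectional GRU over frozen 300-d fastText embeddings,
    with six sigmoid outputs (one per toxicity type)."""
    inp = layers.Input(shape=(max_len,))
    x = layers.Embedding(
        vocab_size, 300, trainable=False,
        embeddings_initializer=tf.keras.initializers.Constant(
            embedding_matrix))(inp)
    x = layers.Bidirectional(layers.GRU(128))(x)
    out = layers.Dense(6, activation="sigmoid")(x)
    model = tf.keras.Model(inp, out)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.AUC(name="auc")])
    return model
```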
For the evaluation, the mean column-wise ROC AUC is used. As can be seen from Table 4, co-learning improves accuracy compared with vanilla-learning. The experimental results suggest that our method is practically applicable to large-scale text classification tasks.

Table 4: Multi-class AUC on the Toxic Comment Classification Challenge dataset.

| Model | vanilla | co-learning |
| --- | --- | --- |
| LSTM | 98.52 | **98.79** |
| Capsule | 98.32 | **98.75** |
**Audio Classification.** Next, we conduct experiments on the audio classification task. Two different datasets are used: the Google commands dataset and Audio Set.

- Google command recognition: The Google commands dataset contains 65,000 utterances, each about one second long and belonging to one of 30 classes. The voice commands include classes such as left, right, yes and no. To process the utterances, we first compute log-mel spectrograms from the raw audio signal at a sample rate of 16 kHz. The model architecture consists of two convolutional layers followed by two fully connected layers and a softmax layer for classification (a sketch is given after this list). While this model is not state of the art, it is sufficient for our needs, as our goal is a quantitative study rather than the best possible accuracy on this task. Table 5 gives the recognition accuracy of co-learning and vanilla-learning; nearly the same accuracy is achieved using co-learning.

Table 5: TensorFlow speech commands recognition.

| Method | Validation accuracy (%) | Test accuracy (%) |
| --- | --- | --- |
| vanilla | 93.1 | **93.3** |
| co-learning | **93.3** | 93.2 |
- Audio event classification using Audio Set: To make a quantitative comparison between co-learning and vanilla-learning at scale, large-scale audio event classification experiments are conducted. Audio Set consists of a large ontology of 632 sound event classes and a collection of 2 million human-labeled sound clips (mostly 10 seconds long) drawn from 2 million YouTube videos.

Each audio recording feature has 240 frames by 64 mel frequency channels, which are employed as the input for the different architectures. Convolutional recurrent neural networks (CRNN) are adopted for the classification task. Specifically, one bidirectional gated recurrent neural network with 128 units is used. Instead of applying a single-level attention model after the fully connected neural network, multiple attention modules (Yu et al. 2018) can also be applied after intermediate layers. The batch size is set to 128 for the different network architectures. Table 6 summarizes the results of the different architectures; overall, the accuracy of co-learning is similar to that of vanilla-learning, demonstrating the general applicability of our method on audio datasets.

Table 6: Audio Set classification task using a single / multiple data center(s). AP denotes CRNN with average pooling, MP CRNN with max pooling, SA CRNN with single attention and MA CRNN with multi-attention.

| Models | MAP (vanilla / co-learning) | AUC (vanilla / co-learning) | d-prime (vanilla / co-learning) |
| --- | --- | --- | --- |
| AP | **0.300 / 0.299** | **0.964 / 0.962** | **2.536 / 2.506** |
| MP | 0.292 / 0.292 | **0.960 / 0.959** | **2.471 / 2.456** |
| SA | 0.337 / 0.337 | **0.968 / 0.966** | **2.612 / 2.574** |
| MA | **0.357 / 0.352** | 0.968 / 0.968 | **2.621 / 2.618** |
### Discussion and Conclusion
In this paper, we present co-learning, a novel collaborative deep learning approach for training deep models on disjoint multi-party datasets. Extensive experiments are conducted on different types of data, including images, text and audio, to demonstrate the effectiveness of co-learning both quantitatively and qualitatively. All the experiments show that the co-learning method provides performance competitive with (and sometimes better than) data-centralized learning.

The experiments also indicate the benefit of both the cyclical learning rate and the enlarged local training epoch strategies. The reason might be that co-learning converges to flat local optima rather than sharp, isolated ones. Such flat regions are robust to perturbations of the data as well as of the parameters, both of which are crucial factors for good generalization.

On the one hand, by restarting the optimization with a large learning rate, the intrinsic random motion across the gradient direction prevents the model from settling in any of the sharp basins along its optimization path, which allows the model to find a better local optimum. Although performance temporarily suffers when the learning rate cycle is restarted, it eventually surpasses the previous cycle after the learning rate is annealed. On the other hand, by increasing the number of local epochs across rounds, each local model can take larger steps in parameter space and is thus expected to reach better accuracy on its local dataset. Moreover, the increased local epochs add diversity between the local models, whose average is consequently a better shared model.

In brief, our co-learning method offers a solution for collaborative deep learning in the context of multi-party data. Future work includes practical privacy mechanisms and secure multi-party computation within the co-learning framework.
### Acknowledgments
This work was supported by the National Grand R&D
Plan (Grant No. 2016YFB1000101).
### References
[Alistarh et al. 2017] Alistarh, D.; Grubic, D.; Li,
J.; Tomioka, R.; and Vojnovic, M. 2017. Qsgd:
Communication-efficient sgd via gradient quantization
and encoding. In Advances in Neural Information
_Processing Systems, 1709–1720._
[Amir-Khalili et al. 2017] Amir-Khalili, A.; Kianzad, S.;
Abugharbieh, R.; and Beschastnikh, I. 2017. Scalable and
fault tolerant platform for distributed learning on private
medical data. In International Workshop on Machine Learn_ing in Medical Imaging, 176–184. Springer._
[Anil et al. 2018] Anil, R.; Pereyra, G.; Passos, A.; Ormandi,
R.; Dahl, G. E.; and Hinton, G. E. 2018. Large scale distributed neural network training through online distillation.
_arXiv preprint arXiv:1804.03235._
[Bojanowski et al. 2017] Bojanowski, P.; Grave, E.; Joulin,
A.; and Mikolov, T. 2017. Enriching word vectors with
subword information. Transactions of the Association for
_Computational Linguistics 5:135–146._
[Cano et al. 2016] Cano, I.; Weimer, M.; Mahajan, D.;
Curino, C.; and Fumarola, G. M. 2016. Towards geo-distributed machine learning. _arXiv preprint_
_arXiv:1603.09035._
[Gemmeke et al. 2017] Gemmeke, J. F.; Ellis, D. P.; Freedman, D.; Jansen, A.; Lawrence, W.; Moore, R. C.; Plakal, M.; and Ritter, M. 2017. Audio set: An ontology and human-labeled dataset for audio events. In Acoustics, Speech and Signal Processing (ICASSP), 2017 IEEE International Conference on, 776–780. IEEE.
[Goodfellow, Vinyals, and Saxe 2014] Goodfellow, I. J.;
Vinyals, O.; and Saxe, A. M. 2014. Qualitatively
characterizing neural network optimization problems. arXiv
_preprint arXiv:1412.6544._
[Greff et al. 2017] Greff, K.; Srivastava, R. K.; Koutník, J.; Steunebrink, B. R.; and Schmidhuber, J. 2017. LSTM: A search space odyssey. IEEE Transactions on Neural Networks and Learning Systems 28(10):2222–2232.
[He et al. 2016] He, K.; Zhang, X.; Ren, S.; and Sun, J.
2016. Deep residual learning for image recognition. In
_Proceedings of the IEEE conference on computer vision and_
_pattern recognition, 770–778._
[Hinton, Frosst, and Sabour 2018] Hinton, G.; Frosst, N.;
and Sabour, S. 2018. Matrix capsules with em routing.
[Hinton, Vinyals, and Dean 2015] Hinton, G.; Vinyals, O.;
and Dean, J. 2015. Distilling the knowledge in a neural
network. arXiv preprint arXiv:1503.02531.
[Hsieh et al. 2017] Hsieh, K.; Harlap, A.; Vijaykumar, N.;
Konomis, D.; Ganger, G. R.; Gibbons, P. B.; and Mutlu, O.
2017. Gaia: Geo-distributed machine learning approaching
lan speeds. In NSDI, 629–647.
[Huang et al. 2017] Huang, G.; Liu, Z.; Van Der Maaten,
L.; and Weinberger, K. Q. 2017. Densely connected
convolutional networks. In CVPR, volume 1, 3.
[Ioffe and Szegedy 2015] Ioffe, S., and Szegedy, C. 2015.
Batch normalization: Accelerating deep network training by reducing internal covariate shift. _arXiv preprint_
_arXiv:1502.03167._
[Izmailov et al. 2018] Izmailov, P.; Podoprikhin, D.;
Garipov, T.; Vetrov, D.; and Wilson, A. G. 2018. Averaging
weights leads to wider optima and better generalization.
_arXiv preprint arXiv:1803.05407._
[Lin et al. 2017] Lin, Y.; Han, S.; Mao, H.; Wang, Y.; and
Dally, W. J. 2017. Deep gradient compression: Reducing
the communication bandwidth for distributed training. arXiv
_preprint arXiv:1712.01887._
[McMahan et al. 2016] McMahan, H. B.; Moore, E.; Ramage, D.; Hampson, S.; et al. 2016. Communication-efficient
learning of deep networks from decentralized data. arXiv
_preprint arXiv:1602.05629._
[Povey, Zhang, and Khudanpur 2014] Povey, D.; Zhang, X.;
and Khudanpur, S. 2014. Parallel training of dnns with
natural gradient and parameter averaging. _arXiv preprint_
_arXiv:1410.7455._
[Russakovsky et al. 2015] Russakovsky, O.; Deng, J.; Su, H.;
Krause, J.; Satheesh, S.; Ma, S.; Huang, Z.; Karpathy, A.;
Khosla, A.; Bernstein, M.; et al. 2015. Imagenet large
scale visual recognition challenge. International Journal of
_Computer Vision 115(3):211–252._
[Sabour, Frosst, and Hinton 2017] Sabour, S.; Frosst, N.; and
Hinton, G. E. 2017. Dynamic routing between capsules. In
_Advances in Neural Information Processing Systems, 3856–_
3866.
[Sainath and Parada 2015] Sainath, T. N., and Parada, C.
2015. Convolutional neural networks for small-footprint
keyword spotting. In Sixteenth Annual Conference of the
_International Speech Communication Association._
[Shokri and Shmatikov 2015] Shokri, R., and Shmatikov, V.
2015. Privacy-preserving deep learning. In Proceedings
_of the 22nd ACM SIGSAC conference on computer and_
_communications security, 1310–1321. ACM._
[Simonyan and Zisserman 2014] Simonyan, K., and Zisserman, A. 2014. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556.
[Srivastava et al. 2014] Srivastava, N.; Hinton, G.;
Krizhevsky, A.; Sutskever, I.; and Salakhutdinov, R.
2014. Dropout: a simple way to prevent neural networks
from overfitting. _The Journal of Machine Learning_
_Research 15(1):1929–1958._
[Su and Chen 2015] Su, H., and Chen, H. 2015. Experiments
on parallel training of deep neural network using model
averaging. arXiv preprint arXiv:1507.01239.
[Sun et al. 2017] Sun, S.; Chen, W.; Bian, J.; Liu, X.; and
Liu, T.-Y. 2017. Ensemble-compression: A new method for
parallel training of deep neural networks. In Joint European
_Conference on Machine Learning and Knowledge Discovery_
_in Databases, 187–202. Springer._
[Tian et al. 2016] Tian, L.; Jayaraman, B.; Gu, Q.; and
Evans, D. 2016. Aggregating private sparse learning models
using multi-party computation. In NIPS Workshop on
_Private Multi-Party Machine Learning, Barcelona, Spain._
[Yu et al. 2018] Yu, C.; Barsim, K. S.; Kong, Q.; and Yang,
B. 2018. Multi-level attention model for weakly supervised
audio classification. arXiv preprint arXiv:1803.02353.
[Zhang et al. 2017] Zhang, H.; Zheng, Z.; Xu, S.; Dai, W.;
Ho, Q.; Liang, X.; Hu, Z.; Wei, J.; Xie, P.; and Xing, E. P.
2017. Poseidon: An efficient communication architecture
for distributed deep learning on gpu clusters. arXiv preprint.
[Zinkevich et al. 2010] Zinkevich, M.; Weimer, M.; Li, L.;
and Smola, A. J. 2010. Parallelized stochastic gradient
descent. In Advances in neural information processing
_systems, 2595–2603._
-----
| 10,184
|
{
"disclaimer": "Notice: Paper or abstract available at https://arxiv.org/abs/1810.06877, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": null,
"status": "CLOSED",
"url": ""
}
| 2,018
|
[
"JournalArticle"
] | false
| 2018-10-16T00:00:00
|
[
{
"paperId": "ea97db4dfbb3c21cc4d86f6636e882f31a0ef718",
"title": "Generative API usage code recommendation with parameter concretization"
},
{
"paperId": "62ccd99a65bfc7c735ae1f33b75b107665de95df",
"title": "Federated Machine Learning"
},
{
"paperId": "b8989afff14fb630ca58b6afa917fb42574228ee",
"title": "Averaging Weights Leads to Wider Optima and Better Generalization"
},
{
"paperId": "9f5263cda2d58fb3dfaff5ec6db70b0d2ae53c68",
"title": "Multi-level attention model for weakly supervised audio classification"
},
{
"paperId": "cc59b4b1eb7d4629f753bc24f029c5cced301381",
"title": "Large scale distributed neural network training through online distillation"
},
{
"paperId": "603caed9430283db6c7f43169555c8d18e97a281",
"title": "Matrix capsules with EM routing"
},
{
"paperId": "92495abbac86394cb759bec15a763dbf49a8e590",
"title": "Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training"
},
{
"paperId": "c4c06578f4870e4b126e6837907929f3c900b99f",
"title": "Dynamic Routing Between Capsules"
},
{
"paperId": "64432e6239af562d49f222d274d020894f3a9b9b",
"title": "Two-stage local constrained sparse coding for fine-grained visual categorization"
},
{
"paperId": "3a5899cb3200ade9315468c2293a550978ef0a7f",
"title": "Scalable and Fault Tolerant Platform for Distributed Learning on Private Medical Data"
},
{
"paperId": "6e578d6e9531dbf0d948081fe109df9b254ad4c4",
"title": "The Simpler The Better: A Unified Approach to Predicting Original Taxi Demands based on Large-Scale Online Platforms"
},
{
"paperId": "079932bf6ff8b99c899172ba60071818f6b5dfcb",
"title": "Poseidon: An Efficient Communication Architecture for Distributed Deep Learning on GPU Clusters"
},
{
"paperId": "6cad4a36102b6387259f56cc2f09dd7a994ba8fa",
"title": "Gaia: Geo-Distributed Machine Learning Approaching LAN Speeds"
},
{
"paperId": "5ba2218b708ca64ab556e39d5997202e012717d5",
"title": "Audio Set: An ontology and human-labeled dataset for audio events"
},
{
"paperId": "a3b4537344ddcd32be3a9b8c0882a2eb769983b4",
"title": "QSGD: Communication-Optimal Stochastic Gradient Descent, with Applications to Training Neural Networks"
},
{
"paperId": "5694e46284460a648fe29117cbc55f6c9be3fa3c",
"title": "Densely Connected Convolutional Networks"
},
{
"paperId": "e2dba792360873aef125572812f3673b1a85d850",
"title": "Enriching Word Vectors with Subword Information"
},
{
"paperId": "04ad1dc78a9628df48fc7f9e4ca5e67582562130",
"title": "Ensemble-Compression: A New Method for Parallel Training of Deep Neural Networks"
},
{
"paperId": "f20a01b51ad93c8fd7b30186a143093b9c1701ef",
"title": "Towards Geo-Distributed Machine Learning"
},
{
"paperId": "d1dbf643447405984eeef098b1b320dee0b3b8a7",
"title": "Communication-Efficient Learning of Deep Networks from Decentralized Data"
},
{
"paperId": "2c03df8b48bf3fa39054345bafabfeff15bfd11d",
"title": "Deep Residual Learning for Image Recognition"
},
{
"paperId": "f2f8f7a2ec1b2ede48cbcd189b376ab9fa0735ef",
"title": "Privacy-preserving deep learning"
},
{
"paperId": "64a192b0d60763e06e3e2990d32035ce040d71b5",
"title": "Experiments on Parallel Training of Deep Neural Network using Model Averaging"
},
{
"paperId": "a7976c2bacfbb194ddbe7fd10c2e50a545cf4081",
"title": "LSTM: A Search Space Odyssey"
},
{
"paperId": "0c908739fbff75f03469d13d4a1a07de3414ee19",
"title": "Distilling the Knowledge in a Neural Network"
},
{
"paperId": "995c5f5e62614fcb4d2796ad2faab969da51713e",
"title": "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift"
},
{
"paperId": "4d4d09ae8f6a11547441f7fee36405758102a801",
"title": "Qualitatively characterizing neural network optimization problems"
},
{
"paperId": "4030a62e75313110dc4a4c78483f4459dc4526bc",
"title": "Parallel training of Deep Neural Networks with Natural Gradient and Parameter Averaging"
},
{
"paperId": "eb42cf88027de515750f230b23b1a057dc782108",
"title": "Very Deep Convolutional Networks for Large-Scale Image Recognition"
},
{
"paperId": "e74f9b7f8eec6ba4704c206b93bc8079af3da4bd",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"paperId": "83e9565cede81b2b88a9fa241833135da142f4d3",
"title": "Parallelized Stochastic Gradient Descent"
},
{
"paperId": "ada8fe3a8f821d0a6994a648ffda37ffd7125c0f",
"title": "Aggregating Private Sparse Learning Models Using Multi-Party Computation"
},
{
"paperId": "c4756dcc7afc2f09d61e6e4cf2199d9f6dd695cc",
"title": "Convolutional neural networks for small-footprint keyword spotting"
},
{
"paperId": "34f25a8704614163c4095b3ee2fc969b60de4698",
"title": "Dropout: a simple way to prevent neural networks from overfitting"
}
] | 10,184
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Engineering",
"source": "s2-fos-model"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/00af88e55f9e9457beb8d63099de86bd82ceca04
|
[
"Computer Science"
] | 0.880157
|
Dynamic Network Energy Management via Proximal Message Passing
|
00af88e55f9e9457beb8d63099de86bd82ceca04
|
Found. Trends Optim.
|
[
{
"authorId": "3116539",
"name": "Matt Kraning"
},
{
"authorId": "1737474",
"name": "E. Chu"
},
{
"authorId": "1688041",
"name": "J. Lavaei"
},
{
"authorId": "1843103",
"name": "Stephen P. Boyd"
}
] |
{
"alternate_issns": null,
"alternate_names": null,
"alternate_urls": null,
"id": null,
"issn": null,
"name": null,
"type": null,
"url": null
}
| null |
Foundations and Trends® in Optimization
Vol. 1, No. 2 (2013) 70–122
© 2013 M. Kraning, E. Chu, J. Lavaei, and S. Boyd
DOI: xxx
## Dynamic Network Energy Management via Proximal Message Passing
Matt Kraning
Stanford University
[email protected]
Javad Lavaei
Columbia University
[email protected]
Eric Chu
Stanford University
[email protected]
Stephen Boyd
Stanford University
[email protected]
## Contents
**1 Introduction**
1.1 Overview
1.2 Related work
1.3 Outline

**2 Network Model**
2.1 Formal definition and notation
2.2 Dynamic optimal power flow problem
2.3 Discussion
2.4 Example

**3 Device Examples**
3.1 Generators
3.2 Transmission lines
3.3 Converters and interface devices
3.4 Storage devices
3.5 Loads

**4 Convexity**
4.1 Devices
4.2 Relaxations

**5 Proximal Message Passing**
5.1 Derivation
5.2 Convergence
5.3 Discussion

**6 Numerical Examples**
6.1 Network topology
6.2 Devices
6.3 Serial multithreaded implementation
6.4 Peer-to-peer implementation
6.5 Results

**7 Extensions**
7.1 Closed-loop control
7.2 Security constrained optimal power flow
7.3 Hierarchical models and virtualized devices
7.4 Local stopping criteria and ρ updates

**8 Conclusion**
### Abstract
We consider a network of devices, such as generators, fixed loads, deferrable loads, and storage devices, each with its own dynamic constraints and objective, connected by AC and DC lines. The problem is
to minimize the total network objective subject to the device and line
constraints over a time horizon. This is a large optimization problem
with variables for consumption or generation for each device, power
flow for each line, and voltage phase angles at AC buses in each period.
We develop a decentralized method for solving this problem called
_proximal message passing. The method is iterative: At each step, each_
device exchanges simple messages with its neighbors in the network
and then solves its own optimization problem, minimizing its own objective function, augmented by a term determined by the messages it
has received. We show that this message passing method converges to
a solution when the device objective and constraints are convex. The
method is completely decentralized, and needs no global coordination
other than synchronizing iterations; the problems to be solved by each
device can typically be solved extremely efficiently and in parallel.
The proximal message passing method is fast enough that even a serial implementation can solve substantial problems in reasonable time
frames. We report results for several numerical experiments, demonstrating the method’s speed and scaling, including the solution of a
problem instance with over 30 million variables in 5 minutes for a serial
implementation; with decentralized computing, the solve time would be
less than one second.
# 1
## Introduction
### 1.1 Overview
A traditional power grid is operated by solving a number of optimization problems. At the transmission level, these problems include
unit commitment, economic dispatch, optimal power flow (OPF), and
security-constrained OPF (SC-OPF). At the distribution level, these
problems include loss minimization and reactive power compensation.
With the exception of the SC-OPF, these optimization problems are
static with a modest number of variables (often less than 10000), and
are solved on time scales of 5 minutes or more. However, the operation
of next generation electric grids (i.e., smart grids) will rely critically on
solving large-scale, dynamic optimization problems involving hundreds
of thousands of devices jointly optimizing tens to hundreds of millions
of variables, on the order of seconds rather than minutes [16, 41]. More
precisely, the distribution level of a smart grid will include various types
of active dynamic devices, such as distributed generators based on solar
and wind, batteries, deferrable loads, curtailable loads, and electric vehicles, whose control and scheduling amount to a very complex power
management problem [59, 9].
In this paper, we consider a general problem, which we call the
_dynamic optimal power flow problem_ (D-OPF), in which dynamic devices are connected by both AC and DC lines, and the goal is to jointly
minimize a network objective subject to local constraints on the devices
and lines. The network objective is the sum of the objective functions of
the devices. These objective functions extend over a given time horizon
and encode operating costs such as fuel consumption and constraints
such as limits on power generation or consumption. In addition, the
objective functions encode dynamic objectives and constraints such as
limits on ramp rates for generators or charging and capacity limits for
storage devices. The variables for each device consist of its consumption
or generation in each time period and can also include local variables
which represent internal states of the device over time, such as the state
of charge of a storage device.
When all device objective functions and line constraints are convex,
D-OPF is a convex optimization problem, which can in principle be
solved efficiently [7]. If not all device objective functions are convex, we
can solve a relaxed form of the D-OPF which can be used to find good,
local solutions to the D-OPF. The optimal value of the relaxed D-OPF
also gives a lower bound for the optimal value of the D-OPF which can
be used to evaluate the suboptimality of a local solution, or, when the
local solution has the same value, as a certificate of global optimality.
For any network, the corresponding D-OPF contains at least as
many variables as the number of devices and lines multiplied by the
length of the time horizon. For large networks with hundreds of thousands of devices and a time horizon with tens or hundreds of time periods, the extremely large number of variables present in the corresponding D-OPF makes solving it in a centralized fashion computationally
impractical, even when all device objective functions are convex.
We propose a decentralized optimization method which efficiently
solves the D-OPF by distributing computation across every device in
the network. This method, which we call proximal message passing,
is iterative: At each iteration, every device passes simple messages to
its neighbors and then solves an optimization problem that minimizes
the sum of its own objective function and a simple regularization term
that only depends on the messages it received from its neighbors in the
previous iteration. As a result, the only non-local coordination needed
between devices for proximal message passing is synchronizing iterations. When all device objective functions are convex, we show that
proximal message passing converges to a solution of the D-OPF.
Our algorithm can be used several ways. It can be implemented in
a traditional way on a single computer or cluster by collecting all the
device constraints and objectives. We will demonstrate this use with
an implementation that runs on a single 32-core computer with hyperthreading (64 independent threads). A more interesting use is in a
peer-to-peer architecture, in which each device contains its own processor, which carries out the required local dynamic optimization and
exchanges messages with its neighbors on the network. In this setting,
the devices do not need to divulge their objectives or constraints; they
only need to support a simple protocol for interacting with their neighbors. Our algorithm ensures that the network power flows and AC bus
phase angles will converge to their optimal values, even though each
device has very little information about the rest of the network, and
only exchanges limited messages with its immediate neighbors.
Due to recent advances in convex optimization [61, 46, 47], in many
cases the optimization problems that each device solves in each iteration of proximal message passing can be executed at millisecond
or even microsecond time-scales on inexpensive, embedded processors.
Since this execution can happen in parallel across all devices, the entire
network can execute proximal message passing at kilohertz rates. We
present a series of numerical examples to illustrate this fact by using
proximal message passing to solve instances of the D-OPF with over
30 million variables serially in 5 minutes. Using decentralized computing, the solve time would be essentially independent of the size of the
network and require just a fraction of a second.
We note that although a primary application for proximal message
passing is power management, it can easily be adapted to more general
resource allocation and graph-structured optimization problems [51, 2].
### 1.2 Related work
The use of optimization in power systems dates back to the 1920s and
has traditionally concerned the optimal dispatch problem [22], which
aims to find the lowest cost method for generating and delivering power
to consumers, subject to physical generator constraints. With the advent of computer and communication networks, many different ways
to numerically solve this problem have been proposed [62] and more
sophisticated variants of optimal dispatch have been introduced, such
as OPF, economic dispatch, and dynamic dispatch [12], which extend
optimal dispatch to include various reliability and dynamic constraints.
For reviews of optimal and economic dispatch as well as general power
systems, see [4] and the book and review papers cited above.
When modeling AC power flow, the D-OPF is a dynamic version of
the OPF [8], extending the latter to include many more types of devices
such as storage units. Recent smart grid research has focused on the
ability of storage devices to cut costs and catalyze the consumption of
variable and intermittent renewables in the future energy market [23,
44, 13, 48]. With D-OPF, these storage concerns are directly addressed
and modeled in the problem formulation with the introduction of a time
horizon and coupling constraints between variables across periods.
Distributed optimization methods are naturally applied to power
networks given the graph-structured nature of the transmission and
distribution networks. There is an extensive literature on distributed
optimization methods, dating back to the early 1960s. The prototypical example is dual decomposition [14, 17], which is based on solving
the dual problem by a gradient method. In each iteration, all devices
optimize their local (primal) variables based on current prices (dual
variables). Then the dual variables are updated to account for imbalances in supply and demand, with the goal being to determine prices
for which supply equals demand.
Examples of distributed algorithms in the power systems literature include two phase procedures that resemble a single iteration of
dual decomposition. In the first phase, dynamic prices are set over a
given time horizon (usually hourly over the following 24 hours) by some
mechanism (e.g., centrally by an ISO [28, 29], or through information
aggregation in a market [57]). In the second phase, these prices allow
individual devices to jointly optimize their power flows with minimal
(if any) additional coordination over the time horizon. More recently,
building on the work of [39], a distributed algorithm was proposed [38]
to solve the dual OPF using a standard dual decomposition on subsystems that are maximal cliques of the power network.
Dual decomposition methods are not robust, requiring many technical conditions, such as strict convexity and finiteness of all local cost
functions, for both theoretical and practical convergence to optimality.
One way to loosen the technical conditions is to use an augmented Lagrangian [25, 49, 5], resulting in the method of multipliers. This subtle
change allows the method of multipliers to converge under mild technical conditions, even when the local (convex) cost functions are not
strictly convex or necessarily finite. However, this method has the disadvantage of no longer being separable across subsystems. To achieve
both separability and robustness for distributed optimization, we can
instead use the alternating direction method of multipliers (ADMM)
[21, 20, 15, 6]. ADMM is very closely related to many other algorithms,
and is identical to Douglas-Rachford operator splitting; see, e.g., the
discussion in [6, §3.5].
Augmented Lagrangian methods (including ADMM) have previously been applied to the study of power systems with static, single
period objective functions on a small number of distributed subsystems,
each representing regional power generation and consumption [35]. For
an overview of related decomposition methods applied to power flow
problems, we direct the reader to [36, 1] and the references therein.
The proximal message passing decentralized power scheduling
method is similar in spirit to flow control on a communication network,
where each source modulates its sending rate based only on information
about the number of un-acknowledged packets; if the network state remains constant, the flows converge to levels that satisfy the constraints
and maximize a total utility function [33, 42]. In Internet flow control,
this is called end-point control, since flows are controlled (mostly) by
devices on the edges of the network. A decentralized proximal message
passing method is closer to local control, since decision making is based
only on interaction with neighbors on the network. Another difference
is that the messages our method passes between devices are virtual, and
not actual energy flows. (Once converged, of course, they can become
actual energy flows.)
### 1.3 Outline
The rest of this paper is organized as follows. In Chapter 2 we give the formal definition of our network model. In Chapter 3 we give examples of how to model specific devices such as generators, deferrable loads and energy storage systems in our formal framework. In Chapter 4, we
describe the role that convexity plays in the D-OPF and introduce the
idea of convex relaxations as a tool to find solutions to the D-OPF in
the presence of non-convex device objective functions. In Chapter 5 we
derive the proximal message passing equations. In Chapter 6 we present
a series of numerical examples, and in Chapter 7 we discuss how our
framework can be extended to include use cases we do not explicitly
cover in this paper.
# 2
## Network Model
We begin with an abstract definition of our network model and the
dynamic optimal power flow problem, and the compact notation we
use to describe it. We then give some discussion and an example to
illustrate how the model is used to describe a real power network.
### 2.1 Formal definition and notation
A network consists of a finite set of terminals $\mathcal{T}$, a finite set of devices $\mathcal{D}$, and a finite set of nets $\mathcal{N}$. The sets $\mathcal{D}$ and $\mathcal{N}$ are both partitions of $\mathcal{T}$. Thus, each device and each net has a set of terminals associated with it, and each terminal is associated with exactly one device and exactly one net. Equivalently, a network can be defined as a bipartite graph with one set of vertices given by devices, the other set of vertices given by nets, and edges given by terminals — very similar in nature to ‘normal realizations’ [19] of graphs in coding theory.

Each terminal $t \in \mathcal{T}$ has a type, either AC or DC, corresponding to the type of power that flows through the terminal. The set of terminals can be partitioned by type into the sets $\mathcal{T}^{\mathrm{dc}}$ and $\mathcal{T}^{\mathrm{ac}}$, which represent the set of all terminals of type DC and AC, respectively. A terminal of either type has an associated power schedule $p_t = (p_t(1), \ldots, p_t(T)) \in \mathbf{R}^T$, where $T$ is a given time horizon. Here, $p_t(\tau)$ is the amount of power consumed by device $d$ in time period $\tau$ through terminal $t$, where $t$ is associated with $d$. When $p_t(\tau) < 0$, $-p_t(\tau)$ is the energy generated by device $d$ through terminal $t$ in time period $\tau$. (For AC terminals, $p_t$ is the real power flow; we do not consider reactive power in this paper.) In addition to (real) power schedules, AC terminals $t \in \mathcal{T}^{\mathrm{ac}}$ also have phase schedules $\theta_t = (\theta_t(1), \ldots, \theta_t(T)) \in \mathbf{R}^T$, which represent the absolute voltage phase angles for terminal $t$ over time. (DC terminals are not associated with voltage phase angles.)
We use a simple method for indexing quantities such as power schedules that are associated with each terminal and vary over time. For devices $d \in \mathcal{D}$, we use '$d$' to refer to both the device itself as well as the set of terminals associated with it, i.e., we say $t \in d$ if terminal $t$ is associated with device $d$. The set of all power schedules associated with device $d$ is denoted by $p_d = \{p_t \mid t \in d\}$, which we can associate with a $|d| \times T$ matrix. We use the same notation for nets as we do for devices. The set of all terminal power schedules is denoted by $p = \{p_t \mid t \in \mathcal{T}\}$, which we can associate with a $|\mathcal{T}| \times T$ matrix. For other quantities that are associated with each terminal (such as phase schedules), we use an identical notation to power schedules, i.e., $\theta_d = \{\theta_t \mid t \in d\}$ is the set of phase schedules associated with device $d$ (with an identical notation for nets), and the set of all phase schedules is denoted by $\theta = \{\theta_t \mid t \in \mathcal{T}\}$.
Each device $d$ contains a set of $|d|$ terminals and has an associated _objective function_ $f_d : \mathbf{R}^{|d| \times T} \times \mathbf{R}^{|d^{\mathrm{ac}}| \times T} \to \mathbf{R} \cup \{+\infty\}$, where $d^{\mathrm{ac}} = \{t \mid t \in d \cap \mathcal{T}^{\mathrm{ac}}\}$ is the set of all AC terminals associated with device $d$, and we set $f_d(p_d, \theta_d) = \infty$ to encode constraints on the power and phase schedules for the device. When $f_d(p_d, \theta_d) < \infty$, we say that $(p_d, \theta_d)$ are a set of realizable power and phase schedules for device $d$, and we interpret $f_d(p_d, \theta_d)$ as the cost (or revenue, if negative) to device $d$ for operating according to power schedule $p_d$ and phase schedule $\theta_d$.
Similarly, each net $n \in \mathcal{N}$ contains a set of $|n|$ terminals, all of which are required to have the same type. (We will model AC–DC conversion using devices.) We refer to nets containing AC terminals as AC nets and nets containing DC terminals as DC nets. Nets are lossless energy carriers which constrain the power schedules (and phase schedules in the case of AC nets) of their constituent terminals: we require _power balance_ in each time period, which is represented by the constraints

$$\sum_{t \in n} p_t(\tau) = 0, \qquad \tau = 1, \ldots, T, \tag{2.1}$$

for each $n \in \mathcal{N}$. In addition to power balance, each AC net imposes the _phase consistency_ constraints

$$\theta_{t_1}(\tau) = \cdots = \theta_{t_{|n|}}(\tau), \qquad \tau = 1, \ldots, T, \tag{2.2}$$

where $n = \{t_1, \ldots, t_{|n|}\}$. In other words, in each time period the power flows on each net balance, and all terminals on the same AC net have the same phase.
We define the _average net power imbalance_ $\bar{p} : \mathcal{T} \to \mathbf{R}^T$ as

$$\bar{p}_t = \frac{1}{|n|} \sum_{t' \in n} p_{t'}, \tag{2.3}$$

where $t \in n$, i.e., terminal $t$ is associated with net $n$. In other words, $\bar{p}_t(\tau)$ is the average power schedule of all terminals associated with the same net as terminal $t$ at time $\tau$. We overload this notation for devices by defining $\bar{p}_d = \{\bar{p}_t \mid t \in d\}$. Using an identical notation for nets, we can see that $\bar{p}_n$ simply contains $|n|$ copies of the average net power imbalance for net $n$. The net power balance constraint for all terminals can be expressed as $\bar{p} = 0$.

For AC terminals, we define the _phase residual_ $\tilde{\theta} : \mathcal{T}^{\mathrm{ac}} \to \mathbf{R}^T$ as

$$\tilde{\theta}_t = \theta_t - \frac{1}{|n|} \sum_{t' \in n} \theta_{t'} = \theta_t - \bar{\theta}_t,$$

where $t \in n$ and $n$ is an AC net. In other words, $\tilde{\theta}_t(\tau)$ is the difference between the phase angle of terminal $t$ and the average phase angle of all terminals attached to net $n$, at time $\tau$. As with the average power imbalance, we overload this notation for devices by defining $\tilde{\theta}_d = \{\tilde{\theta}_t \mid t \in d \cap \mathcal{T}^{\mathrm{ac}}\}$, with a similar notation for nets. The phase consistency constraint for all AC terminals can be expressed as $\tilde{\theta} = 0$.
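To fix the bookkeeping, the following Python fragment is a minimal sketch (ours, not from the paper; the device/net layout is only our reading of the 11-terminal example of Section 2.4) that stores devices and nets as partitions of a terminal index set and computes the average net power imbalance (2.3):

```python
# Minimal sketch (illustrative, not from the paper) of the network model:
# terminals are indices 0..|T|-1; devices and nets are partitions of the
# terminal set, stored as lists of terminal indices.
from itertools import chain
import numpy as np

num_terminals = 11
devices = [[0], [1], [2], [3], [4, 5], [6, 7], [8, 9], [10]]  # 8 devices
nets = [[0, 2, 4, 6], [1, 5, 8], [3, 7, 9, 10]]               # 3 nets

def is_partition(groups, n):
    # Every terminal must appear in exactly one group.
    return sorted(chain.from_iterable(groups)) == list(range(n))

assert is_partition(devices, num_terminals)
assert is_partition(nets, num_terminals)

# Average net power imbalance (2.3): for every terminal t on net n,
# p_bar[t] is the mean of p[t'] over t' in n. Power balance on all
# nets is exactly the condition p_bar == 0.
T = 4                                    # time horizon
p = np.random.randn(num_terminals, T)    # power schedules, |T| x T
p_bar = np.empty_like(p)
for net in nets:
    p_bar[net] = p[net].mean(axis=0)
```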
### 2.2 Dynamic optimal power flow problem
We say that a set of power and phase schedules $p : \mathcal{T} \to \mathbf{R}^T$, $\theta : \mathcal{T}^{\mathrm{ac}} \to \mathbf{R}^T$ is feasible if $f_d(p_d, \theta_d) < \infty$ for all $d \in \mathcal{D}$ (i.e., all devices' power and phase schedules are realizable), and both $\bar{p} = 0$ and $\tilde{\theta} = 0$ (i.e., power balance and phase consistency holds across all nets). We define the network objective as $f(p, \theta) = \sum_{d \in \mathcal{D}} f_d(p_d, \theta_d)$. The _dynamic optimal power flow problem_ (D-OPF) is

$$\begin{array}{ll} \text{minimize} & f(p, \theta) \\ \text{subject to} & \bar{p} = 0, \quad \tilde{\theta} = 0, \end{array} \tag{2.4}$$

with variables $p : \mathcal{T} \to \mathbf{R}^T$, $\theta : \mathcal{T}^{\mathrm{ac}} \to \mathbf{R}^T$. We refer to $p$ and $\theta$ as _optimal_ if they solve (2.4), i.e., globally minimize the objective among all feasible $p$ and $\theta$. We refer to $p$ and $\theta$ as locally optimal if they are a locally optimal point for (2.4).
**Dual variables and locational marginal prices.** Suppose $p^0$ is a set of optimal power schedules that also minimizes the Lagrangian

$$f(p, \theta) + \sum_{t \in \mathcal{T}} \sum_{\tau=1}^{T} (y_p^0)_t(\tau)\, \bar{p}_t(\tau),$$

subject to $\tilde{\theta} = 0$, where $y_p^0 : \mathcal{T} \to \mathbf{R}^T$ are the dual variables associated with the power balance constraint $\bar{p} = 0$. (This is actually the partial Lagrangian, as we only dualize the power balance constraints, but not the phase consistency constraints.) In this case we call $y_p^0$ a set of optimal Lagrange multipliers or dual variables. When $p^0$ is a locally optimal point, which also locally minimizes the Lagrangian, we refer to $y_p^0$ as a set of locally optimal Lagrange multipliers.

The dual variables $y_p^0$ are related to the traditional concept of _locational marginal prices_ $L^0 : \mathcal{T} \to \mathbf{R}^T$ by rescaling the dual variables associated with each terminal according to the size of its associated net, i.e., $L_t^0 = |n| (y_p^0)_t$, where $t \in n$. This rescaling is due to the fact that locational marginal prices are the dual variables associated with the constraints in (2.1) rather than their scaled form used in (2.4) [18].
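As a toy numerical illustration of these definitions (our own construction, not an example from the paper): with a single net, a single time period, two generators with costs $\alpha_i g_i^2$, and a fixed load $L$, the D-OPF reduces to economic dispatch, and the KKT system yields both the dispatch and the price:

```python
# Toy D-OPF on one net and one period: two generators with costs
# alpha_i * g_i^2 serve a fixed load L. KKT conditions:
#   2*alpha_i*g_i = lam   (stationarity)
#   g1 + g2 = L           ((2.1)-style, unscaled power balance)
# Illustrative example; not code from the paper.
import numpy as np

alpha = np.array([1.0, 2.0])
L = 3.0

A = np.array([[2 * alpha[0], 0.0, -1.0],
              [0.0, 2 * alpha[1], -1.0],
              [1.0, 1.0, 0.0]])
b = np.array([0.0, 0.0, L])
g1, g2, lam = np.linalg.solve(A, b)

print(g1, g2, lam)  # g1 = 2.0, g2 = 1.0, lam = 4.0
# lam is the multiplier of the unscaled balance constraint as written
# here: the marginal cost of serving one more unit of load. The
# cheaper generator takes the larger share of the load.
```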
### 2.3 Discussion
We now describe our model in a less formal manner. Generators, loads,
energy storage systems, and other power sources and sinks are modeled
as single terminal devices. Transmission lines (or more generally, any
wire or set of wires that conveys power), AC-DC converters, and AC
phase shifters are modeled as two-terminal devices. Terminals are ports
on a device through which power flows, either into or out of the device
(or both, at different times, as happens in a storage device). The flow
for AC terminals could be, e.g., three phase, two phase, single phase,
230V, or 230kV; the flow for DC terminals could be high voltage DC,
12V DC, or a floating voltage (e.g., the output of a solar panel). We
model these real cases with a different type for each mechanism (e.g.,
two and three phase AC terminals would have distinct types and could
not be connected to the same net).
Nets are used to model ideal lossless uncapacitated connections between terminals over which power is transmitted and physical constraints hold (e.g., equal voltages, currents summing to zero); losses,
capacities, and more general connection constraints between a set of
terminals can be modeled with the addition of a device and individual
nets which connect each terminal to the new device. An AC net corresponds to a direct connection between its associated terminals (e.g.,
a bus); all the terminals’ voltage phases must be the same and their
power is conserved. A two terminal DC net is just a wired connection
of the terminals. A DC net with more than two terminals is a smart
power router, which actively chooses how to distribute the incoming
and outgoing power flows among its terminals.
The objective function of a device is used to measure the cost (which
can be negative, representing revenue) associated with a particular
mode of operation, such as a given level of consumption or generation of power. This cost can include the actual direct cost of operating
according to the given power schedules, such as a fuel cost, as well as
other costs such as CO2 generation, or costs associated with increased
maintenance and decreased system lifetime due to structural fatigue.
The objective function can also include local variables other than power
and phase schedules, such as the state of charge of a storage device.
Constraints on the power and phase schedules and internal variables for a device are encoded by setting the objective function to $+\infty$ for power and phase schedules that violate the constraints. In many cases, a device's objective function will only take on the values 0 and $+\infty$, indicating no local preference among feasible power and phase schedules. Many devices, especially single-terminal devices such as loads or generators, impose no constraints on their AC terminals' phase angles; in other words, these terminals have 'floating' voltage phase angles, which are free to take any value.
### 2.4 Example
We illustrate how a traditional power network can be recast into our network model in Figure 2.1. The original power network, shown on the left, contains 2 loads, 3 buses, 3 transmission lines, 2 generators, and a single battery storage system. We can transform this small power grid into our model by representing it as a network with 11 terminals, 8 devices, and 3 nets, shown on the right of Figure 2.1. Terminals are shown as small filled circles. Single terminal devices, which are used to model loads, generators, and the battery, are shown as boxes. The transmission lines are two terminal devices represented by solid lines. The nets are shown as dashed rounded boxes. Terminals are associated with the device they touch and the net in which they are contained.

Figure 2.1: A simple network (left); its transformation into standard form (right).
The set of terminals can be partitioned by either the devices they
are associated with, or the nets in which they are contained. Figure 2.2
shows the network in Figure 2.1 as a bipartite graph, with devices on
the left and nets on the right. In this representation, terminals are
represented by the edges of the graph.
Figure 2.2: The network in Figure 2.1 represented as a bipartite graph. Devices
(boxes) are shown on the left with their associated terminals (dots). The terminals
are connected to their corresponding nets (solid boxes) on the right.
# 3
## Device Examples
In this chapter we present several examples of how common devices can
be modeled in our framework. These examples are intentionally kept
simple, but could easily be extended with more refined objectives and
constraints. In these examples, it is easier to discuss operational costs
and constraints for each device separately. A device’s objective function
is equal to the device’s cost function unless any constraint is violated,
in which case we set the objective value to $+\infty$. For all single terminal
devices, we describe their objective and constraints in the case of a DC
terminal. For AC terminal versions of one terminal devices, the cost
functions and constraints are identical to the DC case, and the device
imposes no constraints on the phase schedule.
### 3.1 Generators
A generator is a single-terminal device with power schedule $p_{\mathrm{gen}}$, which generates power over a range, $P^{\min} \le -p_{\mathrm{gen}} \le P^{\max}$, and has ramp-rate constraints

$$R^{\min} \le -Dp_{\mathrm{gen}} \le R^{\max},$$

which limit the change of power levels from one period to the next. Here, the operator $D \in \mathbf{R}^{(T-1) \times T}$ is the forward difference operator, defined as

$$(Dx)(\tau) = x(\tau + 1) - x(\tau), \qquad \tau = 1, \ldots, T - 1.$$
The cost function for a generator has the separable form

$$\psi_{\mathrm{gen}}(p_{\mathrm{gen}}) = \sum_{\tau=1}^{T} \phi_{\mathrm{gen}}(-p_{\mathrm{gen}}(\tau)),$$

where $\phi_{\mathrm{gen}} : \mathbf{R} \to \mathbf{R}$ gives the cost of operating the generator at a given power level over a single time period. This function is typically, but not always, convex and increasing. It could be piecewise linear, or, for example, quadratic:

$$\phi_{\mathrm{gen}}(x) = \alpha x^2 + \beta x,$$

where $\alpha, \beta > 0$.
More sophisticated models of generators allow for them to be
switched on or off, with an associated cost each time they are turned on
or off. When switched on, the generator operates as described above.
When the generator is turned off, it generates no power but can still
incur costs for other activities such as idling.
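As a quick illustration (our own sketch, with made-up limits and costs), the generator's objective is easy to evaluate for a candidate power schedule, following the paper's sign convention that generation is $-p_{\mathrm{gen}} \ge 0$:

```python
# Evaluate a generator objective f_gen(p): the quadratic stage cost if
# the range and ramp-rate constraints hold, and +inf otherwise.
# Limits and coefficients below are illustrative.
import numpy as np

P_min, P_max = 0.0, 10.0   # generation range
R_min, R_max = -2.0, 2.0   # ramp-rate limits on -(Dp)
alpha, beta = 0.1, 1.0     # quadratic cost coefficients

def f_gen(p):
    g = -p                 # generation level (paper sign convention)
    ramp = -np.diff(p)     # -(Dp), the period-to-period change
    if np.any(g < P_min) or np.any(g > P_max):
        return np.inf
    if np.any(ramp < R_min) or np.any(ramp > R_max):
        return np.inf
    return np.sum(alpha * g**2 + beta * g)

print(f_gen(-np.array([1.0, 2.0, 3.0])))  # feasible: finite cost 7.4
print(f_gen(-np.array([1.0, 9.0, 3.0])))  # ramp violated: inf
```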
### 3.2 Transmission lines
**DC transmission line.** A DC transmission line is a device with two DC terminals with power schedules $p_1$ and $p_2$ that transports power across some distance. The line has zero cost function, but the power flows are constrained. The sum $p_1 + p_2$ represents the loss in the line and is always nonnegative. The difference $p_1 - p_2$ can be interpreted as twice the power flow from terminal one to terminal two. A DC transmission line has a maximum flow capacity, given by

$$\frac{|p_1 - p_2|}{2} \le C^{\max},$$

and the constraint

$$p_1 + p_2 - \ell(p_1, p_2) = 0,$$

where $\ell(p_1, p_2) : \mathbf{R}^T \times \mathbf{R}^T \to \mathbf{R}_+^T$ is a loss function.
For a simple model of the line as a series resistance $R$ with average terminal voltage $V$, we have [4]

$$\ell(p_1, p_2) = \frac{R}{V^2} \left( \frac{p_1 - p_2}{2} \right)^2.$$
A more sophisticated model for the capacity of a DC transmission
line includes a dynamic thermal model for the temperature of the line,
which (indirectly and dynamically) sets the maximum capacity of the
line. A simple model for the temperature at time $\tau$, denoted $\xi(\tau)$, is given by the first order linear dynamics

$$\xi(\tau + 1) = \alpha \xi(\tau) + (1 - \alpha)\xi^{\mathrm{amb}}(\tau) + \beta(p_1(\tau) + p_2(\tau)),$$

for $\tau = 1, \ldots, T - 1$, where $\xi^{\mathrm{amb}}(\tau)$ is the ambient temperature at time $\tau$, and $\alpha$ and $\beta$ are model parameters that depend on the thermal properties of the line. The capacity is then dynamically modulated by requiring that $\xi \le \xi^{\max}$, where $\xi^{\max}$ is the maximum safe temperature for the line.
**AC transmission line.** An AC transmission line is a device with two
AC terminals, with (real) power schedules p1 and p2 and terminal voltage phase angles θ1 and θ2, that transmits power across some distance.
It has zero cost function, but the power flows and voltage phase angles
are constrained. Like a DC transmission line, the sum p1 + p2 represents the loss in the line, ℓ, and is always nonnegative. The difference
$p_1 - p_2$ can be interpreted as twice the power flow from terminal one to terminal two. An AC line has a maximum flow capacity given by

$$\frac{|p_1 - p_2|}{2} \le C^{\max}.$$

(A line temperature based capacity constraint, as described for a DC transmission line, can also be used for AC transmission lines.)

We assume the line is characterized by its (series) admittance $g + ib$, with $g > 0$. (We consider the series admittance model for simplicity; for a more general Π model, similar but more complicated equations can be derived.) Under the common assumption that the voltage magnitude is fixed at $V$ [4], the power and phase schedules satisfy the relations

$$p_1 + p_2 = 2gV^2(1 - \cos(\theta_2 - \theta_1)), \qquad \frac{p_1 - p_2}{2} = bV^2 \sin(\theta_2 - \theta_1),$$
which can be combined to give the relations

$$p_1 + p_2 = \frac{1}{4gV^2}(p_1 + p_2)^2 + \frac{g}{4b^2V^2}(p_1 - p_2)^2, \qquad \frac{p_1 - p_2}{2} = bV^2 \sin(\theta_2 - \theta_1).$$

Transmission lines are rarely operated with a phase angle difference exceeding 15°, and in this regime, the approximation $\sin(\theta_2 - \theta_1) \approx \theta_2 - \theta_1$ holds within 1%. This approximation, known as the 'DC-OPF approximation' [4], is frequently used in power systems analysis and transforms the second relation above into the relation

$$\frac{p_1 - p_2}{2} = bV^2(\theta_2 - \theta_1). \tag{3.1}$$

Note that the capacity limit constrains $|p_1 - p_2|$, which in turn constrains the phase angle difference $|\theta_2 - \theta_1|$; thus we can guarantee that our small angle approximation is good by imposing a capacity constraint (which is possibly smaller than the true line capacity).
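The quoted accuracy is easy to verify numerically; a quick check (ours, purely illustrative) of the relative error of the small-angle approximation up to 15°:

```python
# Check the DC-OPF small-angle approximation sin(x) ~ x for phase
# differences up to 15 degrees (illustrative check, not from the paper).
import numpy as np

x = np.linspace(1e-6, np.radians(15.0), 1000)
rel_err = np.abs(x - np.sin(x)) / np.sin(x)
print(rel_err.max())  # about 0.0115, i.e. roughly the 1% figure
```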
### 3.3 Converters and interface devices
**Inverter.** An inverter is a device with a DC terminal, dc, and an AC terminal, ac, that transforms power from DC to AC and has no cost function. An inverter has a maximum power output $C^{\max}$ and a conversion efficiency $\kappa \in (0, 1]$. It can be represented by the constraints

$$-p_{\mathrm{ac}}(\tau) = \kappa p_{\mathrm{dc}}(\tau), \qquad 0 \le -p_{\mathrm{ac}}(\tau) \le C^{\max}, \qquad \tau = 1, \ldots, T.$$

The voltage phase angle on the AC terminal, $\theta_{\mathrm{ac}}$, is unconstrained.
**Rectifier.** A rectifier is a device with an AC terminal, ac, and a DC terminal, dc, that transforms power from AC to DC and has no cost function. A rectifier has a maximum power output $C^{\max}$ and a conversion efficiency $\kappa \in (0, 1]$. It can be represented by the constraints

$$-p_{\mathrm{dc}}(\tau) = \kappa p_{\mathrm{ac}}(\tau), \qquad 0 \le -p_{\mathrm{dc}}(\tau) \le C^{\max}, \qquad \tau = 1, \ldots, T.$$

The voltage phase angle on the AC terminal, $\theta_{\mathrm{ac}}$, is unconstrained.
**Phase shifter.** A phase shifter is a device with two AC terminals, which is a lossless energy carrier that decouples their phase angles and has zero cost function. A phase shifter enforces the power balance and capacity limit constraints

$$p_1(\tau) + p_2(\tau) = 0, \qquad |p_1(\tau)| \le C^{\max}, \qquad \tau = 1, \ldots, T.$$

If the phase shifter can only support power flow in one direction, say, from terminal 1 to terminal 2, then in addition we have the inequalities $p_1(\tau) \ge 0$, $\tau = 1, \ldots, T$. The voltage phase angles $\theta_1$ and $\theta_2$ are unconstrained. (Indeed, this is what a phase shifter is meant to do.) When there is no capacity constraint, i.e., $C^{\max} = \infty$, we can think of a phase shifter as a special type of net for AC terminals that enforces power balance, but not voltage phase consistency. (However, we model it as a device, not a net.)
**External tie with transaction cost.** An external tie is a connection to an external source of power. We represent this as a single terminal device with power schedule $p_{\mathrm{ex}}$. In this case, $p_{\mathrm{ex}}(\tau)_- = \max\{-p_{\mathrm{ex}}(\tau), 0\}$ is the amount of energy pulled from the source, and $p_{\mathrm{ex}}(\tau)_+ = \max\{p_{\mathrm{ex}}(\tau), 0\}$ is the amount of energy delivered to the source, at time $\tau$. We have the constraint $|p_{\mathrm{ex}}(\tau)| \le E^{\max}(\tau)$, where $E^{\max} \in \mathbf{R}^T$ is the transaction limit.

We suppose that the prices for buying and selling energy are given by $c \pm \gamma$ respectively, where $c(\tau)$ is the midpoint price, and $\gamma(\tau) > 0$ is the difference between the price for buying and selling (i.e., the transaction cost). The cost function is then

$$-(c - \gamma)^T (p_{\mathrm{ex}})_+ + (c + \gamma)^T (p_{\mathrm{ex}})_- = -c^T p_{\mathrm{ex}} + \gamma^T |p_{\mathrm{ex}}|,$$

where $|p_{\mathrm{ex}}|$, $(p_{\mathrm{ex}})_+$, and $(p_{\mathrm{ex}})_-$ are all interpreted elementwise.
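The equality of the two cost expressions is a one-line numerical check; a small sketch of ours, on random data:

```python
# Numerically verify the external-tie cost identity
#   -(c-gamma)^T (p)_+ + (c+gamma)^T (p)_-  ==  -c^T p + gamma^T |p|
# (illustrative check, not code from the paper).
import numpy as np

rng = np.random.default_rng(0)
T = 24
p = rng.normal(size=T)                  # power schedule p_ex
c = rng.normal(size=T)                  # midpoint prices
gamma = rng.uniform(0.1, 1.0, size=T)   # transaction costs, > 0

p_plus, p_minus = np.maximum(p, 0), np.maximum(-p, 0)
lhs = -(c - gamma) @ p_plus + (c + gamma) @ p_minus
rhs = -c @ p + gamma @ np.abs(p)
assert np.isclose(lhs, rhs)
```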
### 3.4 Storage devices
A battery is a single terminal energy storage device with power schedule $p_{\mathrm{bat}}$, which can take in or deliver energy, depending on whether it is charging or discharging. The charging and discharging rates are limited by the constraints $-D^{\max} \le p_{\mathrm{bat}} \le C^{\max}$, where $C^{\max} \in \mathbf{R}^T$ and $D^{\max} \in \mathbf{R}^T$ are the maximum charging and discharging rates. At time $\tau$, the charge level of the battery is given by local variables

$$q(\tau) = q^{\mathrm{init}} + \sum_{t=1}^{\tau} p_{\mathrm{bat}}(t), \qquad \tau = 1, \ldots, T,$$

where $q^{\mathrm{init}}$ is the initial charge. It has zero cost function and the charge level must not exceed the battery capacity, i.e., $0 \le q(\tau) \le Q^{\max}$, $\tau = 1, \ldots, T$. It is common to constrain the terminal battery charge $q(T)$ to be some specified value or to match the initial charge $q^{\mathrm{init}}$.
More sophisticated battery models include (possibly state-dependent) charging and discharging inefficiencies as well as charge leakage [26]. In addition, they can include costs which penalize excessive charge-discharge cycling.

The same general form can be used to model other types of energy storage systems, such as those based on super-capacitors, flywheels, pumped hydro, or compressed air, to name just a few.
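For concreteness, a sketch of ours (with illustrative parameters) of the battery's feasibility test, with the charge trajectory obtained by a cumulative sum:

```python
# Battery device: check a power schedule against rate and capacity
# limits; q(tau) = q_init + cumulative sum of p_bat. Parameters are
# illustrative, not from the paper.
import numpy as np

C_max, D_max = 2.0, 2.0    # max charging / discharging rates
Q_max, q_init = 5.0, 1.0   # capacity and initial charge

def battery_feasible(p_bat):
    if np.any(p_bat > C_max) or np.any(p_bat < -D_max):
        return False
    q = q_init + np.cumsum(p_bat)
    return bool(np.all(q >= 0.0) and np.all(q <= Q_max))

print(battery_feasible(np.array([2.0, 2.0, -1.0])))  # True: q = 3, 5, 4
print(battery_feasible(np.array([2.0, 2.0, 1.0])))   # False: q reaches 6
```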
### 3.5 Loads
**Fixed load.** A fixed energy load is a single terminal device with zero cost function which consists of a desired consumption profile, $l \in \mathbf{R}^T$. This consumption profile must be satisfied in each period, i.e., we have the constraint $p_{\mathrm{load}} = l$.
**Thermal load.** A thermal load is a single terminal device with power schedule $p_{\mathrm{therm}}$ which consists of a heat store (room, cooled water reservoir, refrigerator), with temperature profile $\xi \in \mathbf{R}^T$, which must be kept within minimum and maximum temperature limits, $\xi^{\min} \in \mathbf{R}^T$ and $\xi^{\max} \in \mathbf{R}^T$. The temperature of the heat store evolves as

$$\xi(\tau + 1) = \xi(\tau) + (\mu/c)(\xi^{\mathrm{amb}}(\tau) - \xi(\tau)) - (\eta/c)p_{\mathrm{therm}}(\tau),$$

for $\tau = 1, \ldots, T - 1$ and $\xi(1) = \xi^{\mathrm{init}}$, where $0 \le p_{\mathrm{therm}} \le H^{\max}$ is the cooling power consumption profile, $H^{\max} \in \mathbf{R}^T$ is the maximum cooling power, $\mu$ is the ambient conduction coefficient, $\eta$ is the heating/cooling efficiency, $c$ is the heat capacity of the heat store, $\xi^{\mathrm{amb}} \in \mathbf{R}^T$ is the ambient temperature profile, and $\xi^{\mathrm{init}}$ is the initial temperature of the heat store. A thermal load has zero cost function.
More sophisticated models [27] include temperature-dependent
cooling and heating efficiencies for heat pumps, more complex dynamics
of the system whose temperature is being controlled, and additional additive terms in the thermal dynamics, to represent occupancy or other
heat sources.
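A direct simulation makes the dynamics concrete; a minimal sketch of ours, with made-up parameters, that propagates the temperature and checks the band:

```python
# Simulate the thermal-load temperature dynamics for a given cooling
# schedule and check the temperature limits. Parameters are illustrative.
import numpy as np

T = 6
mu, eta, c = 0.5, 2.5, 5.0        # conduction, efficiency, heat capacity
xi_amb = np.full(T, 30.0)         # ambient temperature profile
xi_min, xi_max, xi_init = 18.0, 24.0, 22.0

def simulate(p_therm):
    xi = np.empty(T)
    xi[0] = xi_init
    for tau in range(T - 1):
        xi[tau + 1] = (xi[tau]
                       + (mu / c) * (xi_amb[tau] - xi[tau])
                       - (eta / c) * p_therm[tau])
    return xi

xi = simulate(np.full(T, 1.7))    # constant cooling power
print(np.round(xi, 2), bool(np.all((xi_min <= xi) & (xi <= xi_max))))
```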
**Deferrable load.** A deferrable load is a single terminal device with zero cost function that must consume a minimum amount of power over a given interval of time, which is characterized by the constraint $\sum_{\tau=A}^{D} p_{\mathrm{load}}(\tau) \ge E$, where $E$ is the minimum total consumption for the time interval $\tau = A, \ldots, D$. The energy consumption in each time period is constrained by $0 \le p_{\mathrm{load}} \le L^{\max}$. In some cases, the load can only be turned on or off in each time period, i.e., $p_{\mathrm{load}}(\tau) \in \{0, L^{\max}\}$ for $\tau = A, \ldots, D$.
**Curtailable load.** A curtailable load is a single terminal device which does not impose hard constraints on its power requirements, but instead penalizes the shortfall between a desired load profile $l \in \mathbf{R}^T$ and delivered power. In the case of a linear penalty, its cost function is

$$\alpha \mathbf{1}^T (l - p_{\mathrm{load}})_+,$$

where $(z)_+ = \max(0, z)$, $p_{\mathrm{load}} \in \mathbf{R}^T$ is the amount of electricity delivered to the device, $\alpha > 0$ is a penalty parameter, and $\mathbf{1}$ is the vector with all components one. Extensions include time-varying and nonlinear penalties on the energy shortfall.
**Electric vehicle.** An electric vehicle charging system is an example of a device that combines aspects of a deferrable load and a storage device. We model it as a single terminal device with power schedule $p_{\mathrm{ev}}$ which has a desired charging profile $c^{\mathrm{des}} \in \mathbf{R}^T$ and can be charged within a time interval $\tau = A, \ldots, D$. To avoid excessive charge cycling, we assume that the electric vehicle battery cannot be discharged back into the grid (in more sophisticated vehicle-to-grid models, this assumption is relaxed), so we have the constraints $0 \le p_{\mathrm{ev}} \le C^{\max}$, where $C^{\max} \in \mathbf{R}^T$ is the maximum charging rate. We assume that $c^{\mathrm{des}}(\tau) = 0$ for $\tau = 1, \ldots, A - 1$, $c^{\mathrm{des}}(\tau) = c^{\mathrm{des}}(D)$ for $\tau = D + 1, \ldots, T$, and that the charge level is given by

$$q(\tau) = q^{\mathrm{init}} + \sum_{t=A}^{\tau} p_{\mathrm{ev}}(t),$$

where $q^{\mathrm{init}}$ is the initial charge when it is plugged in at time $\tau = A$.

We can model electric vehicle charging as a deferrable load, where we require a given charge level to be achieved at some time. A more realistic model is as a combination of a deferrable and curtailable load, with cost function

$$\alpha \sum_{\tau=A}^{D} (c^{\mathrm{des}}(\tau) - q(\tau))_+,$$

where $\alpha > 0$ is a penalty parameter. Here $c^{\mathrm{des}}(\tau)$ is the desired charge level at time $\tau$, and $(c^{\mathrm{des}}(\tau) - q(\tau))_+$ is the shortfall.
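Putting the pieces together, the EV objective is cheap to evaluate for a candidate charging schedule; a short sketch of ours (illustrative data, with the schedule given only on the charging window):

```python
# Evaluate the electric-vehicle shortfall penalty
#   alpha * sum_{tau=A}^{D} (c_des(tau) - q(tau))_+
# for a candidate charging schedule on the window tau = A..D
# (illustrative example; window handling is our simplification).
import numpy as np

T, A, D = 8, 2, 5            # horizon; charging window (0-based indices)
alpha, q_init, C_max = 10.0, 0.5, 1.0
c_des = np.array([0, 0, 1.0, 2.0, 3.0, 4.0, 4.0, 4.0])  # desired charge

def ev_cost(p_ev):
    # p_ev is the charging schedule on the window only.
    assert np.all((0.0 <= p_ev) & (p_ev <= C_max))  # no discharge to grid
    q = q_init + np.cumsum(p_ev)                    # charge on the window
    shortfall = np.maximum(c_des[A:D + 1] - q, 0.0)
    return alpha * shortfall.sum()

print(ev_cost(np.full(D - A + 1, 1.0)))  # full-rate charging: cost 0.0
print(ev_cost(np.full(D - A + 1, 0.5)))  # slower charging: cost 30.0
```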
# 4
## Convexity
In this chapter we discuss the important issue of convexity, both of
devices and the resulting dynamic power flow problem.
### 4.1 Devices
We call a device convex if its objective function is convex. A network is
convex if all of its devices are convex. For convex networks, the D-OPF
is a convex optimization problem, which means that in principle we can
efficiently find a global solution [7]. When the network is not convex,
even finding a feasible solution for the D-OPF can become difficult,
and finding and certifying a globally optimal solution to the D-OPF
is generally intractable. However, special structure in many practical
power distribution problems can allow us to guarantee optimality.
In the examples from Chapter 3, the inverter, rectifier, phase shifter,
battery, fixed load, thermal load, curtailable load, electric vehicle, and
external tie are all convex devices using the constraints and objective
functions given. A deferrable load is convex if we drop the constraint
that it can only be turned on or off. We discuss the convexity properties
of the generator and AC and DC transmission lines next.
### 4.2 Relaxations
One technique to deal with non-convex networks is to use _convex relaxations_. We use the notation $g^{\mathrm{env}}$ to denote the convex envelope [52] of the function $g$. There are many equivalent definitions for the convex envelope, for example, $g^{\mathrm{env}} = (g^*)^*$, where $g^*$ denotes the convex conjugate of the function $g$. We can equivalently define $g^{\mathrm{env}}$ to be the largest convex lower bound of $g$. If $g$ is a convex, closed, proper (CCP) function, then $g = g^{\mathrm{env}}$.

The relaxed dynamic optimal power flow problem (RD-OPF) is

$$\begin{array}{ll} \text{minimize} & f^{\mathrm{env}}(p, \theta) \\ \text{subject to} & \bar{p} = 0, \quad \tilde{\theta} = 0, \end{array} \tag{4.1}$$

with variables $p : \mathcal{T} \to \mathbf{R}^T$, $\theta : \mathcal{T}^{\mathrm{ac}} \to \mathbf{R}^T$. This is a convex optimization problem, whose optimal value can in principle be computed
efficiently, and whose optimal objective value is a lower bound for the
optimal objective value of the D-OPF. In some cases, we can guarantee
_a priori that a solution to the RD-OPF will also be a solution to the_
D-OPF [56, 55] based on a property of the network objective such as
monotonicity or unimodularity. Even when the relaxed solution does
not satisfy all of the constraints in the unrelaxed problem, it can be
used as a starting point to help construct good, local solutions to the
unrelaxed problem. The suboptimality of these local solutions can then
be bounded by the gap between their network objective and the lower
bound provided by the solution to the RD-OPF. If this gap is small for
a given local solution, we can guarantee that it is nearly optimal.
**Generator.** When a generator is modeled as in Chapter 3 and is always powered on, it is a convex device. However, when given the ability
to be powered on and off, the generator is no longer convex. In this
case, we can relax the generator objective function so that its cost for
power production in each time period, given in Figure 4.1, is a convex
function. This allows the generator to produce power in the interval
[0, P [min]].
Figure 4.1: Left: Cost function for a generator that can be turned off. Right: Its convex relaxation.

Figure 4.2: Left: Feasible sets of transmission lines with no loss (black) and AC loss (grey). Right: Their convex relaxations.
**AC and DC transmission lines.** In a lossless transmission line (AC or DC), we have $\ell(p_1, p_2) = 0$, and thus the set of feasible power schedules is the line segment

$$L = \{(p_1, p_2) \mid p_1 = -p_2,\ p_2 \in [-C^{\max}/2,\ C^{\max}/2]\},$$

as shown in Figure 4.2 in black. When the transmission line has losses, in most cases the loss function $\ell$ is a convex function of the input and output powers, which leads to a feasible power region like the grey arc in the left part of Figure 4.2.
The feasible set of a relaxed transmission line is given by the convex
hull of the original transmission line’s constraints. The right side of
Figure 4.2 shows examples of this for both lossless and lossy transmission
lines. Physically, this relaxation gives lossy transmission lines the ability
to discard some additional power beyond what is simply lost to heat.
Since electricity is generally a valuable commodity in power networks,
the transmission lines will generally not throw away any additional
power in the optimal solution to the RD-OPF, leading to the power
line constraints in the RD-OPF being tight and thus also satisfying
the unrelaxed power line constraints in the original D-OPF. As was
shown in [40], when the network is a tree, this relaxation is always
tight. In addition, when all locational marginal prices are positive and
no other non-convexities exist in the network, the tightness of the line
constraints in the RD-OPF can be guaranteed in the case of networks
that have separate phase shifters on each loop in the network whose
shift parameter can be freely chosen [54].
# 5
## Proximal Message Passing
In this chapter we describe our method for solving D-OPF. We begin by
deriving the proximal message passing algorithms assuming that all the
device objective functions are convex closed proper (CCP) functions.
We then compare the computational and communication requirements
of proximal message passing with a centralized solver for the D-OPF.
The additional requirements that the functions are closed and proper
are technical conditions that are in practice satisfied by any convex
function used to model devices. We note that we do not require either
finiteness or strict convexity of any device objective function, and that
all results apply to networks with arbitrary topologies.
**Notation**
Whenever we have a set of variables that maps terminals to time periods, $x : \mathcal{T} \to \mathbf{R}^T$ (which we can also associate with a $|\mathcal{T}| \times T$ matrix), we will use the same index, over-line, and tilde notation for the variables $x$ as we do for power schedules $p$ and phase schedules $\theta$. For example, $x_t \in \mathbf{R}^T$ consists of the time period vector of values of $x$ associated with terminal $t$, $\bar{x}_t = (1/|n|) \sum_{t' \in n} x_{t'}$, where $t \in n$, and $\tilde{x}_t = x_t - \bar{x}_t$, with similar notation for indexing $x$ by devices and nets.
### 5.1 Derivation
We derive the proximal message passing equations by reformulating the
D-OPF using the alternating direction method of multipliers (ADMM)
and then simplifying the resulting equations. We refer the reader to [6]
for a thorough overview of ADMM.
We first rewrite the D-OPF as

$$\begin{array}{ll} \text{minimize} & \sum_{d \in \mathcal{D}} f_d(p_d, \theta_d) + \sum_{n \in \mathcal{N}} (g_n(z_n) + h_n(\xi_n)) \\ \text{subject to} & p = z, \quad \theta = \xi, \end{array} \tag{5.1}$$

with variables $p, z : \mathcal{T} \to \mathbf{R}^T$, and $\theta, \xi : \mathcal{T}^{\mathrm{ac}} \to \mathbf{R}^T$, where $g_n(z_n)$ is the indicator function on the set $\{z_n \mid \bar{z}_n = 0\}$ and $h_n(\xi_n)$ is the indicator function on the set $\{\xi_n \mid \tilde{\xi}_n = 0\}$. We use the notation from [6] and, ignoring a constant, form the augmented Lagrangian

$$L_\rho(p, \theta, z, \xi, u, v) = \sum_{d \in \mathcal{D}} f_d(p_d, \theta_d) + \sum_{n \in \mathcal{N}} (g_n(z_n) + h_n(\xi_n)) + (\rho/2)\left(\|p - z + u\|_2^2 + \|\theta - \xi + v\|_2^2\right), \tag{5.2}$$

with the scaled dual variables $u = y_p/\rho : \mathcal{T} \to \mathbf{R}^T$ and $v = y_\theta/\rho : \mathcal{T}^{\mathrm{ac}} \to \mathbf{R}^T$, which we associate with $|\mathcal{T}| \times T$ and $|\mathcal{T}^{\mathrm{ac}}| \times T$ matrices, respectively, where $y_p : \mathcal{T} \to \mathbf{R}^T$ are the dual variables associated with the power balance constraints and $y_\theta : \mathcal{T}^{\mathrm{ac}} \to \mathbf{R}^T$ are the dual variables associated with the phase consistency constraints. Because devices and nets are each partitions of the terminals, the last two terms of (5.2) can be split across either devices or nets, i.e.,

$$\|p - z + u\|_2^2 = \sum_{d \in \mathcal{D}} \|p_d - z_d + u_d\|_2^2 = \sum_{n \in \mathcal{N}} \|p_n - z_n + u_n\|_2^2,$$
$$\|\theta - \xi + v\|_2^2 = \sum_{d \in \mathcal{D}} \|\theta_d - \xi_d + v_d\|_2^2 = \sum_{n \in \mathcal{N}} \|\theta_n - \xi_n + v_n\|_2^2.$$
The resulting ADMM algorithm is then given by the iterations

$$(p_d^{k+1}, \theta_d^{k+1}) := \operatorname*{argmin}_{p_d, \theta_d} \left( f_d(p_d, \theta_d) + (\rho/2)\left(\|p_d - z_d^k + u_d^k\|_2^2 + \|\theta_d - \xi_d^k + v_d^k\|_2^2\right) \right), \quad d \in \mathcal{D},$$
$$z_n^{k+1} := \operatorname*{argmin}_{z_n} \left( g_n(z_n) + (\rho/2)\|z_n - u_n^k - p_n^{k+1}\|_2^2 \right), \quad n \in \mathcal{N},$$
$$\xi_n^{k+1} := \operatorname*{argmin}_{\xi_n} \left( h_n(\xi_n) + (\rho/2)\|\xi_n - v_n^k - \theta_n^{k+1}\|_2^2 \right), \quad n \in \mathcal{N},$$
$$u_n^{k+1} := u_n^k + (p_n^{k+1} - z_n^{k+1}), \quad n \in \mathcal{N},$$
$$v_n^{k+1} := v_n^k + (\theta_n^{k+1} - \xi_n^{k+1}), \quad n \in \mathcal{N},$$

where the first step is carried out in parallel by all devices, and then the second and third and then fourth and fifth steps are carried out in parallel by all nets.
Since $g_n(z_n)$ and $h_n(\xi_n)$ are simply indicator functions for each net $n$, the second and third steps of the algorithm can be computed analytically and are given by

$$z_n^{k+1} := u_n^k + p_n^{k+1} - \bar{u}_n^k - \bar{p}_n^{k+1}, \qquad \xi_n^{k+1} := \bar{v}_n^k + \bar{\theta}_n^{k+1},$$

respectively. From the second expression, it is clear that $\xi_n^k$ is simply $|n|$ copies of the same vector for all $k$.
Substituting these expressions into the $u$ and $v$ updates, the algorithm can be simplified further to yield proximal message passing:

1. Prox schedule updates.
$$
(p_d^{k+1}, \theta_d^{k+1}) := \mathbf{prox}_{f_d,\rho}\left( p_d^k - \bar{p}_d^k - u_d^k,\; \bar{\theta}_d^k + \bar{v}_d^{k-1} - v_d^k \right), \qquad d \in \mathcal{D}.
$$
2. Scaled price updates.
$$
u_n^{k+1} := u_n^k + \bar{p}_n^{k+1}, \qquad
v_n^{k+1} := \tilde{v}_n^k + \tilde{\theta}_n^{k+1}, \qquad n \in \mathcal{N},
$$
where the prox function for a function $g$ is given by
$$
\mathbf{prox}_{g,\rho}(x) = \mathop{\mathrm{argmin}}_y \left( g(y) + (\rho/2)\|x - y\|_2^2 \right), \qquad (5.3)
$$
and is guaranteed to exist when $g$ is CCP [52]. If in addition $\bar{v}^0 = 0$ (note that any optimal dual variables $v^\star$ must also satisfy $\bar{v}^\star = 0$), then the $v$ update simplifies to $v_n^{k+1} := v_n^k + \tilde{\theta}_n^{k+1}$ and the second argument of the prox function in the first step simplifies to $\bar{\theta}_d^k - v_d^k$.
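To make these updates concrete, here is a minimal numerical sketch of proximal message passing, ours rather than the authors' C++ implementation: power schedules only (phase variables and AC nets omitted), one terminal per device, the simplification $\bar{v}^0 = 0$, and a separable quadratic terminal cost $\alpha p^2 + \beta p$, whose prox is available in closed form. The terminal-to-net map `net` and all other names are our own.

```python
import numpy as np

# Toy proximal message passing: power schedules only, quadratic costs
# f(p) = alpha*p^2 + beta*p, for which the prox is closed form:
# prox_{f,rho}(x) = (rho*x - beta) / (2*alpha + rho), elementwise.
T = 4                                  # time horizon
net = np.array([0, 0, 1, 1, 2, 2])     # terminal -> net index
alpha, beta, rho = 0.1, 1.0, 1.0

p = np.zeros((len(net), T))            # power schedules (rows = terminals)
u = np.zeros_like(p)                   # scaled duals for power balance

def bar(x):
    """The bar operator: replace each terminal row by its net's average."""
    out = np.empty_like(x)
    for n in np.unique(net):
        out[net == n] = x[net == n].mean(axis=0)
    return out

for k in range(100):
    p = (rho * (p - bar(p) - u) - beta) / (2 * alpha + rho)  # prox update
    u = u + bar(p)                                           # price update

print("max net power imbalance:", np.abs(bar(p)).max())     # tends to 0
```

With identical quadratic devices on every net, the balanced optimum is $p = 0$; the iteration drives the net imbalance $\bar{p}$ to zero while $\rho u$ converges to the optimal duals, from which locational prices are read off.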
The origin of the name ‘proximal message passing’ should now be
clear. In each iteration, every device computes the prox function of its
objective function, with an argument that depends on messages passed
to it through its terminals by its neighboring nets in the previous iteration ($\bar{p}_d^k$, $\tilde{\theta}_d^k$, $u_d^k$, and $v_d^k$). Then, every device passes to its terminals the newly computed power and phase schedules, $p_d^{k+1}$ and $\theta_d^{k+1}$, which are then passed to the terminals' associated nets. Every net computes the prox function of the power balance and phase consistency indicator functions (which corresponds to projecting the power and phase schedules back to feasibility), computes the new average power imbalance, $\bar{p}_n^{k+1}$, and phase residual, $\tilde{\theta}_n^{k+1}$, updates its dual variables, $u_n^{k+1}$ and $v_n^{k+1}$, and broadcasts these values to its associated terminals' devices. Since $\bar{p}_n^k$ is simply $|n|$ copies of the same vector for all $k$, all terminals connected to the same net must have the same value for their dual variables associated with power balance throughout the algorithm, i.e., for all values of $k$, $u_t^k = u_{t'}^k$ whenever $t, t' \in n$ for any $n \in \mathcal{N}$.
As an example, consider the network represented by Figures 2.1 and
2.2. The proximal message passing algorithm performs the power and
phase schedule updates on the devices (the boxes on the left in Figure 2.2). The devices share the respective power and phase profiles via
the terminals, and the nets (the solid boxes on the right) compute the
scaled price updates. For any network, the proximal message passing
algorithm can be thought of as alternating between devices (on the
left) and nets (on the right).
### 5.2 Convergence
**Theory.** We now comment on the convergence of proximal message
passing. Since proximal message passing is a version of ADMM, all
convergence results that hold for ADMM also hold for proximal message
passing. In particular, when all devices have CCP objective functions
and a feasible solution to the D-OPF exists, the following hold:
1. Power balance and phase consistency are achieved: $\bar{p}^k \to 0$ and $\tilde{\theta}^k \to 0$ as $k \to \infty$.
2. Operation is optimal: $\sum_{d \in \mathcal{D}} f_d(p_d^k, \theta_d^k) \to f^\star$ as $k \to \infty$.
3. Optimal prices are found: $\rho u^k = y_p^k \to y_p^\star$ and $\rho v^k = y_\theta^k \to y_\theta^\star$ as $k \to \infty$.
Here $f^\star$ is the optimal value for the D-OPF, and $y_p^\star$ and $y_\theta^\star$ are optimal dual variables for the power schedule and phase consistency constraints, respectively. The proof of these results (in the more general setting) can be found in [6]. As a result of the third condition, the optimal locational marginal prices $L^\star$ can be found for each net $n \in \mathcal{N}$ by setting $L_n^\star = |n|\,(y_p^\star)_n$.
**Stopping criterion.** Following [6], we can define primal and dual residuals, which for proximal message passing simplify to
$$
r^k = (\bar{p}^k, \tilde{\theta}^k), \qquad
s^k = \rho\left( (p^k - \bar{p}^k) - (p^{k-1} - \bar{p}^{k-1}),\; \bar{\theta}^k - \bar{\theta}^{k-1} \right).
$$
We give a simple interpretation of each residual. The primal residual is
simply the net power imbalance and phase inconsistency across all nets
in the network, which is the original measure of primal feasibility in the
D-OPF. The dual residual is equal to the difference between the current
and previous iterations of both the difference between power schedules
and their average net power as well as the average phase angle on each
net. The locational marginal price at each net is determined by the
deviation of all associated terminals’ power schedule from the average
power on that net. As the change in these deviations approaches zero,
the corresponding locational marginal prices converge to their optimal
values, and all phase angles are consistent across all AC nets.
We can define a simple criterion for terminating proximal message
passing when
$$
\|r^k\|_2 \le \epsilon^{\mathrm{pri}}, \qquad \|s^k\|_2 \le \epsilon^{\mathrm{dual}},
$$
where $\epsilon^{\mathrm{pri}}$ and $\epsilon^{\mathrm{dual}}$ are, respectively, primal and dual tolerances. We can normalize both of these quantities to network size by the relation
$$
\epsilon^{\mathrm{pri}} = \epsilon^{\mathrm{dual}} = \epsilon^{\mathrm{abs}} \sqrt{|\mathcal{T}|\,T},
$$
for some absolute tolerance $\epsilon^{\mathrm{abs}} > 0$.
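A sketch of this test (ours; `r` and `s` are the stacked primal and dual residuals, and `num_terminals` is $|\mathcal{T}|$):

```python
import numpy as np

def should_stop(r, s, num_terminals, T, eps_abs=1e-3):
    """Stop when ||r||_2 <= eps_pri and ||s||_2 <= eps_dual, using the
    size-normalized tolerances eps_pri = eps_dual = eps_abs*sqrt(|T|*T)."""
    tol = eps_abs * np.sqrt(num_terminals * T)
    return np.linalg.norm(r) <= tol and np.linalg.norm(s) <= tol
```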
**Choosing a value of ρ.** Numerous examples show that the value of
_ρ can have a strong effect on the rate of convergence of ADMM and_
proximal message passing. Many good methods for picking ρ in both
offline and online fashions are discussed in [6]. We note that unlike
other versions of ADMM, the scaling parameter ρ enters very simply
into the proximal equations and can thus be modified online without
incurring any additional computational penalties, such as having to refactorize a matrix. For devices whose objectives just encode constraints
(i.e., only take on the values $0$ and $+\infty$), the prox function reduces to projection, and is independent of $\rho$.
We can modify the proximal message passing algorithm with the
addition of a third step
3. Parameter update and price rescaling.
$$
\begin{aligned}
\rho^{k+1} &:= h(\rho^k, r^k, s^k), \\
u^{k+1} &:= \frac{\rho^k}{\rho^{k+1}}\, u^{k+1}, \\
v^{k+1} &:= \frac{\rho^k}{\rho^{k+1}}\, v^{k+1},
\end{aligned}
$$
for some function $h$. We desire to pick an $h$ such that the primal and dual residuals are of similar size throughout the algorithm, i.e., $\rho^k \|r^k\|_2 \approx \|s^k\|_2$ for all $k$. To accomplish this task, we use a simple proportional-derivative controller to update $\rho$, choosing $h$ to be
$$
h(\rho^k) = \rho^k \exp\left( \lambda w^k + \mu (w^k - w^{k-1}) \right),
$$
where $w^k = \rho^k \|r^k\|_2 / \|s^k\|_2 - 1$ and $\lambda$ and $\mu$ are nonnegative parameters chosen to control the rate of convergence. Typical values of $\lambda$ and $\mu$ are between $10^{-3}$ and $10^{-1}$.
When ρ is updated in such a manner, convergence is sped up in
many examples, sometimes dramatically. Although it can be difficult
to prove convergence of the resulting algorithm, a standard trick is
to assume that ρ is changed in only a finite number of iterations, after which it is held constant for the remainder of the algorithm, thus
guaranteeing convergence.
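A sketch of the controller (ours); after each call, the scaled duals must be multiplied by the returned factor, exactly as in step 3 above:

```python
import numpy as np

def update_rho(rho, r, s, w_prev, lam=1e-2, mu=1e-2):
    """Proportional-derivative update: w = rho*||r||/||s|| - 1, and
    rho_new = rho*exp(lam*w + mu*(w - w_prev)). Returns (rho_new, w,
    scale), where u and v should be multiplied by scale = rho/rho_new."""
    w = rho * np.linalg.norm(r) / max(np.linalg.norm(s), 1e-12) - 1.0
    rho_new = rho * np.exp(lam * w + mu * (w - w_prev))
    return rho_new, w, rho / rho_new
```

The small guard on $\|s^k\|_2$ avoids division by zero near convergence; in that regime one would typically freeze $\rho$ anyway, which also restores the standard convergence guarantee.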
**Non-convex case.** When one or more of the device objective functions is non-convex, we can no longer guarantee that proximal message
passing converges to the optimal value of the D-OPF or even that it
converges at all (i.e., reaches a fixed point). Prox functions for non-convex devices must be carefully defined, as the set of minimizers in
(5.3) is no longer necessarily a singleton. Even then, prox functions of
non-convex functions are often intractable to compute.
One solution to these issues is to use proximal message passing to solve the RD-OPF. It is easy to show that $f^{\mathrm{env}}(p, \theta) = \sum_{d \in \mathcal{D}} f_d^{\mathrm{env}}(p_d, \theta_d)$. As a result, we can run proximal message passing using the prox functions of the relaxed device objective functions. Since $f_d^{\mathrm{env}}$ is a CCP function for all $d \in \mathcal{D}$, proximal message passing in this case is guaranteed to converge to the optimal value of the RD-OPF and yield the optimal relaxed locational marginal prices.
### 5.3 Discussion
To compute the proximal messages, devices and nets only require
knowledge of who their network neighbors are, the ability to send small
vectors of numbers to those neighbors in each iteration, and the ability to store small amounts of state information and efficiently compute
prox functions (devices) or projections (nets). As all communication is
local and peer-to-peer, proximal message passing supports the ad hoc
formation of power networks, such as micro grids, and is self-healing
and robust to device failure and unexpected network topology changes.
Due to recent advances in convex optimization [61, 46, 47], many
of the prox function calculations that devices must perform can be
very efficiently executed at millisecond or microsecond time-scales on
inexpensive, embedded processors [30]. Since all devices and all nets
can each perform their computations in parallel, the time to execute a
single, network wide proximal message passing iteration (ignoring communication overhead) is equal to the sum of the maximum computation
time over all devices and the maximum computation time of all nets in
the network. As a result, the computation time per iteration is small
and essentially independent of the size of the network.
In contrast, solving the D-OPF in a centralized fashion requires
complete knowledge of the network topology, sufficient communication bandwidth to centrally aggregate all devices' objective function data, and sufficient centralized computational resources to solve the resulting D-OPF. In large, real-world networks, such as the smart grid, all three of these requirements are generally unattainable. Having accurate and timely information on the global connectivity of all devices is infeasible for all but the smallest of dynamic networks. Centrally aggregating all device objective functions would require not only infeasible bandwidth and data storage at the aggregation site, but also the willingness of all devices to expose what could be private and/or proprietary function parameters in their objective functions. Finally, a centralized solution to the D-OPF requires solving an optimization problem with $\Omega(|\mathcal{T}|\,T)$ variables, which leads to an identical lower bound on the time scaling for a centralized solver, even if problem structure is exploited. As a result, the centralized solver cannot scale to solve the D-OPF on very large networks.
# 6
## Numerical Examples
In this chapter we illustrate the speed and scaling of proximal message
passing with a range of numerical examples. In the first two sections, we
describe how we generate network instances for our examples. We then
describe our implementation, showing how multithreading can exploit
problem parallelism and how proximal message passing would scale in a
fully peer-to-peer implementation. Lastly, we present our results, and
demonstrate how the number of iterations needed for convergence is
essentially independent of network size and also significantly decreases
when the algorithm is seeded with a reasonable warm-start.
### 6.1 Network topology
We generate a network instance by first picking the number of nets $N$. We generate the nets' locations $x_i \in \mathbf{R}^2$, $i = 1, \ldots, N$, by drawing them uniformly at random from $[0, \sqrt{N}]^2$. (These locations will be used to determine network topology.) Next, we introduce transmission lines into the network as follows. We first connect a transmission line between all pairs of nets $i$ and $j$ independently and with probability
$$
\gamma(i, j) = \alpha \min\left( 1,\; d^2 / \|x_i - x_j\|_2^2 \right).
$$
In this way, when the distance between $i$ and $j$ is smaller than $d$, they are connected with a fixed probability $\alpha > 0$, and when they are located farther than distance $d$ apart, the probability decays as $1/\|x_i - x_j\|_2^2$.
After this process, we add a transmission line between any isolated net
and its nearest neighbor. We then introduce transmission lines between
distinct connected components by selecting two connected components
uniformly at random and then selecting two nets, one inside each component, uniformly at random and connecting them by a transmission
line. We continue this process until the network is connected.
For the examples we present, we chose parameter values d = 0.11
and α = 0.8 as the parameters for generating our network. This results
in networks with an average degree of 2.1. Using these parameters, we
generated networks with 30 to 100000 nets, which resulted in optimization problems with approximately 10 thousand to 30 million variables.
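The generation procedure is straightforward to reproduce; the following sketch (ours) implements it with a simple union-find to join components:

```python
import numpy as np

def generate_network(N, d=0.11, alpha=0.8, seed=0):
    """Sample net locations on [0, sqrt(N)]^2, connect pairs with
    probability alpha*min(1, d^2/dist^2), attach isolated nets to their
    nearest neighbors, then join random components until connected."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.0, np.sqrt(N), size=(N, 2))
    lines = set()
    for i in range(N):
        for j in range(i + 1, N):
            dist2 = np.sum((x[i] - x[j]) ** 2)
            if rng.random() < alpha * min(1.0, d ** 2 / dist2):
                lines.add((i, j))
    degree = np.zeros(N, dtype=int)
    for i, j in lines:
        degree[i] += 1
        degree[j] += 1
    for i in np.flatnonzero(degree == 0):       # no isolated nets
        dist2 = np.sum((x - x[i]) ** 2, axis=1)
        dist2[i] = np.inf
        lines.add(tuple(sorted((int(i), int(np.argmin(dist2))))))
    parent = list(range(N))                     # union-find over nets
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for i, j in lines:
        parent[find(i)] = find(j)
    comps = {}
    for i in range(N):
        comps.setdefault(find(i), []).append(i)
    comps = list(comps.values())
    while len(comps) > 1:                       # join components randomly
        a, b = rng.choice(len(comps), size=2, replace=False)
        lines.add(tuple(sorted((int(rng.choice(comps[a])),
                                int(rng.choice(comps[b]))))))
        comps[min(a, b)] = comps[a] + comps[b]
        del comps[max(a, b)]
    return x, sorted(lines)
```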
### 6.2 Devices
After we generate the network topology described above, we randomly
attach a single (one-terminal) device to each net according to the distribution in table 6.1. We also allow the possibility that a net acts as
a distributor and has no device attached to it other than transmission
lines. About 10% of the transmission lines are DC transmission lines,
while the others are AC transmission lines. The models for each device
and line in the network are identical to the ones given in Chapter 3,
with model parameters chosen in a manner we describe below.
For simplicity, our examples only include networks with the devices
listed below. For all devices, the time horizon was chosen to be T = 96,
corresponding to 15 minute intervals for 24 hour schedules, with the
time period τ = 1 corresponding to midnight.
| Device | Fraction |
| --- | --- |
| None | 0.4 |
| Generator | 0.4 |
| Curtailable load | 0.1 |
| Deferrable load | 0.05 |
| Battery | 0.05 |

Table 6.1: Fraction of devices present in the generated networks.

**Generator.** Generators have the quadratic cost functions given in Chapter 3 and are divided into three types: small, medium, and large. In each case, the generator provides some idling power, so we set $P^{\min} = 0.01$. Small generators have the smallest maximum power output, but the largest ramp rates, while large generators have the largest maximum power output, but the slowest ramp rates. Medium generators lie in between. Large generators are generally more efficient than small and medium generators, which is reflected in their cost function by having smaller values of $\alpha$ and $\beta$. Whenever a generator is placed into a network, its type is selected uniformly at random, and its parameters are taken from the appropriate row in table 6.2.

| | $P^{\min}$ | $P^{\max}$ | $R^{\max}$ | $\alpha$ | $\beta$ |
| --- | --- | --- | --- | --- | --- |
| Large | 0.01 | 50 | 3 | 0.001 | 0.1 |
| Medium | 0.01 | 20 | 5 | 0.005 | 0.2 |
| Small | 0.01 | 10 | 10 | 0.02 | 1 |

Table 6.2: Generator parameters.
**Battery.** Parameters for a given instance of a battery are generated by setting $q^{\mathrm{init}} = 0$ and selecting $Q^{\max}$ uniformly at random from the interval $[20, 50]$. The charging and discharging rates are selected to be equal (i.e., $C^{\max} = D^{\max}$) and drawn uniformly at random from the interval $[5, 10]$.
**Fixed load.** The load profile for a fixed load instance is a sinusoid,
$$
l(\tau) = c + a \sin\left( 2\pi(\tau - \varphi_0)/T \right), \qquad \tau = 1, \ldots, T,
$$
with the amplitude $a$ chosen uniformly at random from the interval $[0.5, 1]$, and the DC term $c$ chosen so that $c = a + u$, where $u$ is chosen uniformly at random from the interval $[0, 0.1]$, which ensures that the load profile remains elementwise positive. The phase shift $\varphi_0$ is chosen
uniformly at random from the interval [60, 72], ensuring that the load
profile peaks between the hours of 3pm and 6pm.
**Deferrable load.** For an instance of a deferrable load, we choose $E$ uniformly at random from the interval $[5, 10]$. The start time index $A$ is chosen uniformly at random from the discrete set $\{1, \ldots, (T-9)/2\}$. The end time index $D$ is then chosen uniformly at random over the set $\{A + 9, \ldots, T\}$, so that the minimum time window to satisfy the load is 10 time periods (2.5 hours). We set the maximum power so that it requires at least two time periods to satisfy the total energy constraint, i.e., $L^{\max} = 5E/(D - A + 1)$.
**Curtailable loads.** For an instance of a curtailable load, the desired load $l$ is constant over all time periods, with a magnitude chosen uniformly at random from the interval $[5, 15]$. The penalty parameter $\alpha$ is chosen uniformly at random from the interval $[0.1, 0.2]$.
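For reference, the device parameter sampling described above is easy to reproduce; a sketch (ours; the dictionary layout and function names are our choices):

```python
import numpy as np

rng = np.random.default_rng(0)
T = 96  # 15-minute periods covering 24 hours

def battery():
    Cmax = rng.uniform(5, 10)              # charge rate = discharge rate
    return dict(q_init=0.0, Qmax=rng.uniform(20, 50), Cmax=Cmax, Dmax=Cmax)

def fixed_load():
    a = rng.uniform(0.5, 1.0)              # amplitude
    c = a + rng.uniform(0.0, 0.1)          # DC term keeps l elementwise > 0
    phi0 = rng.uniform(60, 72)             # peak between 3pm and 6pm
    tau = np.arange(1, T + 1)
    return c + a * np.sin(2 * np.pi * (tau - phi0) / T)

def deferrable_load():
    E = rng.uniform(5, 10)
    A = int(rng.integers(1, (T - 9) // 2 + 1))   # start index
    D = int(rng.integers(A + 9, T + 1))          # window of >= 10 periods
    return dict(E=E, A=A, D=D, Lmax=5 * E / (D - A + 1))

def curtailable_load():
    return dict(l=rng.uniform(5, 15), alpha=rng.uniform(0.1, 0.2))
```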
**AC transmission line.** For an instance of an AC line, we set the voltage magnitude equal to 1 and choose its remaining parameters by first solving the D-OPF with lossless, uncapacitated lines. Using flow values given by the solution to that problem, we set $C^{\max} = \max(30, 10 F^{\max})$ for each line, where $F^{\max}$ is equal to the maximum flow (from the lossless solution) along that line over all periods.

We use the loss function for transmission lines with a series admittance $g + ib$ given by (3.1). We choose a maximum phase angle deviation (in degrees) in the interval $[1, 5]$ and a loss of 1 to 3 percent of $C^{\max}$ when transmitting power at maximum capacity. Once the maximum phase angle and the loss are determined, $g$ is chosen to provide the desired loss when operating at maximum phase deviation, while $b$ is chosen so the line operates at maximum capacity when at maximum phase deviation.
**DC transmission line.** DC transmission lines are handled just like AC
transmission lines. We set R = g/b, where g and b are chosen using the
procedure for the AC transmission line.
### 6.3 Serial multithreaded implementation
Our D-OPF solver is implemented in C++, with the core proximal
message passing equations occupying fewer than 25 lines of C++ (excluding problem setup and class specifications). The code is compiled
with gcc 4.7.2 on a 32-core, 2.2GHz Intel Xeon processor with 512GB
of RAM running the Ubuntu OS. The processor supports hyperthreading, so we have access to 64 independent threads. We used the compiler
option -O3 to leverage full code optimization.
To approximate a fully distributed implementation, we use gcc’s
implementation of OpenMP (version 3.1) and multithreading to parallelize the computation of the prox functions for the devices. We use 64
threads to solve each example network. Assuming perfect load balancing among the cores, this means that 64 prox functions are being evaluated in parallel. Effectively, we evaluate the prox functions by stepping
serially through the devices in blocks of size 64. We do not parallelize
the computation of the dual updates over the nets since the overhead
of spawning threads dominates the vector operations themselves.
The prox functions for fixed loads and curtailable loads are separable over τ and can be computed analytically. For more complex devices,
such as a generator, battery, or deferrable load, we compute the prox
function using CVXGEN [46]. The prox function for a transmission line
is computed by projecting onto the convex hull of the line constraints.
For a given network, we solve the associated D-OPF with an absolute tolerance $\epsilon^{\mathrm{abs}} = 10^{-3}$. This translates to three digits of accuracy in the solution. The CVXGEN solvers used to evaluate the prox operators for some devices have an absolute tolerance of $10^{-8}$. We set $\rho = 1$.
### 6.4 Peer-to-peer implementation
We have not yet created a fully peer-to-peer, bulk synchronous parallel
[60, 45] implementation of proximal message passing, but have carefully
tracked solve times in our serial implementation in order to facilitate a
first order analysis of such a system. In a peer-to-peer implementation,
the prox schedule updates occur in parallel across all devices followed by
(scaled) price updates occurring in parallel across all nets. As previously
mentioned, the computation time per iteration is thus the maximum time, over all devices, to evaluate the prox function of their objective, added to the maximum time across all nets to project their terminal schedules back to feasibility and update their existing price vectors. Since evaluating the prox function for some devices requires solving a convex optimization problem, whereas the price updates only require a small number of vector operations that can be performed as a handful of SIMD instructions, the compute time for the price updates is negligible in comparison to the prox schedule updates. The determining factor in solve time, then, is in evaluating the prox functions for the schedule updates. In our examples, the maximum time taken to evaluate any prox function is 1 ms.

Figure 6.1: The relative suboptimality (left) and primal infeasibility (right) of proximal message passing on a network instance with $N = 3000$ nets (1 million variables). The dashed line shows when the stopping criterion is satisfied.
### 6.5 Results
We first consider a single example: a network instance with $N = 3000$ nets (1 million variables). Figure 6.1 shows that after fewer than 200 iterations of proximal message passing, the relative suboptimality, the average net power imbalance, and the average phase inconsistency are all less than $10^{-3}$. The convergence rates for other network instances over the range of sizes we simulated are similar.
In Figure 6.2, we present average timing results for solving the D-OPF for a family of examples, using our serial implementation, with
networks of size N = 30, 100, 300, 1000, 3000, 10000, 30000, and
100000. For each network size, we generated and solved 10 network instances to compute average solve times and confidence intervals around
those averages. The times were modeled with a log-normal distribution.
For network instances with $N = 100000$ nets, the problem has over 30 million variables, which we solve serially using proximal message passing in 5 minutes on average. By fitting a line to the proximal message passing runtimes, we find that our parallel implementation empirically scales as $O(N^{0.996})$, i.e., solve time is linear in problem size.
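The exponent comes from an ordinary least-squares fit in log-log space; a sketch with illustrative (not measured) runtimes, chosen only to be consistent with the averages reported above:

```python
import numpy as np

N = np.array([30, 100, 300, 1000, 3000, 10000, 30000, 100000])
# illustrative runtimes in seconds (linear scaling, 5 minutes at N = 100000)
t = np.array([0.09, 0.3, 0.9, 3.0, 9.0, 30.0, 90.0, 300.0])
slope, _ = np.polyfit(np.log(N), np.log(t), 1)
print(f"empirical scaling: O(N^{slope:.3f})")
```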
For a peer-to-peer implementation, the runtime of proximal message
passing should be essentially constant, and in particular independent of
the size of the network. To solve a problem with N = 100000 nets (30
million variables) with approximately 200 iterations of our algorithm
then takes only 200 ms. In practice, the actual solve time would clearly
be dominated by network communication latencies and actual runtime
performance will be determined by how quickly and reliably packets can
be delivered [34]. As a result, in a true peer-to-peer implementation, a
negligible amount of time is actually spent on computation. However,
it goes without saying that many other issues must be addressed with a
peer-to-peer protocol, including handling network delays and security.
Figure 6.2 shows cold start runtimes for solving the D-OPF. If we
have good estimates of the power and phase schedules and dual variables for each terminal, we can use them to warm start our D-OPF
solver. To show the effect, we randomly convert 5% of the devices into
fixed loads and solve a specific instance with N = 3000 nets (1 million
variables). Let $K^{\mathrm{cold}}$ be the number of iterations needed to solve an instance of this problem. We then uniformly scale the load profiles of each device by separate and independent lognormal random variables. The new profiles, $\hat{l}$, are obtained from the original profiles $l$ via
$$
\hat{l} = l \exp(\sigma X),
$$
where $X \sim \mathcal{N}(0, 1)$ and $\sigma > 0$ is given. Using the original solution to warm start our solver, we solve the perturbed problem and report the number of iterations $K^{\mathrm{warm}}$ needed. Figure 6.3 shows the ratio $K^{\mathrm{warm}}/K^{\mathrm{cold}}$ as we vary $\sigma$, showing the significant savings possible with warm-starting, even under relatively large perturbations.
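The perturbation itself is one lognormal factor per device (a sketch, ours; rows of `l` are device load profiles):

```python
import numpy as np

def perturb_loads(l, sigma, rng=None):
    """Scale each device's load profile by an independent lognormal
    factor: l_hat = l * exp(sigma * X), with X ~ N(0, 1) per device."""
    if rng is None:
        rng = np.random.default_rng(0)
    X = rng.standard_normal(l.shape[0])
    return l * np.exp(sigma * X)[:, None]
```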
Figure 6.2: Average execution times for a family of networks on 64 threads. Error
bars show 95% confidence bounds. The dotted line shows the least-squares fit to the
data, resulting in a scaling exponent of 0.996.
Figure 6.3: Relative number of iterations needed to converge from a warm start for
various perturbations of load profiles compared to original number of iterations.
# 7
## Extensions
Here, we give some possible extensions of our model and method.
### 7.1 Closed-loop control
So far, we have considered only a static energy planning problem, where
each device on the network plans power and phase schedules extending
_T steps into the future and then executes all T steps. This ‘open loop’_
control can fail spectacularly, since it will not adjust its schedules in
response to external disturbances that were unknown at the original
time the schedules were computed.
To alleviate this problem, we propose the use of receding horizon
control (RHC) [43, 3, 47] for dynamic network operation. In RHC, at
each time step τ, we determine a plan of action over a fixed time horizon
_T into the future by solving the D-OPF using proximal message pass-_
ing. The first step of all schedules is then executed, at which point the
entire process is repeated, incorporating new measurements, external
data, and predictions that have become available.
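Schematically, RHC wraps a proximal message passing solve in a loop; in this sketch (ours), `solve_dopf`, `get_data`, and `execute` are placeholders for the solver, the measurement/prediction update, and physical actuation:

```python
def run_rhc(T, num_steps, solve_dopf, get_data, execute):
    """Solve the D-OPF over a T-step horizon, execute only the first
    period, then re-solve with fresh data, warm started from the T-1
    unexecuted periods of the previous plan."""
    warm = None
    for _ in range(num_steps):
        plan, duals = solve_dopf(T, get_data(), warm)
        execute(plan[0])
        warm = (plan[1:], duals[1:])     # shift the horizon forward

# toy stand-ins so the skeleton runs
run_rhc(T=4, num_steps=3,
        solve_dopf=lambda T, data, warm: ([data] * T, [0.0] * T),
        get_data=lambda: 1.0,
        execute=lambda step: print("executing", step))
```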
RHC has been successfully applied in many areas, including chemical process control [50], supply chain management [11], stochastic control in economics and finance [24, 58], and energy storage system operation [37]. While RHC is in general not an optimal controller, it has
been shown to achieve good performance in many different domains.
RHC is ideally suited for use with proximal message passing. First,
when one time step is executed, we can warm start the next round of
proximal message passing with the $T - 1$ schedules and dual variables
that were computed, but not otherwise executed, for each device and
each net in the previous iteration of RHC. As was shown in the previous section, this can dramatically speed up computation and allow for
RHC to operate network-wide at fraction of a second rates. In addition,
RHC does not require any stochastic or formal model of future prediction uncertainty. While statistical predictions can be used if they are
available, predictions from other sources, such as analysts or markets,
are just as simple to integrate into RHC, and are also much easier to
come by in many real-world scenarios.
Perhaps the most important synergy between proximal message
passing and RHC is that the predictions used by each device need
only concern that one device and do not need to include any estimates
concerning other devices. This allows for devices to each use their own
form of prediction without worrying about what other devices exist or
what form of prediction they are using (e.g., even if one generator uses
statistical predictions, other devices need not).
The reason for this is that proximal message passing integrates all
the device predictions into the final solution — just as they would
have been in a centralized solution — but does so through the power
and phase schedules and prices that are shared between neighboring
devices. In this way, for example, a generator only needs to estimate
its cost of producing power at different levels over the time horizon T .
It does not need to predict any demand itself, as those predictions are
passed to it in the form of the power schedule and price messages it
receives from its neighbors. Similarly, loads only need to forecast their
own future demand and utility of power in each time period. Loads do
_not need to explicitly predict future prices, as those are the result of_
running proximal message passing over the network.
### 7.2 Security constrained optimal power flow
In the SC-OPF problem, we determine a set of contingency plans for
devices connected on a power network, which tell us the power flows
and phase angles each device will operate at under nominal system
operation, as well as in a set of specified contingencies or scenarios. The
contingencies can correspond, say, to failure or degraded operation of
a transmission line or generator, or a substantial change in a load. In
each scenario the powers and phases must satisfy the network equations
(taking into account any failures for that scenario), and they are also
constrained in various ways across the scenarios. Generators and loads,
for example, might be constrained to not change their power generation
or consumption in any non-nominal scenario.
As a variation on this, we can allow such devices to modify their
powers from the nominal operation values, but only over some range
or for a set amount of time. The goal is to minimize a composite cost
function that includes the cost (and constraints) of nominal operation,
as well as those associated with operation in any of the scenarios. Proximal message passing allows us to parallelize the computation of many
different (and coupled) scenarios across each device while maintaining
decentralized communication across the network.
### 7.3 Hierarchical models and virtualized devices
The power grid has a natural hierarchy, with generation and transmission occurring at the highest level and residential consumption and
distribution occurring at the most granular. Proximal message passing
can be easily extended into hierarchical interactions by scheduling messages on different time scales and between systems at similar levels of
the hierarchy [10]. By aggregating multiple devices into a virtual device (which themselves may be further aggregated into another virtual
device), our framework naturally allows for the formation of composite
entities such as virtual power plants and demand response aggregators.
Let $D \subseteq \mathcal{D}$ be a group of devices that are aggregated into a virtual device, which we will also refer to as '$D$'. We use the notation that terminal $t \in D$ if there exists a device $d \in D$ such that $t \in d$. The set of terminals $\{t \mid t \in D\}$ can be partitioned into two sets: those terminals whose associated net's terminals are all associated with a device which is part of $D$, and those that are not. These two sets can be thought of as the terminals in $D$ which are purely 'internal' to $D$ and those which are not, as shown in Figure 7.1. These two sets are given by
$$
D_{\mathrm{in}} = \{t \in D \mid \forall t' \in n_t,\; t' \in D\}, \qquad
D_{\mathrm{out}} = \{t \in D \mid \exists t' \in n_t,\; t' \notin D\},
$$
respectively, where $n_t$ is defined to be the net associated with terminal $t$ (i.e., $t \in n_t$). We let $(p_{\mathrm{in}}, \theta_{\mathrm{in}})$ and $(p_{\mathrm{out}}, \theta_{\mathrm{out}})$ denote the power and phase schedules associated with the terminals in the sets $D_{\mathrm{in}}$ and $D_{\mathrm{out}}$, respectively. Since the power and phase schedules $(p_{\mathrm{in}}, \theta_{\mathrm{in}})$ never directly leave the virtual device, they can be considered as internal variables for the virtual device.
The objective function of the virtual device, $f_D(p_{\mathrm{out}}, \theta_{\mathrm{out}})$, is given by the optimal value of the optimization problem
$$
\begin{array}{ll}
\mbox{minimize} & \sum_{d \in D} f_d(p_d, \theta_d) \\
\mbox{subject to} & \bar{p}_{\mathrm{in}} = 0, \quad \tilde{\theta}_{\mathrm{in}} = 0,
\end{array}
$$
with variables $p_d$ and $\theta_d$ for $d \in D$. A sufficient condition for $f_D(p_{\mathrm{out}}, \theta_{\mathrm{out}})$ to be a convex function is that all of the virtual device's constituent devices' objective functions are convex [7].
By recursively applying proximal message passing at each level of
the aggregation hierarchy, we can compute the objective functions for
each virtual device. This process can be continued down to the individual device level, at which point the device must compute the prox
function for its own objective function as the base case.
These models allow for the computations necessary to operate a
smart grid network to be virtualized since the computations specific
to each device do not necessarily need to be carried out on the device itself, but can be computed elsewhere (e.g., the servers of a virtual power plant, centrally by an independent system operator, . . . ),
and then transmitted to the device for execution. As a result, hierarchical modeling allows one to smoothly interpolate from completely
Figure 7.1: Left: A simple network with four devices and two nets. Right: A hierarchical representation with only 2 devices at the highest level. All terminals connected
to the left-most net are internal to the virtual device.
centralized operation of the grid (i.e., all objectives and constraints
are gathered in a single location and solved), to a completely decentralized architecture where all communication is peer to peer. At all
scales, proximal message passing offers all decision making entities an
efficient method to compute optimal power and phase schedules for the
devices under their control, while maintaining privacy of their devices’
objective functions and constraints.
### 7.4 Local stopping criteria and ρ updates
The stopping criterion and ρ update method in Chapter 5 currently require global device coordination (via the global primal and dual residuals each iteration). These could be computed in a decentralized fashion
by gossip algorithms [53], but this could require many rounds of gossip in between each iteration of proximal message passing, significantly
increasing runtime. We are investigating methods to let individual devices or terminals independently choose both the stopping criterion and
different values of ρ based only on local information such as the primal
and dual residuals of a device and its neighbors.
For dynamic operation another approach is to run proximal message
passing continuously, with no stopping criteria. In this mode, devices
and nets would exchange messages with each other indefinitely and execute the first step of their schedules at given times (i.e., gate closure),
at which point they shift their moving horizon forward one time step
and continue to exchange messages.
# 8
## Conclusion
We have presented a fully decentralized method for dynamic network
energy management based on message passing between devices. Proximal message passing is simple and highly extensible, relying solely
on peer to peer communication between devices that exchange energy.
When the resulting network optimization problem is convex, proximal
message passing converges to the optimal value and gives optimal power
and phase schedules and locational marginal prices. We have presented
a parallel implementation that shows the time per iteration and the
number of iterations needed for convergence of proximal message passing are essentially independent of the size of the network. As a result,
proximal message passing can scale to extremely large networks with
almost no increase in solve time.
## Acknowledgments
The authors thank Yang Wang and Neal Parikh for extensive discussions on the problem formulation as well as ADMM methods; Yang
Wang, Brendan O’Donoghue, Haizi Yu, Haitham Hindi, and Mikael
Johansson for discussions on optimal ρ selection and for help with the
_ρ update method; Steven Low for discussions about end-point based_
control; and Ed Cazalet, Ram Rajagopal, Ross Baldick, David Chassin, Marija Ilic, Trudie Wang, and Jonathan Yedidia for many helpful
comments. We would like to thank Marija Ilic, Le Xie, and Boris Defourny for pointing us to DYNMONDS and other earlier Lagrangian
approaches. We are indebted to Misha Chertkov, whose questions on
an early version of this paper prodded us to make the concept of AC
and DC terminals explicit. Finally, we thank Warren Powell and Hugo
Simao for encouraging us to release implementations of these methods.
This research was supported in part by Precourt 11404581-WPIAE, by AFOSR grant FA9550-09-1-0704, by NASA grant
NNX07AEIIA, and by the DARPA XDATA grant FA8750-12-2-0306.
After this paper was submitted, we became aware of [31] and [32],
which apply ADMM to power networks for the purpose of robust state
estimation. Our paper is independent of their efforts.
117
## References
[1] R. Baldick, Applied Optimization: Formulation and Algorithms for Engineering
_Systems. Cambridge University Press, 2006._
[2] S. Barman, X. Liu, S. Draper, and B. Recht, “Decomposition methods for large
scale LP decoding,” Submitted, IEEE Transactions on Information Theory,
2012.
[3] A. Bemporad, “Model predictive control design: New trends and tools,” in
_Proceedings of 45th IEEE Conference on Decision and Control, pp. 6678–6683,_
2006.
[4] A. Bergen and V. Vittal, Power Systems Analysis. Prentice Hall, 1999.
[5] D. P. Bertsekas, Constrained Optimization and Lagrange Multiplier Methods.
Academic Press, 1982.
[6] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, “Distributed optimization and statistical learning via the alternating direction method of multipliers,”
_Foundations and Trends in Machine Learning, vol. 3, pp. 1–122, 2011._
[7] S. Boyd and L. Vandenberghe, Convex Optimization. Cambridge University
Press, 2004.
[8] J. Carpentier, “Contribution to the economic dispatch problem,” Bull. Soc.
_Francaise Elect., vol. 3, no. 8, pp. 431–447, 1962._
[9] E. A. Chakrabortty and M. D. Ilic, Control & Optimization Methods for Electric
_Smart Grids. Springer, 2012._
[10] M. Chiang, S. Low, A. Calderbank, and J. Doyle, “Layering as optimization
decomposition: A mathematical theory of network architectures,” Proc. of the
_IEEE, vol. 95, no. 1, pp. 255–312, 2007._
[11] E. G. Cho, K. A. Thoney, T. J. Hodgson, and R. E. King, “Supply chain planning: Rolling horizon scheduling of multi-factory supply chains,” in Proceedings
_of the 35th conference on Winter simulation: driving innovation, pp. 1409–1416,_
2003.
[12] B. H. Chowdhury and S. Rahman, “A review of recent advances in economic
dispatch,” IEEE Transactions on Power Systems, vol. 5, no. 4, pp. 1248–1259,
1990.
[13] A. O. Converse, “Seasonal energy storage in a renewable energy system,” Pro_ceedings of the IEEE, vol. 100, pp. 401–409, Feb 2012._
[14] G. B. Dantzig and P. Wolfe, “Decomposition principle for linear programs,”
_Operations Research, vol. 8, pp. 101–111, 1960._
[15] J. Eckstein, “Parallel alternating direction multiplier decomposition of convex
programs,” Journal of Optimization Theory and Applications, vol. 80, no. 1,
pp. 39–62, 1994.
[16] J. H. Eto and R. J. Thomas, “Computational needs for the next generation
electric grid,” in Department of Energy, 2011. http://certs.lbl.gov/pdf/lbnl5105e.pdf.
[17] H. Everett, “Generalized Lagrange multiplier method for solving problems of
optimum allocation of resources,” Operations Research, vol. 11, no. 3, pp. 399–
417, 1963.
[18] A. Eydeland and K. Wolyniec, Energy and Power Risk Management: New De_velopments in Modeling, Pricing and Hedging. Wiley, 2002._
[19] G. Forney, “Codes of graphs: Normal realizations,” IEEE Transactions on In_formation Theory, vol. 47, no. 2, pp. 520–548, 2001._
[20] D. Gabay and B. Mercier, “A dual algorithm for the solution of nonlinear
variational problems via finite element approximations,” Computers and Math_ematics with Applications, vol. 2, pp. 17–40, 1976._
[21] R. Glowinski and A. Marrocco, “Sur l’approximation, par elements finis d’ordre
un, et la resolution, par penalisation-dualité, d’une classe de problems de Dirichlet non-lineares,” Revue Française d’Automatique, Informatique, et Recherche
_Opérationelle, vol. 9, pp. 41–76, 1975._
[22] H. H. Happ, “Optimal power dispatch — a comprehensive survey,” IEEE Trans_actions on Power Apparatus and Systems, vol. 96, no. 3, pp. 841–854, 1977._
[23] E. K. Hart, E. D. Stoutenburg, and M. Z. Jacobson, “The potential of intermittent renewables to meet electric power demand: current methods and emerging
analytical techniques,” Proceedings of the IEEE, vol. 100, pp. 322–334, Feb
2012.
[24] F. Herzog, Strategic Portfolio Management for Long-Term Investments: An
_Optimal Control Approach. PhD thesis, ETH, Zurich, 2005._
[25] M. R. Hestenes, “Multiplier and gradient methods,” Journal of Optimization
_Theory and Applications, vol. 4, pp. 302–320, 1969._
[26] K. Heussen, S. Koch, A. Ulbig, and G. Andersson, “Energy storage in power
system operation: The power nodes modeling framework,” in Innovative Smart
_Grid Technologies Conference Europe (ISGT Europe), 2010 IEEE PES, pp. 1–_
8, Oct 2010.
[27] T. Hovgaard, L. Larsen, J. Jorgensen, and S. Boyd, “Nonconvex model predictive control for commercial refrigeration,” International Journal of Control,
vol. 86, no. 8, pp. 1349–1366, 2013.
[28] M. Ilic, L. Xie, and J.-Y. Joo, “Efficient coordination of wind power and priceresponsive demand — Part I: Theoretical foundations,” IEEE Transactions on
_Power Systems, vol. 26, pp. 1875–1884, Nov 2011._
[29] M. Ilic, L. Xie, and J.-Y. Joo, “Efficient coordination of wind power and priceresponsive demand—part ii: Case studies,” IEEE Transactions on Power Sys_tems, vol. 26, pp. 1885–1893, Nov 2011._
[30] J. L. Jerez, P. J. Goulart, S. Richter, G. A. Constantinides, E. C. Kerrigan,
and M. Morari, “Embedded online optimization for model predictive control at
megahertz rates,” Submitted, IEEE Transactions on Automatic Control, 2013.
[31] V. Kekatos and G. Giannakis, “Joint power system state estimation and breaker
status identification,” in Proceedings of the 44th North American Power Sym_posium, 2012._
[32] V. Kekatos and G. Giannakis, “Distributed robust power system state estimation,” IEEE Transactions on Power Systems, 2013.
[33] F. P. Kelly, A. K. Maulloo, and D. K. H. Tan, “Rate control in communication
networks: shadow prices, proportional fairness and stability,” Journal of the
_Operational Research Society, vol. 49, pp. 237–252, 1998._
[34] A. Kiana and A. Annaswamy, “Wholesale energy market in a smart grid: A
discrete-time model and the impact of delays,” in Control and Optimization
_Methods for Electric Smart Grids, (A. Chakrabortty and M. Ilic, eds.), pp. 87–_
110, Springer US, 2012.
[35] B. H. Kim and R. Baldick, “Coarse-grained distributed optimal power flow,”
_IEEE Transactions on Power Systems, vol. 12, no. 2, pp. 932–939, 1997._
[36] B. H. Kim and R. Baldick, “A comparison of distributed optimal power flow
algorithms,” IEEE Transactions on Power Systems, vol. 15, no. 2, pp. 599–604,
2000.
[37] M. Kraning, Y. Wang, E. Akuiyibo, and S. Boyd, “Operation and configuration
of a storage portfolio via convex optimization,” in Proceedings of the 18th IFAC
_World Congress, pp. 10487–10492, 2011._
[38] A. Lam, B. Zhang, and D. Tse, “Distributed algorithms for optimal power flow
problem,” http://arxiv.org/abs/1109.5229, 2011.
[39] J. Lavaei and S. Low, “Zero duality gap in optimal power flow problem,” IEEE
_Transactions on Power Systems, vol. 27, no. 1, pp. 92–107, 2012._
[40] J. Lavaei, D. Tse, and B. Zhang, “Geometry of power flows in tree networks,”
_IEEE Power & Energy Society General Meeting, 2012._
[41] J. Liang, G. K. Venayagamoorthy, and R. G. Harley, “Wide-area measurement
based dynamic stochastic optimal power flow control for smart grids with high
variability and uncertainty,” IEEE Transactions on Smart Grid, vol. 3, pp. 59–
69, 2012.
[42] S. H. Low, L. Peterson, and L. Wang, “Understanding tcp vegas: a duality
model,” in Proceedings of the 2001 ACM SIGMETRICS international con_ference on Measurement and modeling of computer systems, (New York, NY,_
USA), pp. 226–235, ACM, 2001.
[43] J. Maciejowski, Predictive Control with Constraints. Prentice Hall, 2002.
[44] S. H. Madaeni, R. Sioshansi, and P. Denholm, “How thermal energy storage
enhances the economic viability of concentrating solar power,” Proceedings of
_the IEEE, vol. 100, pp. 335–347, Feb 2012._
[45] G. Malewicz, M. H. Austern, A. J. C. Bik, J. C. Dehnert, I. Horn, N. Leiser, and
G. Czajkowski, “Pregel: A system for large-scale graph processing,” in Proceed_ings of the 2010 International Conference on Management of Data, pp. 135–_
146, 2010.
[46] J. Mattingley and S. Boyd, “CVXGEN: Automatic convex optimization code
generation,” http://cvxgen.com/, 2012.
[47] J. Mattingley, Y. Wang, and S. Boyd, “Receding horizon control: Automatic
generation of high-speed solvers,” IEEE Control Systems Magazine, vol. 31,
pp. 52–65, 2011.
[48] W. F. Pickard, “The history, present state, and future prospects of underground
pumped hydro for massive energy storage,” Proceedings of the IEEE, vol. 100,
pp. 473–483, Feb 2012.
[49] M. J. D. Powell, “A method for nonlinear constraints in minimization problems,” in Optimization, (R. Fletcher, ed.), Academic Press, 1969.
[50] S. J. Qin and T. A. Badgwell, “A survey of industrial model predictive control
technology,” Control Engineering Practice, vol. 11, no. 7, pp. 733–764, 2003.
[51] P. Ravikumar, A. Agarwal, and M. J. Wainwright, “Message-passing for graphstructured linear programs: Proximal methods and rounding schemes,” Journal
_of Machine Learning Research, vol. 11, pp. 1043–1080, 2010._
[52] R. T. Rockafellar, Convex Analysis. Princeton University Press, 1970.
[53] D. Shah, “Gossip algorithms,” Foundations and Trends in Networking, vol. 3,
no. 2, pp. 1–125, 2008.
[54] S. Sojoudi and J. Lavaei, “Physics of power networks makes hard problems easy
to solve,” To appear, IEEE Power & Energy Society General Meeting, 2012.
[55] S. Sojoudi and J. Lavaei, “Convexification of generalized network flow problem with application to power systems,” Preprint available at http://www.ee.
columbia.edu/~lavaei/Generalized_Net_Flow.pdf, 2013.
[56] S. Sojoudi and J. Lavaei, “Semidefinite relaxation for nonlinear optimization
over graphs with application to power systems,” Preprint available at http:
//www.ee.columbia.edu/~lavaei/Opt_Over_Graph.pdf, 2013.
[57] N. Taheri, R. Entriken, and Y. Ye, “A dynamic algorithm for facilitated charging of plug-in electric vehicles,” arxiv:1112:0697, 2011.
[58] K. T. Talluri and G. J. V. Ryzin, The Theory and Practice of Revenue Man_agement. Springer, 2004._
[59] K. Turitsyn, P. Sulc, S. Backhaus, and M. Chertkov, “Options for control of reactive power by distributed photovoltaic generators,” Proceedings of the IEEE,
vol. 99, pp. 1063–1073, 2011.
[60] L. G. Valiant, “A bridging model for parallel computation,” Communications
_of the ACM, vol. 33, no. 8, p. 111, 1990._
[61] Y. Wang and S. Boyd, “Fast model predictive control using online optimization,” IEEE Transactions on Control Systems Technology, vol. 18, pp. 267–278,
2010.
[62] J. Zhu, Optimization of Power System Operation. Wiley-IEEE Press, 2009.
# Local Nondeterminism in Asynchronously
Communicating Processes
F.S. de Boer and M. van Hulst
Utrecht University, Dept. of Comp. Sc.,
P.O. Box 80089, 3508 TB Utrecht, The Netherlands
Abstract. In this paper we present a simple compositional Hoare logic
for reasoning about the correctness of a certain class of distributed sys-
tems. We consider distributed systems composed of processes which in-
teract asynchronously via unbounded FIFO buffers. The simplicity of
the proof system is due to the restriction to local nondeterminism in the
description of the sequential processes of a system. To illustrate the use-
fulness of the proof system we use PVS (Prototype Verification System,
see [ORS92]) to prove in a compositional manner the correctness of a
heartbeat algorithm for computing the topology of a network.
##### 1 Introduction
In [dBvH94] we have shown that a certain class of distributed systems com-
posed of processes which communicate asynchronously via (unbounded) FIFO
buffers, can be proved correct using a simple compositional proof system based
on Hoare-logic. The class of systems introduced in [dBvH94] is characterized
by the restriction to deterministic control structures in the description of the
local sequential processes. An additional feature is the introduction of input
statements as tests in the choice and iterative constructs. Such input statements
involve a test on the contents of the particular buffer under consideration. Even
in the context of deterministic sequential control structures this feature gives rise
to _global nondeterminism, because the choices involving tests on the contents of_
a buffer depend on the environment.
To reason about the above-mentioned class of distributed systems a buffer is
represented in the logic by an input variable which records the sequence of val-
ues read from the buffer and by an output variable which records the sequence
of values sent to the buffer. The communication pattern of a system then can
be described in terms of these input/output variables by means of a global in-
variant. This should be contrasted with logics which formalize reasoning about
distributed systems in terms of histories ([OG76, AFdR80, ZdRvEB85, Pan88,
HdR86]). The difference between input/output variables and histories is that
in the former information of the relative ordering of communication events on
-----
non-compositional proof system based on a cooperation test along the lines of
[AFdR80] for FIFO buffered communication in general. A compositional proof
system based on input/output variables is given in [dBvH94] for the class of systems composed of deterministic processes as described above. However, the proof system in [dBvH94] allows only a decomposition of the pre/postcondition part of the specification of a distributed system. The global invariant, which is needed for completeness and which describes the ongoing communication behaviour of the system in terms of the input/output variables, does not allow a decomposition into local invariants corresponding to the components of the system. This is due to the global non-determinism inherent in the distributed systems considered in [dBvH94].
In this paper, we investigate local nondeterminism, that is, we restrict to distributed systems composed of processes which may test only their own private program variables. The resulting computational model is still applicable to a wide range of applications: for example, it can be applied to the description of so-called heartbeat algorithms like, for instance, the distributed leader election problem and the network topology determination problem. The latter problem we will discuss in some detail in this paper.
We show that when restricting to local non-determinism, a complete specification of a distributed system can be derived from local specifications of its components, that is, from specifications which only refer to the program variables and the input/output variables of the component specified. This additional compositional feature is very important because it allows for the construction of a library of specified components which can be reused in any parallel context. The proof system in [dBvH94] does not allow this because part of a local specification is the global invariant which specifies the overall communication behaviour of the entire system. Moreover, the relevance of a compositional reasoning pattern
[dB94, dBHdR, dBvH95, HdR86] with respect to the complexity of (mechanically
supported) correctness proofs of concurrent systems lies in the fact that the verification of the local components of a system can in most practical cases be mechanized fully (or at least to a very large extent). What remains is a proof that the conjunction of the specifications of the components implies the desired specification of the entire system. This latter proof in general involves purely mathematical reasoning about the underlying datastructures and does not involve any reasoning about the flow of control. This abstraction from the flow of control allows for a greater control of the complexity of correctness proofs.
We will illustrate the above observation by proving the correctness of a heartbeat algorithm for computing the network topology using the Prototype Verification System (PVS). As the formalization of the local reasoning is straightforward, our verification effort concentrates on the second, global part of the correctness problem, viz. the proof that the conjunction of the specifications of the components implies the desired specification of the entire system.
-----
fications can be structured into a hierarchy of parameterized theories. There are a number of built-in theories (e.g. reals, lists, sets, ordering relations, etc.) and a mechanism for automatically generating theories for abstract datatypes. Due to its high expressivity, the specification language can be invoked in many domains of interest whilst maintaining readable (i.e. not overly constructive) specifications. At the core of PVS is an interactive proof checker with, for instance, induction rules, automatic rewriting, and decision procedures for arithmetic. Moreover, PVS proof steps can be combined into proof strategies.
The reason to choose PVS is a pragmatic one: it allows a quick start, and, more importantly, its powerful engine allows one to disregard many of the trivial but tedious details in a proof, a virtue that is not shared by most of the currently available proof checkers/theorem provers. Much effort has already been invested in developing a useful tool for (automated) verification by means of PVS [CS95, Raj94].
The rest of this paper is organized as follows: In section 2, the programming language is defined. Section 3 explains the algorithm for computing the topology of a network. Then, in section 4, the proof system is introduced and its formal justification is briefly touched upon. The theorem prover PVS and the specification of the correctness of the algorithm in PVS are discussed in section 5. Finally, section 6 contains some concluding remarks and observations.
##### 2 The programming language
In this section, we define the syntax of the programming language. The language describes the behaviour of asynchronously communicating sequential processes. Processes interact only via communication channels which are implemented by (unbounded) FIFO-buffers. A process can send a value along a channel or it can input a value from a channel. The value sent will be appended to the buffer, whereas reading a value from a buffer consists of retrieving its first element. Thus the values will be read in the order in which they have been sent. A process will be suspended when it tries to read a value from an empty buffer. Since buffers are assumed to be unbounded, sending values can always take place.
We assume given a set of program variables Var, with typical elements x, y, .... Channels are denoted by c, d, .... We abstract from any typing information.
Definition 1. The syntax of a statement S which describes the behaviour of a sequential process, is defined by
    S ::= skip
        | x := e
        | c??x | c!!e
        | S1; S2
        | [ []_i b_i → S_i ]
        | *[ []_i b_i → S_i ]
In the above definition skip denotes the 'empty' statement. Assigning the value
of e to the variable x is described by the statement x := e. Sending a value of an
expression e along channel c is described by _c!!e,_ whereas storing a value read
from a channel c in a variable x is described by _c??x._ The execution of c??x is
suspended in case the corresponding buffer is empty. Furthermore we have the
usual sequential control structures of sequential composition, guarded command
and iterated guarded command (b denotes a boolean expression). In the example
below, we only need simple guarded statements, which we will denote by if b then S1 else S2 fi and while b do S od.
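To fix intuitions about the buffer semantics, and about the two history variables used later in the logic (c?? for the values read, c!! for the values sent), here is a small executable model; it is ours, not part of the paper:

```python
from collections import deque

class Channel:
    """Unbounded FIFO buffer with the logic's history variables:
    `sent` plays the role of c!! and `received` the role of c??."""
    def __init__(self):
        self.buf = deque()
        self.sent = []        # c!!: sequence of values sent so far
        self.received = []    # c??: sequence of values read so far

    def send(self, value):    # c!!e never blocks
        self.buf.append(value)
        self.sent.append(value)

    def receive(self):        # c??x suspends on an empty buffer
        if not self.buf:
            return None       # caller stays suspended and retries later
        value = self.buf.popleft()
        self.received.append(value)
        return value
```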
In [dBvH94] we considered deterministic choice and iteration constructs which
use input statements as tests. For example, the execution of a (conditional in-
put) statement if c??x then S1 else S2 fi consists of reading a value from channel c, in case its corresponding buffer is non-empty, storing it in x, and proceeding subsequently with S1. In case the buffer is empty, control moves on to S2.
These constructs will in general enhance the capability of a deterministic process
to respond to an indeterminate environment and in this respect they give rise
to global nondeterminism in the sense that the choices of a process depend on
the environment. Note that this is not the case in our present language, where
processes can only inspect their local variables. Nevertheless many interesting
algorithms described in the literature can be expressed in a programming lan-
guage based on local nondeterminism. As an example we consider in the next
section the algorithm for computing a network topology.
Definition 2. A parallel program P is of the form [S1 ∥ · · · ∥ Sn], where we assume the following restrictions: the statements Si do not share program variables, and channels are unidirectional and connect exactly one sender and one receiver.
##### 3 An example: Computing the network topology
We consider a symmetric and distributed algorithm for computing a network topology, which is described in [And91]. We are given a network of processes which are connected by bi-directional communication links, and each link is represented by two (unidirectional) channels, i.e. between any two processes Si and Sj there is a channel from Si to Sj iff there is a channel from Sj to Si. Each process can communicate only with its neighbors and knows only about the links to its neighbors. We assume that the network is connected. A symmetric distributed solution to the network topology problem can be obtained as follows: Each process first sends to its neighbors the information about its own links, and then each of its neighbors is asked for its links. After having obtained this information, each process will know its links and those of its neighbors. Thus it will know about the topology within two links of itself. Assuming that we know the diameter D of the network, that is, the largest distance between two nodes, iterating the above D times will solve the problem.
To formalize the above algorithm we represent the network topology by a matrix top[1 : n, 1 : n] of BOOL, where n is the number of processes; top[i, j] indicates whether there exists a link from process i to process j. Since we have bi-directional links we have for all processes i and j that top[i, j] = top[j, i]. For each pair of linked processes i and j we have channels c_ij and c_ji. With respect to channel c_ij process i is the sender and j the receiver. The contents of each channel c is described by two variables c?? and c!!. The first variable c?? is local to the receiver and records all values that have been read; the second variable c!! is local to the sender and records the sequence of values that were sent. Thus the input/output variables of process i are c_ji?? and c_ij!!, for all processes j such that i and j are linked. Processes communicate by sending and receiving their local views of the global topology. Each process has a local variable lview_i, which represents its (local) knowledge of the global topology top. Initially, lview_i is initialized to the neighbors of process i, that is lview_i[k, l] = true if and only if k = i and top[i, l] = true. A local view received by a process i from one of its neighbors is stored in a local variable nview_i. These local views are combined by an or-operation on matrices, denoted by ∨, which is an obvious extension of the corresponding boolean operation on the truth values. The diameter of the network is given by D. The behaviour of process i is then described by the following statement:
S_i = r_i := 0;
      while r_i < D
      do j := 1;
         while j ≤ n
         do if top[i, j]
            then c_ij!!lview_i
            fi;
            j := j + 1
         od;
         j := 1;
         while j ≤ n
         do if top[i, j]
            then c_ji??nview_i;
                 lview_i := lview_i ∨ nview_i
            fi;
            j := j + 1
         od;
         r_i := r_i + 1
      od
For a network of n processes the program for computing the network topology, i.e. the matrix top, is defined by [S_1 ‖ ... ‖ S_n].
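As a sanity check of the algorithm (not of the proof system), the following Python sketch simulates one legal schedule of [S_1 ‖ ... ‖ S_n]: since sends never block and every process alternates a send phase and a receive phase, running the rounds in lockstep is one valid interleaving. The function name and the example network are ours.

```
def topology(top, D):
    """top: n x n boolean adjacency matrix (symmetric, irreflexive).
       Returns the final lview matrix of every process."""
    n = len(top)
    # lview_i initially contains only row i: the links of process i itself
    lview = [[[k == i and top[i][l] for l in range(n)] for k in range(n)]
             for i in range(n)]
    # buf[(i, j)] models the FIFO buffer of channel c_ij
    buf = {(i, j): [] for i in range(n) for j in range(n) if top[i][j]}
    for _ in range(D):
        for i in range(n):                  # first inner loop: c_ij!!lview_i
            for j in range(n):
                if top[i][j]:
                    buf[(i, j)].append([row[:] for row in lview[i]])
        for i in range(n):                  # second inner loop: c_ji??nview_i
            for j in range(n):
                if top[i][j]:
                    nview = buf[(j, i)].pop(0)
                    lview[i] = [[a or b for a, b in zip(ra, rb)]
                                for ra, rb in zip(lview[i], nview)]
    return lview

# Example: a line network 1 - 2 - 3 (diameter D = 2)
top = [[False, True, False], [True, False, True], [False, True, False]]
assert all(v == top for v in topology(top, D=2))
```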
4 The proof system
In this section we provide a proof system for proving partial correctness and deadlock freedom of programs. To this end, we introduce correctness formulae {p}P{q}, which we interpret as follows:

Any computation starting in a state which satisfies p does not deadlock, and moreover, if its execution terminates, then q holds in the final state.
Note that this interpretation is stronger than the usual partial correctness interpretation, in which absence of deadlock is not required. The precondition p and postcondition q are formulae in some first-order logic. We omit the formal definition of this logic, which is rather standard; here we only mention that p and q will contain, besides the program variables of P, the input/output variables c?? and c!!, where c is a channel occurring in P. These variables c?? and c!! are intended to denote the sequences of values received along channel c and those sent along channel c, respectively. Logically they are simply interpreted as (finite) sequences of values (thus we assume in the logic operations like append, tail, the length of a sequence, etc.).
To derive the correctness of a program P compositionally, we introduce local correctness formulae of the form I : {p}S{q}, where p and q are (first-order logic) assertions, allowed to refer to the variables of S only. The set of variables of a statement S consists of its program variables and those input/output variables c?? (c!!) for which c is an input channel of S (c is an output channel of S). The assertions p and q are called the precondition and postcondition, respectively, while the assertion I is called the invariant. The invariant I is a conjunction of implications of the form Rc → p, where Rc denotes a predicate which indicates that the next execution step involves a read on channel c. An assertion Rc → p thus specifies that if control is about to execute a read on the channel c then p holds. The information in I will be used in the analysis of deadlock. Intuitively the meaning of a correctness formula I : {p}S{q} can be rendered as follows:

The invariant I holds in every state of a computation of S starting in a state which satisfies p, and if the computation terminates then q holds in the final state.
Note that the invariance of I ≡ Rc1 → p1 ∧ ... ∧ Rck → pk amounts to the fact that whenever control is at an input ci??x, 1 ≤ i ≤ k, pi is guaranteed to hold. In other words, I expresses certain invariant properties which hold whenever an input statement (specified by I) is about to be executed. It is important to note that the predicates Rci are thus a kind of 'abstract' location predicates, in the sense that they refer not just to a particular location of a statement but to a set of locations.
Now we present the axioms and rules of our proof system.
The axiom for the assignment statement is as usual, apart from the addition of an arbitrary invariant; this is allowed because there is no communication, so none of the Rc will hold during execution of the statement.

Axiom 1 (assignment)  I : {p[e/x]} x := e {p}
The output statement c!!e is modeled as an assignment to the corresponding output variable c!!, which consists of appending the value sent to the sequence c!!. The operation of append is denoted by '·'. With respect to the invariant, a similar remark holds as for the assignment axiom.

Axiom 2 (output)  I : {p[c!! · e/c!!]} c!!e {p}
An input statement c??x is modeled as a (multiple) assignment to the variable x and the input variable c??. The associated invariant states that when reading on c, the substituted postcondition should hold.

Axiom 3 (input)  Rc → ∀v. p[v/x, c?? · v/c??] : {∀v. p[v/x, c?? · v/c??]} c??x {p}
We now give the rule for sequential composition; the rules for the choice and while statement can be obtained by extending in a similar way the usual rules for these constructs.

Rule 1 (sequential composition)

I : {p}S1{q},  I : {q}S2{r}
---------------------------
I : {p}S1; S2{r}

So in order to prove that I is an invariant of S1; S2 one has, naturally, to prove that I is both an invariant of S1 and of S2.
Rule 2 (local consequence)

I′ → I,  p → p′,  I′ : {p′}S{q′},  q′ → q
------------------------------------------
I : {p}S{q}
We introduce the expression c as an abbreviation of the expression c!! − c??. By c!! − c?? we denote the suffix of the sequence c!! (i.e. the sequence of values sent) which is determined by its prefix c?? (i.e. the sequence of values read). Thus c represents the contents of the buffer, that is, the values sent but not yet read. The empty sequence is denoted by ε.
In preparation of the parallel composition rule, we first observe that a possible deadlock configuration of a program P is characterized by: every process is either done or about to execute a read on a channel for which the corresponding buffer is empty; moreover at least one process is not yet done. Suppose P = [S1 ‖ ... ‖ Sn] and each Si has input channels c_{i1}, ..., c_{im_i}. Hence we have the predicates Rc_{i1}, ..., Rc_{im_i} for each i ∈ {1, ..., n}. Furthermore assume a postcondition q_i for each of the Si. Now we introduce a set of assertions C(P), the disjunction of which characterizes all possible deadlock configurations of P:

C(P) = { ∧_i p_i | p_i ≡ Rc_{ik} ∧ c_{ik} = ε for some k ≤ m_i, or p_i ≡ q_i, and there exists j : p_j ≢ q_j }.

Note that each assertion p ∈ C(P) characterizes a set of possible deadlock configurations.
Definition 3. Given some local postconditions q_1, ..., q_n, we define for local invariants I_1, ..., I_n the assertion DF(I_1, ..., I_n) as

∧_{p ∈ C(P)} ((∧_i I_i ∧ p) → false).

The above assertion DF(I_1, ..., I_n) expresses that the conjunction of the local invariants is inconsistent with any possible deadlock configuration, i.e. its validity guarantees deadlock freedom.
Local correctness formulas can then be combined into a correctness formula for an entire program as follows:

Rule 3 (parallel composition)

I_i : {p_i}S_i{q_i} (i = 1, ..., n),  DF(I_1, ..., I_n)
--------------------------------------------------------
{∧_i p_i} [S_1 ‖ ... ‖ S_n] {∧_i q_i}
In the premise of the above rule the formula DF(I_1, ..., I_n) is implicitly assumed to be defined with respect to the local postconditions q_1, ..., q_n. The compositional method of proving deadlock freedom incorporated in the above rule can best be understood by comparing it with the standard way of proving deadlock freedom using proof outlines. For example in [AFdR80], given proof outlines of the components of a CSP program P = [S_1 ‖ ... ‖ S_n], absence of deadlock can be proved by first determining statically all possible deadlock configurations. Such a configuration consists of an n-tuple of local locations (one location for each component). Each possible deadlock configuration is then characterized by the conjunction of the assertions associated with its locations by the given proof outlines. Absence of deadlock can then be established by showing that the assertion associated with each possible deadlock configuration is equivalent to false. The main difference with our deadlock analysis lies in the use of the predicates Rc, which do not refer to a specific location but represent a set of locations, namely all those locations where the corresponding process is about to execute a read on channel c. In our case deadlock freedom can then be established by showing that the conjunction of the local invariants, which provide information about the local states of processes when these are about to execute a read, is inconsistent with any possible deadlock configuration. This abstraction from specific locations, which is due to the restriction to local nondeterminism, allows for the simple compositional proof rule for parallel composition described above.
Apart from the above rule for parallel composition we also have the usual consequence rule for programs. With respect to reasoning about global states we moreover have for each channel c the following axiom of asynchronous communication:

c?? ≤ c!!

where ≤ denotes the prefix ordering on sequences.
The formal justification of the proof system, i.e. soundness and (relative) completeness, can be proved in a rather straightforward manner using a compositional semantics which associates with each statement S a meaning

M(S) ∈ Σ → P(Σ × (Chan → P(Σ)))

(Σ denotes the set of states, a state being a function which assigns values to the program variables and the input/output variables, and Chan denotes the set of channel names). Here (σ′, f) ∈ M(S)(σ), with f ∈ Chan → P(Σ), indicates that σ′ is the result of a terminating computation of S starting from σ, and every intermediate state σ″ just before an input on a channel c belongs to f(c). In other words, f(c) collects all the intermediate states which occur just before an input on channel c is executed. Formally we then define, for I ≡ ∧_i Rc_i → p_i,

I : {p}S{q} iff for every pair of states σ and σ′ and function f ∈ Chan → P(Σ) such that (σ′, f) ∈ M(S)(σ) and p holds in σ, it is the case that q holds in σ′ and, for every i, p_i holds in all states of f(c_i).
The semantics of a program can be defined in terms of the meanings M(S_i) of its components by a straightforward 'translation' of the parallel composition rule of the proof system. Moreover it is rather straightforward to prove the correctness of the compositional semantics with respect to an operational semantics. More details can be found in the technical report [dBvH96].
5 Automated verification in PVS
In this section, we will show how the network topology determination algorithm can be specified and verified using PVS.
The specification to be proved is

{∧_i (lview_i[i, l] = top[i, l] ∧ (j ≠ i → lview_i[j, l] = false))}
[S_1 ‖ ... ‖ S_n]
{∧_i lview_i = top}

In words, if initially for every i, lview_i is initialized to the neighbours of i, then the program [S_1 ‖ ... ‖ S_n] terminates in a state in which, for every i, lview_i equals the actual network topology top.
Using the local proof rules, it is not difficult to derive the following local specification for each S_i (it is implicitly assumed that the indices j and k range over the neighbours of i):

∧_j Rc_{ji} → (∧_k |c_{ik}!!| = r_i ∧ ∧_{k<j} |c_{ki}??| = r_i ∧ ∧_{k≥j} |c_{ki}??| = r_i − 1) :
{lview_i[i, l] = top[i, l] ∧ (j ≠ i → lview_i[j, l] = false)}
S_i
{q_i ∧ ∧_j |c_{ij}!!| = |c_{ji}??| = D}

For the moment, we do not yet consider the first part of the postcondition q_i, which we will consider in detail later in this section. The invariant informally states that when a process is ready to receive on channel c_{ji}, all its outgoing channels have length r_i, as do its in-going channels from processes with index smaller than j, while the in-going channels from all processes from index j upward have length r_i − 1.
To derive the specification for [S_1 ‖ ... ‖ S_n] we first have to show that the condition for deadlock freedom holds, so that we can apply the parallel composition rule. Then it remains to show that the conjunction of the q_i implies the global postcondition ∧_i lview_i = top.

As to the first problem, we have to show for any p ∈ C(P): ∧_i I_i ∧ p → false.
This proof is non-trivial: it involves starting at some process waiting for an input, and tracking down the processes on which it is waiting until arriving at the first process again or at a terminated process, which in both cases leads to a contradiction. The intricacy of the proof stems from the fact that the processes may run 'out of phase' to a considerable degree.
In the rest of this section, we will focus on the second essential part of the proof, which involves an application of the global consequence rule. We now focus on the specification of this problem in PVS.
Specifications in PVS are organized in theories, which may depend on other
theories via an importing mechanism. In particular, any theory may import
from the set of built-in theories. As an example of this, in the theory processes
below the type nat is (silently) imported. Theories may be parameterized, as in
our case: the parameter n denotes the number of processes that participate in
the algorithm. The first axiom below takes care that we are dealing with at least
2 processes. The type process is defined as a subtype of the natural numbers,
i.e. the primitive type nat. The type pairset will be used further on in the
definition of type links; it fixes the type of sets of 2-tuples of processes.
```
processes [ n: nat ] : THEORY
BEGIN

  process : TYPE = {m: nat | 1 <= m AND m <= n}

  pairset : TYPE = setof[[process,process]]
```
The variable declarations which follow below should be self-explanatory. The constraints on the type links express the properties that any network topology should possess: no channel should connect a process with itself (nonrefl), channels are bidirectional (more accurately: the existence of a channel implies the existence of the reverse channel) (symmetric), and any process should be connected to at least one process (connected) (we provide the definition of nonrefl only). The projection functions proj_1 and proj_2 are built-in accessor functions on tuples.

```
  m,m1,k          : VAR nat
  i,j,i1,j1,i2,j2 : VAR process
  z, z1           : VAR [process,process]
```
```
  P : VAR pairset

  nonrefl : pred[pairset] =
    LAMBDA (p):
      (FORALL(z):
        (member(z, p)) IMPLIES proj_1(z) /= proj_2(z) )

  links : TYPE = { p: pairset | nonrefl(p) AND
                                symmetric(p) AND
                                connected(p) }

  l : VAR links

  % The following fragment should be self-explanatory.

  % neighbors(l,i) yields the set of neighbors of process i in linkset l
  neighbors : [links,process -> setof[process]] =
    LAMBDA (l,i): { j | EXISTS (z): member(z,l) AND
                                    proj_1(z) = i AND proj_2(z) = j }

  % path(l,i,j,m) = TRUE iff there exists a path of length m
  % between i and j in linkset l
  path : pred[[links,process,process,nat]] =
    LAMBDA (l,i,j,m):
      (EXISTS(sp: sequence[process]):
        i = sp(0) AND j = sp(m) AND
        (FORALL (m0: nat): m0 < m IMPLIES
          (member( sp(m0 + 1), neighbors(l,sp(m0)))) ))
```
The next two lemmas are useful in proving the larger lemmas below. Their proof in PVS requires minimal effort, while they provide more clarity in bigger proofs. chain states that if there exists a path from i to j of length m + 1 then there exists a neighbor of i which has distance m to j.

```
  chain : LEMMA
    FORALL (m:nat):
      (path(l,i,j,m+1)
       IMPLIES
       (EXISTS (j1:process): member(j1, neighbors(l,i))
                             AND path(l,j1,j,m) ))

  zeropath : LEMMA
    path(l, i, j, 0)
    IMPLIES
    i = j
```

The type matrix is used as representation for the data objects in our domain, viz. lview_i and nview_i in the algorithm. Each channel c_ij is described by the channel variables inchan(i, j) for c_ij?? and outchan(i, j) for c_ij!!.

```
  matrix : TYPE = [process,process -> bool]

  index  : TYPE = {m:nat | m < n-1}
  ix,ix2 : VAR index

  chan    : TYPE = [[process,process],index -> matrix]
  inchan  : chan
  outchan : chan
```

topold(l, i) yields the matrix with only the i-th row filled in according to the neighbor set of i with respect to l. Thus it corresponds to the value of lview_i at the beginning of the algorithm.
```
  topold : [links,process -> matrix] =
    LAMBDA (l, i) : (LAMBDA(i1,j1):
                       IF i = i1 THEN member(j1, neighbors(l,i))
                       ELSE FALSE
                       ENDIF )
```

Using the rules of the proof system for local correctness formulas it is straightforward to derive the following postcondition, for each i (note that any free variable is implicitly universally quantified over, so that postcond below expresses the conjunction over all i). Note that, because the postcondition directly relates the values of indexed channel variables (which are matrices), there is no need to introduce local variables. The postcondition, referred to as q_i above, is plainly expressed by

c_ij!![ix] = topold(l, i) ∨ ⋁ { c_{i2 i}??[ix2] | i2 ∈ neighbors(l, i), 0 ≤ ix2 < ix }
In words, the matrix that is sent out to any j in the ix-th (outer) loop equals the original topology row of the sender, or-ed with all inputs from its neighbors so far (note that ∨ denotes the logical or lifted to matrices). Wrapping together all postconditions, this amounts to the following PVS expression:
```
  postcond : AXIOM
    member(j,neighbors(l,i)) IMPLIES
      outchan((i,j),ix) =
        (LAMBDA(i1,j1):(topold(l,i)(i1,j1) OR
           (EXISTS(i2:process):
             (EXISTS(ix2:index):
               (member(i2,neighbors(l,i)) AND
                ix2 < ix AND
                inchan((i2,i),ix2)(i1,j1)))) ))
```

The next lemma, chansplit, which is used in the proof of main below, was proven by induction on k. It expresses the following relation:

c_ij!![k + 1] = c_ij!![k] ∨ ⋁ { c_{j2 i}??[k] | j2 ∈ neighbors(l, i) }
It reduces the matrix that has been sent over c_ij in the (k + 1)-th (outer) loop to an expression consisting of matrices that were sent and received by i in the k-th loop.
```
  chansplit : LEMMA
    FORALL(k):
      k < n-2
      IMPLIES
      (member(j, neighbors(l,i))
       IMPLIES
       outchan((i,j),k+1)(i1,j1) =
         (outchan((i,j),k)(i1,j1) OR
          (EXISTS (j2): member(j2, neighbors(l,i))
                        AND inchan((j2,i),k)(i1,j1) )))
```
Before coming to the main theorem, we show a few other helpful lemmas:
```
  % lessdist(l,i,j,m) is true iff there is a path between i and j with
  % length smaller than or equal to m
  lessdist : [links,process,process,nat -> bool] =
    LAMBDA(l,i,j,m):
      EXISTS(m1):(m1 <= m AND path(l,i,j,m1))

  nextneigh : LEMMA
    (lessdist(l,i,j,m+1) AND i /= j)
    IMPLIES
    (EXISTS(i2):(member(i2,neighbors(l,i))
                 AND lessdist(l,i2,j,m)))

  ldist1 : LEMMA
    lessdist(l,i,j,m) IMPLIES lessdist(l,i,j,m+1)

  ldist2 : LEMMA
    NOT lessdist(l,i,j,m+1)
    IMPLIES
    FORALL(j1): (member(j1,neighbors(l,i))
                 IMPLIES
                 (NOT lessdist(l,j1,j,m)))
```
We now come to the main theorem, which states that the k-th output over channel c_ij is a matrix that equals topold(l, i1) with respect to row i1 if the distance in the network between i and i1 is less than or equal to k, and otherwise it yields FALSE on that row. In particular, it follows from this theorem (again using local reasoning) that after D executions of the loop, the value of lview_i corresponds with the network topology top. The second conjunct may not seem too exciting, but it is needed to keep the induction going.
```
  main : THEOREM
    k < n-1 IMPLIES
      ((lessdist(l,i,i1,k)
        IMPLIES
        FORALL (j): member(j, neighbors(l,i))
                    IMPLIES
                    (outchan((i,j),k)(i1,j1) = topold(l,i1)(i1,j1)) )
       AND
       ((NOT lessdist(l,i,i1,k))
        IMPLIES
        FORALL (j): member(j, neighbors(l,i))
                    IMPLIES
                    (outchan((i,j),k)(i1,j1) = FALSE)) )

END processes
```
The proof of main is currently about 15 pages. Possibly this can be improved by defining some clever strategies (in effect, macros of proof steps). Perhaps more interesting is to construct as general a proof as possible, so that it can be re-used in the light of small changes.
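Independently of the PVS proof, the statement of main can be spot-checked on concrete networks in Python (our sketch, not a substitute for the mechanized proof; `dists` computes graph distances by breadth-first search, and `check_main` replays the round-by-round simulation from Section 3 while inspecting each outgoing message):

```
from collections import deque

def dists(top, i):
    """BFS distances from node i in the adjacency matrix top."""
    d, q = {i: 0}, deque([i])
    while q:
        u = q.popleft()
        for v in range(len(top)):
            if top[u][v] and v not in d:
                d[v] = d[u] + 1
                q.append(v)
    return d

def check_main(top, D):
    n = len(top)
    lview = [[[k == i and top[i][l] for l in range(n)] for k in range(n)]
             for i in range(n)]
    for k in range(D):
        # outchan((i,j),k): the message sent over c_ij in the k-th loop
        out = {(i, j): [row[:] for row in lview[i]]
               for i in range(n) for j in range(n) if top[i][j]}
        for (i, j), msg in out.items():
            d = dists(top, i)
            for i1 in range(n):
                expect = top[i1] if d.get(i1, n) <= k else [False] * n
                assert msg[i1] == expect        # the two conjuncts of main
        for i in range(n):
            for j in range(n):
                if top[i][j]:
                    lview[i] = [[a or b for a, b in zip(ra, rb)]
                                for ra, rb in zip(lview[i], out[(j, i)])]

check_main([[False, True], [True, False]], D=1)
```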
6 Conclusions
We have shown how the restriction to local nondeterminism gives rise to a simple compositional proof system based on Hoare logic for distributed systems communicating asynchronously via FIFO buffers. We used the theorem prover PVS in a non-trivial application of the proof system to the correctness of a heartbeat algorithm for computing the topology of a network.
In general we believe that a fruitful line of research with respect to automated
verification is the syntactic identification of classes of distributed systems which
allow a simple compositional reasoning pattern.
References

[AFdR80] K.R. Apt, N. Francez, and W.-P. de Roever. A proof system for communicating sequential processes. ACM TOPLAS, 2(3):359-385, 1980.
[And91] Gregory R. Andrews. Concurrent Programming, Principles and Practice. The Benjamin/Cummings Publishing Company, Inc., 1991.
[CS95] D. A. Cyrluk and M. K. Srivas. Theorem proving: Not an esoteric diversion, but the unifying framework for industrial verification. In IEEE International Conference on Computer Design (ICCD) '95, Austin, Texas, October 1995.
[dB94] F.S. de Boer. Compositionality and completeness of the inductive assertion method for concurrent systems. In Proc. IFIP Working Conference on Programming Concepts, Methods and Calculi, San Miniato, Italy, 1994.
[dBHdR] F.S. de Boer, J. Hooman, and W.-P. de Roever. State-based proof theory of concurrency: from noncompositional to compositional methods. Draft of a book.
[dBvH94] F.S. de Boer and M. van Hulst. A proof system for asynchronously communicating deterministic processes. In I. Prívara, B. Rovan, and P. Ružička, editors, Proc. MFCS '94, volume 841 of Lecture Notes in Computer Science, pages 256-265. Springer-Verlag, 1994.
[dBvH95] F.S. de Boer and M. van Hulst. A compositional proof system for asynchronously communicating processes. In Proceedings MPC'95, Kloster Irsee, Germany, 1995.
[dBvH96] F.S. de Boer and M. van Hulst. Local nondeterminism in asynchronously communicating processes. Technical report, Utrecht University, 1996. In preparation.
[Fra92] N. Francez. Program Verification. Addison Wesley, 1992.
[HdR86] J. Hooman and W.-P. de Roever. The quest goes on: a survey of proof systems for partial correctness of CSP. In Current trends in concurrency, volume 224 of Lecture Notes in Computer Science, pages 343-395. Springer-Verlag, 1986.
[OG76] S. Owicki and D. Gries. An axiomatic proof technique for parallel programs I. Acta Informatica, 6:319-340, 1976.
[ORS92] S. Owre, J. Rushby, and N. Shankar. PVS: A prototype verification system. In 11th Conference on Automated Deduction, volume 607 of Lecture Notes in Artificial Intelligence, pages 748-752. Springer-Verlag, 1992.
[Pan88] P.K. Pandya. Compositional Verification of Distributed Programs. PhD thesis, Tata Institute of Fundamental Research, Homi Bhabha Road, Bombay, 1988.
[Raj94] S. Rajan. Transformations in high-level synthesis: Formal specification and efficient mechanical verification. Technical Report CSL-94-10, CSL, 1994.
[ZdRvEB85] J. Zwiers, W.-P. de Roever, and P. van Emde Boas. Compositionality and concurrent networks: Soundness and completeness of a proof system. In Proc. ICALP'85, volume 194 of Lecture Notes in Computer Science. Springer-Verlag, 1985.
# Noise Modulation-Based Reversible Data Hiding with McEliece Encryption
### Zexi Wang, Minqing Zhang, Yongjun Kong, Yan Ke, and Fuqiang Di

College of Cryptography Engineering, Engineering University of PAP, Xi'an 710086, China
Key Laboratory of PAP for Cryptology and Information Security, Xi'an 710086, China
Correspondence should be addressed to Minqing Zhang; [email protected]
Received 22 June 2022; Revised 17 September 2022; Accepted 11 October 2022; Published 30 October 2022
Academic Editor: Xuehu Yan
Copyright © 2022 Zexi Wang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
McEliece cryptosystem is expected to be the next generation of the cryptographic algorithm due to its ability to resist quantum computing attacks. Few research studies have combined it with reversible data hiding in the encrypted domain (RDH-ED). In this article, we analysed and proved that there is a redundancy in the McEliece encryption process that is suitable for embedding. Then, a noise modulation-based scheme is proposed, called NM-RDHED, which is suitable for any signal and not only for images. The content owner scrambles the original image and then encrypts it with the receiver's public key. The data hider generates a load noise by modulating additional data. After that, the load noise is added to the encrypted image, which achieves the data embedding. The reconstructed image is without any distortion after the direct decryption of the marked image, and the extracted data contain no errors. The experimental results demonstrate that our scheme has a higher embedding rate and more security, which is superior to existing schemes.
### 1. Introduction
Information hiding and cryptography are both important technologies to protect user privacy and have become inseparable from people's lives. Reversible data hiding in the encrypted domain (RDH-ED) [1-3], as their cross-research hot spot, has the characteristics of both privacy protection and secret data transmission; it not only embeds additional data but also reconstructs the original carrier without loss. In particular, it has been applied in areas such as telemedicine, judicial forensics, and the military. In the past decades of development, researchers have been working to improve embedding capacity (EC) and enhance the security of RDH-ED, and have achieved significant results.
1.1. In Terms of Improving Embedding Capacity. Researchers have proposed two basic frameworks: vacating room after encryption (VRAE) and vacating room before encryption (VRBE). The main methods of the former are replacement or flipping of the least significant bits (LSBs), such as the first RDH-ED scheme based on VRAE proposed by Puech et al. [4], which encrypts an image with the advanced encryption standard (AES) and embeds 1 bit of additional data into each sub-block of the image containing 16 pixels. The receiver extracts the embedded data based on the local standard deviation of the image with the recovery of the original image. Subsequently, Zhang [5] proposed a scheme based on stream encryption, partitioning the encrypted image into nonoverlapping sub-blocks and vacating room to embed 1 bit of additional data by flipping the 3 LSBs of sub-block pixels; the EC is affected by the sub-block size, and the quality of the recovered image and the EC are mutually constrained. Hong et al. [6] improved the scheme of [5] with the side match method, which increases the EC and reduces the bit error rate for extracting additional data. In addition, schemes based on compressing the least significant bits [7, 8], re-encoding [9, 10], and pixel value ordering (PVO) [11] have been presented successively. Furthermore, adaptive embedding, multi-layer embedding, and hierarchical embedding strategies [12-14] are effective in improving the EC.
Since the correlation of encrypted images is weak, it is difficult to generate a large redundancy room, so the EC is limited. To address the issue, Ma et al. [15] proposed a new embedding framework of VRBE; that is, the original image is fully compressed before encryption to reserve more space for embedding. In Reference [15], the image is divided into two sets, the LSBs of one set are embedded into the other to generate redundancy space, then the image is encrypted, and the data hider can directly replace the LSBs with additional data to achieve embedding, which improves the EC. Later, more and more methods that vacate room before encryption were presented, such as most significant bit (MSB) prediction [16], bit plane rearrangement [17], parametric binary tree labeling (PBTL) [18], and compressed coding, like sparse coding [19] and entropy coding [20]. Most of the schemes rely on image correlation and usually obtain high EC for smooth images, while it is smaller for images with complex textures. It is worth noting that if the data hider wants to embed additional data into the encrypted image, the image must be preprocessed before encryption. However, to protect the image privacy, only the content owner can complete this process, which exposes the purpose of hiding and is not practical.

Given the problems existing in the two embedding frameworks of VRAE and VRBE, a new embedding framework of vacating redundancy in encryption (VRIE) was proposed by Ke et al. [21]. They explored the redundancy in the process of public-key encryption and proposed an RDH-ED scheme based on LWE, quantizing the redundant room of LWE encryption and re-encoding its ciphertext to load it with additional bits. After that, they encapsulated the difference expansion method with fully homomorphic encryption (FHE) to further enhance security [22]. Recently, Kong et al. [23] presented a scheme based on McEliece encryption, but it does not reach the required security level; rather, it takes advantage of the error correction capability to increase the robustness of the scheme.
1.2. In Terms of Enhancing Security. Early RDH-ED mainly utilized stream ciphers [5-8, 10, 12, 15] and block ciphers [4, 24]. The distribution of keys is difficult in a symmetric cryptosystem, and the number of keys is large and thus costly to manage. Public key encryption was therefore introduced into RDH-ED, and the first scheme based on Paillier encryption was proposed by Chen et al. [25], which divides a pixel into two parts and encrypts them separately; the data hider uses the homomorphic property to embed 1 bit of data into the two LSBs of the encrypted pixel pair, and the decrypted image can still maintain the relevance of the embedded data, but the embedding rate (ER) is only 0.25 bit per pixel (bpp). Later, Zhang et al. [26] proposed a lossless and reversible method according to the probabilistic and homomorphic properties of Paillier. Wu et al. [27] developed a hierarchical embedding algorithm with Paillier encryption, which has a higher EC. Subsequently, several excellent schemes were designed [28, 29]. However, another issue, the expansion of encrypted data, is raised by public key encryption. Wu et al. [30] and Chen et al. [31] adopted secret sharing as a lightweight encryption method for RDH-ED to reduce data expansion, enhance the privacy of images, and meet the needs of multiple users. The shares are changed because of the embedding, and it must be ensured that the shares can be recovered losslessly after extracting data, as in schemes [32, 33]. For most schemes, some auxiliary information is needed to achieve reversibility, which may be self-embedded in the encrypted image or transmitted additionally; this may be a security hole. Therefore, Yu et al. [34] proposed a more secure scheme without additional information transmission.

As we all know, Rivest Cipher 4 (RC4) was declared to be broken in 2013 [35]. Furthermore, the security of most public-key cryptographic algorithms is based on the difficulty of integer factorization or the discrete log problem, as well as on elliptic curves. However, the discovery of Shor's algorithm and Grover's search algorithm may reduce the difficulty of integer factorization or shorten the search time for keys, which will have a huge impact on the security of public-key and even symmetric ciphers [36]. This affects RDH-ED because its security depends in part on the cryptographic algorithm, which means that more secure encryption algorithms should be considered when designing RDH-ED schemes. McEliece encryption is one of the shortlisted algorithms for postquantum cryptography according to NIST [37]; it can resist quantum computing attacks and is expected to be a new generation of cryptographic algorithms. To the best of our knowledge, there has been little research combining McEliece with RDH-ED.
In this work, we focus on McEliece encryption, analyse the redundancy for embedding in the encryption process, and propose a noise modulation-based RDH-ED scheme (NM-RDHED), which is suitable for any encrypted signal. Compared with the state of the art, it has more security, since it can resist quantum computing attacks, and a higher embedding rate, since it is not affected by carrier redundancy. The experimental results verify the excellent performance of our scheme. The main contributions are summarized as follows:

(1) The McEliece cryptosystem, one of the postquantum cryptographies, is introduced into RDH-ED so that the carriers and additional data can be better protected.

(2) We proved that there is a redundancy in the McEliece encryption process that is suitable for embedding. According to the error correction characteristics of the code-based cipher and the randomness of the noise, the random noise added to the ciphertext can be regarded as embedding redundancy. We divide the noise into various subnoises and simplify it into two cases depending on whether the Hamming weight is zero or not. It follows that there are two forms of redundancy in the McEliece encryption process.

(3) A noise modulation-based embedding method is proposed, which modulates the additional data into a load noise. We calculate the number of subnoises with different Hamming weights by probabilistic estimation, define a modulation principle to make
full use of the redundant room, and then build modulation tables. According to the table, the additional data can be modulated into a load noise, which achieves the embedding.

(4) An NM-RDHED scheme is proposed. It has a higher embedding rate, and the reconstructed image has no distortion after the direct decryption of a marked image, because the operation of data hiding does not affect the procedure of encryption. Meanwhile, no extra steps are required for decryption, so it has strong concealment.
The rest of this article is organized as follows: in Section 2, we introduce the McEliece cryptosystem before analysing and proving the redundancy for embedding. Then, Section 3 details the proposed noise modulation-based RDH-ED scheme. Section 4 provides the experimental results, analysis, and comparisons. Finally, Section 5 draws a conclusion.
### 2. Methodology
2.1. McEliece Cryptosystem. The McEliece cryptosystem [38] is a code-based public key cryptosystem that uses binary Goppa error-correcting codes [39], whose security is based on the NP-hard problem of finding a code word with minimal Hamming distance to a given word. It has several advantages, among them resistance to cryptanalysis in quantum computer settings.
2.1.1. Goppa Code and Setting. We briefly describe how to construct a binary [n, k, d] Goppa code Γ(L, g(x)) over the finite field GF(2^m) ≅ GF(2)[x]/k(x), which satisfies m ≥ 3, mt + 1 ≤ n ≤ 2^m, 2 ≤ t ≤ (2^m − 1)/m, where k(x) is an m-degree irreducible polynomial and t is the maximum error-correcting capacity. Firstly, select n distinct elements from GF(2^m) to form a finite subset L = {α_1, α_2, ..., α_n}. Then, choose a t-degree irreducible polynomial g(x) over GF(2^m) which satisfies g(α_i) ≠ 0 for all α_i ∈ L. Finally, the code consists of all words c whose syndrome polynomial vanishes modulo g(x):

Γ = { c ∈ GF(2)^n | Σ_{i=1}^{n} c_i/(x − α_i) ≡ 0 mod g(x) }.   (1)

To set up a McEliece cryptosystem, suppose a binary Goppa code with parameters [n = 2^m, k ≥ n − mt, d ≥ 2t + 1], whose generator matrix and parity check matrix are denoted by G_{k×n} and H_{(n−k)×n}, respectively.

2.1.2. Key Generation. Generating a public and private key proceeds as follows: firstly, randomly choose an invertible matrix S_{k×k} and a permutation matrix P_{n×n}. Then, compute G′ = S · G · P, where P has exactly one "1" in every row and column, with all other entries being zero. Finally, the public key is Pk = (G′, t) and the private key is Sk = (g(x), G, S, P).

2.1.3. Encryption. To encrypt a k-length binary message sequence M, multiply it by the public key G′ and add random noise E to disguise the ciphertext:

C = M · G′ + E,   (2)

where both the encrypted message sequence C and E have length n, and the Hamming weight wt(E) = t.

2.1.4. Decryption. The receiver first uses the matrix P^{−1} to eliminate the influence of the permutation:

C′ = (M · S · G · P + E) · P^{−1}
   = (M · S) · G · P · P^{−1} + E · P^{−1}
   = (M · S) · G + E′,   (3)

where the noise E′ satisfies wt(E′) = wt(E). Then, according to Patterson's decoding algorithm, he can use the parity check matrix H to correct the error E′, decode C′, and obtain the message M′ = M · S. Finally, he recovers the original message M by eliminating S:

M = M′ · S^{−1} = M · S · S^{−1}.   (4)
2.2. Redundancy Analysis for Embedding. In the process of McEliece encryption, we find that there is a disturbance step that requires adding random noise to the ciphertext. Because the random noise can be completely corrected in the decryption process, additional data can be embedded into the ciphertext through it and extracted without errors. Besides, the randomness of the noise allows us to generate a load noise that contains additional data to replace the random noise. Therefore, the random noise can be regarded as redundant space for embedding. Here, we analyse the redundancy of the random noise and demonstrate the feasibility of loading additional data without reducing the security of the encryption algorithm.

The random noise is a binary error pattern in coding schemes, which uses "1" to indicate a position where an error has occurred in a code word and "0" to indicate a position where no error has occurred. Specifically, under secure encryption parameters the random noise is a sparse vector that consists of many "0"s and a small number of "1"s. The random noise produced by a pseudo-random sequence generator (PRSG) obeys a uniform distribution. To generate a load noise that has the same statistical character as the random noise, we regard a binary random noise of n bits with a Hamming weight of at most t as a discrete memoryless source E with sample space {0, 1}. Next, we group L elements at a time to make up a new random variable, which is equal to a new source containing 2^L symbols, called the L-degree extended source of E. Therefore, the load noise can be divided into many subnoises, and building a special mapping relation between the additional data and them becomes easier. To simplify, these subnoises are classified into two cases: one where the Hamming weight is zero, and the other where the
Hamming weight is not zero. It follows that there are two forms of embedding redundancy.

There are C(x, r) possibilities for a subnoise of x bits with a Hamming weight of r. More generally, the numbers of vectors with different Hamming weights satisfy

C(x, 1) = C(x, x−1) < C(x, 2) = C(x, x−2) < ... < C(x, ⌊x/2⌋) = C(x, ⌈x/2⌉).

Considering that a sequence of x bits represents at most 2^x possibilities, if only the subnoises with Hamming weights of 0 and 1 are used to load additional data, a subnoise of length 2^x − 1 offers 1 + (2^x − 1) = 2^x possibilities, so the length of the subnoise must be at least 2^x − 1. We denote the probability of a subnoise of length 2^x − 1 with Hamming weight y as

Pr(e_y) = C(2^x − 1, y) (t/n)^y (1 − t/n)^{2^x − 1 − y}, with Σ_{y=0}^{2^x − 1} Pr(e_y) = 1, x > 1, 0 ≤ y ≤ 2^x − 1,

where e represents the subnoise. The total number of subnoises is ⌊n/(2^x − 1)⌋, and their Hamming weights sum to at most t:

N_0 + N_1 + ... + N_y + ... + N_{2^x−1} = ⌊n/(2^x − 1)⌋,
N_1 + 2·N_2 + ... + y·N_y + ... + (2^x − 1)·N_{2^x−1} ≤ t,   (5)

where N_y is the number of subnoises with a Hamming weight of y.
A subnoise of length 2^x − 1 bits has at most 2^{2^x − 1} possibilities. In this case, the mapping space of the subnoise is larger than that of x bits. However, calculating the numbers of subnoises by (5), we find N_0 ≫ N_1 > N_2 > N_3 ≫ ... ≫ N_{2^x−1}. Besides, since the number of subnoises with Hamming weight 3 is less than 1 but not 0, we decide with a 50% probability whether to use it; if used, we subtract 1 from N_3 and add 1 to both N_1 and N_2, but it carries no additional data. Therefore, only the subnoises with Hamming weights of 0, 1, and 2 are used to carry the additional data, and the actual probabilities of the subnoises are approximated by their frequencies:

Pr(e_y) ≈ Pr′(e_y) = N_y / ⌊n/(2^x − 1)⌋, with Σ_{y=0}^{3} Pr′(e_y) = 1.   (6)

The number of subnoises with different Hamming weights is calculated by equation (6) and listed in Table 1 for different settings. Note that not all subnoises with Hamming weight of 0 are used to carry the additional data.

Table 1: The number of subnoises with different Hamming weights in different settings.

|        | t = 53           | t = 71           | t = 97           | t = 125           | t = 157           |
|--------|------------------|------------------|------------------|-------------------|-------------------|
| m = 10 | [292, 46, 2, 1]  | [275, 62, 3, 1]  | [253, 80, 7, 1]  | —                 | —                 |
| m = 11 | [632, 48, 1, 1]  | [614, 66, 1, 1]  | [590, 88, 3, 1]  | [565, 110, 6, 1]  | [537, 134, 10, 1] |
| m = 12 | [1315, 48, 1, 1] | [1297, 66, 1, 1] | [1271, 92, 1, 1] | [1244, 118, 2, 1] | [1244, 146, 4, 1] |

Note. [N_0, N_1, N_2, N_3] represents the number of subnoises with Hamming weights of 0, 1, 2, and 3.

In general, the additional data to be embedded are encrypted and obey a uniform distribution. However, there are certain statistical characteristics in the local scope of encrypted data, which we have verified through many experiments. First, we generate a random sequence by PRSG as the encrypted data and split it into a large number of
groups of length v bits. Then, each group of the encrypted data is divided into several code words consisting of x bits, and each code word has 2^x possibilities. Finally, the code words in each group of encrypted data are counted. We found that out of 100,000 tests, there are always certain code words that account for a higher percentage. Furthermore, considering that the number of subnoises with Hamming weight of 0 is also the largest, the code word with the highest percentage should be modulated into such a subnoise as much as possible. Therefore, we define a modulation principle to make full use of the redundant room as follows.
Definition 1. The process of mapping code words consisting of x bits into a subnoise of length 2^x − 1 bits is called noise modulation. Meanwhile, the ratio of the length of the additional data to that of a subnoise is the modulation rate (MR):

MR = len(additional data) / len(subnoise) = x / (2^x − 1).   (7)

The greater the MR, the more additional data are embedded, so it can be used to indicate the efficiency of embedding. Note that the MR is maximal when x = 2; thus, we mainly discuss the modulation method in this case.
Definition 2. A code word with a higher percentage in a group of data is supposed to be modulated into a subnoise from the class with the larger count, which we adopt as the modulation principle.
Finally, we build a one-to-one mapping relationship between subnoises and additional data: the subnoises with Hamming weights of 0 and 1 are grouped into ST1, and those with Hamming weights of 0 and 2 are grouped into ST2. There are T_1 and T_2 kinds of mapping relationships, respectively, and T_1 × T_2 kinds in total:

T_y = C(2^x, 1) · C(2^x − 1, y) · (2^x − 1)!,  y = 1, 2,   (8)

where y represents the Hamming weight of the subnoises.
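The counts in Table 1 and the modulation rate of equation (7) can be approximated with a few lines of Python (our sketch; the rounding convention is our assumption, so entries may differ from the table by one or so):

```
from math import comb

def subnoise_stats(m, t, x=2):
    n = 2 ** m                      # ciphertext length
    L = 2 ** x - 1                  # subnoise length, denominator of eq. (7)
    groups = n // L                 # number of subnoises per ciphertext
    p = t / n                       # probability that a noise bit is "1"
    pr = [comb(L, y) * p ** y * (1 - p) ** (L - y) for y in range(L + 1)]
    N = [round(q * groups) for q in pr]     # expected N_0, ..., N_L
    return N, x / L                         # counts and modulation rate

print(subnoise_stats(m=10, t=71))   # compare with the m = 10, t = 71 entry
```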
### 3. Proposed Scheme
In this section, we propose a noise modulation-based reversible data hiding scheme called NM-RDHED, which uses images as a case of signals. The main symbols and their meanings are listed in Table 2.

Table 2: Notation.

| Symbol | Meaning |
|--------|---------|
| I      | Original image |
| I_s    | Scrambled image |
| I_e    | Encrypted image after McEliece encryption |
| I_m    | Marked image with additional data |
| E      | Random noise |
| E_d    | Load noise that contains additional data |
| SI     | Side information |
| Sk     | The private key for the original image |
| Pk     | The public key for the original image |
| Kd     | Data hiding key |
| m      | An encryption parameter |
| k      | Length of plaintext |
| n      | Length of ciphertext |
| t      | Maximum error-correcting capacity |
| v      | Grouping length of additional data |
The proposed NM-RDHED scheme embeds and extracts additional data in the encryption and decryption processes without affecting those processes, so it has strong security and concealment. We take the image as a case of signal to introduce the scheme and provide its structure in Figure 1. The content owner provides an original image, scrambles it with security parameters, and encrypts it with the public key of the receiver. Then, the additional data are modulated into a load noise by building a mapping. Finally, the load noise is added to the encrypted image to obtain a marked image. During the decryption process, a receiver who has the private key can directly decrypt the marked image and
correct the noise to recover the original image, and one who has the data hiding key can extract the embedded data from the noise. Note that the scrambling parameters, the private key, and the data hiding key are transmitted through a secure channel or a public channel based on the Diffie-Hellman key exchange protocol.

Figure 1: Structure of the NM-RDHED scheme.
3.1. Image Encryption
Step 1: To remove the correlations of the original image, scrambling is necessary. First, we transform all pixels of the grey-scale image I, sized M × N, to binary sequences and then scramble at the pixel level, dislocating the positions of all bits as in Guan et al. [40]. Then, we segment the image into eight bit planes and scramble within each bit plane as in Li et al. [41]. Finally, we denote by I_s the scrambled image.

p = Σ_{h=0}^{7} 2^h · p_h = (p_0, p_1, p_2, p_3, p_4, p_5, p_6, p_7),   (9)

p_s = Josh(p, start, step) = (p_7, p_5, p_4, p_0, p_6, p_2, p_3, p_1),   (10)

where the function of Josephus scrambling is described by Josh(∗), whose input p is an original pixel, start is an initial index, and step is a step length; the output p_s is a dislocated pixel. An example is given by equation (10).

[i′]   [ 1     b     ] [i]
[j′] = [ a   a·b + 1 ] [j],   (11)

where i and j are the current indices of a bit in the planes, i′ and j′ are the new indices of the bit, and a, b are the parameters of the Arnold map.
Step 2: Suppose the McEliece cryptosystem has parameters [n, k, t], public key Pk = (G′, t), and private key Sk = (g(x), G, S, P). The scrambled image is segmented into eight bit planes and reshaped into binary sequences in order from left to right and top to bottom. Next, these sequences are divided into different groups of the same length k, denoted I_v^{[i][j]}, with 1 ≤ i ≤ 8, 1 ≤ j ≤ ⌊(8 × M × N)/k⌋. The content owner encrypts each group of sequences using the public key of the receiver:

I_ev^{[i][j]} = I_v^{[i][j]} · G′,   (12)

where I_ev^{[i][j]} is a group of ciphertext sequences expanded from k bits to n bits, and [i][j] denotes the j-th group of sequences in the i-th bit plane.
3.2. Data Embedding

Step 1: Generate a data hiding key Kd with a hyperchaotic system [42, 43], which can provide a pseudo-random sequence of sufficient length. Next, encrypt the additional data with Kd.

Step 2: The encrypted additional data are split into numerous groups of length v bits, and each group of data is divided into several code words of length x bits. Then, count the code words in each group of data, and construct a modulation table that contains the relationship between the encrypted data and the subnoises according to the modulation principle and Table 1. Note that the modulation table has T_1 × T_2 possibilities; which modulation table is used depends on Kd. Table 3 provides an example of the modulation table.

Table 3: An example of the modulation table.

| Code words | Percent (%) | Subnoises, wt(e) = 0, 1 | Subnoises, wt(e) = 0, 2 |
|------------|-------------|-------------------------|-------------------------|
| [0, 0]     | 12.5        | [0, 1, 0]               | [1, 1, 0]               |
| [0, 1]     | 25.0        | [1, 0, 0]               | [0, 1, 1]               |
| [1, 0]     | 50.0        | [0, 0, 0]               | [0, 0, 0]               |
| [1, 1]     | 12.5        | [0, 0, 1]               | [1, 0, 1]               |
Step 3: Modulate the code words of the additional data into subnoises based on the modulation table generated in Step 2. After that, all the subnoises are used to make up the load noise E_d. We select w bits from Kd at a time and transform them to decimal digits used as indexes into the load noise. If the current index duplicates a previous one, it is skipped and the next is checked until all subnoises are placed. Finally, the parts left unfilled are filled with subnoises of Hamming weight zero, which carry no additional data:

index[i] = Σ_{i=0}^{w−1} 2^i · Kd[i],  w ≤ ⌊log_2(n/(2^x − 1))⌋,   (13)
where the symbol ⌊∗⌋ represents the operation of rounding down.

Step 4: The load noise containing the additional data is added to the ciphertext by equation (14), so that a marked ciphertext is obtained. Repeat Steps 2 to 4; then all marked ciphertexts make up the marked image I_m, which still has eight bit planes but is larger than the original image:

I_mv^{[i][j]} = I_ev^{[i][j]} + E_d^{[i][j]},   (14)

where the symbol "+" represents XOR, the size of the marked image is M′ × N′ = ⌈(n × M × N)/k⌉, and the symbol ⌈∗⌉ represents the operation of rounding up.
Side Information. The code words with the highest percentage in each group of additional data need to be recorded as side information (SI), which ensures that the unique modulation table can be identified when extracting the data. The side information is regarded as additional data and is embedded into the ciphertext. Note that the newly generated side information, whose size is smaller, is filled into the marked image, because some random pixels need to be filled anyway when the marked ciphertext sequences are converted into an image.
3.3. Data Extraction and Image Reconstruction. The receiver decrypts the marked image with Sk to reconstruct the original image. Meanwhile, the load noise is corrected during the decryption, so the additional data can be extracted with Kd. There are three possible situations: in the first, the receiver has only Kd, and he cannot get any information; in the second, the receiver has only Sk, and he can only reconstruct the original image; in the last, the receiver has both keys, and he can both extract the additional data and reconstruct the original image.
3.3.1. Image Reconstruction. The receiver segments the marked image into a stack of eight bit planes and reshapes each bit plane into sequences of n bits. Then, he decrypts the marked ciphertext I_mv^{[i][j]} and corrects the load noise E_d^{[i][j]} group by group, 1 ≤ i ≤ 8, 1 ≤ j ≤ (8 × M′ × N′)/n. Finally, he calculates I_v^{[i][j]} using the matrix S^{−1} of the private key and then inverse-scrambles the image at the bit-plane and pixel levels. The reconstructed image has no distortion compared to the original image:

I′_ev^{[i][j]} = I_mv^{[i][j]} · P^{−1},
E_d^{[i][j]} = Correct(I′_ev^{[i][j]}, H_{(n−k)×n}),   (15)

where the function Correct(∗) is Patterson's decoding algorithm, and G · H^T = 0.
_3.3.2. Data Extraction._ Divide each load noise $\mathbf{E}_{d}^{[i][j]}$ into subnoises of $2^{x} - 1$ bits each and index them; using Kd as the indexes, identify which subnoises carry additional data and extract them sequentially. Next, extract the first group of SI from the marked ciphertext; the unique modulation table used for each group is then determined by Kd and SI. Finally, recover the additional data according to the modulation table.
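Demodulation then inverts the modulation table; the sketch below (illustrative names, using the same MOD_TABLE shown earlier) reads the key-selected subnoises back out of the recovered load noise:

```python
def extract_group(load_noise, positions, mod_table):
    """Sketch: map 3-bit subnoises back to 2-bit code words."""
    inverse = {sn: cw for cw, sn in mod_table.items()}
    bits = []
    for pos in positions:                 # same Kd-derived indexes as embedding
        subnoise = tuple(load_noise[pos:pos + 3])
        bits.extend(inverse[subnoise])    # e.g. (0, 0, 0) -> (1, 0)
    return bits
```

In the full scheme a code word may map to one of two subnoises, so the inverse table contains both; the all-zero subnoise always maps back to the most frequent code word recorded in the side information.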
_3.4. Example._ Figure 2 provides an example that can help readers better understand the NM-RDHED scheme, where the encryption parameters are [m = 10, n = 1024, t = 71, k = 314] and the embedding parameters are [v = 16, x = 2]. The image scrambling consists of two phases. First, all pixels are transformed into binary sequences and each element of the sequence is dislocated; for example, the pixel 164 has the binary sequence "10100100", which becomes "01010001" after Josephus scrambling. Therefore, the original pixels [164, 167, 170, 172] are scrambled to [127, 217, 86, 154].
Figure 2: An example of the NM-RDHED scheme. [The figure traces the pixel block [164, 167, 170, 172] through dislocation, bit-plane scrambling, McEliece encryption, noise modulation with the data hiding key, and addition of the load noise to the ciphertext, yielding the marked ciphertext and the marked image.]
Secondly, each bit plane is scrambled by the Arnold algorithm to obtain a scrambled image; the 8-MSBs of the dislocated pixels make up a new sequence "01...01..." that is scrambled to "10...10...", so the scrambled pixels are [240, 23, 197, 58]. Then, all scrambled pixels are transformed into groups of binary sequences of 314 bits, "1111000000011111...1100010100111010...", which are encrypted with the public key of the receiver. The resulting ciphertext sequence, "000110000100110101100011...1111010001100111...", extends to 1024 bits. A group of encrypted additional data, "1001001110100110", contains 16 bits, where each 2 bits form a code word; counting them, we find that the code word '10' takes up 50%, both '00' and '11' take up 12.5%, and the code word '01' takes up 25%. According to the modulation principle, there are 24 × 72 = 1728 kinds of modulation tables, and only one is adopted, determined by the data hiding key.
Here, taking the left modulation table as an instance, the code word "00" can be modulated into "010" or "110", based on the number of subnoises provided in Table 1. However, the code word "10" can only be modulated into "000" because it has the highest percentage. After that, the subnoise "100" is filled into index 138 of the load noise, where the index is derived from the data hiding key. Once all subnoises are placed, the unfilled parts are filled with "000". Finally, the load noise is added to the ciphertext to obtain the marked ciphertext. The process of data extraction is the opposite of embedding.
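The code-word statistics in this example are easy to reproduce; a small sketch (our own helper, not from the paper):

```python
from collections import Counter

def codeword_percentages(bits, width=2):
    """Percentage of each code word in one group of additional data."""
    words = [bits[i:i + width] for i in range(0, len(bits), width)]
    return {w: 100 * n / len(words) for w, n in Counter(words).items()}

# The 16-bit group from the example in the text:
print(codeword_percentages("1001001110100110"))
# {'10': 50.0, '01': 25.0, '00': 12.5, '11': 12.5}
```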
### 4. Experimental Results
In this section, six grey-scale images of size 512 × 512 with different features are used as test signals, as shown in Figure 3. Furthermore, 100 images are randomly selected from the BOSS Base library and converted into binary sequences as a universal signal. The results demonstrate the performance of the proposed scheme. The simulation program is run on a computer with an eight-core 2.30 GHz CPU, 32 GB of RAM, and a Windows 10 operating system with MATLAB 2021b.
_4.1. Embedding Rate._ In our scheme, additional bits are embedded into a group of load noise at each step, and the groups are independent of each other. The load noise has the same length as the encrypted data, which is considered the cover. Moreover, the side information also affects the actual embedding rate. The number of noise bits is converted into the number of pixel blocks, using bit per pixel (bpp) as the unit. Define the embedding rate (ER) and the effective embedding rate (EER) as follows:

$$\mathrm{ER} = 8 \cdot \frac{\text{Embedded bits}}{\text{Noise bits}}, \qquad \mathrm{EER} = 8 \cdot \frac{\text{Embedded bits} - \text{Side information bits}}{\text{Noise bits}}. \qquad (16)$$
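Equation (16) translates directly into code; the numbers below are illustrative only, not taken from the paper's experiments:

```python
def embedding_rates(embedded_bits, side_info_bits, noise_bits):
    """ER and EER of Eq. (16), in bits per pixel (bpp)."""
    er = 8 * embedded_bits / noise_bits
    eer = 8 * (embedded_bits - side_info_bits) / noise_bits
    return er, eer

print(embedding_rates(embedded_bits=240, side_info_bits=16, noise_bits=1024))
# (1.875, 1.75)
```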
There are two primary factors affecting the ER of the NM-RDHED scheme, which is not constrained by the image content. Thus, we generate 100,000 random sequences by PRSG as the encrypted additional data to evaluate the ER.
_4.1.1. The Factor of Encryption Parameters._ When m is fixed, the larger t is, the higher the ER, because there are more "1" bits in the noise available to load the additional bits.
Figure 3: Six grey-scale test images with different features: (a) Lena, (b) Baboon, (c) Plane, (d) Boat, (e) Peppers, and (f) Man.
Table 4: Embedding rate of the proposed scheme in different settings.

| ER (bpp) | m = 10, t = 53 | m = 10, t = 71 | m = 10, t = 97 | m = 11, t = 53 | m = 11, t = 71 | m = 11, t = 97 | m = 12, t = 53 | m = 12, t = 71 | m = 12, t = 97 |
|---|---|---|---|---|---|---|---|---|---|
| Best | 1.88 | 2.50 | 3.28 | 0.94 | 1.23 | 1.65 | 0.49 | 0.62 | 0.81 |
| Worst | 1.09 | 1.52 | 2.13 | 0.57 | 0.78 | 1.11 | 0.29 | 0.40 | 0.55 |
| Average | 1.37 | 1.87 | 2.60 | 0.70 | 0.95 | 1.30 | 0.35 | 0.48 | 0.66 |
Because the length n of the noise is determined by the parameter m, fixing t and increasing m decreases the ER. Table 4 shows the embedding rate of the proposed scheme in different settings; the average ER reaches 2.60 bpp when m = 10, t = 97. To show the embedding rate more visually and comprehensively, Figure 4 shows the trend of ER, which is linearly and positively correlated with t. When m = 12, t = 340, the ER still reaches 2.34 bpp. To illustrate the embedding performance, the ER on 100 randomly selected images from BOSS Base is shown in Figure 5.
_4.1.2. The Factor of Grouping Length._ On the one hand, the shorter the grouping length v, the higher the statistical correlation of the code words, and the higher the percentage of certain code words. This means there are more subnoises with a Hamming weight of 0 that can be used to load additional data according to the modulation principle. Figure 6(a) shows, over 100,000 tests, how often the largest code-word percentage reaches a given value in encrypted data of different lengths: with a grouping length of 16 bits, the largest percentage exceeds 50% in 37% of the tests and equals 50% in over 30% of them. We conclude that the shorter the grouping length, the higher the ER. On the other hand, the code words with the highest percentage are recorded as side information during embedding, and the amount of side information depends on v, so the EER is constrained by v. Figure 6(b) illustrates the influence of the grouping length on the ER: the longer the grouping length, the smaller the amount of side information, the smaller its effect on the EER, and the closer the EER is to the ER. However, as the grouping length increases, the ER decreases.
Table 5 presents a comparison of embedding rates with different schemes. Both schemes [16, 34] are based on stream ciphers and embed data using MSB replacement. The former exploits the image redundancy recursively and achieves a higher ER, an average of 1.71 bpp. The latter does not consider the redundancy of natural images, so its ER is smaller and more stable.
Figure 4: Average embedding rate in different encryption parameters. [Plot: average ER (bpp) versus error-correcting capacity t (bit), with curves for m = 10, 11, 12.]
Figure 5: Embedding rate of NM-RDHED on 100 randomly selected images from BOSS Base. [Plots of ER per image: (a) m = 10, t = 71; (b) m = 11, t = 71; (c) m = 12, t = 71.]
[Figure 6: Influence of grouping length: (a) distribution of the largest code-word percentage for grouping lengths v = 16, 32, 64, 128, and 256 bits; (b) ER and EER versus grouping length (bit) for m = 10 and t = 53, 71, 97.]
Table 5: Comparison with other schemes in aspects of the ER and PSNR.

| Schemes | Encryption methods | ER (bpp): Lena | Baboon | Plane | Boat | Peppers | Man | Average | PSNR |
|---|---|---|---|---|---|---|---|---|---|
| [16] | Stream cipher | 1.70 | 0.87 | 0.96 | 1.50 | 1.66 | 1.45 | 1.71 | +∞ |
| [34] | Stream cipher | 0.25 | 0.24 | 0.25 | 0.25 | 0.25 | 0.25 | 0.24 | ≥35 |
| [25] | Paillier | 0.50 | 0.50 | 0.50 | 0.50 | 0.50 | 0.50 | 0.50 | ≥40 |
| [26] | Paillier | 0.36 | 0.27 | 0.31 | 0.24 | 0.29 | 0.28 | 0.30 | ≥35 |
| [27] | Paillier | 0.56 | 0.30 | 0.73 | 0.43 | 0.43 | 0.41 | 0.56 | ≥35 |
| [22] | FHE | 0.42 | 0.26 | 0.44 | 0.37 | 0.42 | 0.40 | 0.40 | ≥40 |
| [32] | Secret sharing | 2.91 | 1.25 | 3.24 | 2.78 | 2.57 | 2.19 | 2.55 | +∞ |
| [33] | Secret sharing | 0.33 | 0.16 | 0.38 | 0.21 | 0.31 | 0.28 | 0.32 | ≥45 |
| [23] | McEliece | 2.11 | 0.61 | 2.18 | 1.61 | 1.94 | 1.74 | 1.70 | +∞ |
| Proposed | McEliece | 2.93 | 2.13 | 2.85 | 2.41 | 2.62 | 2.21 | 2.53 | +∞ |
Besides, the public-key encryption-based schemes [25–27] aim to embed additional data directly in encrypted images, which is achieved by homomorphic addition; therefore, their embedding rate is lower and is constrained by the Paillier encryption. Moreover, the fully homomorphic encryption encapsulated difference-expansion scheme [22] does not, as expected, achieve a higher ER because of the principle of DE, and the same holds for scheme [33]. For schemes [23, 32], even though the encryption methods differ, their higher ER still depends on the image correlation. However, the ER of our scheme is independent of the image content. As a result, our scheme has a higher ER than the others, and the average ER reaches 2.53 bpp with sufficient security.
_4.2. Reversibility._ The reversibility of the reconstructed images can be analysed in two respects. According to the embedding principle, the main consideration is whether any data are discarded during the embedding procedure that cannot be reconstructed directly or indirectly. In addition, the peak signal-to-noise ratio (PSNR) or the structural similarity (SSIM) is used to evaluate the distortion of the reconstructed image compared with the original image.

The additional data are modulated into load noise before being added to the ciphertext, and the obtained marked ciphertext is equivalent to a new ciphertext, because the disturbance of the noise to the ciphertext is within the error-correction capability of decryption. The marked image can be directly decrypted, the load noise entirely corrected, and the original image reconstructed, which ensures the reversibility of the proposed scheme.
Table 5 also compares the quality of the reconstructed images with other schemes. The PSNR results of schemes [22, 25–27, 33, 34] are calculated by comparing the directly decrypted image with the original image, and all have good visual quality. Sometimes it is necessary to introduce additional operations to recover images losslessly, as in schemes [16, 23, 32]. Furthermore, we randomly select 100 images from BOSS Base to test the quality of the reconstructed images. Table 6 gives the PSNR and SSIM results for different parameters and EC; the PSNR reaches infinity when EC = 300,000 bits, which means there is no difference between the reconstructed and original images. SSIM evaluates the reconstructed image quality through the three metrics of luminance, contrast, and structure.
Table 6: The PSNR and SSIM of the proposed scheme on 100 randomly selected images from BOSS Base.

| Metrics | m = 10, t = 53 | m = 11, t = 71 | m = 12, t = 97 |
|---|---|---|---|
| EC (bit) | 300,000 | 300,000 | 200,000 |
| PSNR | +∞ | +∞ | +∞ |
| SSIM | 1 | 1 | 1 |
Under different parameters and embedding capacities, SSIM reaches the expected value of 1, so the reconstructed image is lossless. We conclude that our scheme is completely reversible.
_4.3. Data Expansion and Complexity._ After McEliece encryption, the binary ciphertext sequence is longer than the plaintext, which is called data expansion. We define the data expansion rate as follows:

$$\mathrm{EX} = \frac{n}{k} = \frac{2^{m}}{2^{m} - m \cdot t}, \qquad (17)$$

where [m, n, k, t] are the parameters of McEliece.
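For instance, a direct evaluation of equation (17) with the example parameters used earlier:

```python
def expansion_rate(m, t):
    """Data expansion rate EX = n/k = 2^m / (2^m - m*t) of Eq. (17)."""
    n = 2 ** m
    k = n - m * t
    return n / k

print(expansion_rate(10, 71))   # n = 1024, k = 314 -> about 3.26
```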
In our scheme, data expansion is caused by the encryption and is not related to the embedding operation. However, data expansion is a negative effect when pursuing higher embedding rates; for instance, with m and n fixed, a larger ER can only be obtained by increasing t, which leads to unacceptable data expansion. Therefore, an appropriate parameter t is determined by a good trade-off between data expansion and embedding rate. Figure 7(a) provides a reference basis; here, a better trade-off between embedding rate and side information is reached.
To evaluate the time complexity of the scheme, we use the number of groups performing the embedding operation as a metric, denoting the total embedding capacity by TEC and the per-group embedding capacity by EC. Because the embedding is performed in groups and the groups are independent of each other, the time consumption grows linearly with the embedding capacity:

$$\frac{\mathrm{TEC}}{\mathrm{EC}} = \frac{8 \cdot \mathrm{TEC}}{2^{m} \cdot \mathrm{ER}} \longrightarrow O(N), \qquad (18)$$

where the embedding rate (ER) can be regarded as a constant.
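A one-line sketch of equation (18), counting embedding groups (helper name is ours):

```python
def num_groups(tec_bits, m, er):
    """Number of embedding groups per Eq. (18); linear in TEC."""
    return 8 * tec_bits / (2 ** m * er)   # per-group capacity EC = 2^m * ER / 8
```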
Figure 7: Data expansion and complexity of the proposed scheme: (a) data expansion rate in different encryption parameters and (b) computational cost (in seconds) in different grouping lengths and embedding rates (m = 11, t = 71).
Figure 8: Histogram of images before and after embedding data: (a) histogram of the original image, (b) histogram of the encrypted image, and (c) histogram of the marked image.
Figure 7(b) shows the computational cost for different grouping lengths and embedding capacities. When the embedding capacity is fixed, the longer the grouping length v, the higher the run-time cost. The computational cost is minimal with 32-bit grouping when embedding 128 bits, costing just 0.005233 s.
_4.4. Security Analysis._ In this part, we evaluate the security of NM-RDHED in terms of the statistical characteristics of the marked images and resistance to differential attacks. The results show that the proposed scheme has high security.
_4.4.1. Statistical._ For a secure RDH-ED scheme, the marked images and the encrypted images should have similar statistical properties. To compare an encrypted image with a marked image, Figure 8 gives the histograms of the original, encrypted, and marked images of Lena. The histograms of the marked image and the encrypted image are similar, and both obey a uniform distribution, unlike the statistical features of the original image. Besides the histogram, correlation should also be considered. The correlation between neighbouring pixels in natural images is very strong; Figure 9(a) shows this correlation for Lena. We randomly select 3000 pixel pairs to test the correlation of the marked image in the horizontal and vertical directions and to assess the influence of the embedding operation on it. As shown in Figures 9(b) and 9(c), the pairs show no correlation. Therefore, the embedding operation does not affect the correlation, and the marked image is statistically secure.
_4.4.2. Differential Attack._ Image encryption security theory requires that encrypted images be extremely sensitive to the plaintext and the keys; otherwise, they cannot effectively resist differential attacks.
Figure 9: Correlation scatter of images: (a) correlation of the original image, (b) horizontal correlation of the marked image, and (c) vertical correlation of the marked image.
Table 7: Results of Entropy, NPCR, and UACI in the NM-RDHED scheme.

| Parameters | Entropy (encrypted image) | Entropy (marked image) | NPCR (%) | UACI (%) |
|---|---|---|---|---|
| m = 10, t = 53 | 7.999664752 | 7.999677022 | 99.61791485 | 33.41627523 |
| m = 10, t = 71 | 7.999752149 | 7.999765369 | 99.60227857 | 33.44588781 |
| m = 10, t = 97 | 7.999964888 | 7.999966860 | 99.60777406 | 33.45717517 |
| m = 11, t = 53 | 7.999490145 | 7.999527118 | 99.62612496 | 33.49962531 |
| m = 11, t = 71 | 7.999465042 | 7.999512357 | 99.60785974 | 33.49380102 |
| m = 11, t = 97 | 7.999653732 | 7.999642044 | 99.61431632 | 33.41455707 |
| m = 12, t = 53 | 7.999303357 | 7.999342806 | 99.61652476 | 33.46878585 |
| m = 12, t = 71 | 7.999375217 | 7.999449063 | 99.58013714 | 33.37529066 |
| m = 12, t = 97 | 7.999423866 | 7.999534749 | 99.60679129 | 33.37616961 |
The number of pixel change rate (NPCR) and the normalized average changing intensity (UACI) are used as important indicators in cryptanalysis. The sensitivity of NPCR and UACI to the plaintext is analysed for grey-scale images with 8-bit depth; for a sufficiently secure image encryption method, the expected values of NPCR and UACI are 99.6094% and 33.4635%, respectively.
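Both metrics are straightforward to compute; below is a minimal sketch for 8-bit grey-scale images (our own implementation of the standard definitions, not code from the paper):

```python
import numpy as np

def npcr_uaci(img1, img2):
    """NPCR and UACI (in %) between two 8-bit grey-scale images."""
    a = img1.astype(np.int16)
    b = img2.astype(np.int16)
    npcr = 100 * np.mean(a != b)                  # fraction of changed pixels
    uaci = 100 * np.mean(np.abs(a - b) / 255.0)   # mean normalized intensity change
    return npcr, uaci

# Two independent uniformly random images approach the theoretical
# expectations 99.6094% and 33.4635%:
rng = np.random.default_rng(0)
a = rng.integers(0, 256, (512, 512), dtype=np.uint8)
b = rng.integers(0, 256, (512, 512), dtype=np.uint8)
print(npcr_uaci(a, b))
```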
Considering an image I, we modify one pixel of it, denote the modified image I′, and encrypt both with the same public key in different settings. Next, during the disguising process, random noise is added to one image and load noise carrying additional data to the other, obtaining two marked images Im and I′m. The NPCR and UACI calculated from them are listed in Table 7. The NPCR and UACI are very close to the theoretical values, so the embedding scheme does not affect the security of the original encryption algorithm and can effectively resist differential attacks. Meanwhile, the entropy of the marked images is close to the theoretical limit of 8, exceeding 7.99. This is because the load noise is indistinguishable and does not affect the security of the McEliece encryption.
### 5. Conclusion

This article proves that the redundancy room of McEliece encryption can be used to embed additional data and proposes a new noise-modulation-based reversible data hiding in the encrypted domain scheme called NM-RDHED, which is suitable for any signal processing. Any data hider can embed additional data into the encrypted image, but only the receiver with both the private key and the data hiding key can extract the embedded data. Compared with other schemes in terms of the embedding rate, the proposed scheme has a higher ER. Although the side information influences the ER, an appropriate grouping length makes an excellent trade-off and maintains a high ER. The reconstructed image, with no distortion after direct decryption of a marked image, is superior to the state-of-the-art schemes. Our scheme shows better security in both statistical security and resistance to differential attack analysis, and McEliece, as a post-quantum cryptographic algorithm, can resist quantum computing attacks, so the scheme has higher security and meets the demand of RDH-ED for future security development. In the future, we will concentrate on reducing the amount of side information and improving the embedding rate.

### Data Availability

The BOSS Base database images used in this article are from https://agents.fel.cvut.cz/boss/index.php?mode=view&tmpl=materials; other data used to support the findings of this study are included within the article.
### Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this article.
### Acknowledgments
This work was supported by the National Natural Science Foundation of China under grant nos. 61872384, 62102450, 62102451, and 62202496.
### References

[1] Y. Q. Shi, X. Li, X. Zhang, H. T. Wu, and B. Ma, "Reversible data hiding: advances in the past two decades," IEEE Access, vol. 4, pp. 3210–3237, 2016.
[2] P. Puteaux, S. Y. Ong, K. S. Wong, and W. Puech, "A survey of reversible data hiding in encrypted images: the first 12 years," Journal of Visual Communication and Image Representation, vol. 77, Article ID 103085, 2021.
[3] S. Kumar, A. Gupta, and G. S. Walia, "Reversible data hiding: a contemporary survey of state-of-the-art, opportunities and challenges," Applied Intelligence, vol. 52, pp. 1–34, 2021.
[4] W. Puech, M. Chaumont, and O. Strauss, "A reversible data hiding method for encrypted images," in Security, Forensics, Steganography, and Watermarking of Multimedia Contents X, Proceedings of SPIE, vol. 6819, pp. 534–542, 2008.
[5] X. Zhang, "Reversible data hiding in encrypted image," IEEE Signal Processing Letters, vol. 18, no. 4, pp. 255–258, 2011.
[6] W. Hong, T. S. Chen, and H. Y. Wu, "An improved reversible data hiding in encrypted images using side match," IEEE Signal Processing Letters, vol. 19, no. 4, pp. 199–202, 2012.
[7] X. Zhang, "Separable reversible data hiding in encrypted image," IEEE Transactions on Information Forensics and Security, vol. 7, no. 2, pp. 826–832, 2012.
[8] Z. Yin, B. Luo, and W. Hong, "Separable and error-free reversible data hiding in encrypted image with high payload," The Scientific World Journal, vol. 2014, Article ID 604876, 8 pages, 2014.
[9] M. S. Abdul Karim and K. S. Wong, "Data embedding in random domain," Signal Processing, vol. 108, pp. 56–68, 2015.
[10] Z. Qian and X. Zhang, "Reversible data hiding in encrypted images with distributed source encoding," IEEE Transactions on Circuits and Systems for Video Technology, vol. 26, no. 4, pp. 636–646, 2016.
[11] D. Xiao, Y. Xiang, H. Zheng, and Y. Wang, "Separable reversible data hiding in encrypted image based on pixel value ordering and additive homomorphism," Journal of Visual Communication and Image Representation, vol. 45, pp. 1–10, 2017.
[12] H. Ge, Y. Chen, Z. Qian, and J. Wang, "A high capacity multi-level approach for reversible data hiding in encrypted images," IEEE Transactions on Circuits and Systems for Video Technology, vol. 29, no. 8, pp. 2285–2295, 2019.
[13] C. Yu, X. Zhang, X. Zhang, G. Li, and Z. Tang, "Reversible data hiding with hierarchical embedding for encrypted images," IEEE Transactions on Circuits and Systems for Video Technology, vol. 32, no. 2, pp. 451–466, 2022.
[14] C. Yu, X. Zhang, G. Li, S. Zhan, and Z. Tang, "Reversible data hiding with adaptive difference recovery for encrypted images," Information Sciences, vol. 584, pp. 89–110, 2022.
[15] K. Ma, W. Zhang, X. Zhao, N. Yu, and F. Li, "Reversible data hiding in encrypted images by reserving room before encryption," IEEE Transactions on Information Forensics and Security, vol. 8, no. 3, pp. 553–562, 2013.
[16] P. Puteaux and W. Puech, "A recursive reversible data hiding in encrypted images method with a very high payload," IEEE Transactions on Multimedia, vol. 23, pp. 636–650, 2021.
[17] K. Chen and C. C. Chang, "High-capacity reversible data hiding in encrypted images based on extended run-length coding and block-based MSB plane rearrangement," Journal of Visual Communication and Image Representation, vol. 58, pp. 334–344, 2019.
[18] Y. Wu, Y. Xiang, Y. Guo, J. Tang, and Z. Yin, "An improved reversible data hiding in encrypted images using parametric binary tree labeling," IEEE Transactions on Multimedia, vol. 22, no. 8, pp. 1929–1938, 2020.
[19] X. Cao, L. Du, X. Wei, D. Meng, and X. Guo, "High capacity reversible data hiding in encrypted images by patch-level sparse representation," IEEE Transactions on Cybernetics, vol. 46, no. 5, pp. 1132–1143, 2016.
[20] Y. Qiu, Q. Ying, Y. Yang, H. Zeng, S. Li, and Z. Qian, "High-capacity framework for reversible data hiding in encrypted image using pixel prediction and entropy encoding," IEEE Transactions on Circuits and Systems for Video Technology, vol. 32, no. 9, pp. 5874–5887, 2022.
[21] Y. Ke, M. Q. Zhang, J. Liu, T. T. Su, and X. Y. Yang, "A multilevel reversible data hiding scheme in encrypted domain based on LWE," Journal of Visual Communication and Image Representation, vol. 54, pp. 133–144, 2018.
[22] Y. Ke, M. Q. Zhang, J. Liu, T. T. Su, and X. Y. Yang, "Fully homomorphic encryption encapsulated difference expansion for reversible data hiding in encrypted domain," IEEE Transactions on Circuits and Systems for Video Technology, vol. 30, no. 8, pp. 2353–2365, 2020.
[23] Y. Kong, M. Zhang, Z. Wang, Y. Ke, and S. Huang, "Reversible data hiding in encrypted domain based on the error-correction redundancy of encryption process," Security and Communication Networks, vol. 2022, Article ID 6299469, 17 pages, 2022.
[24] Z. Qian, X. Zhang, Y. Ren, and G. Feng, "Block cipher based separable reversible data hiding in encrypted images," Multimedia Tools and Applications, vol. 75, no. 21, pp. 13749–13763, 2016.
[25] Y. C. Chen, C. W. Shiu, and G. Horng, "Encrypted signal-based reversible data hiding with public key cryptosystem," Journal of Visual Communication and Image Representation, vol. 25, no. 5, pp. 1164–1170, 2014.
[26] X. Zhang, J. Long, Z. Wang, and H. Cheng, "Lossless and reversible data hiding in encrypted images with public-key cryptography," IEEE Transactions on Circuits and Systems for Video Technology, vol. 26, no. 9, pp. 1622–1631, 2016.
[27] H. T. Wu, Y. M. Cheung, Z. Yang, and S. Tang, "A high-capacity reversible data hiding method for homomorphic encrypted images," Journal of Visual Communication and Image Representation, vol. 62, pp. 87–96, 2019.
[28] C. S. Tsai, Y. S. Zhang, and C. Y. Weng, "Separable reversible data hiding in encrypted images based on Paillier cryptosystem," Multimedia Tools and Applications, vol. 81, no. 13, pp. 18807–18827, 2022.
[29] H. T. Wu, Y. M. Cheung, Z. Zhuang, L. Xu, and J. Hu, "Lossless data hiding in encrypted images compatible with homomorphic processing," IEEE Transactions on Cybernetics, pp. 1–14, 2022.
[30] X. Wu, J. Weng, and W. Yan, "Adopting secret sharing for reversible data hiding in encrypted images," Signal Processing, vol. 143, pp. 269–281, 2018.
[31] B. Chen, W. Lu, J. Huang, J. Weng, and Y. Zhou, "Secret sharing based reversible data hiding in encrypted images with multiple data-hiders," IEEE Transactions on Dependable and Secure Computing, vol. 19, no. 2, pp. 978–991, 2022.
[32] Z. Hua, Y. Wang, S. Yi, Y. Zhou, and X. Jia, "Reversible data hiding in encrypted images using cipher-feedback secret sharing," IEEE Transactions on Circuits and Systems for Video Technology, vol. 32, no. 8, pp. 4968–4982, 2022.
[33] Y. Ke, M. Zhang, X. Zhang, J. Liu, T. Su, and X. Yang, "A reversible data hiding scheme in encrypted domain for secret image sharing based on Chinese remainder theorem," IEEE Transactions on Circuits and Systems for Video Technology, vol. 32, no. 4, pp. 2469–2481, 2022.
[34] M. Yu, H. Yao, and C. Qin, "Reversible data hiding in encrypted images without additional information transmission," Signal Processing: Image Communication, vol. 105, Article ID 116696, 2022.
[35] N. J. AlFardan, B. Poettering, and J. Schuldt, "On the security of RC4 in TLS and WPA," USENIX Security Symposium, vol. 173, 2013.
[36] L. Chen, S. P. Jordan, Y. K. Liu, D. Moody, and R. Peralta, Report on Post-Quantum Cryptography, NIST, Gaithersburg, MD, USA, 2016.
[37] G. Alagic, J. Alperin-Sheriff, and D. Apon, Status Report on the Second Round of the NIST Post-Quantum Cryptography Standardization Process, US Department of Commerce, NIST, Gaithersburg, MD, USA, 2020.
[38] R. J. McEliece, "A public-key cryptosystem based on algebraic coding theory," DSN Progress Report, vol. 42-44, pp. 114–116, 1978.
[39] E. Berlekamp, "Goppa codes," IEEE Transactions on Information Theory, vol. 19, no. 5, pp. 590–592, 1973.
[40] Z. Guan, J. Li, L. Huang, X. Xiong, Y. Liu, and S. Cai, "A novel and fast encryption system based on improved Josephus scrambling and chaotic mapping," Entropy, vol. 24, no. 3, p. 384, 2022.
[41] M. Li, T. Liang, and Y. J. He, "Arnold transform based image scrambling method," in Proceedings of the 3rd International Conference on Multimedia Technology, pp. 1309–1316, Guangzhou, China, December 2013.
[42] G. Qi, M. A. Van Wyk, B. J. Van Wyk, and G. Chen, "A new hyperchaotic system and its circuit implementation," Chaos, Solitons & Fractals, vol. 40, no. 5, pp. 2544–2549, 2009.
[43] N. Yujun, W. Xingyuan, W. Mingjun, and Z. Huaguang, "A new hyperchaotic system and its circuit implementation," Communications in Nonlinear Science and Numerical Simulation, vol. 15, no. 11, pp. 3518–3524, 2010.
| 18,419
|
{
"disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.1155/2022/4671799?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1155/2022/4671799, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": null,
"status": "GOLD",
"url": "https://downloads.hindawi.com/journals/scn/2022/4671799.pdf"
}
| 2,022
|
[] | true
| 2022-10-30T00:00:00
|
[
{
"paperId": "8c30c91f568a476d2b63cb63bd77feb69fee278f",
"title": "Reversible Data Hiding in Encrypted Domain Based on the Error-Correction Redundancy of Encryption Process"
},
{
"paperId": "1c67c2dc9b4370c088ab544529bebc6a5fee3dd2",
"title": "Lossless Data Hiding in Encrypted Images Compatible With Homomorphic Processing"
},
{
"paperId": "6c09cf78c59ccc493667b672ac1344a4ed26cbde",
"title": "Reversible data hiding in encrypted images without additional information transmission"
},
{
"paperId": "234d9f1e40b076cee2ee9a0c35ed61ed10b79e3b",
"title": "Separable reversible data hiding in encrypted images based on Paillier cryptosystem"
},
{
"paperId": "12fc0a25309f7c534d93df549940b6386158a989",
"title": "A Novel and Fast Encryption System Based on Improved Josephus Scrambling and Chaotic Mapping"
},
{
"paperId": "c302d5ad25bcc76e4aa642f55a295d963e780d5f",
"title": "Reversible Data Hiding With Hierarchical Embedding for Encrypted Images"
},
{
"paperId": "c36ed1dd094d0c938c04e803479c90b42e24e70d",
"title": "Reversible data hiding with adaptive difference recovery for encrypted images"
},
{
"paperId": "f862c74794d571842f647010583b775212258fbc",
"title": "Reversible data hiding: A contemporary survey of state-of-the-art, opportunities and challenges"
},
{
"paperId": "3bbe59d7578d559bb361fc9aac783ab7e49e5d3a",
"title": "Secure Reversible Data Hiding in Encrypted Images Using Cipher-Feedback Secret Sharing"
},
{
"paperId": "ce769798be28fb3119491330e3b75c443e035684",
"title": "A survey of reversible data hiding in encrypted images - The first 12 years"
},
{
"paperId": "ec3389d276aa47accee90dad948317480ea688e1",
"title": "High-Capacity Framework for Reversible Data Hiding in Encrypted Image Using Pixel Prediction and Entropy Encoding"
},
{
"paperId": "e56ce8a0909364b9c62eb794aa248cb2e99a180a",
"title": "A Reversible Data Hiding Scheme in Encrypted Domain for Secret Image Sharing Based on Chinese Remainder Theorem"
},
{
"paperId": "75f530070f7b1bc48c931a035ff076db48ecdde6",
"title": "Secret Sharing Based Reversible Data Hiding in Encrypted Images With Multiple Data-Hiders"
},
{
"paperId": "d5cb4eb93c87ed0ee5dfba6d5b52f677a4f1ca76",
"title": "Status report on the second round of the NIST post-quantum cryptography standardization process"
},
{
"paperId": "ad4c522142fbb0f832137e7c3139e2cc8590f46a",
"title": "A High Capacity Multi-Level Approach for Reversible Data Hiding in Encrypted Images"
},
{
"paperId": "dfd579b5c074f510e6890b8006492a8f3c1c433d",
"title": "A high-capacity reversible data hiding method for homomorphic encrypted images"
},
{
"paperId": "0173fa00ae7504839acce20673387c122964d4f7",
"title": "An Improved Reversible Data Hiding in Encrypted Images Using Parametric Binary Tree Labeling"
},
{
"paperId": "d1cfbf6c268466f62d14cc8db61329f6d055aaa3",
"title": "Fully Homomorphic Encryption Encapsulated Difference Expansion for Reversible Data Hiding in Encrypted Domain"
},
{
"paperId": "0d75f73933fa09344d69eb8d56e537bc02c71cc9",
"title": "High-capacity reversible data hiding in encrypted images based on extended run-length coding and block-based MSB plane rearrangement"
},
{
"paperId": "d83ceee0469eda6a5fe739a275c3e665bfbfa8b7",
"title": "A multilevel reversible data hiding scheme in encrypted domain based on LWE"
},
{
"paperId": "bbec5e847aa526daca170215545224f758db8bcf",
"title": "Adopting secret sharing for reversible data hiding in encrypted images"
},
{
"paperId": "3914bb8d5827bbc684269b13680eb7b94ae5657d",
"title": "Separable reversible data hiding in encrypted image based on pixel value ordering and additive homomorphism"
},
{
"paperId": "40bc9aab99e76b1d3e05d4d17982645791c8804f",
"title": "Lossless and Reversible Data Hiding in Encrypted Images With Public-Key Cryptography"
},
{
"paperId": "24a27f15b9b429a9bcaed8f6072bf5c80e8ff8c1",
"title": "Reversible data hiding: Advances in the past two decades"
},
{
"paperId": "2d34052aa801b2061670be0a39f59ae51b5e5866",
"title": "High Capacity Reversible Data Hiding in Encrypted Images by Patch-Level Sparse Representation"
},
{
"paperId": "69a43fd66c0951fea4ae68396abc9d324bf95643",
"title": "Reversible Data Hiding in Encrypted Images With Distributed Source Encoding"
},
{
"paperId": "0006a1983fc6a346dee3f07c33711094221b781e",
"title": "Block cipher based separable reversible data hiding in encrypted images"
},
{
"paperId": "d8ce977684074d9c5b04dcf2525d19dbbf51df30",
"title": "Data embedding in random domain"
},
{
"paperId": "9cb4069b0d6f360289ba7ef9bda9cd42441305b3",
"title": "Encrypted signal-based reversible data hiding with public key cryptosystem"
},
{
"paperId": "fdbda554a368b4799ca329c2e3d0bb75278f5497",
"title": "Separable and Error-Free Reversible Data Hiding in Encrypted Image with High Payload"
},
{
"paperId": "67dd75f29e61ee9aa9c0148b4a97a8d87d98459a",
"title": "Arnold Transform Based Image Scrambling Method"
},
{
"paperId": "fea72c40b7243ffde2ff75087c83369726688966",
"title": "Reversible Data Hiding in Encrypted Images by Reserving Room Before Encryption"
},
{
"paperId": "e2b8ce23fcdaded1f11c455febc7fe69594371f4",
"title": "Separable Reversible Data Hiding in Encrypted Image"
},
{
"paperId": "88115f4fab9fe50c46d941d3a043134fe774b1ff",
"title": "An Improved Reversible Data Hiding in Encrypted Images Using Side Match"
},
{
"paperId": "a1d73a5cb33412ca61ac14dc851081ba7926efcd",
"title": "Reversible Data Hiding in Encrypted Image"
},
{
"paperId": "d4e605275000b83ae6bd94faabe71a7f7539174f",
"title": "A new hyperchaotic system and its circuit implementation"
},
{
"paperId": "6d039ffb7fc449c6b059a2b1573c49fa6768582a",
"title": "A new hyperchaotic system and its circuit implementation"
},
{
"paperId": "58b8144920ee6e43ea350bc4daf4c1e24cfbca0d",
"title": "A reversible data hiding method for encrypted images"
},
{
"paperId": "6095c24c3a311351bed80fb4b2d32359b8f6a494",
"title": "A Recursive Reversible Data Hiding in Encrypted Images Method With a Very High Payload"
},
{
"paperId": "0041ad3901132a36a327a8c960f12262796fc692",
"title": "On the Security of RC4 in TLS and WPA"
},
{
"paperId": "a8a365d50b48f73607885d4b2c0aaaa7d83965c1",
"title": "Goppa Codes"
},
{
"paperId": null,
"title": "A public-key cryptosystem based on algebraic"
}
] | 18,419
|
en
|
[
{
"category": "Materials Science",
"source": "external"
},
{
"category": "Engineering",
"source": "s2-fos-model"
},
{
"category": "Physics",
"source": "s2-fos-model"
},
{
"category": "Materials Science",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/00b35f13d6984a469a693bd3b7082f191c30a0d0
|
[
"Materials Science"
] | 0.868624
|
Intermediate band solar cells: Present and future
|
00b35f13d6984a469a693bd3b7082f191c30a0d0
|
Progress in Photovoltaics
|
[
{
"authorId": "6862769",
"name": "I. Ramiro"
},
{
"authorId": "49479948",
"name": "A. Martí"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"Prog Photovoltaics"
],
"alternate_urls": [
"https://onlinelibrary.wiley.com/journal/1099159X"
],
"id": "911796e8-f5e4-4c1f-b6d4-967c65f8ea21",
"issn": "1062-7995",
"name": "Progress in Photovoltaics",
"type": "journal",
"url": "http://www3.interscience.wiley.com/cgi-bin/jhome/5860"
}
|
In the quest for high‐efficiency photovoltaics (PV), the intermediate band solar cell (IBSC) was proposed in 1997 as an alternative to tandem solar cells. The IBSC offers 63% efficiency under maximum solar concentration using a single semiconductor material. This high‐efficiency limit attracted the attention of the PV community, yielding to numerous intermediate band (IB) studies and IBSC prototypes employing a plethora of candidate IB materials. As a consequence, the principles of operation of the IBSC have been demonstrated, and the particularities and difficulties inherent to each different technological implementation of the IBSC have been reasonably identified and understood. From a theoretical and experimental point of view, the IBSC research has reached a mature stage. Yet we feel that, driven by the large number of explored materials and technologies so far, there is some confusion about what route the IBSC research should take to transition from the proof of concept to high efficiency. In this work, we give our view on which the next steps should be. For this, first, we briefly review the theoretical framework of the IBSC, the achieved experimental milestones, and the different technological approaches used, with special emphasis on those recently proposed.
|
# Intermediate Band Solar Cells: Present and Future
## I. Ramiro* and A. Martí
Instituto de Energía Solar, Universidad Politécnica de Madrid, 28040 Madrid, Spain
**Abstract**
In the quest for high-efficiency photovoltaics (PV), the intermediate band solar cell (IBSC) was
proposed in 1997 as an alternative to tandem solar cells. The IBSC offers 63% efficiency under
maximum solar concentration using a single semiconductor material. This high efficiency limit
attracted the attention of the PV community, yielding to numerous intermediate band (IB) studies
and IBSC prototypes employing a plethora of candidate IB materials. As a consequence, the
principles of operation of the IBSC have been demonstrated, and the particularities and difficulties
inherent to each different technological implementation of the IBSC have been reasonably
identified and understood. From a theoretical and experimental point of view, the IBSC research
has reached a mature stage. Yet, we feel that, driven by the large number of explored materials
and technologies so far, there is some confusion about what route the IBSC research should take
to transition from the proof of concept to high efficiency. In this work, we give our view on what the next steps should be. For this, we first briefly review the theoretical framework of the IBSC, the achieved experimental milestones, and the different technological approaches used, with special emphasis on those recently proposed.
**KEYWORDS**
Intermediate band, high efficiency, solar cell
***Correspondence:**
Iñigo Ramiro, Instituto de Energía Solar, Universidad Politécnica de Madrid, 28040 Madrid,
Spain
E-mail: [email protected]
**1. INTRODUCTION AND CONTEXT**
The intermediate band solar cell (IBSC) was proposed by Luque and Martí[1] as a structurally
simple yet highly efficient photovoltaic (PV) concept. It builds on and completes an early idea by
Wolf[2] of exploiting in-gap levels to allow below-bandgap photon absorption as a means of
surpassing the efficiency limit for conventional single-gap solar cells (SGSC), known as the
Shockley and Queisser (S&Q) limit.[3] To summarize the basis and operation of the IBSC we will
rely on Figure 1a.
The S&Q limit imposes a maximum conversion efficiency –determined only by the bandgap, EG,
of the absorbing material– under the assumption that all photons with energy higher than EG are
sub-optimally harvested (because of carrier thermalization), and all photons with energy lower
than the bandgap are wasted (not absorbed). The IBSC reduces non-absorption losses by
introducing the idea of an intermediate band (IB) material. The optoelectronic properties of such
material, similarly to a semiconductor, are defined by three electronic bands: the conventional
valence and conduction bands (VB and CB) and an additional band, the IB, that lies in-between
those two (in Figure 1a the IB is arbitrarily placed closer to the VB). Part of the photons with
energy lower than EG can be absorbed in electronic transitions from the VB to the IB (transition
1 in the figure) and from the IB to the CB (transition 2). These two additional sub-gaps are
generally named EH and EL, for the higher one and the lower one, respectively. In our description,
the energy width of the IB will be considered to approach zero, so that optical and electronic gaps have the same values and $E_G = E_H + E_L$. Removing this condition leads to interesting variations
of the IBSC concept such as the so-called ratchet IBSC.[4,5]
Extra electron-hole pairs are generated via a two-photon absorption process, using the IB as a steppingstone, which yields an increase in photocurrent. Despite the contribution of sub-bandgap photons to the photocurrent, the maximum voltage that an ideal IBSC can deliver is fundamentally limited by EG, and not by the sub-gaps EH or EL. This phenomenon is usually called voltage preservation and demands that non-radiative channels connecting the IB and the other two bands, such as Auger or phonon-assisted recombination, be minimized. For this reason, an ideal IB material is usually described as having a null density of states in between the IB and the other two bands, which hampers phonon-assisted recombination. The time scale of intraband electron-electron interaction processes within each band is assumed to be much shorter than that of interband processes (for example, between the CB and the IB); therefore, the carrier population in each band is described by its own electrochemical potential or quasi-Fermi level: µC, µV, and µI for the CB, VB, and IB, respectively. In addition, all the electrons are assumed to interact with a common background of photons and phonons, so that all these particles (electrons, independently of the band where they are, photons, and phonons) share the same temperature (say, room temperature TC).[6,7]
In an ideal IBSC, with high carrier mobility, the output voltage e·V, where e is the elementary
charge, is equal to the electrochemical potential difference µC - µV and is independent of µI. To
ensure this, it is necessary to include in the device hole and electron selective contacts (HSC and
ESC) that allow extracting electrons from the CB (current Je) and holes from the VB (current Jh),
but not from the IB (Figure 1a).
**FIGURE 1. (a) Sketch of the simplified band diagram and operation of an IBSC. (b) Limiting efficiency of an ideal SGSC (broken lines) and an ideal IBSC (solid lines) as a function of EG. Red lines represent the case of maximum sunlight concentration (Xmax), whereas blue lines represent one-sun illumination (1X). The value of EH that maximizes the efficiency of the IBSC is indicated for some values of EG. (c) J-V characteristic under one-sun illumination of an ideal SGSC with optimum bandgap (1.31 eV), an ideal IBSC with optimum bandgap (2.40 eV), and an ideal SGSC with bandgap 2.40 eV.**
Thanks to the presence of the IB and the carrier selective contacts, IBSCs can achieve efficiencies
as high as 63%[1] under maximum light concentration (see Figure 1b), which represents a relative
increment of around 50% with respect to conventional SGSCs.[3] Actually, the limiting efficiency
of an IBSC is very close to that of a tandem cell with three gaps.[8] The potentially high efficiency, combined with a conceptually simple structure (for instance, when compared with multi-junction solar cells, MJSC), was probably a decisive factor that motivated extensive research on the topic.[9,10] Many different IB materials have been explored, as we will discuss later on. Some of
them implied expensive raw materials and/or fabrication methods, but the prospect of high
efficiency and relatively small cells used in concentration PV (CPV) systems made the research
worthwhile, not only scientifically, but also from the point of view of the energy price.[11] However,
the PV landscape has changed greatly in the last two decades. On the one hand, the price of flat
panel Si PV has experienced a major decrease as the annual installed capacity increased.[12] On the
other hand, MJSCs are established as a valid technology for CPV systems, with demonstrated
efficiencies well over 40%,[13] depending on the number of junctions, and present in the industry.[14]
In this new context, it is worth recalling that, although less frequently pointed out, the IBSC
concept is equally powerful under one-sun illumination (Figure 1b), in the sense that it can exceed
the SGSC efficiency limit by around 50%.[15] The idea of an IBSC working at one sun entails some
changes in the design and fabrication of IB materials and devices. Firstly, the bandgap of a highly
efficient IBSC depends on the sunlight concentration factor. Under maximum concentration, the
limiting efficiency is higher than 60% in the range 1.5 eV < EG < 2.5 eV, with 1.96 eV being the optimum value. However, at one sun, the efficiency is higher than 40% for 1.5 eV < EG < 3.5 eV, with 2.40 eV being the optimum value. This opens the possibility of exploring wide-bandgap materials, with EG ≳ 2.5 eV, as high-efficiency IB absorbers. Secondly, the cost of the materials employed for solar cell manufacturing gains importance in PV systems working at one sun versus concentration systems and needs to be considered more carefully.
Figure 1c plots the current-voltage (J-V) characteristics of an ideal SGSC and an ideal IBSC with optimum bandgaps working at one sun. When compared with the optimum SGSC, the IBSC exhibits a somewhat lower photogenerated current but a larger voltage, which combined yield an increased output power. It is also illustrative to compare the curve of the optimum IBSC with that of an ideal SGSC having the same bandgap (2.40 eV). The SGSC delivers a higher output voltage but a much lower current, a consequence of the lower number of high-energy photons in the solar spectrum. This example serves to clarify the concept of voltage preservation in IBSCs. Voltage is said to be preserved when it is not limited by the sub-gaps introduced by the IB, that is, when e·V > EH. This does not mean that the open-circuit voltage VOC is not reduced upon the inclusion of the IB when compared to a SGSC with the same total gap but without the IB. In fact, under sunlight concentration smaller than Xmax, the inclusion of the IB entails a reduction of VOC as compared to the ideal SGSC with the same gap, as shown in Figure 1c, but the gain in current is such that the output power balance lies in favor of the IBSC. The reason for this reduction in VOC is the extra recombination channels (even if radiative) introduced by the IB, which are dominant at low sunlight concentration.
The solar cell efficiencies and J-V curves previously discussed were obtained from detailed balance calculations[1,3] for a solar cell operating at 300 K, modelling the sun as a blackbody at 6000 K and setting Xmax = 46050 suns. Higher efficiency values are obtained if the AM1.5D tabulated spectrum is considered.[16] It has also been assumed that the absorption coefficients of the three bands do not overlap, which ensures that each photon is absorbed in the largest possible transition and yields the highest efficiency in the optimum case. Removing the constraint of non-overlapping absorption coefficients results in different efficiency values and can be beneficial when the IB is not placed at the optimum position.[15,17,18]
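As an illustration of such a detailed balance calculation, the sketch below reproduces the single-gap (S&Q-type) limit under maximum concentration for a 6000 K blackbody sun and a 300 K cell. It is a simplified model for orientation only (and assumes scipy is available), not the exact computation behind Figure 1:

```python
# Minimal detailed-balance sketch: ideal single-gap cell at full concentration.
import numpy as np
from scipy.integrate import quad

h, c, kB, q = 6.626e-34, 2.998e8, 1.381e-23, 1.602e-19

def photon_flux(Eg, T, mu=0.0):
    """Photon flux (m^-2 s^-1) above Eg from a blackbody at T, chemical potential mu."""
    f = lambda E: E**2 / (np.exp((E - mu) / (kB * T)) - 1.0)
    val, _ = quad(f, Eg, Eg + 100 * kB * T)
    return 2 * np.pi / (h**3 * c**2) * val

def sq_efficiency(Eg_eV, Ts=6000.0, Tc=300.0):
    """Detailed-balance efficiency of an ideal single-gap cell at Xmax."""
    Eg = Eg_eV * q
    p_in = 5.670e-8 * Ts**4              # Stefan-Boltzmann incident power
    flux_sun = photon_flux(Eg, Ts)       # absorbed solar photon flux
    p_out = 0.0
    for V in np.linspace(0.0, Eg_eV, 400)[:-1]:
        J = q * (flux_sun - photon_flux(Eg, Tc, q * V))   # net current density
        p_out = max(p_out, J * V)                          # maximize over V
    return p_out / p_in

print(sq_efficiency(1.1))   # about 0.40 near the optimum single-gap value
```

The IBSC limit is obtained from the same balance by adding the VB-IB and IB-CB fluxes and enforcing current continuity through the IB, which is how the 63% figure of Figure 1b arises.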
**2. TECHNOLOGICAL APPROACHES EMPLOYED IN IBSC**
The different technological approaches employed so far to manufacture IB materials and IBSC prototypes can be grouped into four categories, summarized in Table 1 and illustrated in Figures 2a-d. (a) Quantum dots (QDs). The IB stems from confined states of the QDs.[19] In this work we will differentiate between two QD technologies, epitaxial QDs and colloidal QDs, since the use of one or the other may come with important practical differences, as we will discuss later on. (b) Bulk with deep-level impurities (DLIs). In this approach, the IB is formed by the deep levels introduced by impurities in a host material.[20] There is controversy, though, about whether an IB emerging from a high density of deep levels will actually be able to suppress non-radiative recombination,[21] a necessary condition for high efficiency. (c) Highly mismatched alloys (HMAs). In this kind of alloy, the inclusion of a small fraction of a new element in the host interacts with one of the bands of the host (the CB in the illustration), splitting it into two sub-bands, E+ and E-.[22] The least energetic sub-band (E-) is taken as the IB.[23] (d) Organic molecules (OMs). This approach makes use of different organic species that play the role of either sensitizer or high-bandgap acceptor.[24] The sensitizer molecules can absorb photons with energy lower than the bandgap EG of the acceptor, transitioning from the ground state to an excited singlet state. This singlet state can naturally relax into a triplet state of the same species. Subsequently, a process of energy transfer (ET) between the sensitizers and the acceptor can take place, leading to triplet states in the acceptor. Finally, two triplet states in acceptor molecules can combine and give rise, via a triplet-triplet annihilation (TTA) process, to one higher-energy singlet state of the acceptor species. In essence, the two below-bandgap photons absorbed in the sensitizers are up-converted[25] into one high-energy electron-hole pair in the high-energy absorber. The reader is referred to Refs. 24 and 25 for a more detailed explanation of this mechanism.
In addition to these approaches, there has been extensive theoretical work, inspired perhaps by physical intuition, based on first-principles calculations as a way of verifying or predicting the existence of an IB in a given alloy (for example, V in In2S3,[26] perovskite-based systems,[27] ZnS and ZnTe,[28] CdSe nanoparticles,[29] or (N, P, As and Sb)-doped Cu2ZnSiSe4[30]).
**FIGURE 2. Simplified band diagram of the different technological approaches used in IBSCs.**
(a) Quantum dots. (b) Bulk with deep level impurities. (c) Highly mismatched alloys. (d) Organic
molecules. For consistency in the nomenclature, the highest occupied molecular orbital (HOMO)
and the lowest unoccupied molecular orbital (LUMO) of the high-bandgap molecular absorber
are identified, respectively, as the VB and the CB.
| Technological approach | Origin of the IB | Proposed for IBSC / First employed |
|---|---|---|
| Quantum dots (QDs) | Confined levels in the quantum dots | 2000[19] / 2004[31] |
| Bulk with deep-level impurities (DLIs) | Levels introduced by the impurities | 2001[20] / 2012[32] |
| Highly mismatched alloys (HMAs) | Split of the CB or the VB of the alloy | 2003[23] / 2009[33] |
| Organic molecules (OMs) | Singlet and triplet molecular states | 2008[24] / 2015[34] |

**TABLE 1. Technological approaches employed in IBSC fabrication.**
**3. EXPERIMENTAL MILESTONES & TECHNOLOGY STATUS**
**3.1 Achieved and pending experimental milestones**
Some of the most relevant achieved experimental milestones in IBSC research are sorted in
chronological order in Figure 3. Additionally, the emergence of IBSC technological approaches
is also indicated. As described before, an IBSC should produce current when illuminated with
two below-bandgap photons that promote electrons from the VB to the IB and from the IB to the
CB. This process of two-photon photocurrent (TPPC) was first demonstrated in 2006 using
InAs/GaAs EQDs operating at low temperature.[35] Initially, these photocurrent experiments were
taken using broadband infrared light. It took almost one decade more to achieve energy spectral
resolution in the TPPC, in In(Ga)As/AlGaAs EQD prototypes operating at low temperature.[36,37]
It is important to remark that an ideal IBSC, without overlapping in the absorption coefficients,
should not produce photocurrent under monochromatic below-bandgap illumination. However,
as introduced earlier, some degree of overlapping may be beneficial in practice for some cases in
which the IB is placed in a sub-optimal position. Additionally, the existence of other non-radiative
processes such as thermal or tunnel electron exchange between the IB and the CB or VB,[38] or
Auger generation in one of the sub-gaps[39] may lead to photo-response to monochromatic below
bandgap illumination even in the case of non-overlapping absorption coefficients.
Monochromatic below-bandgap photocurrent was the first signature of an optically active IB in
early EQD-based IBSC prototypes[31] and is still today one of the first IB signatures investigated
in new devices.
The first demonstration of voltage preservation (VOC > EH/e) was reported in InAs/GaAs EQD
prototypes operating at low temperature in 2010.[40] A step forward was taken recently with the demonstration, in GaSb/GaAs EQD prototypes, also at low temperature, of two-photon photovoltage;[41] that is, two-step two-photon below-bandgap absorption produces an increase
in photovoltage with respect to one-photon below-bandgap absorption. Finally, the existence of
three electrochemical potentials in the IBSC comes with a luminescence signature with three
distinct emission peaks corresponding to the three gaps of the IB material.[42] This characteristic
IBSC signature was first reported in GaNAs HMA prototypes in 2011 via electroluminescence
measurements at low temperature.[43]
In our view, two main experimental milestones are still pending. The first one is the simultaneous
demonstration of photocurrent response to below-bandgap photons and voltage preservation. In
this respect, so far, below-bandgap absorption has been reported under short-circuit conditions (V
= 0), and voltage preservation has been reported at open circuit (J = 0). In both cases the power
delivered by the cell is zero. The production of below-bandgap photocurrent when the cell is
producing power, and specifically when e·V > EH, would be a necessary condition for the second
and more demanding milestone: the demonstration of an increase in the cell efficiency, which would ultimately lead to high-efficiency devices. Finally, it is worth noting that some of the discussed milestones have generally been obtained at cryogenic temperatures. The ultimate goal, of
course, is achieving a practical IBSC, which would require that all the previously mentioned
phenomena take place at room temperature.
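A minimal sketch of how these two conditions could be checked against operating-point data is
given below; the function and all numerical values are our own illustrative assumptions, not an
established test procedure.

```python
# Minimal check of the combined IBSC milestone: below-bandgap photocurrent
# while the cell delivers power at e*V > E_H. All numbers are placeholders.

def milestone_met(j_subgap: float, j_total: float, v: float, e_h: float) -> bool:
    """True if sub-bandgap current flows while P > 0 and e*V > E_H.

    j_subgap: photocurrent density from below-bandgap photons (mA/cm^2)
    j_total:  total current density at the operating point (mA/cm^2)
    v:        operating voltage (V); e*V expressed in eV equals v numerically
    e_h:      the larger sub-bandgap E_H (eV)
    """
    delivers_power = j_total * v > 0
    return j_subgap > 0 and delivers_power and v > e_h

print(milestone_met(j_subgap=0.2, j_total=18.0, v=1.1, e_h=1.0))  # True
```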
-----
**FIGURE 3. Experimental progress in IBSC development from the perspectives of achieved**
experimental milestones and the demonstration of new technological approaches. In purple,
milestones yet to be achieved. In red, the ultimate goal: a practical high-efficiency IBSC.
**3.2 IBSC technology status**
Although experimental progress has been made within each technological approach, none of the
IBSC implementations so far have fully exploited the benefits of the IB.
The use of OMs in IB devices is still in its infancy, yet demonstration of below-bandgap
photocurrent in the first reports gives an indication of their potential.[34,44,45] Research is needed to
find an adequate combination of sensitizers and acceptors for which the ET and TTA processes
are efficient, paying attention to how these processes are affected by the operating voltage of the cell.
Bulk semiconductors with DLIs have demonstrated the capability of achieving relatively strong
below-bandgap photocurrent.[32,46] New candidate materials continue to be proposed and
analyzed,[47–54] generally demonstrating below-bandgap absorption, which shows that the DLI
approach is far from exhausted. However, we think that deeper studies are now needed.
It is important to discriminate among IB candidates based on the amount of non-radiative
recombination introduced by the deep levels, which will ultimately determine whether the IB
plays a detrimental or beneficial role. In this regard, Ref. [55] presents a model for predicting the
suitability of an IB candidate material from basic materials properties.
Along similar lines, HMAs have proven their potential as below-bandgap absorbers,[33,43,56,57] but studies
aimed at understanding how to preserve the voltage are still lacking and should be addressed.
QDs, in particular epitaxial quantum dots (EQDs), are the most investigated IB technology[9] and
the one that has allowed verification of the underlying physics of the IBSC, as previously detailed.
Nevertheless, EQD-based IBSCs face two major problems. First, absorption in the transitions
involving the IB is too weak, mainly due to the low volumetric concentration of EQDs (on the
order of 10^15–10^16 cm^-3). As an example, Figure 4 shows the photocurrent produced in an
InAs/AlGaAs EQD-based IBSC[58] where below-bandgap photocurrent is several orders of
magnitude weaker than supra-bandgap photocurrent. Similar behavior is obtained in other EQD
-----
systems such as GaSb/GaAs.[59] To enhance absorption in the QD material, light trapping
techniques such as texturing[60,61] or plasmonic scattering[62] have been investigated, although the
results are still far from the requirements of a high-efficiency IBSC.[63] The second problem is
excessive non-radiative electron exchange between the IB and the VB or the CB of the host, which
prevents the preservation of the voltage at room temperature.[38,64] This fast electron exchange is
due to the non-optimal size and shape of EQDs, which give rise to closely spaced confined
electronic levels, favoring carrier thermalization; and/or to electron-hole Auger recombination,
which may be dominant in type-I EQDs.[65] The lesson learned from all this is that higher QD
densities, and better control over the shape, size, and band alignment of the QDs, are needed in order
to use this technology as an efficient absorber in IBSCs.
**FIGURE 4. Photocurrent measured in an InAs/AlGaAs EQD-based IBSC showing the three**
absorption thresholds in the IB material. Reproduced with permission from Ref. [58].
**4. FUTURE DIRECTIONS**
It is difficult to foresee which technology will first succeed in making practical IBSCs.
Nonetheless, in this work we want to focus on one kind of QD technology that remains largely unexplored
in IB devices: colloidal quantum dots (CQDs). CQDs[66] are quantum dots synthesized via wet
chemical routes that produce nanocrystals dispersed in a solvent. We think that this technological
approach has the potential to overcome the main limitations found in EQDs. First, CQDs can be
densely packed (volumetric densities of 10^19–10^20 cm^-3) in solid-state films that are highly
absorbent in both the VB→IB and the IB→CB transitions.[67] Second, the size of the CQDs can be
precisely controlled,[68] allowing for a true gap between the IB and both the VB and CB. Additionally,
CQD thin-films can be fabricated by low-cost solution-processing techniques, such as spin
-----
coating or drop casting, which makes it possible to envisage CQD-based IBSCs operating at one sun. CQDs
were first suggested as IB materials by Mendes et al.[69]
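The consequence of this density difference for absorption can be estimated with a simple
α = σ·N argument, sketched below (our own back-of-the-envelope estimate; the per-dot
cross-section σ is an assumed, illustrative value, and the densities are the upper ends of the
ranges quoted above):

```python
# Order-of-magnitude absorption estimate: alpha = sigma * N.
# sigma = 1e-15 cm^2 is an assumed, illustrative per-dot cross-section.

SIGMA = 1e-15  # cm^2, assumed cross-section of an IB-related transition

for label, density in [("EQD film", 1e16), ("CQD film", 1e20)]:  # cm^-3
    alpha = SIGMA * density        # absorption coefficient, cm^-1
    length_um = 1e4 / alpha        # 1/alpha converted from cm to micrometres
    print(f"{label}: alpha = {alpha:.0e} cm^-1, "
          f"absorption length ~ {length_um:g} um")

# EQD film: alpha = 1e+01 cm^-1, absorption length ~ 1000 um
# CQD film: alpha = 1e+05 cm^-1, absorption length ~ 0.1 um
```

Under these assumptions, the ~10^4-fold higher packing density of CQDs translates directly into
an absorption length four orders of magnitude shorter, compatible with a thin-film device.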
One key difference between EQDs and CQDs, resulting from their respective fabrication
methods, is that EQDs are grown inside a semiconductor host or matrix, whereas CQDs are
self-standing, in the sense that, once deposited on a substrate, they are surrounded by air. However, it
has recently been demonstrated[70] that perovskites and preformed PbS CQDs, combined in
solution phase, can produce epitaxially-aligned dots-in-a-matrix heterocrystals. In this work, we
will refer to such a material, in a general manner, as colloidal quantum dots in a matrix (CQDM),
which have been also suggested as candidates for IBSCs.[71] Sketches of CQD-based and CQDM
based IBSCs are shown in Figure 5a-b. Their corresponding simplified band diagrams are
depicted in Figure 5c-d, where we assume that the dots are n-doped such that the confined ground
state of their conduction band is partially populated. An analogous alternative case in which the
dots are p-doped is also possible but is left out of the discussion for simplicity. In CQDs, the
ground state of the conduction band of the dots plays the role of the IB, whereas the ground state
of the valence band and the first excited state of the conduction band of the dots play the role,
respectively, of the VB and CB as they are described in Figure 1a. In CQDMs, the CB and the
VB are those of the matrix, just as is the case in EQDs.
Both approaches are, in principle, valid for implementing IBSCs from the point of view of strong
photon absorption and control over the band diagram. There is, however, an important difference
between CQDs and CQDMs that may tip the scale in favor of the latter. CQD films usually have
reduced mobilities as compared to crystalline bulk semiconductors, because transport relies on
carrier hopping between neighboring dots[72] (see Figure 5c). In this situation, long carrier lifetimes
for the CB→IB recombination would be required to achieve efficient carrier collection. However,
evidence in some CQD materials suggests that this lifetime is in the sub-nanosecond regime.[73,74]
To solve this issue, one challenging pathway would be to engineer the CQDs so that they exhibit
band-like transport and high mobility[75] through the CB and the VB. In CQDM-based devices, on
the other hand, charge transport occurs naturally within the bands of the crystalline matrix, with
higher mobility, thus favoring carrier extraction. Additionally, the CQDM approach allows
decoupling the absorption coefficient between the two component materials: the dots need only
to be strong absorbers in the two sub-gaps (EH and EL), whereas the matrix can be a strong
absorber for photon energies greater than EG. Nevertheless, the number of available
CQDM materials is still limited.[76]
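To see why sub-nanosecond lifetimes are problematic under hopping transport, consider the
diffusion length L = sqrt(D·τ), with D given by the Einstein relation D = μkT/q. The sketch
below is our own estimate; the mobility values are assumptions meant only to typify the two
transport regimes discussed.

```python
# Diffusion length L = sqrt(D * tau), with D = mu * kT/q (Einstein relation).
# Mobility and lifetime values are illustrative assumptions, not measurements.

KT_Q = 0.0259  # thermal voltage at 300 K, V

def diffusion_length_nm(mu_cm2_vs: float, tau_s: float) -> float:
    """Diffusion length in nm for a mobility (cm^2/Vs) and lifetime (s)."""
    d = mu_cm2_vs * KT_Q               # diffusivity, cm^2/s
    return (d * tau_s) ** 0.5 * 1e7    # cm -> nm

# Hopping transport in a CQD film: mu ~ 1e-2 cm^2/Vs, tau ~ 0.5 ns.
print(diffusion_length_nm(1e-2, 0.5e-9))   # ~3.6 nm
# Band-like transport in a crystalline matrix: mu ~ 10 cm^2/Vs, same tau.
print(diffusion_length_nm(10.0, 0.5e-9))   # ~114 nm
```

With these assumed values, hopping-limited carriers travel only a few nanometres before
recombining, far less than a useful absorber thickness, whereas band-like transport in a matrix
extends the diffusion length by more than an order of magnitude.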
-----
**FIGURE 5. Sketches of (a) a CQD-based IBSC and (b) a CQDM-based IBSC. (c) and (d)**
illustrate the band diagrams of (a) and (b), respectively. In (c) charge transport occurs between
confined states of adjacent QDs. In (d) charge transport occurs within the VB and the CB of the
matrix. (1) and (2) represent absorption processes between confined states of the QDs, whereas
(2’) represents absorption between a QD confined state and a delocalized state in the matrix.
The first CQDM-based IBSC prototypes, using PbS CQDs in a perovskite matrix, have provided
satisfactory results.[77] Monochromatic below-bandgap absorption was demonstrated, proving that
the IB is optically active in the device (Figure 6b). TPPC was also reported, although it yielded
very low currents (Figure 6c). In our opinion, the low values of the TPPC may be due to two main
reasons. (i) Absorption from the IB to the CB is proportional to the occupancy of the IB. If the IB
is naturally empty of electrons, IB→CB absorption will be hindered. Hence, it is possible that
pre-doping of the CQDs is needed in order to semi-fill the IB, so that both the VB→IB and the
IB→CB absorptions are strong.[78] This represents an additional challenge, since controlling
doping in CQDs is not an easy task.[79] (ii) The experiments performed in Ref. [77] probe the IB→CB
transition as occurring between a confined state of the QDs and the delocalized states of the matrix
(transition 2’ in Figure 5). This transition has an energy of around 0.8 eV (Figure 6a). Although
this requires further study, it is possible that the probability of this transition is low.
Instead, as discussed earlier, IB→CB absorption can be strong in CQDs if the transition takes
place between confined states[67,74] (transition 2 in Figure 5). However, in the CQDs used in Ref.
77 (EH = 1.0 eV), the transition between confined states that would represent EL is smaller than 0.3
eV.[67]
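Point (i) can be made quantitative with a one-line argument: if the VB→IB absorption scales as
(1−f) and the IB→CB absorption as f, where f is the IB electron filling, the weaker step
bottlenecks the TPPC, and that bottleneck is maximized at half filling. A minimal sketch (ours):

```python
# The two-step current is limited by the weaker absorption step.
# With alpha_vi ~ (1 - f) and alpha_ic ~ f, the bottleneck min(1-f, f)
# is maximized at half filling, f = 0.5 -- hence the case for pre-doping.

fillings = [i / 10 for i in range(11)]
bottleneck = [min(1 - f, f) for f in fillings]

best = max(zip(bottleneck, fillings))
print(f"optimal IB filling f = {best[1]}, bottleneck rate = {best[0]}")
# -> optimal IB filling f = 0.5, bottleneck rate = 0.5
```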
-----
As a guideline for future experiments using CQDM, we think that emphasis must be put on
engineering the band alignment of the CQDs and the matrix so that it resembles that of Figure 5d
(the first excited state of the conduction band of the QDs should be closely aligned with the bottom
edge of the CB of the matrix). This would allow relying on strong absorption between confined
states (for below-bandgap photons) and would guarantee a true energy gap between the IB and
the bands of the matrix, which would reduce non-radiative recombination. We also remark that,
to achieve the highest efficiencies at one sun, values of EL greater than 0.5 eV are required, as
can be deduced from Figure 1b. Therefore, small QDs should be targeted so that the strong
quantum confinement allows such energy differences between consecutive confined states.
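As a rough guide to how small "small" means here, the sketch below (our own estimate, using an
idealized infinite spherical well and an assumed effective mass of 0.1 m0) evaluates the 1S–1P
level spacing versus dot radius; real dots with finite barriers would require somewhat
smaller radii.

```python
import math

# Idealized infinite spherical well: E_nl = (hbar^2 * x_nl^2) / (2 m* R^2),
# with x_1S = pi and x_1P ~ 4.4934 (zeros of spherical Bessel functions).
# m* = 0.1 m0 is an assumed effective mass; finite barriers lower Delta E.

HBAR = 1.0546e-34   # J*s
M0 = 9.109e-31      # kg
EV = 1.602e-19      # J per eV
M_EFF = 0.1 * M0    # assumed effective mass
X_1S, X_1P = math.pi, 4.4934

def level_spacing_ev(radius_nm: float) -> float:
    """1S-1P spacing (eV) of an infinite spherical well of given radius."""
    r = radius_nm * 1e-9
    return (X_1P**2 - X_1S**2) * HBAR**2 / (2 * M_EFF * r**2) / EV

for r in (2.0, 2.8, 4.0):
    print(f"R = {r} nm -> Delta E = {level_spacing_ev(r):.2f} eV")
# R = 2.0 nm -> ~0.98 eV; R = 2.8 nm -> ~0.50 eV; R = 4.0 nm -> ~0.25 eV
```

Under these assumptions, a level spacing of 0.5 eV calls for dot radii below roughly 3 nm,
illustrating why strong confinement is needed.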
**FIGURE 6. (a) Band diagram and the different absorption thresholds in a PbS/perovskite CQDM-**
based IBSC. TSPA stands for two-step photon absorption. (b) EQE as a function of the PbS QDs
content. (c) Increase in the EQE upon addition of a second beam of IR light. Reproduced from
Ref. [77], licensed under CC BY 4.0.
**5. CONCLUSIONS**
IBSC research has reached a mature state. The theoretical framework is well established and
understood thanks to continuous progress in experimentation using four technological IB
approaches: QDs, DLIs, HMAs, and OMs. Each technology has its strengths and weaknesses, but
overall QDs are the technology that has verified most of the phenomena expected in IBSC operation. OMs
have potential as a low-cost technology, but their development in IBSCs is still in its infancy.
Regarding DLIs and HMAs, we advise the community to focus efforts on understanding the
mechanisms of non-radiative recombination introduced by the IB, so that they can be suppressed.
Within the QD approach, CQDs have emerged as a technology with potential for overcoming the
two main hindrances encountered in EQD-based IBSCs: weak below-bandgap absorption and fast
non-radiative recombination between the IB and the VB or the CB. Moreover, CQDs are a
potentially low-cost technology, which makes it possible to envisage the use of IBSCs in flat-plate PV. In
this regard, we have discussed how the IBSC concept is still very powerful without sunlight
concentration, and we advocate for steering IBSC research towards low cost and high efficiency
at one sun.
**Acknowledgements**
This work was supported in part by the Project 2GAPS (TEC2017-92301-EXP) funded by the
Spanish Ministerio de Ciencia, Innovación y Universidades and the Project MADRID-PV2-CM
(P2018/EMT-4308) funded by the Comunidad de Madrid supported with FEDER funds.
**REFERENCES**
1. Luque, A. & Martí, A. Increasing the efficiency of ideal solar cells by photon induced
transitions at intermediate levels. Physical Review Letters **78, 5014–5017 (1997).**
2. Wolf, M. Limitations and possibilities for improvement of photovoltaic solar energy
converters: part I: considerations for earth’s surface operation. Proceedings of the IRE **48,**
1246–1263 (1960).
3. Shockley, W. & Queisser, H. J. Detailed balance limit of efficiency of p-n junction solar
cells. Journal of Applied Physics **32, 510–519 (1961).**
4. Yoshida, M., Ekins-Daukes, N. J., Farrell, D. J. & Phillips, C. C. Photon ratchet
intermediate band solar cells. Applied Physics Letters **100, 263902–263904 (2012).**
5. Pusch, A. & Ekins-Daukes, N. J. Voltage Matching, Étendue, and Ratchet Steps in
Advanced-Concept Solar Cells. Phys. Rev. Applied **12, 044055 (2019).**
6. Martí, A. & Luque, A. Electrochemical Potentials (Quasi-Fermi Levels) and the Operation
of Hot-Carrier, Impact-Ionization, and Intermediate-Band Solar Cells. IEEE Journal of
Photovoltaics **3, 1298–1304 (2014).**
7. Martí, A. From the Hot Carrier Solar Cell to the Intermediate Band Solar Cell, Passing
through the Multiple-Exciton Generation Solar Cell and then back to the Hot Carrier Solar
Cell: the Dance of the Electro-Chemical Potentials. in 36th European Photovoltaic Solar
_Energy Conference and Exhibition 6–12 (2019)._
-----
8. Martí, A. & Araújo, G. L. Limiting efficiencies for photovoltaic energy conversion in
multigap systems. Solar Energy Materials and Solar Cells **43, 203–222 (1996).**
9. Ramiro, I., Martí, A., Antolín, E. & Luque, A. Review of experimental results related to the
operation of intermediate band solar cells. IEEE Journal of Photovoltaics **4, 736–748**
(2014).
10. Okada, Y. et al. Intermediate band solar cells: Recent progress and future directions.
_Applied Physics Reviews_ **2, 21302 (2015).**
11. Luque, A. Will we exceed 50% efficiency in photovoltaics? Journal of Applied Physics
**110, 031301 (2011).**
12. Louwen, A., van Sark, W. G. J. H. M., Faaij, A. P. C. & Schropp, R. E. I. Re-assessment of
net energy production and greenhouse gas emissions avoidance after 40 years of
photovoltaics development. Nat Commun **7, 13728 (2016).**
13. Geisz, J. F. et al. Six-junction III–V solar cells with 47.1% conversion efficiency under 143
Suns concentration. Nature Energy **5, 326–335 (2020).**
14. CPV Solar Cells - AZUR SPACE Solar Power GmbH.
http://www.azurspace.com/index.php/en/products/products-cpv/cpv-solar-cells. Accessed:
2020-09-17.
15. Krishna, A. & Krich, J. J. Increasing efficiency in intermediate band solar cells with
overlapping absorptions. J. Opt. **18, 074010 (2016).**
16. Brown, A. S. & Green, M. A. Impurity photovoltaic effect: Fundamental energy conversion
efficiency limits. Journal of Applied Physics **92, 1329–1336 (2002).**
17. Cuadra, L., Martí, A. & Luque, A. Influence of the overlap between the absorption
coefficients on the efficiency of the intermediate band solar cell. IEEE Transactions on
Electron Devices **51, 1002–1007 (2004).**
18. López, E., Martí, A., Antolín, E. & Luque, A. On the Potential of Silicon Intermediate Band
Solar Cells. Energies **13, 3044 (2020).**
19. Martí, A., Cuadra, L. & Luque, A. Quantum dot intermediate band solar cell. in 28th
_Photovoltaic Specialists Conference_ 940–943 (IEEE, 2000).
-----
20. Luque, A. & Martí, A. A metallic intermediate band high efficiency solar cell. Progress in
_Photovoltaics: Research and Applications_ **9, 73–86 (2001).**
21. Krich, J. J., Halperin, B. I. & Aspuru-Guzik, A. Nonradiative lifetimes in intermediate band
photovoltaics—Absence of lifetime recovery. Journal of Applied Physics **112, 013707**
(2012).
22. Shan, W. et al. Band anticrossing in GaInNAs alloys. Physical Review Letters **82, 1221–**
1224 (1999).
23. Yu, K. M. et al. Diluted II-VI oxide semiconductors with multiple band gaps. Physical
_Review Letters_ **91, 246403 (2003).**
24. Ekins-Daukes, N. J. & Schmidt, T. W. A molecular approach to the intermediate band solar
cell: The symmetric case. Applied Physics Letters **93, 2–5 (2008).**
25. Singh-Rachford, T. N. & Castellano, F. N. Photon upconversion based on sensitized triplet–
triplet annihilation. Coordination Chemistry Reviews **254, 2560–2573 (2010).**
26. Palacios, P., Aguilera, I., Sánchez, K., Conesa, J. C. & Wahnón, P. Transition-Metal
Substituted Indium Thiospinels as Novel Intermediate-Band Materials: Prediction and
Understanding of Their Electronic Properties. Phys. Rev. Lett. **101, 046403 (2008).**
27. Jiang, L. et al. Semiconducting ferroelectric perovskites with intermediate bands via B-site
Bi5+ doping. Phys. Rev. B **90, 075153 (2014).**
28. Tablero, C. Survey of intermediate band materials based on ZnS and ZnTe semiconductors.
_Solar Energy Materials and Solar Cells_ **90, 588–596 (2006).**
29. Vörös, M., Galli, G. & Zimanyi, G. T. Colloidal Nanoparticles for Intermediate Band Solar
Cells. ACS Nano **9, 6882–6890 (2015).**
30. Jibran, M., Sun, X., Wang, B., Yamauchi, Y. & Ding, Z. Intermediate band solar cell
materials through the doping of group-VA elements (N, P, As and Sb) in Cu2ZnSiSe4.
_RSC Adv._ **9, 28234–28240 (2019).**
31. Luque, A. et al. General equivalent circuit for intermediate band devices: Potentials,
currents and electroluminescence. Journal of Applied Physics **96, 903–909 (2004).**
-----
32. Marsen, B., Klemz, S., Unold, T. & Schock, H.-W. Investigation of the Sub-Bandgap
Photoresponse in CuGaS2: Fe for Intermediate Band Solar Cells. Progress in
_Photovoltaics: Research and Applications_ **20, 625–629 (2012).**
33. Wang, W., Lin, A. S., Phillips, J. D. & Metzger, W. K. Generation and recombination rates
at ZnTe: O intermediate band states. Applied Physics Letters **95, 261103–261107 (2009).**
34. Simpson, C. et al. An intermediate band dye-sensitised solar cell using triplet-triplet
annihilation. Physical Chemistry Chemical Physics **17, 24826–24830 (2015).**
35. Martí, A. et al. Production of photocurrent due to intermediate-to-conduction-band
transitions: a demonstration of a key operating principle of the intermediate-band solar cell.
_Physical Review Letters_ **97, 247701–247704 (2006).**
36. Ramiro, I. et al. Two-photon photocurrent and voltage up-conversion in a quantum dot
intermediate band solar cell. in 2014 IEEE 40th Photovoltaic Specialist Conference, PVSC
_2014 3251–3253 (2014)._
37. Tamaki, R., Shoji, Y., Okada, Y. & Miyano, K. Spectrally Resolved Interband and
Intraband Transitions by Two-Step Photon Absorption in InGaAs/GaAs Quantum Dot Solar
Cells. IEEE Journal of Photovoltaics **5, 229–233 (2014).**
38. Antolín, E. et al. Reducing carrier escape in the InAs/GaAs quantum dot intermediate band
solar cell. Journal of Applied Physics **108, 064513 1–7 (2010).**
39. Luque, A., Martí, A. & Cuadra, L. Impact-ionization-assisted intermediate band solar cell.
IEEE Transactions on Electron Devices **50, 447–454 (2003).**
40. Antolín, E. et al. Advances in quantum dot intermediate band solar cells. in Conference
_Record of the IEEE Photovoltaic Specialists Conference 65–70 (2010)._
41. Ramiro, I. et al. Analysis of the intermediate-band absorption properties of type-II
GaSb/GaAs quantum-dot photovoltaics. Physical Review B **96, 125422 (2017).**
42. Ekins-Daukes, N. J., Honsberg, C. B. & Yamaguchi, M. Signature of intermediate band
materials from luminescence measurements. in Photovoltaic Specialists Conference 49–54
(IEEE, 2005).
-----
43. López, N., Reichertz, L. A., Yu, K. M., Campman, K. & Walukiewicz, W. Engineering the
Electronic Band Structure for Multiband Solar Cells. Physical Review Letters **106, 4 (2011).**
44. Lin, Y. H. L. et al. Enhanced sub-bandgap efficiency of a solid-state organic intermediate
band solar cell using triplet-triplet annihilation. Energy and Environmental Science **10,**
1465–1475 (2017).
45. Hill, S. P., Dilbeck, T., Baduell, E. & Hanson, K. Integrated Photon Upconversion Solar
Cell via Molecular Self-Assembled Bilayers. ACS Energy Lett. **1, 3–8 (2016).**
46. Sheu, J.-K. et al. Photoresponses of manganese-doped gallium nitride grown by
metalorganic vapor-phase epitaxy. Applied Physics Letters **102, 71103–71107 (2013).**
47. Hu, K. et al. Iron-incorporated chalcopyrite of an intermediate band for improving solar
wide-spectrum absorption. Journal of Solid State Chemistry **277, 388–394 (2019).**
48. Hu, K. et al. Intermediate Band Material of Titanium-Doped Tin Disulfide for Wide
Spectrum Solar Absorption. Inorg. Chem. **57, 3956–3962 (2018).**
49. Han, L., Wu, L., Liu, C. & Zhang, J. Doping-Enhanced Visible-Light Absorption of
CH3NH3PbBr3 by Bi3+-Induced Impurity Band without Sacrificing Bandgap. The Journal
_of Physical Chemistry C acs.jpcc.8b12026 (2019)._
50. Khoshsirat, N. et al. Efficiency enhancement of Cu2ZnSnS4 thin film solar cells by
chromium doping. Solar Energy Materials and Solar Cells **201, 110057 (2019).**
51. Sampson, M. D., Park, J. S., Schaller, R. D., Chan, M. K. Y. & Martinson, A. B. F.
Transition metal-substituted lead halide perovskite absorbers. J. Mater. Chem. A **5, 3578–**
3588 (2017).
52. Nematollahi, M. et al. Interpretation of photovoltaic performance of n-ZnO:Al/ZnS:Cr/p-GaP
solar cell. Solar Energy Materials and Solar Cells **169, 56–60 (2017).**
53. Garcia-Hemme, E. et al. Vanadium supersaturated silicon system: a theoretical and
experimental approach. J. Phys. D: Appl. Phys. **50, 495101 (2017).**
54. Lee, M.-L., Huang, F.-W., Chen, P.-C. & Sheu, J.-K. GaN intermediate band solar cells
with Mn-doped absorption layer. Sci Rep **8, 8641 (2018).**
-----
55. Sullivan, J. T., Simmons, C. B., Buonassisi, T. & Krich, J. J. Targeted Search for Effective
Intermediate Band Solar Cell Materials. IEEE J. Photovoltaics **5, 212–218 (2015).**
56. Ahsan, N. et al. Two-photon excitation in an intermediate band solar cell structure. Applied
_Physics Letters_ **100, 172111–172114 (2012).**
57. Antolín, E. et al. Intermediate band to conduction band optical absorption in ZnTeO. in
_Photovoltaic Specialists Conference (PVSC), 2012 38th vol. 4 1–5 (IEEE, 2012)._
58. Datas, A. et al. Intermediate band solar cell with extreme broadband spectrum quantum
efficiency. Physical Review Letters **114, (2015).**
59. Ramiro, I. et al. Three-Bandgap Absolute Quantum Efficiency in GaSb/GaAs Quantum Dot
Intermediate Band Solar Cells. IEEE Journal of Photovoltaics **7, 508–512 (2017).**
60. Smith, B. L. et al. Inverted growth evaluation for epitaxial lift off (ELO) quantum dot solar
cell and enhanced absorption by back surface texturing. in 2016 IEEE 43rd Photovoltaic
_Specialists Conference (PVSC) 1276–1281 (IEEE, 2016)._
61. Shoji, Y., Watanabe, K. & Okada, Y. Photoabsorption improvement in multi-stacked
InGaAs/GaAs quantum dot solar cell with a light scattering rear texture. Solar Energy
_Materials and Solar Cells_ **204, 110216 (2020).**
62. Feng Lu, H. et al. Plasmonic quantum dot solar cells for enhanced infrared response. Appl.
_Phys. Lett._ **100, 103505 (2012).**
63. Mellor, A., Luque, A., Tobías, I. & Martí, A. The feasibility of high-efficiency InAs/GaAs
quantum dot intermediate band solar cells. Solar Energy Materials and Solar Cells **130,**
225–233 (2014).
64. Ramiro, I. et al. InAs/AlGaAs quantum dot intermediate band solar cells with enlarged
sub-bandgaps. in Conference Record of the IEEE Photovoltaic Specialists Conference 652–656
(2012).
65. Tomić, S., Martí, A., Antolín, E. & Luque, A. On inhibiting Auger intraband relaxation in
InAs/GaAs quantum dot intermediate band solar cells. Appl. Phys. Lett. **99, 053504 (2011).**
66. Alivisatos, A. P. Semiconductor Clusters, Nanocrystals, and Quantum Dots. Science **271,**
933–937 (1996).
-----
67. Ramiro, I. et al. Size- and Temperature-Dependent Intraband Optical Properties of Heavily
n-Doped PbS Colloidal Quantum Dot Solid-State Films. ACS Nano **14, 7161–7169 (2020).**
68. Murray, C. B., Norris, D. J. & Bawendi, M. G. Synthesis and Characterization of Nearly
Monodisperse CdE (E = S, Se, Te) Semiconductor Nanocrystallites. Journal of the
_American Chemical Society_ **115, 8706–8715 (1993).**
69. Mendes, M. J. et al. Self-organized colloidal quantum dots and metal nanoparticles for
plasmon-enhanced intermediate-band solar cells. Nanotechnology **24, (2013).**
70. Ning, Z. et al. Quantum-dot-in-perovskite solids. Nature **523, 324–328 (2015).**
71. Sanchez, R. S. et al. Tunable light emission by exciplex state formation between hybrid
halide perovskite and core/shell quantum dots: Implications in advanced LEDs and
photovoltaics. Science Advances **2, (2016).**
72. Guyot-Sionnest, P. Electrical transport in colloidal quantum dot films. Journal of Physical
_Chemistry Letters_ **3, 1169–1175 (2012).**
73. Pandey, A. & Guyot-Sionnest, P. Slow Electron Cooling in Colloidal Quantum Dots.
_Science_ **322, 929–932 (2008).**
74. Deng, Z., Jeong, K. S. & Guyot-Sionnest, P. Colloidal quantum dots intraband
photodetectors. ACS Nano **8, 11707–11714 (2014).**
75. Lan, X. et al. Quantum dot solids showing state-resolved band-like transport. Nat. Mater.
**19, 323–329 (2020).**
76. Ngo, T. T. & Mora-Seró, I. Interaction between Colloidal Quantum Dots and Halide
Perovskites: Looking for Constructive Synergies. J. Phys. Chem. Lett. **10, 1099–1108**
(2019).
77. Hosokawa, H. et al. Solution-processed intermediate-band solar cells with lead sulfide
quantum dots and lead halide perovskites. Nature Communications **10, 4–6 (2019).**
78. Kim, J., Choi, D. & Jeong, K. S. Self-doped colloidal semiconductor nanocrystals with
intraband transitions in steady state. Chemical Communications **54, 8435–8445 (2018).**
79. Stavrinadis, A. & Konstantatos, G. Strategies for the Controlled Electronic Doping of
Colloidal Quantum Dot Solids. ChemPhysChem **17, 632–644 (2016).**
-----
| 12,076
|
{
"disclaimer": "Notice: The following paper fields have been elided by the publisher: {'references'}. Paper or abstract available at https://api.unpaywall.org/v2/10.1002/pip.3351?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1002/pip.3351, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "other-oa",
"status": "GREEN",
"url": "https://oa.upm.es/64919/2/IBSC_present_and_future_PIP3351.pdf"
}
| 2,020
|
[
"Review"
] | true
| 2020-10-21T00:00:00
|
[] | 12,076
|
en
|
[
{
"category": "Biology",
"source": "external"
},
{
"category": "Environmental Science",
"source": "s2-fos-model"
},
{
"category": "Agricultural and Food Sciences",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/00b65a44a837786e095eced730b2ddb8e4dfb825
|
[
"Biology"
] | 0.862643
|
Risk of Human Pathogen Internalization in Leafy Vegetables During Lab-Scale Hydroponic Cultivation
|
00b65a44a837786e095eced730b2ddb8e4dfb825
|
Horticulturae
|
[
{
"authorId": "17112333",
"name": "G. Riggio"
},
{
"authorId": "122511126",
"name": "Sarah Jones"
},
{
"authorId": "145667962",
"name": "K. Gibson"
}
] |
{
"alternate_issns": null,
"alternate_names": null,
"alternate_urls": [
"https://www.mdpi.com/journal/horticulturae",
"http://nbn-resolving.de/urn/resolver.pl?urn=urn:nbn:ch:bel-556387"
],
"id": "34ca0066-cdcf-441d-9ff1-11ee3c08fb05",
"issn": "2311-7524",
"name": "Horticulturae",
"type": "journal",
"url": "http://www.e-helvetica.nb.admin.ch/directAccess?callnumber=bel-556387"
}
|
Controlled environment agriculture (CEA) is a growing industry for the production of leafy vegetables and fresh produce in general. Moreover, CEA is a potentially desirable alternative production system, as well as a risk management solution for the food safety challenges within the fresh produce industry. Here, we will focus on hydroponic leafy vegetable production (including lettuce, spinach, microgreens, and herbs), which can be categorized into six types: (1) nutrient film technique (NFT), (2) deep water raft culture (DWC), (3) flood and drain, (4) continuous drip systems, (5) the wick method, and (6) aeroponics. The first five are the most commonly used in the production of leafy vegetables. Each of these systems may confer different risks and advantages in the production of leafy vegetables. This review aims to (i) address the differences in current hydroponic system designs with respect to human pathogen internalization risk, and (ii) identify the preventive control points for reducing risks related to pathogen contamination in leafy greens and related fresh produce products.
|
## horticulturae
_Review_
# Risk of Human Pathogen Internalization in Leafy Vegetables During Lab-Scale Hydroponic Cultivation
**Gina M. Riggio** **[1], Sarah L. Jones** **[2]** **and Kristen E. Gibson** **[2,]***
1 Cellular and Molecular Biology Program, Department of Food Science, University of Arkansas, Fayetteville,
AR 72701, USA; [email protected]
2 Department of Food Science, University of Arkansas, Fayetteville, AR 72704, USA; [email protected]
***** Correspondence: [email protected]; Tel.: +1-479-575-6844
Received: 13 February 2019; Accepted: 7 March 2019; Published: 15 March 2019
**Abstract: Controlled environment agriculture (CEA) is a growing industry for the production of**
leafy vegetables and fresh produce in general. Moreover, CEA is a potentially desirable alternative
production system, as well as a risk management solution for the food safety challenges within the
fresh produce industry. Here, we will focus on hydroponic leafy vegetable production (including
lettuce, spinach, microgreens, and herbs), which can be categorized into six types: (1) nutrient
film technique (NFT), (2) deep water raft culture (DWC), (3) flood and drain, (4) continuous drip
systems, (5) the wick method, and (6) aeroponics. The first five are the most commonly used in the
production of leafy vegetables. Each of these systems may confer different risks and advantages in the
production of leafy vegetables. This review aims to (i) address the differences in current hydroponic
system designs with respect to human pathogen internalization risk, and (ii) identify the preventive
control points for reducing risks related to pathogen contamination in leafy greens and related fresh
produce products.
**Keywords: hydroponic; leafy greens; internalization; pathogens; norovirus; Escherichia coli; Salmonella;**
_Listeria spp.; preventive controls_
**1. Introduction**
In 2018, the United States (U.S.) fresh produce industry was implicated in three separate
multi-state outbreaks linked to contaminated field-grown romaine lettuce from Arizona and California,
which produce 94.7% of the leafy greens in the U.S. [1]. The three leafy green outbreaks resulted in
294 illnesses and six deaths across the U.S. [2–4]. From 1973 to 2012, leafy greens comprised more
than half of the fresh produce-associated outbreaks reported in the U.S. [5]. While risk management
strategies and regulatory requirements (e.g., the Food Safety Modernization Act Produce Safety
Rule) were developed in response to produce-associated outbreaks, these are primarily applicable to
conventional, field-grown crops as opposed to controlled environment agriculture (CEA). Meanwhile,
CEA is a growing industry and a potentially desirable alternative production system, as well as a risk
management solution for the fresh produce industry. According to a 2017 survey of over 150 farms
worldwide, a total of 450,000 square feet of production space was added during a one-year period [6].
Moreover, 16% of responding farms had opened during that same one-year period [6].
For hydroponic systems to be a viable risk management strategy for addressing food safety issues
in the leafy vegetable industry, established CEA producers that use hydroponics must strive to balance
productivity with produce safety. Currently, there are minimal science-based reports on the benefits of
CEA overall with respect to product safety. Moreover, although conventional production systems have
made great strides through the adoption of Good Agricultural Practices (GAPs; e.g., Leafy Greens
Marketing Agreement), traditional field growers may look to CEA and hydroponics as an opportunity
-----
to enhance the safety of their product along with the longevity of their operations. This review aims
to (i) address the differences in current hydroponic system designs with respect to human pathogen
internalization risk, and (ii) identify preventive control points for reducing the risks related to pathogen
contamination in leafy greens and related fresh produce products.
_Review Methodology_
To inform this review paper, the authors searched the following databases: Web of Science,
PubMed, and Google Scholar. The key word search terms were a combination of the following:
foodborne pathogens, food safety, pathogen internalization, endophytic, hydroponic, soilless, soil-free
horticulture, greenhouse, indoor farm, growth chamber, leafy greens, lettuce, leafy vegetables,
microgreens, and herbs. Additional searches were done for specific human pathogens, including Shiga
toxin-producing Escherichia coli, Salmonella enterica, Listeria monocytogenes, human norovirus, and its
surrogates Tulane virus and murine norovirus. The authors further narrowed the search for studies
in hydroponic systems by searching for the names of specific types of systems such as deep water
culture, wick systems, nutrient film technique, continuous drips, as well as the phrases ‘flood and
drain’ and ‘ebb and flow’. Numerous studies have been conducted on pathogen internalization in
fresh produce as reviewed by Erickson [7], and these studies include all of the production systems and
produce types, as well as experimental designs investigating internalization outside of the ‘normal’
germination process (e.g., directly through stomata as opposed to roots). For the present review,
studies were excluded if they did not specifically study internalization via roots, if they did not include
a technique resembling soilless horticulture, or if they investigated internalization in produce
that is typically eaten raw but is not a leafy vegetable (e.g., tomato, cantaloupe, or berries). Based
on these criteria, 17 papers were identified for primary discussion in Section 5.
**2. Controlled Environment Agriculture (CEA) and Food Safety**
Controlled environment agriculture encompasses a variety of non-traditional farming methods
that take place inside climate-controlled buildings. Examples of CEA locations may include
greenhouses or high tunnels, which have transparent or translucent walls that let in natural sunlight.
CEA may also include indoor buildings or warehouse spaces with opaque walls that rely on
artificial lighting for photosynthesis. Greenhouses and fully indoor spaces require varying degrees
of climate modulation, such as heating, cooling, humidity control, CO2 injection, and supplemental
lighting. Indoor farmers often use soil-free horticultural techniques including hydroponics, aquaponics,
aeroponics, or growing on mats (e.g., Biostrate) and soil alternatives (e.g., coco coir). This review will
focus on hydroponic leafy vegetable production (including lettuce, spinach, microgreens, and herbs),
which can be categorized into six types: (1) nutrient film technique (NFT), (2) deep water raft culture
(DWC), (3) flood and drain, (4) continuous drip systems, (5) the wick method, and (6) aeroponics [8,9];
however, aeroponics will not be discussed in this review. Overall, each of these systems may confer
different risks and advantages in the production of leafy vegetables.
A 2016 survey of 198 indoor farms by Agrilyst [10], an indoor farm management and analytics
platform company, reported that 143/198 (72%) of farms grow leafy greens, herbs, or microgreens, and
98/198 (49%) of respondents use hydroponic greenhouses as their operating system. Furthermore, 86%
of the small CEA farms (<1500 square feet) stated that they planned to expand their farm size “over
the next five years,” as stated in the survey question [10]. Previous research on food safety practices on
small to medium-sized field-based farms demonstrates that these groups typically struggle to maintain
consistent food safety practices [11,12]. If these trends hold for indoor hydroponic farmers, it will
be imperative to deter inadequate food safety practices in beginner CEA growers before they expand.
In general, a preventive control point of particular concern in fresh produce production is agricultural
water quality. While numerous studies, as reviewed by De Keuckelaere et al. (2015), have investigated
the impact of agricultural water quality on the food safety aspects of field-grown crops [13], very little
attention has been given to their CEA counterparts. In hydroponic leafy vegetable farming, pathogen
-----
internalization via contaminated nutrient solution could be a significant issue as well as an obvious
control point; thus, more detailed research in this area is needed for developing relevant guidelines.
Furthermore, because hydroponic systems are often housed in built environments, pathogens
may more feasibly recirculate in air handling systems and in the recirculating water supply.
Microbiome studies of the built environment infrastructure suggest that humans are the main
driver of microbial diversity in these settings, and a wide variety of microbes occupy niches in
the buildings [14]. Additionally, human handling can contribute significantly to the contamination of
fresh produce [15]. Human pathogens commonly associated with contaminated fresh produce include
_Listeria monocytogenes, Salmonella enterica serovars, Shiga toxin-producing E. coli (STEC), and human_
noroviruses, which are the most common cause of gastroenteritis associated with fresh produce [16–18].
Each of these pathogens has characteristics that enable their survival in the built environment for weeks
to months or even years [19–21]. The presence of persistent microorganisms within the environment
could lead to the superficial deposition or even internalization of pathogens in leafy vegetables.
**3. Pathogen Internalization in Leafy Vegetables**
Internalization refers to the transfer of microorganisms from the environment to the inner tissue
of the plant. One of the earliest studies demonstrating pathogen internalization in fresh produce
was Hara-Kudo et al. [22]. The study was in response to a July 1996 outbreak in Sakai City, Japan
involving hydroponically grown radish sprouts contaminated with Escherichia coli O157:H7 that
sickened ~6000 people [23]. Hara-Kudo et al. [22] demonstrated that contamination of either the seed
or hydroponic water with E. coli O157:H7 can result in marked colonization of the edible parts of the
sprout. In addition, the frequency of internalization increased with increasing concentrations of E. coli
O157:H7 in the hydroponic water. Meanwhile, Itoh et al. [24] used immunofluorescence microscopy
and scanning electron microscopy to confirm pathogen contamination on the surface, in leaf stomata,
and on inner plant tissue such as xylem. The internalization of E. coli O157:H7 in lettuce cut edges has
also been observed, even following chlorine treatment [25]. In one of the first field trials, Solomon et
al. [26] demonstrated that soil (i) fertilized with E. coli O157:H7-contaminated manure or (ii) irrigated
with contaminated water both led to the internalization of E. coli O157:H7 in the lettuce tissue, as
confirmed by fluorescence microscopy. Since internalized pathogens cannot be effectively removed
by post-harvest disinfection [27], a large body of research has been conducted in order to address
the mechanisms, causes, and prevention of pathogen internalization in fresh produce, specifically
leafy vegetables.
It is well established, as shown in lab-based experiments, that foodborne pathogens can become
internalized and disseminated in plant crops via the plant root systems, through wounds in the cuticle,
or through stomata [28–30]. Multiple reviews have thoroughly
addressed pathogen internalization in leafy vegetables. Hirneisen et al. [30] concluded that
internalization is specific to the plant and pathogen, and that the use of soil or hydroponic media
highly impacts the absorption of microorganisms in produce. The authors go on to conclude that
healthy, non-injured roots appear to hinder the internalization of microorganisms, and that if an uptake
of pathogens does occur, the microbial load does not directly correlate with the concentration in leaves
and stems. Hirneisen et al. [30] determined that, in general, pathogen internalization within the edible
portion of leafy greens was observed less frequently in contaminated soil-based systems compared
to contaminated hydroponic systems. In studies where internalization was greater in soil, it was
attributed to root damage during growth [31] or features of soil, such as resident microorganisms,
that may suppress internalization through competition [31,32]. Other reviews support the notion that
hydroponic systems pose a greater internalization risk [7,32–34] with water as a common source of
contamination [35]. Therefore, it is critical to identify contamination risk factors within the various
hydroponic plant culture systems and define potential preventive control measures for hydroponic
leafy vegetable growers.
-----
**4. Hydroponic System Designs**
Hydroponic crop production combines irrigation and fertilization into one system by submerging
plant roots in buffered fertilizer salt solutions. Hydroponic plant culture systems and the terminology
used to describe them vary widely. However, there are some common design themes such as the
use or non-use of a solid horticulture substrate, active pumping or passive water flow, open-cycle
or closed-cycle water use, the degree to which the roots are submerged in water, the method of root
aeration, and whether the flow rate is zero, continuous, or intermittent (Table 1). These characteristics
are potentially relevant to pathogen internalization via roots because they determine the nature of the
physical contact between the plant root system and the nutrient solution.
The five systems most commonly described in the literature for growing leafy vegetables include
the NFT, DWC, flood and drain, continuous drip [36], and the wick method [37]. Aeroponics, where
roots are sprayed with a nutrient solution rather than submerged, can also be used for leafy vegetables.
However, the aeroponics technique was developed primarily for growing root crops for the herbal
supplement industry [38], and thus will not be discussed in this review. Hydroponic systems may also
be classified by the container type used, such as window boxes, troughs, rails, buckets, bags, slabs, or
beds [36,39]. For the purpose of this review, they have been grouped by how the roots interact with
the nutrient solution (Figure 1).
The preparation of seedlings for hydroponic systems includes germination and transplantation.
Germination is usually performed by adding one seed to a piece of a moistened solid medium called a
“plug”, which is often made of rockwool, or a netted cup filled with peat and perlite. Plugs must be
stabilized with a nutrient solution of pH = 4.5–5.6, sub-irrigated, and then germinated for 2–3 weeks
at 17–20 °C under a humidity dome. For NFT systems, it is of particular importance that the roots
penetrate the bottom of the plug before transplanting, so that they can extend into the nutrient
solution [39–41].
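For reference, the seedling-preparation parameters above can be collected into a small
configuration sketch (ours; the structure and field names are hypothetical, only the values come
from the text):

```python
# Hypothetical configuration capturing the plug germination parameters
# described above; field names are illustrative, values come from the text.

SEEDLING_PREP = {
    "plug_media": ["rockwool", "peat_perlite_netted_cup"],
    "nutrient_solution_pH": (4.5, 5.6),     # stabilization range
    "irrigation": "sub-irrigated",
    "germination_weeks": (2, 3),
    "germination_temp_C": (17, 20),
    "humidity_dome": True,
    # NFT-specific: roots must penetrate the plug bottom before transplant
    "nft_transplant_check": "roots_through_plug_bottom",
}

low, high = SEEDLING_PREP["nutrient_solution_pH"]
assert low <= high  # simple sanity check on the configured range
print(f"Germinate at {SEEDLING_PREP['germination_temp_C']} C, pH {low}-{high}")
```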
**Table 1. Hydroponic leafy vegetable systems compared to conventional farming systems.**

| |Deep Water Raft Culture|Nutrient Film Technique|Continuous Drip|Wick Method|Flood and Drain|Conventional, Field-based|
|---|---|---|---|---|---|---|
|Submergence of plant roots in nutrient solution|Roots are fully submerged in NS throughout the growing process.|Root tips touch a 1–10-mm film of NS running along the bottom of plastic gutters.|Roots grow through a solid matrix in a grow bed that is filled with NS.|Roots are fully submerged in NS throughout the growing process.|Roots grow through a solid matrix in a grow bed that is mostly filled with NS when flooded, and exposed to air when not flooded.|Roots are fully covered by the soil matrix and exposed to water through irrigation.|
|Water flow|No water flow.|NS is actively pumped continuously or intermittently at a low flow rate.|NS is actively pumped continuously at a low flow rate.|No water flow in the plant reservoir. NS is passively replenished through capillary action from the tank up through fibrous wicks.|Grow bed is periodically flooded with NS at a higher flow rate than NFT or drip, by active pumping, and then drained. The pump is typically timer-controlled.|Roots grow in soil and are watered by drip irrigation and surface watering.|
|Water recirculation|OC|CC|CC|OC|CC|OC|
|Solid phase|No|No|Yes|Yes|Yes|Soil, compost, manure|
|Method of root aeration|Injection|All but the root tips are exposed to the air inside the gutters.|Agitation from pump|Injection|Exposed to air during drained periods, from agitation by the pump during flood periods.|By ensuring adequate soil drainage|

Solid phase = Yes: gravel, perlite, vermiculite, pumice, expanded clay, plastic mats, plastic beads, rice hulls; NFT, nutrient film technique; NS, nutrient solution; OC, open-cycle; CC, closed-cycle; Soil: silt loams, sandy soils, or clay with good drainage.
-----
**Figure 1. Types of hydroponic plant culture systems.** “Deep water raft culture” may also be
referred to as “float hydroponics” [36], while “flood and drain” can be referred to as “ebb and
flow” [39]. The “continuous drip” system is typically called a “drip system” [36], but “continuous”
is used here to differentiate it from flood and drain systems that have similar construction, but the
pump runs intermittently.

By contrast, in commercial field-based lettuce production, lettuce is most often seeded directly
into the soil using pelleted seeds and a mechanical seeder; however, an increasing minority of
lettuce crops is transplanted. Generally, seedlings that are used for transplant are 4–6 weeks old,
sowed in 200-well seed trays, and germinated at a target temperature of 20 °C. Most irrigation is
performed by surface drip [42–45].

**5. Pathogen Internalization in Hydroponic Systems**

Few studies involve hydroponic systems that are representative of commercial operations.
Laboratory-scale plant cultivation resembling the hydroponic concept dominates the literature, using
Hoagland’s solution in trays, tubes, or flasks. This method is similar in concept to deep water culture,
as no pumps, recirculation, or aeration are typically used, and the roots are mostly or fully submerged
in the solution [31,46–49]. In some lab-based systems, plants were cultivated using an agar-solidified
hydroponic nutrient solution rather than a fluid solution. Two studies have utilized a NFT or NFT-like
system [50,51], while one study utilized a continuous drip system, but inoculated the solid phase as
opposed to the nutrient solution [52]. Research addressing the internalization of pathogens in leafy
vegetables across a variety of hydroponic systems has been summarized in Table 2.
-----
**Table 2. Investigations of pathogen internalization in leafy greens grown hydroponically by system type.**
**System Type** | **Solid Phase** | **Pathogen** | **Plant** | **Inoculation** | **Surface Sterilized** | **Compared with Soil** | **Internalization Outcome** | **Ref.**
[28]
[29]
Yes No Levels of all pathogens increased from
2 log to ~5–6 log CFU during 10-day
germination. Counts and SEM showed
a plant-specific effect (cress and radish
most susceptible), a pathogen-specific
effect (L. monocytogenes most
abundant), and an age-specific effect
(internalization was greater in
young plants)
Seeds soaked in 2 log CFU/mL,
and then air-dried on sterile filter
paper for 2 h at ~22 _[◦]C_
HA-GB N/A _E. coli O157:H7, Salmonella_
Typhimurium, and
_L. monocytogenes_
Carrot, cress,
lettuce, radish,
spinach and
tomato
DWC-L-T No _E. coli TG1 expressing GFP_ Corn seedlings 7 log CFU/mL added directly to No No Internalized E. coli TG1 detected in
(Zea mays) the 4-L tray of nutrient solution shoots. Entire root system removed
(430 CFU/g), root tips severed
(500 CFU/g), undamaged plants
(18 CFU/g).
DWC-L-F No GFP-expressing E. coli O157:H7
and S. Typhimurium (MAE 110
and 119)
DWC-L-T No GFP-expressing E. coli O157:H7
from a spinach outbreak and a beef
outbreak as well as a
non-pathogenic clinical E. coli
isolate
DWC-L-F Sand _S. Typhimurium (LT1 and S1) and_
_L. monocytogenes sv4b, L. ivanovii,_
_L. innocua_
DWC-L-C No Six strains of E. coli O157:H7, five
strains of S. Typhimurium and
_S. Enteritidis, six strains of_
_L. monocytogenes_
Lettuce
(Lactuca sativa cv.
Tamburo)
29 mL of hydroponic nutrient
solution with a final concentration
of 7 log CFU/mL
Yes Yes Hydroponic: S. Typhimurium MAE [31]
119 internalized at 5 log CFU/g.
[32]
[46]
[47]
Spinach 3 and 7 log CFU/mL or g added
directly to the nutrient solution or
soil. Group 1: Inoculated
hydroponic for 21 d; Group 2:
Hydroponic for 21 d, transplanted
into sterile soil; Group 3:
hydroponic for 21 d, transplanted
into inoculated soil
Barley 8 log CFU/mL suspension per
(Hordeum vulgare) bacterial species added directly to
the surface of the sand 1 to 2 days
after planting
Yes Yes At both 4 log and 7 log CFU/mL in
hydroponic water, between 2–4 log
CFU/shoot internalized pathogen
detected at cultivation day 14. Soil
recovery was negligible for both high
and low inocula and required
enrichment to detect. 23/108
soil-grown plants showed E. coli in
root tissues, but no internalization
in shoots.
Yes No _Salmonella internalized in roots, stems,_
and leaves, while Listeria spp. only
colonized the root hairs.
Spinach (Brassica
_rapa var._
perviridis)
3 or 6 log CFU/mL added directly No No Across all microorganisms, the 3 log
to the hydroponic water solution CFU/mL had an average recovery of
<1.7 log CFU/leaf in 7/72 samples.
The 6 log CFU/mL inoculum resulted
in better recovery (50/76 samples) in a
range of 1.7 to 4.4 log CFU/leaf.
-----
**Table 2. Cont.**
**System Type** | **Solid Phase** | **Pathogen** | **Plant** | **Inoculation** | **Surface Sterilized** | **Compared with Soil** | **Internalization Outcome** | **Ref.**
DWC-L-T No _E. coli O157:H7_ Spinach cultivars 5 or 7 log CFU/mL added directly
Space and Waitiki to the Hoagland medium.
Hoagland medium was
re-inoculated as needed to
maintain initial bacterial levels.
Yes Yes _E. coli O157:H7 internalized in 15/54_
samples at 7 days after inoculation
with 7 log CFU/mL. Neither curli or
spinach cultivar had an impact on the
internalization rate.
[48]
[49]
[50]
[51]
[52]
[53]
[54]
DWC-L-J Vermiculite _Coxsackievirus B2_ Lettuce (L. sativa) 7.62–9.62 log genomic copies/L in Unknown No Virus detected in leaves on the first
water solution day at all inoculation levels; however,
decreased to below LOD over the next
3 days.
NFT Rockwool plugs _E. coli P36 (fluorescence labeled)_ Spinach
(Spinacia oleracea L.
cv. Sharan)
NFT No MNV Kale microgreens
(Brassica napus)
and mustard
microgreens
(Brassica juncea)
2 to 3 log CFU/mL E. coli added to
the nutrient solution in the holding
tank. 2 log CFU/g was added
to soil.
Nutrient solution containing
~3.5 log PFU/mL on day 8
of growth
DS Peat
pellets/clay
pebbles
MNV (type 1), S. Thompson Basil MNV (8.46 log-PFU/mL) or S.
(FMFP 899) (Ocimum basilicum) Thompson (8.60 log-CFU/mL) via
soaking the germinating discs
for 1 h
DWC No _Citrobacter freundii PSS60,_
_Enterobacter spp. PSS11, E. coli_
PSS2, Klebsiella oxytoca PSS82,
_Serratia grimesii PSS72,_
_Pseudomonas putida PSS21,_
_Stenotrophomonas maltophilia PSS52,_
_L. monocytogenes ATCC 19114_
HA-TT N/A _Klebsiella pneumoniae 342,_
_Salmonella Cubana, Infantis, 8137,_
and Typhimurium; E. coli K-12, E.
coli O157:H7
Radish
(R. sativus L.)
microgreens
Alfalfa (M. sativa)
and Barrelclover
(M. truncatula)
Final concentration of 7 log
CFU/mL for each bacterium
added directly to the
nutrient solution
1 to 7 log CFU/mL added directly
to the growth medium at the
seedling root area after 1 day
of germination.
Yes Yes For hydroponic: total surface
(7.17 ± 1.39 log CFU/g), internal
(4.03 ± 0.95 log CFU/g). For soil: total
surface (6.30± 0.64 log CFU/g),
internal (2.91± 0.81 log CFU/g)
Unknown No MNV was internalized into roots and
edible tissues of both microgreens
within 2 h of nutrient solution
inoculation in all samples at 1.98 to
3.47 log PFU/sample. After 12 days,
MNV remained internalized and
detectable in 27/36 samples at 1.42 to
1.61 log PFU/sample.
No No MNV was internalized into edible
parts of basil via the roots with 400 to
580 PFU/g detected at day 1 p.i., and
the LOD was reached by day 6.
Samples were positive for S.
Thompson on days 3 and 6
post-enrichment.
Yes No _C. freundii PSS60, Enterobacter spp._
PSS11, K. oxytoca PSS82 were
suspected to have internalized in
hypocotyls. These three strains were
detected with and without the surface
sterilization of plant samples.
Yes No _K. pneumoniae 342 colonized root tissue_
at low inoculation levels. S. Cubana
H7976 colonized at high inoculation
levels. No difference between
_Salmonella serovars_
-----
**Table 2. Cont.**
**System Type** | **Solid Phase** | **Pathogen** | **Plant** | **Inoculation** | **Surface Sterilized** | **Compared with Soil** | **Internalization Outcome** | **Ref.**
HA-TT N/A _S. Dublin, Typhimurium,_ Lettuce
Enteritidis, Newport, Montevideo (Lactuca sativa cv.
Tamburo, Nelly,
Cancan)
10 µL of a 7 log CFU/mL
suspension per serovar added
directly to the 0.5% Hoagland’s
water agar containing two-week
old seedlings
DWC No hNoV GII.4 isolate 5 M, MNV, Romaine lettuce TV and MNV (6 log PFU/mL), and
and TV (Lactuca sativa) hNoV (6.46 log RNA copies/mL)
added directly to the nutrient
solution
DWC Vermiculite _E. coli O157:H7_ Red sails lettuce Started with 7 log CFU/mL and
(Lactuca sativa) maintained in water at 5 log
CFU/mL
DWC-(AP) Vermiculite Total coliforms Red sails lettuce No inoculation. Detected 2 to 4 log
(Lactuca sativa) CFU/mL natural concentration of
coliform bacteria in a pilot system
downstream of a cattle pasture
Yes Yes Hydroponic: S. Dublin, Typhimurium,
Enteritidis, Newport, and Montevideo
internalized in L. sativa Tamburo at
4.6 CFU/g, 4.27 CFU/g, 3.93 CFU/g,
~3 CFU/g, and ~4 log CFU/g,
respectively
Yes No TV, MNV, and hNoV detected in
leaves within 1 day. At day 14,
recovery levels were TV: 5.8 log
PFU/g, MNV: 5.5 log PFU/g, and
hNoV: 4 log RNA copies/g were
recovered
Yes No _E. coli O157:H7 internalized in_
contaminated lettuce of cut and uncut
roots. Mean uncut: 2.4 ± 0.7; Mean 2
cuts: 4.0 ± 1.9; Mean 3 cuts: 3.3 ± 1.3.
No significant difference was found
between two and three cuts.
Yes No UV light at 96.6% transmittance and a
flow rate of 48.3 L/min reduced total
coliforms by 3 log CFU/mL in water.
Internalized coliform was not
recovered from either samples or
control lettuce.
[55]
[56]
[57]
[58]
AP, aquaponics; C, cups; CFU, colony-forming units; DS, drip system; DWC, deep water culture; DWC-L, DWC-like; GB, grow beds; GFP, green fluorescent protein; HA, hydroponic agar;
hNoV, human norovirus; J, jars; LOD, limit of detection; MNV, murine norovirus; NFT, nutrient film technique; PFU, plaque forming units; p.i., post-inoculation; SEM, scanning electron
microscopy; T, trays; TT, test tubes; TV, Tulane virus.
Briefly, Table 2 is designed to highlight the key aspects impacting the microbial internalization
results of the lab-scale hydroponic studies, including the type of microorganisms, plant type and
cultivar, inoculation procedure, and the application of surface sterilization prior to microbial analysis.
With respect to surface sterilization, 12 out of the 17 studies cited in Table 2 specifically described
the application of a decontamination procedure prior to microbial recovery and detection. Most of
the investigators validated the decontamination procedures and showed the complete inactivation of
external microorganisms while maintaining the viability of internalized microorganisms.
_5.1. Deep Water Culture_
DWC systems are the most prominent hydroponic CEA systems used, thus making them of
heightened interest to researchers [59]. As outlined in Table 1, DWC systems traditionally do not have
a solid phase component, and yet many studies use a DWC-like system that does include various solid
phase components (Table 2). Therefore, for the purposes of this review, DWC-like systems without a
solid phase will be compared here, while those with a solid phase are discussed in Section 5.3.
In a traditional DWC system, Settanni et al. [53] used a variety of microorganisms (Table 2) to
inoculate the hydroponic solution for radish microgreen cultivation. To determine if internalization
occurred, the researchers sampled the mature hypocotyls of the plants and found that fewer than half of
the microorganisms were internalized and in “living form” in the plant tissue. Citrobacter
_freundii, Enterobacter spp., and Klebsiella oxytoca were found to have internalized within the hypocotyls._
These three strains were detected with and without the surface sterilization of plant samples, indicating
microbial persistence both externally as well as via internalization.
Macarisin et al. [48] used a DWC-like system with no solid phase to grow two spinach cultivars.
The researchers inoculated E. coli O157:H7 into the hydroponic medium and soil to study the impact
of (i) curli expression by E. coli O157:H7, (ii) growth medium, and (iii) spinach cultivar on the
internalization of the bacteria in plants. Curli are one of the major proteinaceous components of
the extracellular complex expressed by many Enterobacteriaceae [60]. When curli fibers are expressed,
they are often involved in biofilm formation, cell aggregation, and the mediation of host cell adhesion
and invasion [60]. Neither the curli expression by E. coli O157:H7 nor the spinach cultivar impacted
internalization. The authors found that under experimental contamination conditions, spinach grown
in soil showed more internalization incidences than spinach grown hydroponically. They also showed
that injuring the root system of hydroponically grown spinach increased the incidence of E. coli
O157:H7 internalization and dissemination throughout the plant. The authors therefore concluded that
E. coli O157:H7 internalization depends on root damage rather than on the growth medium per se;
the greater internalization in soil could be linked to (1) root damage occurring in soil or (2) increased
plant defenses in hydroponics, where plants were exposed to repeated contamination.
Similar to Macarisin et al. [48], Koseki et al. [47] utilized hydroponically cultivated spinach to
determine potential pathogen internalization. Briefly, the authors inoculated hydroponic medium
at two concentrations (3 and 6 log colony-forming units [CFU]/mL) with various strains of E. coli
O157:H7, S. Typhimurium and Enteritidis as well as L. monocytogenes. The authors observed that
the 3 log CFU/mL inoculum resulted in limited detection (seven out of 72 samples) of internalized
bacteria with an average concentration of <1.7 log CFU/leaf (i.e., limit of detection of the assay) across
all bacteria. The 6 log CFU/mL inoculum level resulted in greater detection (50 out of 76 samples)
ranging from >1.7 to 4.4 log CFU/leaf.
Meanwhile, Franz et al. [31] inoculated their hydroponic nutrient solution with 7 log CFU/mL of E.
_coli O157:H7 and S. Typhimurium (MAE 110 and MAE 119). The two morphotypes of S. Typhimurium,_
MAE 110 and 119, represent a multicellular phenotype with the production of aggregative fimbriae and
a wild-type phenotype lacking the fimbriae, respectively. The internalization of S. Typhimurium MAE
119 in the leaves and roots of lettuce Tamburo occurred at approximately 5 log CFU/g, while E. coli
O157:H7 did not result in any positive samples, thus indicating that internalization likely did not occur.
Additionally, S. Typhimurium MAE 110 was only detected at an average of 2.75 log CFU/g in roots.
The lack of internalization by the MAE 110 type within the hydroponic system was an interesting
finding, as it was previously suggested that the aggregative fimbriae are critical in the attachment and
colonization of plant tissue [61]. Finally, similar to Macarisin et al. [48], Franz et al. [31] hypothesized
that E. coli O157:H7 must be more dependent on root damage for the colonization of plant tissues, as
significant differences in internalization were observed between hydroponic and soil-grown lettuce,
with the latter more likely to cause root damage.
Interestingly, the study by Klerks et al. [55] also documented serovar-specific differences in the
endophytic colonization of lettuce with Salmonella enterica, as well as significant interactions between
_Salmonella serovar and lettuce cultivar with respect to the degree of colonization (CFU per g of leaf)._
More specifically, the root exudates of lettuce cultivar Tamburo were reported to attract Salmonella,
while other cultivars’ root exudates did not. These authors utilized a hydroponic agar system, which is
discussed further in Section 5.3.
Sharma et al. [32] reported one of the few studies that directly compared the hydroponic and
soil cultivation of spinach. The researchers determined that there was no detectable internalization
of E. coli in spinach cultivated in the soil medium. In comparison, 3.7 log CFU/shoot and 4.35 log
CFU/shoot of E. coli were detected in shoot tissue from all three replicate plants grown in inoculated
hydroponic solution on days 14 and 21, respectively. The authors suggested that the semisolid nature
of the hydroponic solution may have allowed motile E. coli cells to travel through the medium more
readily when compared to soil. In addition, populations of E. coli increased in the hydroponic solution
over time, while the soil population levels declined to less than 1 log CFU/g by day 21. This difference
is likely due to the lack of environmental stressors on E. coli cells in the hydroponic solution, which
improves the internalization capacity in spinach tissues.
DiCaprio et al. [56] investigated the internalization and dissemination of human norovirus
GII.4 and its surrogate viruses—murine norovirus (MNV) and Tulane virus (TV)—in romaine lettuce
cultivated in a DWC system. Seeds were germinated in soil under greenhouse conditions for 20 days
prior to placement in the DWC system with feed water. The feed water (800 mL) was inoculated
with 6 log RNA copies/mL of a human norovirus (hNoV) GII.4 or 6 to 6.3 log plaque-forming units
(PFU)/mL of MNV and TV to study the uptake of viruses by lettuce roots. Samples of roots, shoots,
and leaves were taken over a 14-day growth period. By day 1 post-inoculation, 5 to 6 log RNA copies/g
of hNoV were detected in all of the lettuce tissues, and these levels remained stable over the 14-day
growth period. For MNV and TV, the authors reported lower levels of infectious virus particles (1 to
3 log PFU/g) in the leaves and shoots at days 1 and 2 post-inoculation. MNV reached a peak titer
(5 log PFU/g) at day 3, whereas TV reached a peak titer (6 log PFU/g) at day 7 post-inoculation.
The authors suggested that it is possible that different viruses may have varying degrees of stability
against inherent plant defense systems, thus explaining the variation amongst the viruses within this
study, as well as other studies on this subject.
_5.2. Nutrient Film Technique_
While NFT is more commonly used by small operations, the NFT production share is growing [62].
If contaminated hydroponic nutrient water is capable of introducing pathogens via plant roots—and
the roots of NFT-grown plants make contact with the nutrient water only at root tips—it is worth
investigating if this reduced root surface contact (i.e., compared to DWC) has an impact on pathogen
internalization risk. If differences are identified, system choice could be added to food safety guidelines
for indoor-grown leafy greens, a recommendation with no analog in soil-based production guidance.
Unfortunately, at the time of this review, only two studies have been published
that address pathogen internalization using the NFT for hydroponic leafy green production (Table 2).
Warriner et al. [50] compared non-pathogenic E. coli P36 internalization in hydroponic spinach
and soil-grown spinach. For spinach grown in contaminated potting soil, E. coli P36 was detected
consistently from day 12 to day 35 post-inoculation on leaf surfaces at concentrations of 2 to
6 log CFU/g. However, E. coli P36 was not detected internally in roots or leaves until day 32 at
~2 log CFU/g. Meanwhile, 16 days post-inoculation, ~2 log CFU/g of E. coli P36 were detected in and
on roots, but not leaves. Both soil and NFT nutrient water had a starting concentration of 2 log CFU/mL
of E. coli P36. These data suggest that E. coli P36 internalizes poorly overall in soil-grown spinach, and
preferentially internalizes in the roots of hydroponic spinach. This is supportive of the hypothesis
that motile bacterial species may be a greater risk in hydroponic systems than in soil. However, these
results differ from the findings reported by Franz et al. [31] and Macarisin et al. [48] with respect to the
role of motility in the E. coli O157:H7 colonization of plant tissues cultivated in hydroponic systems.
A separate study demonstrated that MNV spread throughout a NFT system that had been used
in the cultivation of kale and mustard microgreens [51]. After inoculating the nutrient solution with
3.5 log PFU/mL of the virus on day 8 of cultivation, viral RNA was detected at 10⁴ to 10⁵ copies per
10-g microgreen sample, and internalized virus was detected at 1.5 to 2.5 log PFU per 10-g microgreen
sample. Similar levels were observed in roots and edible parts. Levels of virus in the nutrient water
lingered at ~2 log PFU/mL for up to 12 days. Moreover, the authors demonstrated cross-contamination
to the second batch of microgreens at 2 log PFU/sample of internalized virus.
These two studies suggest that both bacteria and viruses are capable of internalizing in leafy
greens within NFT systems, and to a greater degree than soil for bacteria [50]. However, non-standard
measurements and different starting inoculum concentrations between studies make true comparisons
difficult. For example, at both 4 log and 7 log CFU/mL contamination of hydroponic water in a DWC
system, 2 to 4 log CFU per spinach shoot of internalized E. coli O157:H7 was detected after
day 14 of cultivation. By contrast, Warriner et al. [50] detected ~2 log CFU/g of internalized E. coli
after 16 days of cultivation, but it is difficult to compare “grams” and “shoots” without knowing the
weight of the shoots, which was not reported. Additionally, it is unknown if certain E. coli strains
internalize more effectively than others. Indeed, species-specific and strain-specific differences have
been reported [28,31,46,55].
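To make the unit mismatch concrete, the following minimal Python sketch rescales a per-shoot count to a per-gram basis under several purely hypothetical shoot masses (no shoot weights were reported in the studies compared here). The assumed mass shifts the result by a full log unit, which is precisely why such cross-study comparisons are unreliable.

```python
import math

def log_cfu_per_shoot_to_per_gram(log_cfu_per_shoot: float,
                                  shoot_mass_g: float) -> float:
    """Rescale a log10 CFU/shoot value to log10 CFU/g.

    shoot_mass_g is an assumption: the studies compared here did not
    report shoot weights, so any chosen value is hypothetical.
    """
    return math.log10((10 ** log_cfu_per_shoot) / shoot_mass_g)

# E.g., a 3 log CFU/shoot result from a DWC study, recast on the
# per-gram basis used by Warriner et al. [50] for NFT spinach:
for assumed_mass_g in (5.0, 20.0, 50.0):  # hypothetical shoot masses
    per_g = log_cfu_per_shoot_to_per_gram(3.0, assumed_mass_g)
    print(f"assumed shoot mass {assumed_mass_g:>4} g -> {per_g:.2f} log CFU/g")
```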
The paucity of data related to NFT systems and the pathogen contamination of leafy greens
suggest that more research is needed. In particular, the standardization of NFT systems for research
purposes needs to be pursued. For instance, Warriner et al. [50] suggested that the rockwool plugs
used for seed germination and subsequent cultivation in their NFT system may have had a filtering
effect, as evidenced by the E. coli levels dropping in the system over time while increasing in soil.
If the rockwool plugs were submerged sufficiently to absorb contaminants, this may not have been a
true NFT system, as only the root tips should touch the water. It may also indicate that hydroponic
systems that use a solid phase (Figure 1) are at increased risk for internalization via root systems due
to the accumulation of contaminants in the growth medium during recirculation. Since only the plant
root tips are typically submerged in the contaminated nutrient solution in NFT, but internalization is
similar, perhaps the root tips are the principal routes of entry for human pathogens. Plant root cell division
and elongation occur to the greatest extent at root tips and also at root junctions [63], possibly leaving
ample opportunity for pathogen entry. However, as data accumulate, it may be revealed that NFT
systems do not differ from DWC production with respect to pathogen internalization risk.
_5.3. Other Hydroponic Systems_
While DWC and NFT currently comprise the majority of hydroponic systems utilized for leafy
green production, additional systems are used, as illustrated in Figure 1. To our knowledge, little to
no research has specifically been published on these lesser-known hydroponic systems. However,
continuous drip and flood and drain systems are essentially modifications of DWC with the addition of
a solid phase matrix and slight differences in how the water is circulated. Although not a
commercial-scale representation of either system, Kutter et al. [46] utilized quartz sand as a solid
phase matrix in combination with Hoagland’s medium for the germination and cultivation of barley
(Hordeum vulgare var. Barke) in large, glass tubes. Here, microorganisms were introduced to the
cultivation system by root-inoculation via the quartz sand matrix. While barley is not a leafy green,
the study authors demonstrated the colonization and internalization of the plant shoot (stem and
leaves) with S. Typhimurium after four weeks. In contrast to the other studies highlighted in Table 2,
Kutter et al. [46] inoculated the solid phase, although it is plausible to assume that microorganisms
that had been inoculated in the nutrient solution would migrate to the sand matrix.
Moriarty et al. [57] also utilized a DWC-like system containing vermiculite in transplant trays.
In this design, foam trays filled with a vermiculite mixture were directly seeded, and the trays were
submerged in a tank of hydroponic nutrient water inoculated to a final concentration of 5 log CFU/mL.
Holes at the base of the tray compartments allowed water to passively enter. Mean internalization for
roots with no cuts, two cuts, and three cuts was 2.4 ± 0.7 CFU/g, 4.0 ± 1.9 CFU/g, and 3.3 ± 1.3 CFU/g,
respectively. Carducci et al. [49] provided a similar system design to Moriarty et al. [57], and
demonstrated the internalization of enteroviruses in lettuce leaves via nutrient solution contaminated
with viruses. However, Carducci et al. [49] did not investigate the impact of damaged roots on the
level of internalization. The impact of root damage is discussed further in Section 6.2.
An additional study investigated the internalization of S. Thompson and MNV into the edible
parts of basil via the roots [52]. Here, the authors used a four-pot hydroponic drip system filled
with clay pebbles. Basil seeds were germinated in peat pellets and then transplanted to the drip
system. At six weeks old, basil plants in the peat pellets were removed from the pots and soaked
in an inoculum of either MNV or S. Thompson for 1 h. Li and Uyttendaele [52] reported varying
levels of MNV internalization on days 1 and 3 post-inoculation and positive S. Thompson on days
3 and 6 following sample enrichment. This study presents unique differences from the previously
discussed research utilizing DWC-like systems. Most notable is the inoculation method directly to
the plant roots via inoculum-soaked germination discs, as opposed to within the hydroponic nutrient
water. While this may be analogous to nutrient water interactions with solid matrices, additional
research specifically addressing the role of solid matrices in pathogen internalization by leafy greens
is warranted.
The studies presented in Table 2 also encompass those that utilize an experimental setup lacking
any representation of real-world hydroponic systems. Dong et al. [54] evaluated the rhizosphere
and endophytic colonization of alfalfa (Medicago sativa) and barrelclover (M. truncatula) sprouts by
enteric bacteria. Germinated seedlings with ~5 mm roots were transplanted into test tubes containing
10 mL of Jensen’s nitrogen-free medium with 0.45% agar followed by inoculation of the medium
(i.e., proximal to the seedling root area) 24 h later with prepared bacterial suspensions. Overall,
endophytic colonization was observed for all of the enteric bacteria strains, with Klebsiella pneumoniae
being the most efficient, and E. coli K-12 (generic strain) being the least efficient. The efficiency of all the
_Salmonella serovars and E. coli O157:H7 settled somewhere in the middle with respect to colonization_
abilities. For instance, a single CFU of Salmonella Cubana and Infantis inoculated to the root area
resulted in interior colonization of alfalfa within five days post-inoculation, thus suggesting that no
level of contamination is free of risk. Another primary observation from Dong et al. [54] was the
correlation between endophytic and rhizosphere colonization. More specifically, the authors showed
that as the colonization of the rhizosphere increased, there was a complimentary increase in the
endophytic colonization of alfalfa by all of the bacterial strains (r² = 0.729–0.951) except for E. coli K-12
(r² = 0.017) [54].
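For readers who wish to reproduce this kind of rhizosphere-endophyte correlation on their own data, a minimal Python sketch of the coefficient of determination (r²) is given below; the paired log CFU/g values are invented for illustration and are not data from Dong et al. [54].

```python
from statistics import mean

def r_squared(x, y):
    """Coefficient of determination (r^2) for paired observations,
    e.g., log10 CFU/g rhizosphere vs. endophytic counts."""
    mx, my = mean(x), mean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy ** 2 / (sxx * syy)

rhizosphere = [5.1, 5.8, 6.4, 7.0, 7.6]  # invented log10 CFU/g values
endophytic = [1.9, 2.6, 3.1, 3.9, 4.4]   # invented log10 CFU/g values
print(f"r^2 = {r_squared(rhizosphere, endophytic):.3f}")
```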
Jablasone et al. [28] also utilized a hydroponic agar system to investigate the interactions of E. coli
O157:H7, S. Typhimurium, and L. monocytogenes with plants at various stages in the production cycle.
While the authors reported on two cultivation study designs, our focus will be on the cultivation
studies lasting >10 days in which contaminated seeds were cultivated in 500-mL polypropylene flasks
containing hydroponic solution solidified with 0.8% (w/v) agar. Here, the seeds—seven different
plant types, including cress, lettuce, and spinach—were directly inoculated with pathogens (3.3 to
4.7 log CFU/g) and then germinated. Overall, pathogen levels increased significantly during the
10-day germination period. With respect to internalization, S. Typhimurium was detected in lettuce
seedlings at nine days, but not thereafter, and E. coli O157:H7 was detected in lettuce and spinach
seedlings also at nine days. Meanwhile, L. monocytogenes was not detected in the internal tissues of the
seedlings at any time point. Overall, the authors concluded that there seemed to be an age-specific
effect on pathogen internalization, with younger plants being more susceptible. In addition, there
were apparent plant-specific and pathogen-specific effects observed, with the latter also observed by
Kutter et al. [46] with respect to the lack of internalization of L. monocytogenes, while other pathogens
such as E. coli and Salmonella were internalized.
As alluded to in Section 5.1, the study by Klerks et al. [55] also utilized a hydroponic agar system
to study the plant and microbial factors that impact the colonization efficiency of five Salmonella
serovars with three commercially relevant lettuce cultivars (Cancan, Nelly, and Tamburo). Within
the same study, the authors investigated the association of Salmonella with lettuce Tamburo grown
in soil. For soil-based studies, only one serovar (Dublin) was detected in the plant tissue of lettuce
Tamburo with a concentration of 2.2 log CFU/g. Meanwhile, S. Dublin, Typhimurium, Enteritidis,
Newport, and Montevideo internalized in Tamburo at 4.6, 4.27, 3.93, ~3, and ~4 log CFU/g,
respectively, when cultivated hydroponically. Interestingly, while the prevalence
of Salmonella in lettuce plant tissues was not impacted by the lettuce cultivar, there was a significant
interaction between Salmonella serovar and cultivar with respect to the level of endophytic colonization
(CFU/g) during hydroponic cultivation. Klerks et al. [55] further demonstrated the active movement
of S. Typhimurium to the plant roots of lettuce Tamburo when placed in microcapillary tubes with root
exudates, as well as the upregulation of pathogenicity genes. More specifically, the authors identified
an organic compound in the root exudates that is used as a carbon source by Salmonella and observed
the initiation of processes that allow for host cell attachment [55].
**6. Targeted Preventive Controls in Hydroponic Systems for Leafy Vegetables**
_6.1. Production Water Quality and Whole System Decontamination_
6.1.1. Current Agricultural Water Quality Guidelines for Fresh Produce
Since water is central to hydroponic plant culture, maintaining microbial water quality should be
a primary control point for food safety. Guidelines for pre-harvest agricultural water have been put
forth by the Food and Drug Administration (FDA) through the Food Safety Modernization Act (FSMA)
and the Produce Safety Rule (PSR) (21 CFR § 112.42). Specifically, water used during growing activities
must meet a geometric mean of ≤126 CFU/100 mL generic E. coli and a statistical threshold value
of ≤410 CFU/100 mL generic E. coli based on a rolling four-year sample dataset. However, as with
most aspects of the PSR, requirements are based on field-grown raw agricultural commodities without
consideration for hydroponic systems. This raises the question of whether pre-harvest agricultural
water standards should remain the same or be more or less stringent for hydroponic production.
For instance, Allende and Monaghan [64] suggest hydroponic systems as a risk reduction strategy
for leafy green contamination, as the water does not come into contact with the edible parts of the
crop. However, this review has shown evidence to the contrary. Clearly, based on the data presented
in this review, this is not a simple question given the differences in pathogen internalization across
hydroponic system types as well as plant cultivars and pathogen strain type.
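As a worked illustration of how the PSR water criteria are evaluated, the Python sketch below computes the geometric mean (GM) and statistical threshold value (STV) from a set of generic E. coli counts, assuming the lognormal model in which the STV approximates the 90th percentile (log-mean plus 1.282 log-standard deviations, 1.282 being the standard normal z-score for the 90th percentile). The sample counts are invented for illustration.

```python
import math
import statistics

Z_90 = 1.282  # standard normal z-score for the 90th percentile

def gm_and_stv(counts):
    """GM and STV of generic E. coli counts (CFU/100 mL), assuming
    the counts are log10-normally distributed."""
    logs = [math.log10(c) for c in counts]
    mu, sigma = statistics.mean(logs), statistics.stdev(logs)
    return 10 ** mu, 10 ** (mu + Z_90 * sigma)

samples = [12, 35, 98, 150, 240, 61, 18, 410, 87, 29]  # invented data
gm, stv = gm_and_stv(samples)
print(f"GM  = {gm:6.1f} CFU/100 mL (criterion: <= 126)")
print(f"STV = {stv:6.1f} CFU/100 mL (criterion: <= 410)")
print("meets PSR water criteria:", gm <= 126 and stv <= 410)
```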
6.1.2. Risk of System Contamination
While maintaining high nutrient solution quality and preventing root damage are major factors
in preventing internalization in leafy greens, a clean hydroponic system can prevent microorganisms
from disseminating throughout the plant and beyond. For instance, Wang et al. [51] introduced MNV
into their experimental NFT system to determine the internalization and dissemination of the virus in
microgreens, as described in Section 5.2. After harvesting the microgreens on day 12, the remaining
microgreens, hydroponic growing pads, and nutrient solution were removed without further washing
or disinfection of the system. To start the new growth cycle, a new set of hydroponic growing pads
and microgreen seeds were utilized for germination. Fresh nutrient solution was used, and no MNV
was inoculated. Even still, MNV was detected in the nutrient solution for up to 12 days (2.26 to
1.00 log PFU/mL) during this second growing cycle and was also observed in both the edible tissues
and roots of the microgreens.
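Assuming the reported decline was log-linear between the two endpoints, a simple rate calculation (sketched below in Python; the limit of detection used for extrapolation is hypothetical) shows how slowly residual virus would be expected to clear from an uncleaned system.

```python
def log_linear_decline_rate(log_start, log_end, days):
    """Average decline rate (log10 PFU/mL per day), assuming the
    decline between the two reported endpoints is log-linear."""
    return (log_start - log_end) / days

# Endpoints reported for the second, uninoculated growth cycle [51]:
rate = log_linear_decline_rate(2.26, 1.00, 12)
print(f"~{rate:.3f} log PFU/mL per day")

# Extrapolated time to fall one further log, to a hypothetical LOD
# of 0 log (i.e., 1 PFU)/mL:
print(f"~{1.00 / rate:.0f} more days to reach the hypothetical LOD")
```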
In a brief review of the microbial composition of hydroponic systems in the Netherlands,
Waechter-Kristensen et al. [65] reported Pseudomonas spp. as the dominant species, with most of
the total aerobic bacteria attached to gutter, growth substrate, and plant roots. In a more sophisticated
analysis, Lopez-Galvez et al. [66] assessed two hydroponic greenhouse water sources for generic E. coli
as well as the pathogens Listeria spp., Salmonella enterica, and STEC. The authors found that generic
_E. coli counts were higher in reclaimed water than in surface water. Interestingly, Listeria spp. counts_
increased after adding the hydroponic nutrients in both surface and reclaimed water, although neither
source showed significant differences in generic E. coli counts. STEC was not identified in any sample,
but 7.7% of the water samples tested positive for Salmonella spp., and 62.5% of these were from the
reclaimed water source. Regardless, the microbial contamination of nutrient solution did not translate
into contaminated produce in this instance, as none of the tomato samples tested were positive for
target microorganisms.
Another consideration is the impact of hydroponic feed water recirculation on pathogen survival.
Routine system-wide water changes in hydroponic systems are likely costly and labor-intensive. As a
result, hydroponic practitioners typically monitor nutrient levels in real time or by routine sampling
and add nutrients and water as needed due to uptake and evaporation, respectively. Therefore,
the need arises for routine microbiological testing of feed water and for preparing nutrient solutions
with treated water to prevent the rapid spread of pathogens through systems. Furthermore, there
are no formal guidelines for how often to drain nutrient solution to waste and replace, rather than
replenish as needed, other than the obvious scenarios following plant disease outbreaks [39]. Research
is needed to demonstrate if such labor-intensive practices would have a beneficial effect on food safety
in hydroponic systems.
6.1.3. Water Treatment Strategies
Methods for the continuous control of microbial water quality in recirculating hydroponic systems
almost exclusively focus on the removal of plant pathogens and include membrane filtration [67], slow
sand filtration, [68–71], and ultraviolet (UV) light treatment [72–74]. Methods for pre-treating water
that are used to prepare nutrient solutions include ozonation [75], chlorination, iodine, or hydrogen
peroxide. Biological control agents are also used [76] and are discussed further in Section 6.3. Each of
these methods possesses advantages and disadvantages with respect to their practical use [72,77,78],
as outlined in Table 3.
While ozone is a proven water treatment strategy [79], some investigators have suggested [71,77]
that the ozonation of hydroponic nutrient water may lead to the precipitation of mineral nutrients
such as manganese and iron due to the strong oxidizing properties of ozone. However, Ohashi-Kaneko
et al. [75] found that the initial growth of tomato plants supplied with a nutrient solution prepared
with ozonated water at a dissolved ozone concentration of 1.5 mg/L was greater than in non-ozonated
water, indicating that ozonation is not only safe for young plants, but possibly beneficial. This is
the most vulnerable stage for hydroponic vegetables and leafy greens, indicating that ozonation is a
promising strategy particularly to prevent internalization at germination and early stages of growth.
Recently, Moriarty et al. [58] demonstrated that UV light successfully reduced natural levels
of total coliforms by 3 log CFU/mL in nutrient water in a pilot-scale DWC aquaponics system.
Moreover, lettuce samples were surface-sterilized using UV light in a biosafety cabinet as well as
a bleach/detergent mixture prior to testing for internalized coliform bacteria, of which none were
detected. Moriarty et al. [58] stated that this neither confirms nor refutes the effectiveness of UV
light in preventing coliform internalization by lettuce in DWC aquaponics in an open environment.
Nevertheless, the reduction of total coliforms in nutrient water is a desirable outcome and may be
included in prevention guidelines if these effects can be replicated.
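For reference, a log reduction is simply the log10 ratio of counts before and after treatment; the minimal Python sketch below (with illustrative counts) shows that the 3-log reduction reported for UV corresponds to 99.9% inactivation.

```python
import math

def log_reduction(count_before, count_after):
    """Log10 reduction achieved by a treatment step."""
    return math.log10(count_before / count_after)

# Illustrative counts only; any before/after pair with a 1000-fold
# difference yields the 3-log reduction reported for UV [58]:
lr = log_reduction(1.0e4, 1.0e1)
print(f"{lr:.1f}-log reduction = {(1 - 10 ** -lr) * 100:.1f}% inactivated")
```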
**Table 3. Water treatment strategies and associated advantages and disadvantages.**

| Method | Advantages | Disadvantages |
|---|---|---|
| Membrane filtration | Precise filtration; can choose pore size to suit needs | Reduced flow rate; easy clogging |
| Slow sand filtration | Most common; inexpensive; a variety of substrate choices | May not effectively remove pathogens on its own |
| UV light treatment | Can be combined with slow sand filtration for high efficiency | Water needs high clarity, so must be combined with a sediment filter to ensure maximum light penetration |
| Chlorination | Inexpensive; standard recommendation | Storage issues; toxic to humans |
| Iodine | Less toxic than chlorine | Need high doses to be effective; costly |
| Hydrogen peroxide | Less toxic than chlorine; weak oxidizer | Need high doses to be effective; costly |
| Ozonation | Non-toxic to humans; no residues left behind | Strong oxidizer may cause hydroponic mineral nutrients to precipitate, reducing bioavailability |
| Biological control agents | Takes advantage of natural features of the system to suppress pathogens without addition of harsh chemicals | Inconsistent; difficult to maintain microbial numbers sufficient to suppress pathogens; manipulation of the microbiome for this purpose is still a poorly understood research area |
_6.2. Minimizing Root Damage_
Damage to root tissue has been suspected to increase pathogen internalization in soil cultivation
of leafy greens, but multiple reviews of current evidence suggest that only damage at root tips and
lateral root junctions increases internalization under experimental conditions [7,30,48]. Similarly, root
damage in most hydroponic studies is experimenter-induced. These bench-scale investigations
demonstrate that to some extent, root damage is linked to increased internalization in hydroponics as
well. However, it is not known if incidental damage is more likely to occur in hydroponic systems
or soil.
As discussed in Section 5.3, Moriarty et al. [57] demonstrated that intentionally severing root
tips did increase E. coli O157:H7 internalization in deep water cultivated lettuce compared to uncut
controls. While two cuts did increase internalization in a hydroponic system over uncut roots, adding
a third cut did not show a statistically significant increase in internalization. Similarly, within a DWC
cultivation system inoculated with 7 log CFU/mL of E. coli TG1, bacterial density was greater after 48
h in the shoots of corn seedlings with the entire root system removed (430 CFU/g) and with the root
tips severed (500 CFU/g) compared to undamaged plants (18 CFU/g) [29]. These findings are similar
to those in soil-based studies.
Guo et al. [80] utilized a DWC system and reported internalization of Salmonella serovars
(Montevideo, Poona, Michigan, Hartford, Enteritidis) in the leaves, stems, hypocotyls, and cotyledons
of tomato plants with both damaged and undamaged roots. The initial inoculum level was 4.46
to 4.65 log CFU/mL, and at nine days post-inoculation, Salmonella serovars remained between
3.5–4.5 log CFU/mL. Interestingly, internalization was greater in undamaged root systems when
compared to damaged roots.
_6.3. Biological Control_
Since many hydroponic system designs involve the recirculation of nutrient water, the risk
of pathogen spread via water in these systems has attracted considerable attention. The rapid
advancement of next-generation sequencing technologies in recent years has spawned a research
effort to characterize the microbiome of “-ponics” systems and to use this information to develop
“probiotic” disease prevention strategies. Most of this work has been focused on the prevention of
plant pathogens because of their direct impact on crop yield [81]. It is reasonable to assume that
pathogens, where the plant is the natural host, will respond differently to biological control treatments
compared to pathogens that primarily infect humans. Nevertheless, a few studies have demonstrated
a proof of concept that the introduction of putatively beneficial microorganisms has a noticeable effect
on the plant microbiome, of which pathogens may or may not be a part [81–84].
Thus far, it has been demonstrated that the addition of beneficial bacteria or fungi to hydroponic
systems may improve plant growth in some cases, either indirectly by the suppression of diseases such
as root rot [85] or by improving nutrient bioavailability and uptake by altering the rhizosphere [86].
In other cases, the biological control gave mixed results. For example, Giurgiu et al. [87] found that
_Trichoderma spp. acted as a growth promoter, but not a disease suppressor. Although not purposely a_
study on bioinoculation, Klerks et al. [55] offered a hypothesis for the difference in the internalization of
Salmonella in lettuce grown in soil versus axenically in a hydroponic agar-based system. More specifically,
the authors suggested that the limited endophytic colonization in soil-grown lettuce was due to the
presence of native rhizosphere bacteria, and conversely, that the absence of such bacteria in the axenic
system allowed _Salmonella easier access to the roots._
Despite a growing body of research on plant protection, there are currently no studies on the use of
beneficial bacteria or fungi to suppress the growth of human pathogens in and on crops in hydroponic
systems. The biological control of fish and plant pathogens has been attempted in aquaponics [88].
Of the 924 bacterial isolates from the aquaponics system itself, 42 isolates were able to suppress
the plant pathogen Pythium ultimum and the fish oomycete pathogen Saprolegnia parasitica in vitro. Such
interventions have not yet been tested in either bench-scale or larger hydroponic systems.
_6.4. Plant Cultivar Selection_
A few studies presented in this review have demonstrated the difference in pathogen
internalization and colonization across plant cultivars, which raises the question as to whether
cultivar selection could be a preventive control for the leafy vegetable hydroponics industry.
As previously discussed in Section 5.3, Klerks et al. [55] demonstrated an interaction between
the level (i.e., CFU/g leaf) of endophytic colonization of Salmonella and lettuce cultivar during
hydroponic cultivation. Moreover, Klerks et al. demonstrated a specific interaction of Salmonella
with root exudates from cultivar Tamburo, suggesting chemotaxis of Salmonella to the roots, and thus
further aiding internalization. Another hydroponic agar system study [28] reported differences in
endophytic microbial colonization, although these differences were across plant genera and not
cultivars within a specific species; even still, the authors demonstrated a plant-specific effect on the
internalization of bacteria.
Meanwhile, although not based on a hydroponic cultivation system, Erickson et al. [89]
investigated the ability of Salmonella to internalize in seven cultivars of leafy greens and one cultivar of
Romaine lettuce. The authors spray-inoculated the foliage of three-week-old transplants with green
fluorescent protein (GFP)-labeled Salmonella (Enteritidis and Newport) and evaluated internalization
at 1 and 24 h post-inoculation (p.i.). Simultaneously, non-inoculated plants were analyzed for total
phenols and antioxidant capacity. Erickson et al. reported cultivar as a significant variable for the
internalization of Salmonella via contaminated foliage. More specifically, leafy green cultivar Muir
was the most likely to show endophytic colonization 1 h and 24 h p.i. Interestingly, there was an
inverse relationship between the concentration of antimicrobials (i.e., phenols and antioxidants) and
internalization prevalence, suggesting the importance of plant defenses against human pathogenic
bacteria. However, overall, the path toward risk-based preventive controls based on cultivar selection
in hydroponic production needs further investigation.
**7. Potential Actual Health Risk from Consumption of Leafy Vegetables with**
**Internalized Pathogens**
While this review has focused on the risk of pathogen internalization in leafy vegetables grown
hydroponically, how does this translate to actual human health risk? To begin, determining the specific
health risk from internalized pathogens in leafy vegetables as opposed to contamination in general
is difficult. Clearly, there is a risk of illness regardless of where the pathogen is located on the edible
portion of the leafy vegetable; however, the primary concern with respect to internalized pathogens
is the inability to inactivate through post-harvest disinfection practices, as stated previously in this
review (Section 3). As noted by Saper [90], one of the major limiting factors in decontamination
efficacy includes the internalization of microbial contaminants within plant tissues, which basically
precludes effective disinfection by washing or sanitizing agents.
Another aspect to consider is the infectious dose linked to the primary pathogens of concern
for leafy vegetable contamination. L. monocytogenes, STECs, Salmonella, and human enteric viruses
have all been documented to cause illness with as few as 10 to 100 infectious units (i.e., bacterial
cells or virus particles) [91,92]. On the other hand, there exists extreme variability across strains of
specific pathogens with respect to the estimated dose and resulting response (i.e., gastroenteritis).
Based on the variable infectious dose as well as the average serving size of leafy vegetables (i.e., 1 to
2 cups, or approximately 75 to 150 g) [93] and the data reported in Table 2, illness from the ingestion
of leafy vegetables with internalized pathogens is highly probable in the event of gross contamination.
Unfortunately, the microbial load that is internalized under natural
growing conditions has not been well-characterized. For example, in the event of a foodborne disease
outbreak linked to leafy vegetables, not only is it rare to have product left to test, but if the pathogen of
concern is detected, then whether the contamination was external or internal is not usually determined.
Moreover, host factors including age, immune status, and gastrointestinal characteristics (e.g., stomach
acid levels, commensal bacteria, immune cells) also play a critical role in the required infectious dose.
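To illustrate the scale of the concern, the sketch below multiplies an internalized concentration by the serving sizes cited above, assuming, purely hypothetically, that internalized cells are distributed uniformly through the edible tissue; even a modest 1.5 log CFU/g yields doses well above the 10 to 100 infectious units noted earlier.

```python
def ingested_dose_cfu(log_cfu_per_g, serving_g):
    """Expected ingested dose (CFU), assuming internalized cells are
    distributed uniformly through the edible tissue (an assumption)."""
    return (10 ** log_cfu_per_g) * serving_g

# ~1.5 log CFU/g internalized (within the range in Table 2) and the
# 75-150 g serving sizes cited in the text:
for serving_g in (75, 150):
    dose = ingested_dose_cfu(1.5, serving_g)
    print(f"{serving_g:>3} g serving -> ~{dose:,.0f} CFU "
          f"(vs. 10-100 infectious units)")
```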
**8. Conclusions**
This review aimed to highlight the risks associated with human pathogen internalization in leafy
vegetables cultivated in lab-scale hydroponic systems. The studies presented within this review
(Table 2) overwhelmingly suggest that human pathogens—both viruses and bacteria—are readily
internalized within plant tissues via the uptake of contaminated nutrient solution through the root
system. The data also demonstrate the immense amount of variability in the hydroponic system
setup, bacteria and virus type selection, method of inoculation, and plant cultivar selection, as well as
techniques for the recovery and detection of microorganisms within plant tissues.
With respect to the recovery and detection of microorganisms, a few methodological differences merit
mention. For instance, Warriner et al. [50] utilized non-pathogenic, bioluminescent E. coli P36 for
detection by fluorescence imaging as well as the β-glucuronidase (GUS) assay, where the gene for the
enzyme β-glucuronidase was used as a reporter to measure cell viability and distribution. Sharma
et al. [32] tested three strains of genetically engineered GFP-expressing E. coli O157:H7 detected by
immunofluorescence. Additionally, not all investigators performed a leaf surface sterilization prior
to microbial detection to rule out epiphytic bacteria [46,47,52]. However, confounding by natural
contamination at significant levels is unlikely, given the high inoculation levels of the specific strains
used in these studies combined with the aseptic environment of lab-scale systems. Furthermore, surface
sterilization protocols vary widely, and may be differentially effective.
As hydroponic systems, particularly DWC, continue to increase in popularity, the impact of
plant cultivar, system type, and microbial type/strain on microorganism internalization needs further
characterization. In order to further the knowledge and understanding within this specialized research
area, several recommendations for the standardization of research related to hydroponic cultivation of
leafy vegetables for the investigation of interactions with human pathogens have been provided:
- Development of standard guidelines for lab-scale hydroponic cultivation of leafy vegetables to enable study comparison. This includes seed germination protocols, best practices for water management, and design specifications for each type of hydroponic system.
- Determine appropriate pathogen inoculation concentrations and methods for the research question being addressed. Should there be a range of concentrations considered? How does the inoculation of the seed at germination versus inoculation of the nutrient solution change the interpretation of the results?
- Does the presence of a solid substrate impact colonization efficiency? Is there a differential effect between contamination of the substrate and the contamination of nutrient water flowing through it?
- Standardization of microbial extraction methods from plants to ensure the recovery of truly endophytic microorganisms.
- Selection of microorganisms should be standardized. For instance, surrogate microorganisms should be validated as representative of their human pathogen counterparts. Strains of human pathogens should also be carefully considered and validated for use in hydroponic cultivation systems.
- Given the variation in the susceptibility of plants to pathogen colonization, the selection of plant cultivars should be standardized to represent commercially relevant cultivars, and the validation of cultivars used in hydroponic research is needed.
**Author Contributions: All of the authors contributed equally to the conception, writing, and final review of**
the manuscript.
**Acknowledgments: This research was supported in part by the National Institute of Food and Agriculture (NIFA),**
U.S. Department of Agriculture (USDA), Hatch Act.
**Conflicts of Interest: The authors declare no conflict of interest.**
**References**
1. California Leafy Green Products Handler Marketing Agreement (LGMA). Farming Leafy Greens. 2019.
[Available online: https://lgma.ca.gov/about-us/farming-leafy-greens/ (accessed on 9 January 2019).](https://lgma.ca.gov/about-us/farming-leafy-greens/)
2. Centers for Disease Control and Prevention (CDC). Multistate Outbreak of E. coli O157:H7 Infections Linked
[to Romaine Lettuce (Final Update)—Case Count Maps. 2018. Available online: https://www.cdc.gov/ecoli/](https://www.cdc.gov/ecoli/2018/o157h7-04-18/map.html)
[2018/o157h7-04-18/map.html (accessed on 9 January 2019).](https://www.cdc.gov/ecoli/2018/o157h7-04-18/map.html)
3. Centers for Disease Control and Prevention (CDC). Multistate Outbreak of Shiga toxin-producing Escherichia
coli O157:H7 Infections Linked to Leafy Greens (Final Update)—Case Count Maps. 2018. Available online:
[https://www.cdc.gov/ecoli/2017/o157h7-12-17/map.html (accessed on 9 January 2019).](https://www.cdc.gov/ecoli/2017/o157h7-12-17/map.html)
4. Centers for Disease Control and Prevention (CDC). Outbreak of E. coli Infections Linked to Romaine
[Lettuce—Timeline of Reported Cases. 2018. Available online: https://www.cdc.gov/ecoli/2018/o157h7-11-](https://www.cdc.gov/ecoli/2018/o157h7-11-18/epi.html)
[18/epi.html (accessed on 9 January 2019).](https://www.cdc.gov/ecoli/2018/o157h7-11-18/epi.html)
5. Herman, K.M.; Hall, A.J.; Gould, L.H. Outbreaks attributed to fresh leafy vegetables, United States, 1973–2012.
_[Epidemiol. Infect. 2015, 143, 3011–3021. [CrossRef] [PubMed]](http://dx.doi.org/10.1017/S0950268815000047)_
6. Agrilyst. State of Indoor Farming. 2017. Available online: [https://www.agrilyst.com/stateofindoorfarming2017/](https://www.agrilyst.com/stateofindoorfarming2017/) (accessed on 12 March 2019).
7. Erickson, M.C. Internalization of fresh produce by foodborne pathogens. Annu. Rev. Food Sci. Technol. 2012,
_[3, 283–310. [CrossRef] [PubMed]](http://dx.doi.org/10.1146/annurev-food-022811-101211)_
8. [Jensen, M.H. Hydroponics Worldwide. Acta Hortic. 1999, 481, 719–729. [CrossRef]](http://dx.doi.org/10.17660/ActaHortic.1999.481.87)
9. Asaduzzaman, M.; Saifullah, M.; Mollick, A.K.M.S.R.; Hossain, M.M.; Halim, G.M.A.; Asao, T. Influence of
Soilless Culture Substrate on Improvement of Yield and Produce Quality of Horticultural Crops. In Soilless
_Culture—Use of Substrates for the Production of Quality Horticultural Crops; InTech: London, UK, 2015._
10. Agrilyst. State of Indoor Farming. 2016. Available online: [https://www.agrilyst.com/stateofindoorfarming2016/](https://www.agrilyst.com/stateofindoorfarming2016/) (accessed on 9 January 2019).
11. Harrison, J.A.; Gaskin, J.W.; Harrison, M.A.; Cannon, J.L.; Boyer, R.R.; Zehnder, G.W. Survey of food
safety practices on small to medium-sized farms and in farmers markets. J. Food Prot. 2013, 76, 1989–1993.
[[CrossRef] [PubMed]](http://dx.doi.org/10.4315/0362-028X.JFP-13-158)
12. Behnke, C.; Seo, S.; Miller, K. Assessing food safety practices in farmers’ markets. Food Prot. Trends 2012,
_32, 232–239._
13. De Keuckelaere, A.; Jacxsens, L.; Amoah, P.; Medema, G.; Mcclure, P.; Jaykus, L.A.; Uyttendaele, M. Zero risk
does not exist: Lessons learned from microbial risk assessment related to use of water and safety of fresh
[produce. Compr. Rev. Food Sci. Food Saf. 2015, 14, 387–410. [CrossRef]](http://dx.doi.org/10.1111/1541-4337.12140)
14. Stamper, C.E.; Hoisington, A.J.; Gomez, O.M.; Halweg-Edwards, A.L.; Smith, D.G.; Bates, K.L.; Kinney, K.A.;
Postolache, T.T.; Brenner, L.A.; Rook, G.A.W.; et al. The microbiome of the built environment and human
behavior: Implications for emotional health and well-being in postmodern western societies. Int. Rev.
_[Neurobiol. 2016, 131, 289–323. [CrossRef]](http://dx.doi.org/10.1016/bs.irn.2016.07.006)_
15. Todd, E.C.; Greig, J.D.; Bartleson, C.A.; Michaels, B.S. Outbreaks where food workers have been implicated
in the spread of foodborne disease. Part 5. Sources of contamination and pathogen excretion from infected
[persons. J. Food Prot. 2008, 71, 2582–2595. [CrossRef] [PubMed]](http://dx.doi.org/10.4315/0362-028X-71.12.2582)
16. Harris, L.J.; Farber, J.N.; Beuchat, L.R.; Parish, M.E.; Suslow, T.V.; Garrett, E.H.; Busta, F.F. Outbreaks
associated with fresh produce: Incidence, growth, and survival of pathogens in fresh and fresh-cut produce.
_[Compr. Rev. Food Sci. 2003, 2, 78–141. [CrossRef]](http://dx.doi.org/10.1111/j.1541-4337.2003.tb00031.x)_
17. Heaton, J.C.; Jones, K. Microbial contamination of fruit and vegetables and the behaviour of enteropathogens
[in the phyllosphere: A review. J. Appl. Microbiol. 2008, 104, 613–626. [CrossRef]](http://dx.doi.org/10.1111/j.1365-2672.2007.03587.x)
18. Berger, C.N.; Sodha, S.V.; Shaw, R.K.; Griffin, P.M.; Pink, D.; Hand, P.; Frankel, G. Fresh fruit and vegetables
[as vehicles for the transmission of human pathogens. Environ. Microbiol. 2010, 12, 2385–2397. [CrossRef]](http://dx.doi.org/10.1111/j.1462-2920.2010.02297.x)
[[PubMed]](http://www.ncbi.nlm.nih.gov/pubmed/20636374)
19. Wilks, S.A.; Michels, H.; Keevil, C.W. The survival of Escherichia coli O157 on a range of metal surfaces. Int. J.
_[Food Microbiol. 2005, 105, 445–454. [CrossRef] [PubMed]](http://dx.doi.org/10.1016/j.ijfoodmicro.2005.04.021)_
20. Lopman, B.; Gastanaduy, P.; Park, G.W.; Hall, A.J.; Parashar, U.D.; Vinjé, J. Environmental transmission of
[norovirus gastroenteritis. Curr. Opin. Virol. 2012, 2, 96–102. [CrossRef]](http://dx.doi.org/10.1016/j.coviro.2011.11.005)
21. Zhu, Q.; Gooneratne, R.; Hussain, M.A. Listeria monocytogenes in fresh produce: Outbreaks, prevalence and
[contamination levels. Foods 2017, 6, 21. [CrossRef]](http://dx.doi.org/10.3390/foods6030021)
22. Hara-Kudo, Y.; Konuma, H.; Iwaki, M.; Kasuga, F.; Sugita-Konishi, Y.; Ito, Y.; Kumagai, S. Potential hazard of
[radish sprouts as a vehicle of Escherichia coli O157:H7. J. Food Prot. 1997, 60, 1125–1127. [CrossRef]](http://dx.doi.org/10.4315/0362-028X-60.9.1125)
23. [Gutierrez, E. Is the Japanese O157:H7 E. coli epidemic over? Lancet 1996, 348, 1371. [CrossRef]](http://dx.doi.org/10.1016/S0140-6736(05)65421-9)
24. Itoh, Y.; Sugita-Konishi, Y.; Kasuga, F.; Iwaki, M.; Hara-Kudo, Y.; Saito, N.; Noguchi, Y.; Konuma, H.;
Kumagai, S. Enterohemorrhagic Escherichia coli O157:H7 present in radish sprouts. Appl. Environ. Microbiol.
**1998, 64, 1532–1535.**
25. Seo, K.H.; Frank, J.F. Attachment of Escherichia coli O157:H7 to lettuce leaf surface and bacterial viability in
response to chlorine treatment as demonstrated by using confocal scanning laser microscopy. J. Food Prot.
**[1999, 62, 3–9. [CrossRef]](http://dx.doi.org/10.4315/0362-028X-62.1.3)**
26. Solomon, E.B.; Yaron, S.; Matthews, K.R. Transmission of Escherichia coli O157:H7 from contaminated manure
and irrigation water to lettuce plant tissue and its subsequent internalization. Appl. Environ. Microbiol. 2002,
_[68, 397–400. [CrossRef]](http://dx.doi.org/10.1128/AEM.68.1.397-400.2002)_
27. Gil, M.I.; Selma, M.V.; López-Gálvez, F.; Allende, A. Fresh-cut product sanitation and wash water disinfection:
[Problems and solutions. Int. J. Food Microbiol. 2009, 134, 37–45. [CrossRef]](http://dx.doi.org/10.1016/j.ijfoodmicro.2009.05.021)
28. Jablasone, J.; Warriner, K.; Griffiths, M. Interactions of Escherichia coli O157:H7, Salmonella Typhimurium
and Listeria monocytogenes plants cultivated in a gnotobiotic system. Int. J. Food Microbiol. 2005, 99, 7–18.
[[CrossRef]](http://dx.doi.org/10.1016/j.ijfoodmicro.2004.06.011)
29. Bernstein, N.; Sela, S.; Pinto, R.; Ioffe, M. Evidence for internalization of Escherichia coli into the aerial parts of
[maize via the root system. J. Food Prot. 2007, 70, 471–475. [CrossRef]](http://dx.doi.org/10.4315/0362-028X-70.2.471)
30. Hirneisen, K.A.; Sharma, M.; Kniel, K.E. Human enteric pathogen internalization by root uptake into food
[crops. Foodborne Pathog. Dis. 2012, 9, 396–405. [CrossRef]](http://dx.doi.org/10.1089/fpd.2011.1044)
31. Franz, E.; Visser, A.A.; Van Diepeningen, A.D.; Klerks, M.M.; Termorshuizen, A.J.; van Bruggen, A.H.C.
Quantification of contamination of lettuce by GFP-expressing Escherichia coli O157:H7 and Salmonella enterica
[serovar Typhimurium. Food Microbiol. 2007, 24, 106–112. [CrossRef] [PubMed]](http://dx.doi.org/10.1016/j.fm.2006.03.002)
32. Sharma, M.; Ingram, D.T.; Patel, J.R.; Millner, P.D.; Wang, X.; Hull, A.E.; Donnenberg, M.S. A novel approach
to investigate the uptake and internalization of Escherichia coli O157:H7 in spinach cultivated in soil and
[hydroponic medium. J. Food Prot. 2009, 72, 1513–1520. [CrossRef]](http://dx.doi.org/10.4315/0362-028X-72.7.1513)
33. Warriner, K.; Ibrahim, F.; Dickinson, M.; Wright, C.; Waites, W.M. Internalization of human pathogens within
[growing salad vegetables. Biotechnol. Genet. Eng. Rev. 2003, 20, 117–136. [CrossRef] [PubMed]](http://dx.doi.org/10.1080/02648725.2003.10648040)
34. Deering, A.J.; Mauer, L.J.; Pruitt, R.E. Internalization of E. coli O157:H7 and Salmonella spp. in plants:
[A review. Food Res. Int. 2012, 45, 567–575. [CrossRef]](http://dx.doi.org/10.1016/j.foodres.2011.06.058)
35. Beuchat, L.R. Vectors and conditions for preharvest contamination of fruits and vegetables with pathogens
[capable of causing enteric diseases. Br. Food J. 2006, 108, 38–53. [CrossRef]](http://dx.doi.org/10.1108/00070700610637625)
36. Resh, H.M. Hydroponic Food Production: A Definitive Guidebook for the Advanced Home Gardener and the
_Commercial Hydroponic Grower, 7th ed.; CRC Press Taylor and Francis Group: Boca Raton, FL, USA, 2012;_
ISBN 9781439878675.
37. Ferrarezi, R.S.; Testezlaf, R. Performance of wick irrigation system using self-compensating troughs with
[substrates for lettuce production. J. Plant Nutr. 2016, 39, 147–161. [CrossRef]](http://dx.doi.org/10.1080/01904167.2014.983127)
38. Hayden, A.L.; Giacomelli, G.A.; Hoffmann, J.J.; Yokelson, T.N. Aeroponics: An alternative production system
[for high-value root crops. Acta Hortic. 2004, 629, 207–213. [CrossRef]](http://dx.doi.org/10.17660/ActaHortic.2004.629.27)
39. Food and Agriculture Organization (FAO) of the United Nations. Good Agricultural Practices for Greenhouse
_Vegetable Crops: Principles for Mediterranean Climate Areas; FAO: Rome, Italy, 2013._
40. Brechner, M.; Both, A.J. Cornell University Hydroponic Lettuce Handbook; Cornell University Controlled
Environment Agriculture: Ithaca, NY, USA, 2013; Available online: [http://cea.cals.cornell.edu/attachments/Cornell CEA Lettuce Handbook.pdf](http://cea.cals.cornell.edu/attachments/Cornell CEA Lettuce Handbook.pdf) (accessed on 9 January 2019).
41. University of Kentucky Cooperative Extension Service, New Crop Opportunities Center. Hydroponic Lettuce. 2006. Available online: [https://www.hort.vt.edu/ghvegetables/documents/Leafy Greens/GH-grown Lettuce and Greens_UnivKentucky_2006.pdf](https://www.hort.vt.edu/ghvegetables/documents/Leafy Greens/GH-grown Lettuce and Greens_UnivKentucky_2006.pdf) (accessed on 9 January 2019).
42. Nolte, K.D. Winter Lettuce Production: Yuma, Arizona. PowerPoint Presentation. n.d. University of Arizona College of Agriculture and the Life Sciences. Available online: [https://cals.arizona.edu/fps/sites/cals.arizona.edu.fps/files/Lettuce Production Presentation.pdf](https://cals.arizona.edu/fps/sites/cals.arizona.edu.fps/files/Lettuce Production Presentation.pdf) (accessed on 9 January 2019).
43. Kerns, D.L.; Matheron, M.E.; Palumbo, J.C.; Sanchez, C.A.; Still, D.W.; Tickes, B.R.; Umeda, K.; Wilcox, M.A.
_Guidelines for Head Lettuce Production in Arizona. IPM Series Number 12. Publication number az1099; Cooperative_
Extension, College of Agriculture and Life Sciences, University of Arizona: Tucson, AZ, USA, February
[1999; Available online: http://cals.arizona.edu/crops/vegetables/cropmgt/az1099.html (accessed on](http://cals.arizona.edu/crops/vegetables/cropmgt/az1099.html)
9 January 2019).
44. Smith, R.; Cahn, M.; Daugovish, O.; Koike, S.; Natwick, E.; Smith, H.; Subbarao, K.; Takele, E.; Turini, T. Leaf
_Lettuce Production in California Publication 7216. University of California Davis. UC Vegetable Resource and_
Information Center. University of California Agriculture and Natural Resources. 2011. Available online:
[https://anrcatalog.ucanr.edu/pdf/7216.pdf (accessed on 9 January 2019).](https://anrcatalog.ucanr.edu/pdf/7216.pdf)
45. Kaiser, C.; Ernst, M. Romaine Lettuce. Center for Crop Diversification Crop Profile, CD-CCP-116; University of
Kentucky College of Agriculture, Food, and the Environment Cooperative Extension: Lexington, KY, USA,
[2017; Available online: http://www.uky.edu/ccd/sites/www.uky.edu.ccd/files/romaine.pdf (accessed on](http://www.uky.edu/ccd/sites/www.uky.edu.ccd/files/romaine.pdf)
9 January 2019).
46. Kutter, S.; Hartmann, A.; Schmid, M. Colonization of barley (Hordeum vulgare) with Salmonella enterica and
_[Listeria spp. FEMS Microbiol. Ecol. 2006, 56, 262–271. [CrossRef] [PubMed]](http://dx.doi.org/10.1111/j.1574-6941.2005.00053.x)_
47. Koseki, S.; Mizuno, Y.; Yamamoto, K. Comparison of two possible routes of pathogen contamination of
[spinach leaves in a hydroponic cultivation system. J. Food Prot. 2011, 74, 1536–1542. [CrossRef] [PubMed]](http://dx.doi.org/10.4315/0362-028X.JFP-11-031)
48. Macarisin, D.; Patel, J.; Sharma, V.K. Role of curli and plant cultivation conditions on Escherichia coli O157:
H7 internalization into spinach grown on hydroponics and in soil. Int. J. Food Microbiol. 2014, 173, 48–53.
[[CrossRef] [PubMed]](http://dx.doi.org/10.1016/j.ijfoodmicro.2013.12.004)
49. Carducci, A.; Caponi, E.; Ciurli, A.; Verani, M. Possible internalization of an enterovirus in hydroponically
[grown lettuce. Int. J. Environ. Res. Public Health 2015, 12, 8214–8227. [CrossRef] [PubMed]](http://dx.doi.org/10.3390/ijerph120708214)
50. Warriner, K.; Ibrahim, F.; Dickinson, M.; Wright, C.; Waites, W.M. Interaction of Escherichia coli with growing
[salad spinach plants. J. Food Protect. 2003, 66, 1790–1797. [CrossRef]](http://dx.doi.org/10.4315/0362-028X-66.10.1790)
51. Wang, Q.; Kniel, K.E. Survival and transfer of murine norovirus within a hydroponic system during kale
[and mustard microgreen harvesting. Appl. Environ. Microbiol. 2016, 82, 705–713. [CrossRef]](http://dx.doi.org/10.1128/AEM.02990-15)
52. Li, D.; Uyttendaele, M. Potential of human norovirus surrogates and Salmonella enterica contamination of
pre-harvest basil (Ocimum basilicum) via leaf surface and plant substrate. Front. Microbiol. 2018, 9, 1728.
[[CrossRef]](http://dx.doi.org/10.3389/fmicb.2018.01728)
-----
_Horticulturae 2019, 5, 25_ 21 of 22
53. Settanni, L.; Miceli, A.; Francesca, N.; Cruciata, M.; Moschetti, G. Microbiological investigation of Raphanus
_sativus L. grown hydroponically in nutrient solutions contaminated with spoilage and pathogenic bacteria._
_[Int. J. Food Microbiol. 2013, 160, 344–352. [CrossRef]](http://dx.doi.org/10.1016/j.ijfoodmicro.2012.11.011)_
54. Dong, Y.; Iniguez, A.L.; Ahmer, B.M.M.; Triplett, E.W. Kinetics and strain specificity of rhizosphere and
endophytic colonization by enteric bacteria on seedlings of Medicago sativa and Medicago truncatula. Appl.
_[Environ. Microbiol. 2003, 69, 1783–1790. [CrossRef]](http://dx.doi.org/10.1128/AEM.69.3.1783-1790.2003)_
55. Klerks, M.M.; Franz, E.; Van Gent-Pelzer, M.; Zijlstra, C.; van Bruggen, A.H.C. Differential interaction of
_Salmonella enterica serovars with lettuce cultivars and plant-microbe factors influencing the colonization_
[efficiency. ISME J. 2007, 1, 620–631. [CrossRef]](http://dx.doi.org/10.1038/ismej.2007.82)
56. DiCaprio, E.; Ma, Y.; Purgianto, A.; Hughes, J.; Li, J. Internalization and dissemination of human norovirus
and animal caliciviruses in hydroponically grown romaine lettuce. _Appl._ _Environ._ _Microbiol._ **2012,**
_[78, 6143–6152. [CrossRef]](http://dx.doi.org/10.1128/AEM.01081-12)_
57. Moriarty, M.J.; Semmens, K.; Bissonnette, G.K.; Jaczynski, J. Internalization assessment of E. coli O157:H7 in
[hydroponically grown lettuce. LWT Food Sci. Technol. 2019, 100, 183–188. [CrossRef]](http://dx.doi.org/10.1016/j.lwt.2018.10.060)
58. Moriarty, M.J.; Semmens, K.; Bissonnette, G.K.; Jaczynski, J. Inactivation with UV-radiation and
internalization assessment of coliforms and Escherichia coli in aquaponically grown lettuce. LWT Food
_[Sci. Technol. 2018, 89, 624–630. [CrossRef]](http://dx.doi.org/10.1016/j.lwt.2017.11.038)_
59. Sharma, N.; Acharya, S.; Kumar, K.; Singh, N.; Chaurasia, O.P. Hydroponics as an advanced technique for
[vegetable production: An overview. J. Soil Water Conserv. 2019, 17, 364. [CrossRef]](http://dx.doi.org/10.5958/2455-7145.2018.00056.5)
60. Barnhart, M.M.; Chapman, M.R. Curli biogenesis and function. Annu. Rev. Microbiol. 2016, 60, 131–147.
[[CrossRef]](http://dx.doi.org/10.1146/annurev.micro.60.080805.142106)
61. Barak, J.D.; Gorski, L.; Naraghi-Arani, P.; Charowski, A.O. Salmonella enterica virulence genes are required
[for bacterial attachment to plant tissue. Appl. Environ. Microbiol. 2005, 7, 5685–5691. [CrossRef]](http://dx.doi.org/10.1128/AEM.71.10.5685-5691.2005)
62. Higgins, C.; Hort Americas, LLC, Bedford, TX, USA. Personal communication, 17 January 2019.
63. Mähönen, A.P.; ten Tusscher, K.; Siligato, R.; Smetana, O.; Díaz-Triviño, S.; Salojärvi, J.; Wachsman, G.;
Prasad, K.; Heidstra, R.; Scheres, B. PLETHORA gradient formation mechanism separates auxin responses.
_[Nature 2014, 515, 125–129. [CrossRef] [PubMed]](http://dx.doi.org/10.1038/nature13663)_
64. Allende, A.; Monaghan, J. Irrigation water quality for leaf crops: A perspective of risks and potential
[solutions. Int. J. Environ. Res. Public Health. 2015, 12, 7457–7477. [CrossRef] [PubMed]](http://dx.doi.org/10.3390/ijerph120707457)
65. Waechter-Kristensen, B.; Sundin, P.; Gertsson, U.E.; Hultberg, M.; Khalil, S.; Jensén, P.;
Berkelmann-Loehnertz, B.; Wohanka, W. Management of microbial factors in the rhizosphere and nutrient
[solution of hydroponically grown tomato. Acta Hortic. 1997, 450, 335–342. [CrossRef]](http://dx.doi.org/10.17660/ActaHortic.1997.450.40)
66. Lopez-Galvez, F.; Allende, A.; Pedrero-Salcedo, F.; Alarcon, J.J.; Gil, M.I. Safety assessment of greenhouse
hydroponic tomatoes irrigated with reclaimed and surface water. Int. J. Food Microbiol. 2014, 191, 97–102.
[[CrossRef]](http://dx.doi.org/10.1016/j.ijfoodmicro.2014.09.004)
67. Ohtani, T.; Kaneko, A.; Fukuda, N.; Hagiwara, S.; Sase, S. SW—Soil and water. Development of a membrane
[disinfection system for closed hydroponics in a greenhouse. J. Agric. Eng. Res. 2000, 77, 227–232. [CrossRef]](http://dx.doi.org/10.1006/jaer.2000.0589)
68. Moens, M.; Hendrickx, G. Drain water filtration for the control of nematodes in hydroponic-type systems.
_[Crop Prot. 1992, 11, 69–73. [CrossRef]](http://dx.doi.org/10.1016/0261-2194(92)90082-G)_
69. Wohanka, W.; Luedtke, H.; Ahlers, H.; Luebke, M. Optimization of slow filtration as a means for disinfecting
[nutrient solutions. Acta Hortic. 1999, 539–544. [CrossRef]](http://dx.doi.org/10.17660/ActaHortic.1999.481.63)
70. van Os, E.A.; Postma, J. Prevention of root diseases in closed soilless growing systems by microbial
[optimisation and slow sand filtration. Acta Hortic. 2000, 97–102. [CrossRef]](http://dx.doi.org/10.17660/ActaHortic.2000.532.10)
71. van Os, E.A. Design of sustainable hydroponic systems in relation to environment-friendly disinfection
[methods. Acta Hortic. 2001, 197–206. [CrossRef]](http://dx.doi.org/10.17660/ActaHortic.2001.548.21)
72. Gratzek, J.B.; Gilbert, J.P.; Lohr, A.L.; Shotts, E.B.; Brown, J. Ultraviolet light control of Ichthyophthirius
_[multifiliis Fouquet in a closed fish culture recirculation system. J. Fish Dis. 1983, 6, 145–153. [CrossRef]](http://dx.doi.org/10.1111/j.1365-2761.1983.tb00062.x)_
73. Wohanka, W. Slow sand filtration and UV radiation: Low-cost techniques for disinfection of recirculating
nutrient solution or surface water. In Proceedings of the 8th International Congress on Soilless Culture,
Rustenburg, Hunter’s Rest, South Africa, 2–9 October 1992; ISOSC: Wageningen, The Netherlands,
1993; pp. 497–511.
74. Zhang, W.; Tu, J.C. Effect of Ultraviolet disinfection of hydroponic solutions on pythium root rot and
[non-target bacteria. Eur. J. Plant Pathol. 2000, 106, 415–421. [CrossRef]](http://dx.doi.org/10.1023/A:1008798710325)
-----
_Horticulturae 2019, 5, 25_ 22 of 22
75. Ohashi-Kaneko, K.; Yoshii, M.; Isobe, T.; Park, J.-S.; Kurata, K.; Fujiwara, K. Nutrient solution prepared with
ozonated water does not damage early growth of hydroponically grown tomatoes. Ozone Sci. Eng. 2009,
_[31, 21–27. [CrossRef]](http://dx.doi.org/10.1080/01919510802587523)_
76. Pagliaccia, D.; Ferrin, D.; Stanghellini, M.E. Chemo-biological suppression of root-infecting zoosporic
[pathogens in recirculating hydroponic systems. Plant Soil 2007, 299, 163–179. [CrossRef]](http://dx.doi.org/10.1007/s11104-007-9373-7)
77. Runia, W.T. A review of possibilities for disinfection of recirculation water from soilless cultures. Acta Hortic.
**[1995, 221–229. [CrossRef]](http://dx.doi.org/10.17660/ActaHortic.1995.382.25)**
78. van Os, E.A.; Bruins, M.; Wohanka, W.; Seidel, R. Slow filtration: A technique to minimise the risks of
[spreading root-infecting pathogens in closed hydroponic systems. Acta Hortic. 2001, 495–502. [CrossRef]](http://dx.doi.org/10.17660/ActaHortic.2001.559.72)
79. Wei, C.; Zhang, F.; Hu, Y.; Feng, C.; Wu, H. Ozonation in water treatment: The generation, basic properties of
[ozone and its practical application. Rev. Chem. Eng. 2017, 33, 49–89. [CrossRef]](http://dx.doi.org/10.1515/revce-2016-0008)
80. Guo, X.; van Iersel, M.W.; Chen, J.; Brackett, R.E.; Beuchat, L.R. Evidence of association of salmonellae with
tomato plants grown hydroponically in inoculated nutrient solution. Appl. Environ. Microbiol. 2002, 30, 7–14.
[[CrossRef]](http://dx.doi.org/10.1128/AEM.68.7.3639-3643.2002)
81. Lee, S.; Lee, J. Beneficial bacteria and fungi in hydroponic systems: Types and characteristics of hydroponic
[food production methods. Sci. Hortic. 2015, 195, 206–215. [CrossRef]](http://dx.doi.org/10.1016/j.scienta.2015.09.011)
82. Lee, S.; An, R.; Grewal, P.; Yu, Z.; Borherova, Z.; Lee, J. High-performing windowfarm hydroponic system:
Transcriptomes of fresh produce and microbial communities in response to beneficial bacterial treatment.
_[Mol. Plant-Microbe Interact. 2016, 29, 965–976. [CrossRef]](http://dx.doi.org/10.1094/MPMI-08-16-0162-R)_
83. Moruzzi, S.; Firrao, G.; Polano, C.; Borselli, S.; Loschi, A.; Ermacora, P.; Loi, N.; Martini, M. Genomic-assisted
characterisation of Pseudomonas sp. strain Pf4, a potential biocontrol agent in hydroponics. Biocontrol Sci.
_[Technol. 2017, 27, 969–991. [CrossRef]](http://dx.doi.org/10.1080/09583157.2017.1368454)_
84. Thongkamngam, T.; Jaenaksorn, T. Fusarium oxysporum (F221-B) as biocontrol agent against plant pathogenic
[fungi in vitro and in hydroponics. Plant Prot. Sci. 2017, 53, 85–95. [CrossRef]](http://dx.doi.org/10.17221/59/2016-PPS)
85. Fujiwara, K.; Iida, Y.; Iwai, T.; Aoyama, C.; Inukai, R.; Ando, A.; Ogawa, J.; Ohnishi, J.; Terami, F.; Takano, M.;
et al. The rhizosphere microbial community in a multiple parallel mineralization system suppresses the
[pathogenic fungus Fusarium oxysporum. Microbiolopen 2013, 2, 997–1009. [CrossRef]](http://dx.doi.org/10.1002/mbo3.140)
86. Radzki, W.; Gutierrez Mañero, F.J.; Algar, E.; Lucas García, J.A.; García-Villaraco, A.; Ramos Solano, B.
Bacterial siderophores efficiently provide iron to iron-starved tomato plants in hydroponics culture.
_[Antonie van Leeuwenhoek. 2013, 104, 321–330. [CrossRef]](http://dx.doi.org/10.1007/s10482-013-9954-9)_
87. Giurgiu, R.M.; Dumitras, A.; Morar, G.; Scheewe, P.; Schröder, F.G. A study on the biological control of
_Fusarium oxysporum using Trichoderma app., on soil and rockwool substrates in controlled environment._
_[Notulae Botanicae Horti Agrobotanici Cluj-Napoca 2018, 46, 260–269. [CrossRef]](http://dx.doi.org/10.15835/nbha46110939)_
88. Sirakov, I.; Lutz, M.; Graber, A.; Mathis, A.; Staykov, Y.; Smits, T.H.M.; Junge, R. Potential for combined
biocontrol activity against fungal fish and plant pathogens by bacterial isolates from a model aquaponic
[system. Water 2016, 8, 518. [CrossRef]](http://dx.doi.org/10.3390/w8110518)
89. Erickson, M.C.; Liao, J.-Y.; Payton, A.S.; Cook, P.W.; Den Bakker, H.C.; Bautista, J.; Pérez, J.C.D. Pre-harvest
internalization and surface survival of Salmonella and Escherichia coli O157:H7 sprayed onto different lettuce
[cultivars under field and growth chamber conditions. Int. J. Food. Microbiol. 2019, 291, 197–204. [CrossRef]](http://dx.doi.org/10.1016/j.ijfoodmicro.2018.12.001)
[[PubMed]](http://www.ncbi.nlm.nih.gov/pubmed/30551016)
90. Sapers, G.M. Efficacy of Washing and Sanitizing Methods. Food Technol. Biotechnol. 2001, 39, 305–311.
91. Hara-Kudo, Y.; Takatori, K. Contamination level and ingestion dose of foodborne pathogens associated with
[infections. Epidemiol. Infect. 2011, 139, 1505–1510. [CrossRef] [PubMed]](http://dx.doi.org/10.1017/S095026881000292X)
92. Bosch, A.; Gkogka, E.; Le Guyader, F.S.; Loisy-Hamon, F.; Lee, A.; van Lieshout, L.; Marthi, B.; Myrmel, M.;
Sansom, A.; Schultz, A.C.; et al. Foodborne viruses: Detection, risk assessment, and control options in food
[processing. Int. J. Food Microbiol. 2018, 285, 110–128. [CrossRef]](http://dx.doi.org/10.1016/j.ijfoodmicro.2018.06.001)
93. Franz, E.; Tromp, S.O.; Rijgersberg, H.; Van Der Fels-Klerx, H.J. Quantitative microbial risk assessment for
_Escherichia coli O157: H7, Salmonella, and Listeria monocytogenes in leafy green vegetables consumed at salad_
[bars. J. Food Prot. 2010, 73, 274–285. [CrossRef] [PubMed]](http://dx.doi.org/10.4315/0362-028X-73.2.274)
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access
article distributed under the terms and conditions of the Creative Commons Attribution
[(CC BY) license (http://creativecommons.org/licenses/by/4.0/).](http://creativecommons.org/licenses/by/4.0/.)
-----
| 27,152
|
{
"disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.3390/HORTICULTURAE5010025?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.3390/HORTICULTURAE5010025, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "CCBY",
"status": "GOLD",
"url": "https://www.mdpi.com/2311-7524/5/1/25/pdf?version=1552616769"
}
| 2019
|
[
"Review"
] | true
| 2019-03-15T00:00:00
|
[
{
"paperId": "c24f95504eb41fb891290437283aa7e0904e1f3e",
"title": "Pre-harvest internalization and surface survival of Salmonella and Escherichia coli O157:H7 sprayed onto different lettuce cultivars under field and growth chamber conditions."
},
{
"paperId": "60b642950b1068fc310b246717743228b6151533",
"title": "Internalization assessment of E. coli O157:H7 in hydroponically grown lettuce"
},
{
"paperId": "c36ceaf371b24ca1f77596b3417609410c25fde5",
"title": "Potential of Human Norovirus Surrogates and Salmonella enterica Contamination of Pre-harvest Basil (Ocimum basilicum) via Leaf Surface and Plant Substrate"
},
{
"paperId": "c1a5a2731aa4e17a8839fa652a52591d3c61ffc8",
"title": "Foodborne viruses: Detection, risk assessment, and control options in food processing"
},
{
"paperId": "7729a303e8baef8c312401fb5aee108c5dec3c1e",
"title": "Inactivation with UV-radiation and internalization assessment of coliforms and Escherichia coli in aquaponically grown lettuce"
},
{
"paperId": "6ed14cdb26e2e852989e90d57bba435d8c00bb83",
"title": "Genomic-assisted characterisation of Pseudomonas sp. strain Pf4, a potential biocontrol agent in hydroponics"
},
{
"paperId": "d0f88de4abda00e7cd21d2566f95c052145361cb",
"title": "Biological Control of Fusarium oxysporum Using Trichoderma spp. on Soil and Rockwool Substrates in Controlled Environment"
},
{
"paperId": "35e5246581873f58f99459139cec468c3ea800ee",
"title": "Fusarium oxysporum (F221-B) as biocontrol agent against plant pathogenic fungi in vitro and in hydroponics"
},
{
"paperId": "21cbfdadb456dfbba94e98e1a00c3f827bd73d4b",
"title": "Listeria monocytogenes in Fresh Produce: Outbreaks, Prevalence and Contamination Levels"
},
{
"paperId": "802401f87ca0f86954a13024f4b253b38a6359e7",
"title": "Ozonation in water treatment: the generation, basic properties of ozone and its practical application"
},
{
"paperId": "112a9eceee5ffa125cba61889407b593a28f7ba5",
"title": "High-Performing Windowfarm Hydroponic System: Transcriptomes of Fresh Produce and Microbial Communities in Response to Beneficial Bacterial Treatment."
},
{
"paperId": "861124d901cb85794dde2fba043ec77f95fa5712",
"title": "Potential for combined biocontrol activity against fungal fish and plant pathogens by bacterial isolates from a model aquaponic system"
},
{
"paperId": "c93192a7e9492037ad090d8306f8d81267ae19a5",
"title": "Performance of wick irrigation system using self-compensating troughs with substrates for lettuce production"
},
{
"paperId": "894c8fc4c45cdc79da5162e6c82d7c37663029cd",
"title": "Beneficial bacteria and fungi in hydroponic systems: Types and characteristics of hydroponic food production methods"
},
{
"paperId": "eed22bc699f2de56f9a8c7d9c33d499170b85805",
"title": "Survival and Transfer of Murine Norovirus within a Hydroponic System during Kale and Mustard Microgreen Harvesting"
},
{
"paperId": "09fa039433bca45e96fc2562adef5f667721f155",
"title": "Irrigation Water Quality for Leafy Crops: A Perspective of Risks and Potential Solutions"
},
{
"paperId": "eb0a6201b080cd61c663c173dfd7119d6516e25f",
"title": "Zero Risk Does Not Exist: Lessons Learned from Microbial Risk Assessment Related to Use of Water and Safety of Fresh Produce"
},
{
"paperId": "f1b3788e1665f6accdb0670dfa1a2bb34df62744",
"title": "Possible Internalization of an Enterovirus in Hydroponically Grown Lettuce"
},
{
"paperId": "942bb7d21c2110e2ff51fa25773e017a2c7c4fde",
"title": "Outbreaks attributed to fresh leafy vegetables, United States, 1973–2012"
},
{
"paperId": "95c80edb2a44e0e4a6e44e079f434207d460332c",
"title": "Safety assessment of greenhouse hydroponic tomatoes irrigated with reclaimed and surface water."
},
{
"paperId": "8b6c70d526527e2b02b21a6acae52719f8b373ea",
"title": "PLETHORA gradient formation mechanism separates auxin responses"
},
{
"paperId": "11b2aaa1cf65c054553e3d75efd304e747165b5c",
"title": "Role of curli and plant cultivation conditions on Escherichia coli O157:H7 internalization into spinach grown on hydroponics and in soil."
},
{
"paperId": "96e59c44f5227d92c65b870039e0b1d66b078f2f",
"title": "The rhizosphere microbial community in a multiple parallel mineralization system suppresses the pathogenic fungus Fusarium oxysporum"
},
{
"paperId": "ae530a048133b1834b1e6db94c41cc63a34a52a2",
"title": "Survey of food safety practices on small to medium-sized farms and in farmers markets."
},
{
"paperId": "126cc1067aece467a6e94dfd2fa3adc47c2c7601",
"title": "Bacterial siderophores efficiently provide iron to iron-starved tomato plants in hydroponics culture"
},
{
"paperId": "5da1e2396671f2a3ff45d921a0b941f219820345",
"title": "Internalization and Dissemination of Human Norovirus and Animal Caliciviruses in Hydroponically Grown Romaine Lettuce"
},
{
"paperId": "6a6c36c53ab16006716f3af4396090f71378fd94",
"title": "Human enteric pathogen internalization by root uptake into food crops."
},
{
"paperId": "c1b149ef08928e5df5449de16a7073931bf6ffde",
"title": "Internalization of fresh produce by foodborne pathogens."
},
{
"paperId": "e21cdd4ff22a2e5bbae16d626a9e26281fb3451f",
"title": "Internalization of E. coli O157:H7 and Salmonella spp. in plants: A review"
},
{
"paperId": "b7a311611bfd05bf46936018f5444fe7b129e9bf",
"title": "Environmental transmission of norovirus gastroenteritis."
},
{
"paperId": "0741e681f6c04e94d27f20756d286132895a84d3",
"title": "Comparison of two possible routes of pathogen contamination of spinach leaves in a hydroponic cultivation system."
},
{
"paperId": "18c156452c32627c5de46da1620dab35ec6bf1e9",
"title": "Contamination level and ingestion dose of foodborne pathogens associated with infections"
},
{
"paperId": "33653d34f0f0d5ecd5804f1ff897b566899a210e",
"title": "Fresh fruit and vegetables as vehicles for the transmission of human pathogens."
},
{
"paperId": "2157f00d266e9559cae1f025b3f59467e3b353d6",
"title": "Quantitative microbial risk assessment for Escherichia coli O157:H7, salmonella, and Listeria monocytogenes in leafy green vegetables consumed at salad bars."
},
{
"paperId": "1a12e3a3b6c4f52b6efd880328b7749c64171e14",
"title": "Fresh-cut product sanitation and wash water disinfection: problems and solutions."
},
{
"paperId": "f0295f10999bec46043616ce77f9be02cf6ddbc2",
"title": "A novel approach to investigate the uptake and internalization of Escherichia coli O157:H7 in spinach cultivated in soil and hydroponic medium."
},
{
"paperId": "2f0d0b22b21eb77ec6ba921d7f2034db7282a030",
"title": "Nutrient Solution Prepared with Ozonated Water does not Damage Early Growth of Hydroponically Grown Tomatoes"
},
{
"paperId": "aedcce206627ab712ff8b1fa66d3cefc3462acfc",
"title": "Outbreaks where food workers have been implicated in the spread of foodborne disease. Part 5. Sources of contamination and pathogen excretion from infected persons."
},
{
"paperId": "3a48ab024bf6131235eaff9c38e0d111b6c24be6",
"title": "Microbial contamination of fruit and vegetables and the behaviour of enteropathogens in the phyllosphere: a review"
},
{
"paperId": "8a907a5ca9376af972d5616c7c895a171db7edbc",
"title": "Differential interaction of Salmonella enterica serovars with lettuce cultivars and plant-microbe factors influencing the colonization efficiency"
},
{
"paperId": "ad7d849ac691d57972c450881e08a4608356e723",
"title": "Chemo-biological suppression of root-infecting zoosporic pathogens in recirculating hydroponic systems"
},
{
"paperId": "355b63bf70509aca152cea5b585482c47f156938",
"title": "Evidence for internalization of Escherichia coli into the aerial parts of maize via the root system."
},
{
"paperId": "89875519df2162103e8b87486671252456c7a93a",
"title": "Quantification of contamination of lettuce by GFP-expressing Escherichia coli O157:H7 and Salmonella enterica serovar Typhimurium."
},
{
"paperId": "2012b418665ffc74327a3e5d5ca4c7163ea31496",
"title": "Curli biogenesis and function."
},
{
"paperId": "2184f8ffcfbe26dfb32cfc59ae64615800f5c222",
"title": "Colonization of barley (Hordeum vulgare) with Salmonella enterica and Listeria spp."
},
{
"paperId": "247e48c9d97fbbb07b6f992ceebb5f74e565fbf4",
"title": "The survival of Escherichia coli O157 on a range of metal surfaces."
},
{
"paperId": "eeafc3bedaa20bd1313cb8478cf7d471ef3bbd5f",
"title": "Salmonella enterica Virulence Genes Are Required for Bacterial Attachment to Plant Tissue"
},
{
"paperId": "91b586a499ceed47c5b90679fbb06f8ea4c302b9",
"title": "Interactions of Escherichia coli O157:H7, Salmonella typhimurium and Listeria monocytogenes plants cultivated in a gnotobiotic system."
},
{
"paperId": "364116891b9d8ee7efd6abb3bd516b454e9ae24e",
"title": "Internalization of Human Pathogens within Growing Salad Vegetables"
},
{
"paperId": "7d703ab97c5890bb72644c4b4c47e5cc01c4bbfe",
"title": "Interaction of Escherichia coli with growing salad spinach plants."
},
{
"paperId": "709e367cf3693d0e85964454082daefd2e40e05c",
"title": "Kinetics and Strain Specificity of Rhizosphere and Endophytic Colonization by Enteric Bacteria on Seedlings of Medicago sativa and Medicago truncatula"
},
{
"paperId": "567e5a035894e2b539156183510147d98c270773",
"title": "Evidence of Association of Salmonellae with Tomato Plants Grown Hydroponically in Inoculated Nutrient Solution"
},
{
"paperId": "f8456e9d5b538333e5e201100d2a62c035962e14",
"title": "Transmission of Escherichia coli O157:H7 from Contaminated Manure and Irrigation Water to Lettuce Plant Tissue and Its Subsequent Internalization"
},
{
"paperId": "5976cdf17d1e867202220daeb2c686ec5bd901d0",
"title": "Slow filtration: a technique to minimise the risks of spreading root-infecting pathogens in closed hydroponic systems"
},
{
"paperId": "3d5bf3a003124615f5b2b3b8de50bea8a8c79720",
"title": "Design of sustainable hydroponic systems in relation to environment-friendly disinfection methods"
},
{
"paperId": "f52f15eb0287d0641a25d4921098d707bf7d3fe7",
"title": "SW—Soil and Water: Development of a Membrane Disinfection System for Closed Hydroponics in a Greenhouse"
},
{
"paperId": "34282ec3d319641072c08c8e92ab053db0c42e6f",
"title": "Prevention of root diseases in closed soilless growing systems by microbial optimisation and slow sand filtration"
},
{
"paperId": "4e68b74114c044860d2e3c05c0a85cf53d8b4c80",
"title": "Effect of Ultraviolet Disinfection of Hydroponic Solutions on Pythium Root Rot and Non-target Bacteria"
},
{
"paperId": "b4c7f24acb429951e7c7c6f04e17247c3d4b5bca",
"title": "Enterohemorrhagic Escherichia coli O157:H7 Present in Radish Sprouts"
},
{
"paperId": "867c7bbba64f874f8a4eee0f4a1a16e049a6420b",
"title": "Potential Hazard of Radish Sprouts as a Vehicle of Escherichia coli O157:H7."
},
{
"paperId": "b4b8b7b214ffc3ae4e27a0b104162499e9198c52",
"title": "MANAGEMENT OF MICROBIAL FACTORS IN THE RHIZOSPHERE AND NUTRIENT SOLUTION OF HYDROPONICALLY GROWN TOMATO"
},
{
"paperId": "fdc05c765e145ab2f0f10107c12de7b53faf6341",
"title": "Is the Japanese O157: H7 E coli epidemic over?"
},
{
"paperId": "a7b0d50c6d1dee63e8469926a66c43e1bb7f3aa7",
"title": "review of possibilities for disinfection of recirculation water from soilless cultures"
},
{
"paperId": "ae006fcd8512afda4502983f5fef35a9c0dd1405",
"title": "DRAINWATER FILTRATION FOR THE CONTROL OF NEMATODES IN HYDROPONIC-TYPE SYSTEMS"
},
{
"paperId": "d5e52cc809bf027c8c329d16e0c7fe67b06bcb33",
"title": "Ultraviolet light control of Ichthyophthirius multifiliis Fouquet in a closed fish culture recirculation system"
},
{
"paperId": null,
"title": "Yuma, Arizona"
},
{
"paperId": "b73aae3a6a6a24cfe3cb9ae111a29e294f74469d",
"title": "Hydroponics as an advanced technique for vegetable production: An overview"
},
{
"paperId": null,
"title": "State of Indoor Farming"
},
{
"paperId": "fa374668ae9ec7d1ed0aecabfbfd3d0526e11933",
"title": "The Microbiome of the Built Environment and Human Behavior: Implications for Emotional Health and Well-Being in Postmodern Western Societies."
},
{
"paperId": "0dc90530a9243b2588853f022ead9194645a0f12",
"title": "Microbiological investigation of Raphanus sativus L. grown hydroponically in nutrient solutions contaminated with spoilage and pathogenic bacteria."
},
{
"paperId": "818aa871b1b1150dfdb9ccbc2891c70c0a9c88ac",
"title": "Assessing Food Safety Practices in Farmers' Markets"
},
{
"paperId": "0bf2611b4704280be96948e12586a7e202a69c03",
"title": "Vectors and conditions for preharvest contamination of fruits and vegetables with pathogens capable of causing enteric diseases"
},
{
"paperId": "27797b2da76095858780212c1af2a6503982bb74",
"title": "AEROPONICS: AN ALTERNATIVE PRODUCTION SYSTEM FOR HIGH-VALUE ROOT CROPS"
},
{
"paperId": "9674aa929a5d3cf44fe561f6da9731fe85f4b3b4",
"title": "Outbreaks Associated with Fresh Produce: Incidence, Growth, and Survival of Pathogens in Fresh and Fresh‐Cut Produce"
},
{
"paperId": null,
"title": "Efficacy of Washing and Sanitizing Methods"
},
{
"paperId": "6f035d0ba099759ba3bdaf08794d2d45cb6ffbf7",
"title": "OPTIMIZATION OF SLOW FILTRATION AS A MEANS FOR DISINFECTING NUTRIENT SOLUTIONS"
},
{
"paperId": "c19ffb059aecf81a7f9c5d5539cd0a0e281eb0fd",
"title": "Attachment of Escherichia coli O157:H7 to lettuce leaf surface and bacterial viability in response to chlorine treatment as demonstrated by using confocal scanning laser microscopy."
},
{
"paperId": null,
"title": "Guidelines for Head Lettuce Production in Arizona. IPM Series Number 12. Publication number az1099; Cooperative Extension, College of Agriculture and Life Sciences, University of Arizona"
},
{
"paperId": null,
"title": "Leaf Lettuce Production in California Publication 7216 . University of California Davis . UC Vegetable Resource and Information Center . University of California Agriculture and Natural Resources"
},
{
"paperId": "b5176a1d0a7254c1d51dafed8a05b6a9d1162ea8",
"title": "Slow sand filtration and UV radiation; Low-cost technique for disinfection of recirculating nutrient solution or surface water"
},
{
"paperId": "884e5df9e1eed367a16d25cbdc5c76245c5d1e6a",
"title": "Hydroponic food production : a definitive guidebook for the advanced home gardener and the commercial hydroponic grower"
},
{
"paperId": "0b71ee455ba8bf0075f929e5db3ea69cfe4854b9",
"title": "World's Science, Technology Medicine Open Access book"
},
{
"paperId": null,
"title": "Hydroponic Lettuce"
}
] | 27,152
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Engineering",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/00b6c6203b3f9eb46d333540c1cbfa6c939ce33a
|
[
"Computer Science"
] | 0.905487
|
Fog radio access network system control scheme based on the embedded game model
|
00b6c6203b3f9eb46d333540c1cbfa6c939ce33a
|
EURASIP Journal on Wireless Communications and Networking
|
[
{
"authorId": "153274441",
"name": "Sungwook Kim"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"Eurasip J Wirel Commun Netw",
"Eurasip Journal on Wireless Communications and Networking",
"EURASIP J Wirel Commun Netw"
],
"alternate_urls": [
"https://jwcn-eurasipjournals.springeropen.com/"
],
"id": "3215af4b-a40f-474d-bc19-27c154ff31a3",
"issn": "1687-1472",
"name": "EURASIP Journal on Wireless Communications and Networking",
"type": "journal",
"url": "http://jwcn.eurasipjournals.com/"
}
|
As a promising paradigm for the 5G wireless communication system, a new evolution of the cloud radio access networks has been proposed, named as fog radio access networks (F-RANs). It is an advanced socially aware mobile networking architecture to provide a high spectral and energy efficiency while reducing backhaul burden. In particular, F-RANs take full advantages of social information and edge computing to efficiently alleviate the end-to-end latency. Based on the benefit of edge and cloud processing, key issues of F-RAN technique are radio resource allocation, caching, and service admission control. In this paper, we develop a novel F-RAN system control scheme based on the embedded game model. In the proposed scheme, spectrum allocation, cache placement, and service admission algorithms are jointly designed to maximize system efficiency. By developing a new embedded game methodology, our approach can capture the dynamics of F-RAN system and effectively compromises the centralized optimality with decentralized distribution intelligence for the faster and less complex decision making process. Through simulations, we compare the performance of our scheme to the existing studies and show how we can achieve a better performance under dynamic F-RAN system environments.
|
# Fog radio access network system control scheme based on the embedded game model
### Sungwook Kim
Abstract
As a promising paradigm for the 5G wireless communication system, a new evolution of the cloud radio access
networks has been proposed, named as fog radio access networks (F-RANs). It is an advanced socially aware mobile
networking architecture to provide a high spectral and energy efficiency while reducing backhaul burden. In particular,
F-RANs take full advantages of social information and edge computing to efficiently alleviate the end-to-end latency.
Based on the benefit of edge and cloud processing, key issues of F-RAN technique are radio resource allocation,
caching, and service admission control. In this paper, we develop a novel F-RAN system control scheme based on
the embedded game model. In the proposed scheme, spectrum allocation, cache placement, and service admission
algorithms are jointly designed to maximize system efficiency. By developing a new embedded game methodology,
our approach can capture the dynamics of F-RAN system and effectively compromises the centralized optimality
with decentralized distribution intelligence for the faster and less complex decision making process. Through
simulations, we compare the performance of our scheme to the existing studies and show how we can achieve a
better performance under dynamic F-RAN system environments.
Keywords: Fog radio access network, Service admission control, Cache placement, Embedded game model,
Radio resource allocation
1 Introduction

In the past decade, the evolution toward 5G has been featured by the explosive growth of traffic in the wireless network, due to the exponentially increased number of user devices. Compared to the 4G communication system, the 5G system should bring billions of user devices into wireless networks that demand high-bandwidth connections. Therefore, system capacity and energy efficiency should be improved for 5G communications to succeed. The cloud radio access network (C-RAN) is an emerging architecture for the 5G wireless system. A key advantage of C-RAN is the possibility to perform cooperative transmissions across multiple edge nodes for centralized cloud processing. However, cloud processing comes at the cost of the potentially large delay entailed by fronthaul transmissions, which may become a major performance bottleneck of a C-RAN system for critical indicators such as spectral efficiency and latency [1–3].

As an extension of the C-RAN paradigm, fog computing is a promising solution for mission-critical tasks involving quick decision making and fast response. It is a distributed paradigm that provides cloud-like services to the network edge nodes. Instead of using the remote cloud center, the fog-computing technique leverages computing resources at the edge of networks based on decentralized transmission strategies. Therefore, it can help overcome resource contention and increasing latency. Due to the effective coordination of geographically distributed edge nodes, the fog-computing approach can meet 5G application constraints, i.e., location awareness, low latency, and support for mobility or geographical distribution of services. The most frequently cited use cases for the fog-computing concept are related to the Internet of Things (IoT) [4, 5].

Correspondence: [email protected], Department of Computer Science, Sogang University, 35 Baekbeom-ro (Sinsu-dong), Mapo-gu 121-742, Seoul, South Korea
Taking full advantage of fog computing and C-RANs, fog radio access networks (F-RANs) have been proposed as an advanced socially aware mobile networking architecture for 5G systems. F-RANs harness the benefits of, and the synergies between, fog computing and C-RAN in order to accommodate the broad range of Quality of Service (QoS) requirements of 5G mobile broadband communications [2]. In the F-RAN architecture, edge nodes may be endowed with caching capabilities to serve local requests for popular content with low latency. At the same time, a central cloud processor allocates radio and computational resources to each individual edge node while supporting various applications [6]. To maximize the F-RAN system performance, application request scheduling, cache placement, and communication resource allocation should be jointly designed. However, this is an extremely challenging issue.

In the architecture of F-RANs, multiple self-interested system agents exist: the central cloud server (CS), edge nodes (ENs), and mobile users (MUs). The CS provides contents to download and allocates radio and communication resources to ENs. ENs, known as fog-computing-based access points (F-APs), manage the allocated radio resource and admit MUs to provide application services. MUs wish to enjoy different QoS services from the F-RAN system. Different system agents have their own interests, which can conflict with one another, and each agent cares only about its own profit. Therefore, it is necessary to analyze the interactions among these conflicting system agents and design proper solutions. Although dozens of techniques have been proposed, a systematic study of the interactions among the CS, F-APs, and MUs is still lacking [7].
Recently, game theory has been widely studied for resolving conflict and cooperation in distributed optimization problems. As a branch of mathematics, it is suitable for analyzing the performance of multi-agent systems. In game theory, a variety of interactive behaviors can be described by different payoff functions, and the outcome of a game depends on the combination of decisions taken by each agent. Depending on the way the decisions are taken, games can be classified as cooperative or non-cooperative. Nowadays, game theory is widely recognized as an important tool in many fields. In the last few years, it has attracted considerable attention and has been investigated extensively in computer science and telecommunications [8]. This is one of the reasons why game theory is applied more and more intensively to cloud computing and network management.

However, traditional game-theoretic analysis relies on perfect-information and idealized-behavior assumptions. Therefore, there is a general consensus that the predicted game solutions are useful but would rarely be observed in real-world situations. Recently, specialized sub-branches of game theory have been developed to counter this problem. The main goal of this study is also to develop a new game paradigm. To design a practical game model for F-RAN system management, we adopt an online dynamic approach based on the interactive relationship among system agents. Our approach exploits partial information about the game and obtains an effective solution under mild and practical assumptions. From the standpoint of algorithm designers, our approach can be dynamically implemented in real-world F-RAN environments.
Motivated by the above discussion, we propose a novel F-RAN control scheme based on game-theoretic methodology. In this study, we design a new game model, called the embedded game, to effectively solve the conflict problem among F-RAN system agents. From a realistic point of view, we do not need complete knowledge of the system information. Instead, our game procedure imitates an interactive sequential game process while ensuring practicality. To summarize, the major contribution of this study is to provide a new game-based F-RAN control algorithm. The main features of our proposed game model are as follows: (i) adjustable dynamics considering the current F-RAN system environment, (ii) an interactive online process based on the embedded game model, (iii) a cooperative control manner with a nested non-cooperative approach, (iv) a joint design to obtain synergistic and complementary features, and (v) practicality under realistic system operation scenarios. In particular, the important novelty of our proposed scheme comes from the key principle of the embedded game approach, which can better capture the reality of F-RAN operations. To the best of our knowledge, very little research has been done, and there is still little published work discussing joint F-RAN control algorithms.
This article is organized as follows. In the next section, we review some related schemes and their problems. In Section 3, we define the embedded game model considered in this paper and explain the proposed F-RAN control scheme in detail. In particular, this section provides fresh insights into the benefits and design of the developed control algorithms. For convenience, the main steps of the proposed scheme are also listed in Section 3. Section 4 reports the simulation results and performance analysis. Lastly, we give our conclusion in Section 5, where open issues and challenges are also discussed.
2 Related work

Over the years, a considerable amount of state-of-the-art research on the radio access network control problem has been conducted. In [9], K. Sundaresan et al. proposed a scalable, lightweight scheme for realizing the full potential of C-RAN systems. For small cells, this scheme determined configurations that maximized the traffic demand while simultaneously optimizing the compute resource usage in the baseband processing unit pool. Briefly, the scheme developed in [9] adopted a two-step approach: (i) the first step determined the optimal combination of configurations needed to support the traffic demand from a set of small cells, and (ii) the second step consolidated the configurations to further reduce the compute resource usage [9].
The article [10] provided a brief overview of the infrastructure and logical structure of C-RAN systems. In addition, a new coordinated user scheduling algorithm and a parallel optimum precoding algorithm were specifically designed based on the concept of a service cloud and a three-layer logical structure. This approach utilized extensive computation resources to improve the C-RAN system performance. Compared to traditional C-RAN algorithms, the scheme developed in [10] matched well with the C-RAN architecture while managing interference efficiently and accelerating the cooperative processing in parallel.
Q. Vien et al. [11–13] proposed a non-orthogonal multiple access (NOMA)-based power allocation scheme for C-RAN systems. In this scheme, base stations were allocated different power levels depending on their distances to the cloud, and the optimal number of base stations in C-RAN systems was found so as to achieve improved performance. Specifically, a successive interference cancellation mechanism was designed at the cloud to overlay multiple base stations in the power domain. Taking into account the constraints of the total available power and the cloud-edge throughput requirement, this approach was shown to support a higher number of base stations than the existing scheme [11–13].
The paper [14] surveyed heterogeneous C-RAN research achievements and challenges and provided a summary of recent advancements in the computing convergence of heterogeneous wireless networks. In particular, it briefly summarized the system architecture, performance analysis, and key large-scale cloud-computing-based cooperative processing and networking techniques, including coordinated multipoint transmission, cooperative radio resource management, and self-organizing networks. Furthermore, potential challenges and open issues in heterogeneous C-RANs were discussed as well [14].
In [15], Sengupta et al. provided a latency-centric analysis of the degrees of freedom of an F-RAN by accounting for the total content delivery delay across the
fronthaul and wireless segments of the network. The
main goal of the analysis was the identification of optimal caching, fronthaul, and edge transmission policies.
In this study, the authors detailed a general model and a
novel performance metric, referred to as Normalized
Delivery Time (NDT), which captured the worst-case
delivery latency with respect to an ideal interference-free
system. Finally, they revealed optimal caching-fronthaul
transmission policies as a function of the system resources [15].
Azimi et al. [16] considered an online caching setup,
in which the set of popular files was time-varying and
both cache replenishment and content delivery could
take place in each time slot. They developed online
caching and delivery schemes based on both reactive
and proactive caching principles, and bounds on the corresponding achievable long-term NDTs were derived. In
particular, a lower bound on the achievable long-term
NDT was obtained. Using this bound, the performance
loss caused by the variations in the set of popular files in
terms of delivery latency was quantified by comparing
the NDTs achievable under offline and online caching.
Finally, numerical results were provided in which the
performance of reactive and proactive online caching
schemes were compared with offline caching [16].
The Traffic Balancing and Dynamic Clustering
(TBDC) scheme investigated the joint design of multicast beamforming, dynamic clustering, and backhaul
traffic balancing [17]. To minimize the power consumption for higher energy efficiency, the TBDC scheme designed the beamforming vectors and clustering pattern
in the downlink of F-RAN. This approach balanced the
backhaul traffic according to individual backhaul capacities, guaranteed the QoS of each user, and minimized
the power consumption. Especially, the TBDC scheme
dynamically excluded a radio unit from a cluster when it
contributed comparatively less to the corresponding
multicast group. If a radio unit contributed comparatively more to the corresponding multicast group, it
would be involved in a cluster in order to guarantee the
required QoS [17].
The Cloud Structure with Edge Caching (CSEC)
scheme presented an information-theoretic model of
F-RANs [6]. This scheme aimed at providing a
latency-centric understanding of the degrees of freedom in the F-RAN network by accounting for the
available limited resources in terms of fronthaul capacity, cache storage sizes, as well as power and bandwidth on the wireless channel. In addition, a new
performance measure was introduced; it captured the
worst-case latency incurred over the fronthaul. Finally,
the CSEC scheme characterized the trade-off between
the fronthaul and caching resources of the system
while revealing optimal caching-fronthaul transmission
policies [6].
The Joint Optimization of Cloud and Edge (JOCE) scheme introduced the joint design of cloud and edge processing for the downlink of F-RAN [3]. To design the delivery phase for an arbitrary pre-fetching strategy, transfer modes can be categorized into two classes: the hard-transfer mode and the soft-transfer mode. In the hard-transfer mode, non-cached files are communicated over the fronthaul links to a subset of access points; this approach transfers hard information of the subfiles that were not cached. In the soft-transfer mode, the fronthaul links are used to convey quantized baseband signals as in a cloud RAN; this approach transfers a quantized version of the precoded signals for the missing files, in line with the C-RAN paradigm. In the JOCE scheme, a novel superposition coding approach was proposed, based on the hybrid use of the fronthaul links in both the hard-transfer and soft-transfer modes. The problem of maximizing the delivery rate was tackled under fronthaul capacity and per-enhanced-remote-radio-head power constraints. The study concluded that the JOCE scheme based on superposition coding provided a more effective approach and could have the potential to strictly outperform both the conventional soft-transfer and hard-transfer modes [3].
2.1 Comparison and main contributions
Some earlier studies [9–16] have attracted considerable attention while introducing unique challenges in handling cloud radio control problems. Even though these existing schemes dynamically control the cloud radio access network for efficient system management, it is difficult to compare their performance directly with our proposed scheme. The scheme in [9] was developed only for small cells, i.e., houses, based on a partially centralized C-RAN model. The studies in [11–13] concentrated strongly on the non-orthogonal multiple access method to improve spectral efficiency; therefore, they focused specifically on the wireless downlink control problems in C-RAN systems. The papers [10, 14] surveyed various C-RAN research achievements and challenges and discussed issues of system architectures, spectral and energy efficiency performance, and promising key techniques. However, these surveys covered only the research fields of traditional cloud radio access methods. In particular, the earlier studies [9–14] did not consider the fog-computing paradigm; therefore, they did not provide cloud-like services to the network edge nodes.

The studies [15, 16] considered edge processing in F-RANs and specifically investigated fundamental information-theoretic limits. However, these schemes relied upon the cache-aided fog network paradigm, incurring extra cost to implement control mechanisms. This architecture-oriented approach is inappropriate for a fair performance comparison under general F-RAN system operations. The schemes in [3, 6, 17] have attracted considerable attention while introducing unique challenges in handling edge cloud control problems. In this paper, we demonstrate through extensive simulation and analysis that our proposed scheme significantly outperforms the existing TBDC, CSEC, and JOCE schemes.

The specific difference between the proposed scheme and the existing schemes in [3, 6, 9–17] is the decision-making procedure: we design a new embedded game model for the F-RAN system. Based on a step-by-step interactive mechanism, the proposed scheme jointly develops a spectrum allocation algorithm and a service admission algorithm; they are interlocked and serially correlated to capture the dynamics of F-RAN systems. Therefore, our approach is suitable for dynamically changing F-RAN environments and provides a better-balanced system performance than the existing schemes, which were designed as one-sided protocols.
3 Embedded game model for F-RAN control algorithms

In recent years, the F-RAN system has attracted much attention due to its significant benefits in meeting the enormous 5G application demands. Based on the general F-RAN architecture, different solutions have been proposed. In this section, the architecture of F-RAN is first introduced, and then the embedded game model is defined for effective F-RAN operations. Finally, we explain the proposed algorithm in detail as a ten-step procedure.

A. Embedded game model for F-RAN systems

In the C-RAN architecture, all control functions and application storage are centralized at the CS, which requires a large number of MUs to transmit and exchange their data fast enough through the fronthaul link. To overcome this disadvantage of C-RAN with its fronthaul constraints, much attention has been paid to mobile fog computing and the edge cloud. The fog-computing platform has been introduced to deliver large-scale latency-sensitive applications. To implement the fog-computing architecture, traditional edge nodes evolve into fog-computing-based access points (F-APs) by being equipped with certain caching, cooperative radio resource, and computation power capabilities [2, 18]. The main difference between the C-RAN and the F-RAN is that the centralized storage-cloud and control-cloud functions are distributed to individual F-APs. Usually, F-APs are used to forward and process the received data and to interface with the CS through the fronthaul links. To avoid all traffic being loaded
directly onto the centralized CS, some local traffic should be delivered from the caches located in the F-APs. Therefore, each F-AP integrates not only the front radio spectrum but also the locally distributed cached contents and computation capacity. This approach can save the spectral usage of the constrained fronthauls while decreasing the transmission delay. In conclusion, the main characteristics of F-RAN include ubiquity, decentralized management, and cooperation [2, 18]. The general architecture of an F-RAN system is shown in Fig. 1.

Fig. 1 General F-RAN system structure

During F-RAN system operations, the system agents, i.e., the CS, F-APs, and MUs, should make decisions individually by considering their mutual-interaction relationship. Under dynamic F-RAN environments, system agents try to maximize their own profits in a competitive or cooperative manner. In this study, we develop a new game model, called the embedded game, for the F-RAN system. According to the decision-making method, the embedded game procedure can be divided into two phases. In the first phase, the CS and F-APs play a superordinated game: the CS distributes the available spectrum resource to each F-AP in a cooperative manner. In the second phase, F-APs and MUs play subordinated games: employing a non-cooperative manner, an individual F-AP selectively admits its corresponding MUs to provide different application services. Taken as a whole, multiple subordinated games are nested in the superordinated game.

Formally, we define the embedded game model G = {G^super, G^sub_i, 1 ≤ i ≤ n}, where G^super is a superordinated game formulating the interactions between the CS and the F-APs, and G^sub_i is a subordinated game formulating the interactions between the ith F-AP and its corresponding MUs. Firstly, G^super can be defined as G^super = {ℕ, ℛ_CS, S^ℛ_CS, U_i (1 ≤ i ≤ n), T} at each time period t of gameplay:

- ℕ is the finite set of G^super game players, ℕ = {CS, F-AP_1, F-AP_2, …, F-AP_n}, i.e., a total of n + 1 players: one CS and n F-APs.
- ℛ_CS is the total spectrum resource of the CS, which is distributed to the n F-APs.
- S^ℛ_CS = {δ_1, δ_2, …, δ_n} is the set of the CS's strategies for spectrum resource allocation; δ_i ∈ S^ℛ_CS is the spectrum amount allocated to F-AP_i, 1 ≤ i ≤ n.
- U_i, 1 ≤ i ≤ n, is the payoff received by F-AP_i, estimated as the obtained outcome minus the cost of the allocated spectrum resource.
- T is a time period; G^super is repeated over t ∈ T < ∞ time periods with imperfect information.

Secondly, G^sub_i is the ith subordinated game, and it can be defined as G^sub_i = {M_i, ℜ_i, S^{δ_i}_{F-AP_i}, S^{C_i}_{F-AP_i}, S^{σ_i}_{F-AP_i}, U^i_j (1 ≤ j ≤ m), T} at each time period t of gameplay:

- M_i is the finite set of G^sub_i game players, M_i = {F-AP_i, MU^i_1, …, MU^i_m}, where MU^i_j, 1 ≤ j ≤ m, is the jth MU in the area covered by F-AP_i.
- ℜ_i = {δ_i, C_i, σ_i} is the set of F-AP_i's resources, where δ_i, C_i, and σ_i are the allocated spectrum resource, the computation capacity, and the placed cache files of F-AP_i, respectively.
- S^{δ_i}_{F-AP_i}, S^{C_i}_{F-AP_i}, and S^{σ_i}_{F-AP_i} are the sets of F-AP_i's strategies for spectrum allocation to MUs, computation capacity assignment to MUs, and cache placement in F-AP_i, respectively.
- U^i_j, 1 ≤ j ≤ m, is the payoff that MU^i_j receives from F-AP_i.
- T is a time period; G^sub_i is repeated over t ∈ T < ∞ time periods with imperfect information.

Table 1 lists the notations used in this paper.
Table 1 Parameters used in the proposed algorithm

| Notation | Explanation |
| --- | --- |
| CS | Cloud server |
| F-APs | Fog-computing-based access points |
| MUs | Mobile users |
| ENs | Edge nodes |
| ℕ | The finite set of superordinated game players |
| ℛ_CS | The total spectrum resource of the CS |
| δ_i | The spectrum amount allocated to F-AP_i |
| δ_i(t − Δt) | The δ_i value at the time period [t − Δt] |
| U_i(Δt) | The payoff received by F-AP_i during the recent Δt time period |
| M_i | The finite set of subordinated game players |
| ℜ_i | The set of F-AP_i's resources |
| C_i | The computation capacity of F-AP_i |
| σ_i | The cache files placed in F-AP_i |
| U^i_j | The payoff that MU^i_j receives from F-AP_i |
| β | Parameter that weighs the past experience, considering a trust decay over time |
| ϕ | Parameter that specifies the impact of past experience |
| T_i(t) | The trust assessment of F-AP_i at time t |
| F^t_KSBS | The KSBS at time t |
| d = (d_1, …, d_n) | Disagreement point when players cannot reach an agreement |
| ω^t_i | The bargaining power of player F-AP_i at time t |
| ℝ^n | The jointly feasible utility solution set |
| τ | Factor characterizing the file popularity |
| M = {ℳ_1, …, ℳ_L} | Multimedia file set consisting of L popular multimedia files |
| Q = [ℳ_1, …, ℳ_L] | Vector representing the popularity distribution over M |
| I = [0, 1]^(n×L) | Two-dimensional matrix indicating the cache placement |
| Z^l_i | The revenue from caching the lth file in F-AP_i |
| ℭ^l_i | The cost of caching the lth file in F-AP_i |
| Θ^i_j | New service request of MU^i_j |
| Min_S(Θ^i_j) | The minimum spectrum requirement of Θ^i_j |
| Min_C(Θ^i_j) | The minimum computation requirement of Θ^i_j |
| χ^i | The spectrum amount currently in use in F-AP_i |
| y^i | The computation amount currently in use in F-AP_i |
| X^i | The current fronthaul transmission rate |
| M^i | The maximum fronthaul transmission rate |
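To make the two-level structure above concrete, the following minimal Python sketch models the superordinated and subordinated game tuples as plain data structures. It is only an illustration of the notation: all class and attribute names (SuperordinatedGame, SubordinatedGame, and so on) are our own choices, not from the paper, and the payoff functions are deliberately left abstract.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class SubordinatedGame:
    """G^sub_i: the i-th F-AP versus its m covered mobile users (MUs)."""
    fap_id: int
    mu_ids: List[int]                       # MU^i_1 ... MU^i_m
    delta: float = 0.0                      # spectrum delta_i granted by the CS
    capacity: float = 0.0                   # computation capacity C_i
    cached_files: List[int] = field(default_factory=list)       # placement sigma_i
    mu_payoffs: Dict[int, float] = field(default_factory=dict)  # U^i_j per MU

@dataclass
class SuperordinatedGame:
    """G^super: the CS splitting its total spectrum R_CS among n F-APs."""
    total_spectrum: float                   # R_CS
    faps: List[SubordinatedGame]            # one nested subordinated game per F-AP

    def allocate(self, shares: List[float]) -> None:
        """Apply one CS strategy {delta_1, ..., delta_n}; shares must sum to R_CS."""
        assert abs(sum(shares) - self.total_spectrum) < 1e-9
        for fap, delta in zip(self.faps, shares):
            fap.delta = delta

# One round of gameplay: the CS allocates, then each F-AP runs its own nested game.
game = SuperordinatedGame(total_spectrum=10.0,
                          faps=[SubordinatedGame(0, [0, 1]), SubordinatedGame(1, [2])])
game.allocate([6.0, 4.0])
```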
B. Solution concept for the superordinated game

In the superordinated game, the game players are the CS and the F-APs, and they are rational and seek a win-win situation. In many situations, each rational agent is able to improve its objective without preventing others from improving theirs; therefore, the players are prone to coordinate and are willing to play cooperative games [19]. Usually, solution concepts differ across games. For the CS and F-AP interactions, the Kalai and Smorodinsky Bargaining Solution (KSBS) is an interesting solution concept. Like the well-known Nash Bargaining Solution (NBS), the KSBS provides a fair and optimal solution in a cooperative manner. In addition, the KSBS can be used when the feasible payoff set is not convex; this is the main advantage of the KSBS over the NBS. Due to this appealing property, the KSBS approach has been practically implemented to solve real-world problems [8].
In order to show the effectiveness of the KSBS,
it is necessary to evaluate each player’s
credibility. In this paper, we obtain the KSBS
based on the F-APs’ trustworthiness. This information can be inferred implicitly from the
F-APs’ outcome records. Therefore, we can enhance the effectiveness of KSBS while restricting
the socially uncooperative F-APs. At time t, the
F ‐ APi’s trust assessment ðT i tð ÞÞ for the
spectrum allocation process is denoted by
$$
T_i(t) = \left\{(1-\beta)\cdot T_i(t-\Delta t)\right\} + \left\{\beta \cdot \frac{U_i(\Delta t)\big/\sum_{j=1}^{n} U_j(\Delta t)}{\delta_i(t-\Delta t)\big/\mathcal{R}_{CS}}\right\},
\qquad \text{s.t.}\;\; \beta = \frac{\phi\cdot T_i(t-\Delta t)}{1+\phi\cdot T_i(t-\Delta t)},\;\; \phi \ge 0
\tag{1}
$$

where Ui(Δt) is the throughput of the F-APi during the recent Δt time period, and δi(t − Δt) is the δi value at time period [t − Δt]. The parameter β weighs past experience by applying a trust decay over time, and the parameter ϕ specifies the impact of past experience on Ti(t − Δt); essentially, the contribution of current information increases proportionally as ϕ increases. In this way we can adapt effectively to currently changing conditions while improving resiliency against credibility fluctuations [20]. In Eq. (1), the first term is the Ti value of the previous time period, and the second term represents the change ratio of δi to Ui at the current time period. From the point of view of the CS, Ti(t) is a weighted average of these two terms.

Under the dynamic F-RAN environment, we assume that the F-APs individually request their spectrum resources from the CS at each time period. To respond adaptively to the current F-RAN system conditions, the sequential KSBS bargaining approach computes a different KSBS at each time period, so it can track dynamic F-RAN situations in a timely manner. At time t, the timed KSBS (F^t_KSBS) for the spectrum resource problem is mathematically defined as

$$
F^{t}_{KSBS}\left(S^{\mathcal{R}}_{cs}\right) = \{\delta_1(t), \delta_2(t), \ldots, \delta_n(t)\}:\quad
\frac{\sup U^{t}_{1}(\delta_1(t)) - d_1}{\omega^{t}_{1}\left(O^{t}_{1} - d_1\right)} = \cdots = \frac{\sup U^{t}_{i}(\delta_i(t)) - d_i}{\omega^{t}_{i}\left(O^{t}_{i} - d_i\right)} = \cdots = \frac{\sup U^{t}_{n}(\delta_n(t)) - d_n}{\omega^{t}_{n}\left(O^{t}_{n} - d_n\right)}
$$
$$
\text{s.t.}\quad O^{t}_{i} = \max\left\{ U^{t}_{i}(\delta_i(t)) \,\middle|\, U^{t}_{i}(\delta_i(t)) \in \mathbb{R}^{n} \right\},\qquad
\omega^{t}_{i} = T_i(t) \Big/ \sum_{j=1}^{n} T_j(t),
$$
$$
\sup U^{t}_{i}(\delta_i(t)) = \sup\left\{ U^{t}_{i}(\delta_i(t)) : \left(U^{t}_{1}(\delta_1(t)), \ldots, U^{t}_{n}(\delta_n(t))\right) \subset \mathbb{R}^{n} \right\}
\tag{2}
$$
where U^t_i(δi(t)) is the F-APi's payoff under strategy δi during the recent time period (Δt). ℝ^n is the jointly feasible utility solution set, and the disagreement point d = (d1, …, dn) ∈ ℝ^n is the action vector expected to result if the players, i.e., the F-APs, cannot reach an agreement (zero in our system). ω^t_i (0 < ω^t_i < 1) is the player F-APi's bargaining power at time t, i.e., its relative ability to exert influence over the other players, and O^t_i is the ideal point of player F-APi at time t. Players therefore choose the best outcome subject to the condition that each player's proportional share of the excess over the disagreement point is relative to the proportion of the excess of its ideal gain. Geometrically, F^t_KSBS(S^ℛ_cs) is the intersection between the bargaining set S^ℛ_cs and the line drawn from the disagreement point (d) to the best utilities, i.e., the ideal gains, of the players. Put simply, the KSBS is the maximal point that maintains the ratios of gains [21]. Therefore, F^t_KSBS(S^ℛ_cs) = {δ1(t), δ2(t), …, δn(t)} = {sup U^t_1(δ1(t)), sup U^t_2(δ2(t)), …, sup U^t_n(δn(t))} is the joint strategy taken by the CS at time t.
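As an illustration of the trust update in Eq. (1), here is a minimal sketch (the function and variable names are ours, not the paper's); it assumes the per-F-AP throughputs Ui(Δt) and previous allocations δi(t − Δt) have already been measured:

```
def update_trust(T_prev, U, delta_prev, R_cs, phi=0.2):
    """Sketch of the trust update in Eq. (1).

    T_prev[i]     : trust T_i(t - dt) of F-AP i
    U[i]          : throughput U_i(dt) of F-AP i over the last period
    delta_prev[i] : spectrum delta_i(t - dt) allocated to F-AP i
    R_cs          : total spectrum resources of the CS
    phi           : impact of past experience (phi >= 0)
    """
    total_U = sum(U)
    T_new = []
    for Ti, Ui, di in zip(T_prev, U, delta_prev):
        beta = (phi * Ti) / (1.0 + phi * Ti)      # "s.t." clause of Eq. (1)
        ratio = (Ui / total_U) / (di / R_cs)      # normalized throughput per normalized allocation
        T_new.append((1.0 - beta) * Ti + beta * ratio)
    return T_new
```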
In non-deterministic settings, F^t_KSBS(S^ℛ_cs) is a selection function that defines a specific spectrum allocation strategy for every F-AP. Owing to the main feature of the KSBS, enlarging the bargaining set in a direction favorable to a specific F-AP always benefits that F-AP; therefore, in our superordinated game, a self-interested F-AP can be satisfied during F-RAN system operation. To obtain the F^t_KSBS(S^ℛ_cs) of Eq. (2) in practice, we can recast the KSBS as a weighted max-min solution:

$$
F^{t}_{KSBS}\left(S^{\mathcal{R}}_{cs}\right) = \{\delta_1(t), \delta_2(t), \ldots, \delta_n(t)\}
= \arg\max_{\{\delta_1(t), \delta_2(t), \ldots, \delta_n(t)\}} \left\{ \min_{1 \le i \le n} \left( \frac{\sup U^{t}_{i}(\delta_i(t)) - d_i}{\omega^{t}_{i}\left(O^{t}_{i} - d_i\right)} \right) \right\}
\tag{3}
$$
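Read in code, Eq. (3) normalizes each F-AP's candidate payoff by its bargaining power and ideal gain and then picks the joint allocation whose worst normalized gain is largest. The grid search below is only a sketch under simplifying assumptions (finite candidate strategies per F-AP, no feasibility check against ℛCS; all names are ours):

```
from itertools import product

def timed_ksbs(candidates, payoff, omega, ideal, d):
    """Pick the joint allocation maximizing the minimum weighted
    normalized excess over the disagreement point (Eq. (3)).

    candidates : per-F-AP lists of candidate spectrum amounts
    payoff     : payoff(i, delta_i) -> U_i for F-AP i
    omega, ideal, d : bargaining powers, ideal points, disagreement payoffs
    """
    best, best_score = None, float("-inf")
    for alloc in product(*candidates):        # enumerate joint strategies
        score = min((payoff(i, a) - d[i]) / (omega[i] * (ideal[i] - d[i]))
                    for i, a in enumerate(alloc))
        if score > best_score:
            best, best_score = alloc, score
    return best
```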
C. Solution concept for the subordinated games

Edge processing is a key emerging trend in the F-RAN system. It refers to the localization of computing, communication, and storage resources at the F-APs. In the F-RAN architecture, F-APs are connected to the CS through fronthaul links. Under this centralized structure, the performance of F-RANs is clearly constrained by the fronthaul link capacity, which carries a high burden. A prerequisite for centralized CS processing is therefore high-bandwidth, low-latency fronthaul interconnections. However, during F-RAN operation, unexpected growth in service requests may create traffic congestion, with a significant impact on F-RAN performance. To overcome the disadvantages that the fronthaul constraints impose on the F-RAN architecture, new techniques have been introduced that aim to reduce the delivery latency by limiting the need to communicate between the CS and the MUs [6].

Currently, there is evidence that MUs' downloading of on-demand multimedia data is the major cause of the data avalanche over the F-RAN; numerous repeated requests for the same data lead to redundant transmissions. Usually, multimedia data are located in the CS, far away from the MUs. To ensure excellent QoS provisioning, an efficient solution is to store these frequently accessed data locally in the cache memory of the F-APs, reducing the transmission latency; this is known as caching. It effectively mitigates the unnecessary fronthaul overhead caused by MUs' repeated service requests, so the CS, the F-APs, and the MUs all benefit from the local caching mechanism [22].

In the subordinated game, an efficient caching mechanism is designed by carefully considering the relations and interactions among the CS, F-APs, and MUs. This approach relieves the heavy traffic load on the fronthaul links and also decreases the request latency, resulting in better QoS [6]. A practical caching mechanism is coupled with data placement. In our F-RAN architecture, we assume that a multimedia file set M = {ℳ1, …, ℳL} of L popular multimedia files resides in the CS, and files in M can be cached in each F-AP. The popularity
distribution among M is represented by a vector Q
= [g1,…, gL]. Generally, the vector Q can be modeled
by a Zipf distribution [22];
$$
g_l = \frac{1/l^{\tau}}{\sum_{f=1}^{L} 1/f^{\tau}}, \qquad \text{s.t.}\; 1 \le l \le L \;\text{and}\; \tau > 0
\tag{4}
$$
where the τ factor characterizes the file popularity. In this study, we assume that the MUs in each F-AP area request the l-th file ℳl, 1 ≤ l ≤ L, independently, so the τ value differs for each F-AP. According to Eq. (4), ℳ1 (or ℳL) has the highest (or lowest) popularity. The CS intends to rent a frequently accessed fraction of M for caching so as to maximize the F-RAN system performance. We denote the caching placement strategy by a two-dimensional matrix I ∈ [0, 1]^{n×L} of binary entries, where 1 indicates caching placement in an F-AP and 0 indicates its absence:

$$
I \triangleq \begin{bmatrix} I^{1}_{1} & \cdots & I^{L}_{1} \\ \vdots & \ddots & \vdots \\ I^{1}_{n} & \cdots & I^{L}_{n} \end{bmatrix} \in [0,1]^{n \times L}
\tag{5}
$$

where I^l_i = 1 means that the file ℳl is cached at the F-APi and I^l_i = 0 means the opposite. For the F-APi, the profit ℜ^c_i gained from the local caching mechanism is defined as

$$
\Re^{c}_{i} = \sum_{l=1}^{L} \left( g^{i}_{l} \cdot \mathcal{L}^{i} \cdot Z^{i}_{l} \cdot I^{l}_{i} \right) - \sum_{l=1}^{L} \left( \mathfrak{C}^{i}_{l} \cdot I^{l}_{i} \right), \qquad \text{s.t.}\; g^{i}_{l} \in Q^{i}
\tag{6}
$$

where Q^i is the vector Q of the F-APi and ℒ^i is the average total number of service requests. Z^i_l and ℭ^i_l are the revenue and cost, respectively, of caching the l-th file in the F-APi. From the viewpoint of the F-APi, the fraction [I^1_i … I^L_i] of I needs to be optimized to maximize ℜ^c_i.
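To make Eqs. (4)-(6) concrete, here is a small sketch (names are ours) that builds the Zipf popularity vector and greedily caches the files with the highest per-file profit g^i_l·ℒ^i·Z^i_l − ℭ^i_l; the greedy row selection is one simple heuristic for choosing [I^1_i … I^L_i], not a procedure prescribed by the paper:

```
def zipf_popularity(L, tau):
    """Eq. (4): g_l = (1 / l^tau) / sum_f (1 / f^tau)."""
    norm = sum(1.0 / f**tau for f in range(1, L + 1))
    return [(1.0 / l**tau) / norm for l in range(1, L + 1)]

def greedy_cache_row(g, requests, revenue, cost, cache_slots):
    """Greedy choice of the row I^l_i maximizing the caching profit of Eq. (6)."""
    profit = [g[l] * requests * revenue[l] - cost[l] for l in range(len(g))]
    ranked = sorted(range(len(g)), key=lambda l: profit[l], reverse=True)
    chosen = set(l for l in ranked[:cache_slots] if profit[l] > 0)
    return [1 if l in chosen else 0 for l in range(len(g))]
```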
Based on the current caching placement, a Service Admission Control (SAC) algorithm should be developed to make admission decisions that maximize the spectrum efficiency while maintaining a desirable overhead level. In particular, when the requested services are heavy, that is, when the sum of the requested resource amounts exceeds the currently available system capacity, the SAC algorithm decides whether or not to accept a new service request. Based on the acceptance conditions, such as the current caching status and resource capacity, the SAC problem can be formulated as a joint optimization problem in which we maximize the spectrum efficiency while minimizing the fronthaul overhead.

In the proposed scheme, we set out to obtain fundamental insights into the SAC problem by means of a game-theoretic approach. The subordinated game is therefore designed to formulate the interactions between an F-AP and its MUs while investigating the system dynamics with imperfect information. To implement our subordinated game, we adopt the concept of the dictator game, a game from experimental economics similar to the ultimatum game and first developed by D. Kahneman et al. [23]. In the dictator game, one player, called the proposer, distributes his resource, and the other players, called the responders, simply accept the proposer's decision. As a form of decision theory, the dictator game is treated as an exceptional non-cooperative game, or a multi-agent system game, that has a partner feature and involves a trade-off between self- and other-utility. Owing to its simplicity, the dictator game captures an essential characteristic of repeated interaction situations [8].

In the proposed subordinated game model, each F-AP is the proposer and the MUs are the responders. They interact with each other and repeatedly work together toward an appropriate F-RAN performance. To make effective SAC decisions, the proposer considers the current system conditions, such as the available spectrum amount, the current caching placement, and the fronthaul overhead status. By a sophisticated combination of these conflicting condition factors, the proposer attempts to approximate a temporarily optimal SAC decision. The SAC decision procedure is shown in Fig. 2.

According to the SAC procedure, each F-APi can maintain the finest SAC solution while avoiding heavy computational complexity and overheads. For the subordinated game, we propose a new solution concept, the Temporal Equilibrium (TE). In the proposed scheme, all MUs compulsorily follow the decisions of the F-APs, and the outcome profile of our SAC process constitutes the TE, which is the current service status:

$$
TE = \left\{ \Theta^{i}_{j},\; 1 \le j \le m \;\middle|\; \Theta^{i}_{j} \in (\mu_i \cup \psi_i) \right\},\qquad
\Theta^{i}_{j} \in \begin{cases} \mu_i, & \text{if } \Theta^{i}_{j} \text{ is accepted} \\ \psi_i, & \text{otherwise} \end{cases}
\tag{7}
$$

where T→ℰ_i = μi ∪ ψi denotes the set of MUs in the F-APi, and μi and ψi are the sets of requests accepted and rejected by the F-APi, respectively. Therefore, the TE is the status quo of the dictator game.
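The admission rule behind Fig. 2 and Eq. (7) can be sketched as follows; the acceptance test (enough spare spectrum and computation, fronthaul not congested) is our reading of the procedure, and the exact threshold and field names are assumptions:

```
def sac_decide(request, fap, congestion_factor=0.95):
    """Sketch of one SAC decision: accept the request into mu, else psi (Eq. (7)).

    request : dict with min_spectrum, min_compute (Min_S, Min_C)
    fap     : dict with delta (allocated spectrum), C (compute capacity),
              used_spectrum, used_compute, X (current fronthaul rate),
              M (maximum fronthaul rate), and lists mu / psi
    """
    spare_spectrum = fap["delta"] - fap["used_spectrum"]
    spare_compute = fap["C"] - fap["used_compute"]
    fronthaul_ok = fap["X"] <= congestion_factor * fap["M"]
    accept = (request["min_spectrum"] <= spare_spectrum
              and request["min_compute"] <= spare_compute
              and fronthaul_ok)
    (fap["mu"] if accept else fap["psi"]).append(request)   # Eq. (7)
    return accept
```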
-----

D. The main steps of the proposed F-RAN control algorithm
For 5G wireless communications, the F-RAN architecture is a promising paradigm for providing high spectrum efficiency and improved QoS. The core idea of the F-RAN is to take full advantage of distributed edge processing, cooperative spectrum allocation, and reduced communication latency [24–26]. However, it is questionable whether existing F-RAN control schemes are adequate in dynamically changing F-RAN environments. In this study, we have therefore studied the joint design of spectrum allocation and SAC decisions in an F-RAN architecture. Focusing on practical assumptions, we develop a new embedded game model while investigating the benefits and challenges of F-RAN control mechanisms. In our embedded game approach, the superordinated game for spectrum allocation and the subordinated games for SAC decisions are interlocked and serially correlated: each subordinated game depends on the outcome of the superordinated game, and the results of the subordinated games are fed back into the superordinated game process. Structurally, the multiple subordinated games are nested in the superordinated game and linked through a step-by-step interactive feedback process. This may be the only realistic approach to solving complex and dynamically changing F-RAN control problems. Traditional optimal, centralized algorithms usually have exponential time complexity, whereas the proposed distributed control method has only polynomial time complexity. The main steps of the proposed F-RAN control algorithm are given next (see Fig. 3).
Step 1: At the initial time, the spectrum resource allocation S^ℛ_cs = {δ1, δ2, …, δn} and the trustworthiness (T) of the F-APs are distributed equally. This starting guess guarantees that each F-AP enjoys the same benefit at the beginning of the game.
Step 2: Control parameters are given by the simulation scenario (see Table 2). To allow a fair comparison with the existing schemes, the system parameters were selected carefully in our simulation model.
Step 3: Taking the current F-RAN situation into account, our superordinated and subordinated games are executed in parallel.
-----
Table 2 System parameters used in the simulation experiments

| Application type | Computation offloading | Computation requirement | Minimum spectrum requirement | Maximum spectrum requirement |
|---|---|---|---|---|
| I | Y | 300 MHz/s | 128 kbps | 128 kbps |
| II | N | N/A | 256 kbps | 768 kbps |
| III | Y | 600 MHz/s | 384 kbps | 640 kbps |
| IV | N | N/A | 512 kbps | 1.28 Mbps |

| Parameter | Value | Description |
|---|---|---|
| n | 10 | The number of F-APs |
| ℛCS | 200 Mbps | The total spectrum resources of the CS |
| C | 5 GHz | The F-AP's computation capacity |
| ϕ | 0.2 | A factor to specify the impact of recent experience |
| Δt | 1 s | The time interval to monitor the F-RAN system |
| Z | 5 / one bps | The revenue from caching per one bps |
| ℭ | 1 / one bps | The cost of caching per one bps |
| τ | [0.1–0.9] | A factor to characterize the file popularity: randomly selected per F-AP |
| L | 10 | The number of popular multimedia files in the CS for caching |
| M | 30 Mbps | The maximum fronthaul transmission rate |
| | 0.95 | A control factor to consider the fronthaul congestion |
-----
Step 4: The trustworthiness (T) of each F-AP is updated periodically using Eq. (1).
Step 5: At each superordinated game period, S^ℛ_cs = {δ1, δ2, …, δn} is adjusted dynamically in the timed KSBS manner. According to Eq. (2), F^t_KSBS(S^ℛ_cs) is obtained and each δ value is decided for the next game period.
Step 6: At each subordinated game period, the caching placement in each F-AP is chosen to maximize ℜ^c according to Eqs. (4), (5), and (6).
Step 7: In a distributed manner, each F-AP makes the MUs' admission decisions based on the service admission procedure of Fig. 2, and the TE is obtained using Eq. (7).
Step 8: The superordinated and subordinated games are interlocked and serially correlated. Based on the assigned δ value, each F-AP plays its subordinated game, and the result of each subordinated game is fed back into the superordinated game.
Step 9: Through this interactive feedback mechanism, the dynamics of the embedded game cause cascading interactions among the game players, who can make their decisions so as to quickly find the most profitable solution.
Step 10: Under widely diverse F-RAN environments, the CS and the F-APs constantly self-monitor for the next embedded game process; proceed to Step 3.
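Putting Steps 1-10 together, the control loop has roughly the following shape. This is a sketch reusing the update_trust helper sketched earlier; the throughput measurement and the Step 5 reallocation are crude proxies of our own, not the paper's full simulator:

```
import random

def embedded_game_loop(n=10, R_cs=200.0, periods=50, phi=0.2):
    """Sketch of Steps 1-10: throughput is faked as allocation times a
    random efficiency, and the timed KSBS of Step 5 is approximated by
    a trust-weighted share of the total spectrum."""
    T = [1.0 / n] * n                       # Step 1: equal initial trust
    delta = [R_cs / n] * n                  # Step 1: equal initial spectrum
    for _ in range(periods):                # Steps 3-10: repeated rounds
        # stand-in for the measured throughputs U_i(dt)
        U = [d * random.uniform(0.5, 1.0) for d in delta]
        T = update_trust(T, U, delta, R_cs, phi)     # Step 4, Eq. (1)
        omega = [Ti / sum(T) for Ti in T]            # bargaining powers, Eq. (2)
        delta = [w * R_cs for w in omega]            # Step 5: crude proxy for Eq. (3)
        # Steps 6-7 (caching placement and SAC) would run per F-AP here,
        # as sketched above with greedy_cache_row() and sac_decide().
    return delta, T
```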
4 Performance evaluation
In this section, we compare the performance of our F-RAN control scheme with other existing schemes [3, 6, 17] using a simulation model, and we confirm the performance superiority of the proposed approach. To compare the system performance fairly, the assumptions and the detailed system scenario are as follows.
• The simulated system consists of one CS, 10 F-APs, and multiple MUs. The number of MUs (m) for each F-AP is generated by the new-service-request process.
• The process for new service requests is Poisson with rate λ (services/s), and the offered service load was varied from 0 to 3 (see the workload sketch after this list).
• There are four different service applications. They are generated randomly by the MUs, and some of them are computation offloading tasks.
• The durations of service applications are exponentially distributed.
• The total spectrum resource of the CS (ℛCS) is 200 Mbps.
• For each F-AP, the computation capacity (C) is 5 GHz and the fronthaul link capacity is 30 Mbps.
• The cache size in each F-AP equals the file set M in the CS.
• System performance measures, obtained from 100 simulation runs, are plotted as functions of the service generation rate.
• For simplicity, we assume the absence of physical obstacles in the experiments.
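A minimal way to generate this workload (Poisson arrivals and exponentially distributed holding times, with the four application types of Table 2; the uniform type mix is our assumption, not stated in the paper):

```
import random

APP_TYPES = ["I", "II", "III", "IV"]      # application types of Table 2

def generate_requests(lam, horizon, mean_duration=1.0, seed=0):
    """Poisson arrivals with rate lam (services/s) over `horizon` seconds;
    each request gets an exponential duration and a random type."""
    rng = random.Random(seed)
    t, requests = 0.0, []
    while True:
        t += rng.expovariate(lam)          # exponential inter-arrival times
        if t > horizon:
            return requests
        requests.append({
            "arrival": t,
            "duration": rng.expovariate(1.0 / mean_duration),
            "type": rng.choice(APP_TYPES),
        })
```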
Performance measures obtained through simulation
are the normalized F-RAN throughput, spectrum efficiency, and fronthaul transmission delay. To facilitate
the development and implementation of our simulator,
Table 2 lists the system control parameters.
Figure 4 shows the performance comparison of each scheme in terms of the normalized system throughput, estimated as the total data transmission in the F-RAN system. From the viewpoint of the system operator, this is a key factor in F-RAN management. It can be seen from Fig. 4 that the throughput of all schemes increases as the service request rate increases, and we can confirm the performance superiority of our scheme. The proposed scheme's performance gain comes from (i) the effective coordination paradigm of the embedded game model, and (ii) the joint design of the spectrum allocation and SAC decision algorithms, which yields synergistic and complementary features. Our scheme therefore performs better than the other existing schemes, which were designed as one-sided protocols and do not adapt to the current F-RAN system conditions.
In Fig. 5 we plot the spectrum efficiency, i.e., the bandwidth usage ratio of the F-RAN system. In general, as the service request rate increases, the spectrum efficiency also increases, which is intuitively correct. In our embedded game approach, all system agents adaptively interact with each other and decide their strategies based on the current F-RAN system conditions. Because we allocate the spectrum resource in the timed KSBS manner, we can maintain a higher spectrum efficiency. Figure 5 clearly indicates that the proposed scheme handles the resource allocation problem more effectively than the existing schemes [3, 6, 17], from low to heavy service load distributions.
Figure 6 shows the fronthaul transmission delay curves of the four different schemes, estimated as the normalized time delay between the CS and its corresponding MU; this is one of the most important metrics for quantifying the F-RAN's QoS performance. The results show that the proposed scheme, with its adaptive SAC mechanism, achieves a significantly lower transmission delay. The simulation results shown in Figs. 4, 5, and 6 compare the proposed scheme with the other existing schemes [3, 6, 17] and verify that the proposed embedded game approach strikes an appropriate balance between system throughput, spectrum efficiency, and transmission delay; the Joint Optimization of Cloud and Edge (JOCE) scheme [3], the Cloud Structure with Edge Caching (CSEC) scheme [6], and the Traffic Balancing and Dynamic Clustering (TBDC) scheme [17] cannot offer such an attractive performance balance.
5 Conclusions

As a promising paradigm for the 5G communication system, the F-RAN has been proposed as an advanced, socially aware wireless networking architecture that provides higher spectral efficiency while maximizing system performance. In this study, we have studied the joint design of cloud and edge processing in the F-RAN system to solve the resource allocation and SAC problems. Based on the newly developed embedded game model, we have explored the feasibility of the F-RAN control decision process and its practicality for real-world implementation. In our embedded game structure, the SAC algorithm is nested in the spectrum allocation algorithm to effectively control the conflicts among F-RAN system agents. Based on the interactive feedback mechanism, the proposed scheme has the potential to handle multiple targets without resorting to more complex multi-target tracking algorithms. The extensive simulation results are very encouraging, showing that our embedded game-based approach provides a more effective way to control the F-RAN system than the other existing schemes. Open issues for further research are the design and validation of F-RAN systems for big data mining, cognitive radio, software-defined networking, and network security problems. Progress in trial tests and test-bed development of F-RANs can be anticipated, which should bring F-RANs' commercial rollout forward as early as possible.
Acknowledgements
This research was supported by the MSIP (Ministry of Science, ICT and Future
Planning), Korea, under the ITRC (Information Technology Research Center)
support program (IITP-2017-2014-0-00636) supervised by the IITP (Institute for
Information & communications Technology Promotion), and was supported by
Basic Science Research Program through the National Research Foundation of
Korea (NRF) funded by the Ministry of Education (NRF-2015R1D1A1A01060835)
Author's contribution
SK is the sole author of this work; he participated in the design of the study and performed the statistical analysis.
Competing interests
The author, Sungwook Kim, declares that there are no competing interests regarding the publication of this paper.
6 Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in
published maps and institutional affiliations.
Received: 28 January 2017 Accepted: 7 June 2017
References
1. S-C Hung, H Hsu, S-Y Lien, K-C Chen, Architecture Harmonization Between
Cloud Radio Access Networks and Fog Networks. IEEE Access 3, 3019–3034
(2015)
2. R Tandon, O Simeone, Harnessing cloud and edge synergies: toward an
information theory of fog radio access networks. IEEE Commun. Mag. 54(8),
44–50 (2016)
3. S-H Park, O Simeone, S Shamai, Joint optimization of cloud and edge
processing for fog radio access networks (IEEE ISIT, 2016), pp. 315–319
4. AV Dastjerdi, R Buyya, Fog Computing: Helping the Internet of Things
Realize Its Potential. Computer 49(8), 112–116 (2016)
5. P Borylo, A Lason, J Rzasa, A Szymanski, A Jajszczyk, Energy-aware fog and
cloud interplay supported by wide area software defined networking (IEEE ICC,
2016), pp. 1–7
6. R Tandon, O Simeone, Cloud-aided wireless networks with edge caching:
Fundamental latency trade-offs in fog Radio Access Networks (IEEE ISIT, 2016),
pp. 2029-2033
7. Z Hu, Z Zheng, T Wang, L Song, X Li, Game theoretic approaches for
wireless proactive caching. IEEE Commun. Mag. 54(8), 37–43 (2016)
8. S Kim, Game Theory Applications in Network Design (IGI Global, Hershey,
2014)
9. K Sundaresan, MY Arslan, S Singh, S Rangarajan, SV Krishnamurthy, FluidNet:
A Flexible Cloud-Based Radio Access Network for Small Cells. IEEE/ACM
Trans. Networking 24(2), 915–928 (2016)
10. J Wu, Z Zhang, H Yu, Y Wen, Cloud radio access network (C-RAN): a primer.
IEEE Netw. 29(1), 35–41 (2015)
11. Q-T Vien, N Ogbonna, HX Nguyen, R Trestian, P Shah, in Non-Orthogonal
Multiple Access for Wireless Downlink in Cloud Radio Access Networks,
Proceedings of European Wireless (2015), pp. 1-6
-----
12. Q-T Vien, TA Le, B Barn, CV Phan, Optimising energy efficiency of nonorthogonal multiple access for wireless backhaul in heterogeneous cloud
radio access network. IET Commun. 10(18), 2516–2524 (2016)
13. HQ Tran, PQ Truong, CV Phan, Q-T Vien, On the energy efficiency of NOMA
for wireless backhaul in multi-tier heterogeneous CRAN (SigTelCom, 2017), pp.
229–234
14. M Peng, Y Li, J Jiang, J Li, C Wang, Heterogeneous cloud radio access
networks: a new perspective for enhancing spectral and energy efficiencies.
IEEE Wirel. Commun. 21(6), 126–135 (2014)
15. A Sengupta, R Tandon, O Simeone, Fog-Aided Wireless Networks for Content
[Delivery: Fundamental Latency Trade-Offs, (2015) [Online]. Available: https://](https://arxiv.org/abs/1605.01690)
[arxiv.org/abs/1605.01690. Accessed 16 Apr 2017.](https://arxiv.org/abs/1605.01690)
16. SM Azimi, O Simeone, A Sengupta, R Tandon, Online Edge Caching in Fog[Aided Wireless Network, (2017) [Online]. Available: https://arxiv.org/abs/1701.](https://arxiv.org/abs/1701.06188)
[06188. Accessed 16 Apr 2017.](https://arxiv.org/abs/1701.06188)
17. D Chen, S Schedler, V Kuehn, Backhaul traffic balancing and dynamic
content-centric clustering for the downlink of Fog Radio Access Network (IEEE
SPAWC, 2016), pp. 1–5
18. M Peng, S Yan, K Zhang, C Wang, Fog-computing-based radio access
networks: issues and challenges. IEEE Netw. 30(4), 46–53 (2016)
19. H Qiao, J Rozenblit, F Szidarovszky, L Yang, Multi-Agent Learning Model with Bargaining, in Proceedings of the 2006 Winter Simulation Conference (2006), pp. 934–940
20. F Bao, I-R Chen, Trust management for the internet of things and its
application to service composition (IEEE WoWMoM, 2012), pp. 1–6
21. S Kim, News-vendor game-based resource allocation scheme for next-generation C-RAN systems. EURASIP J. Wirel. Commun. Netw. 2016(1), 1–11 (2016)
22. J Li, J Sun, Y Qian, F Shu, M Xiao, W Xiang, A Commercial Video-Caching
System for Small-Cell Cellular Networks using Game Theory. IEEE Access 4,
7519–7531 (2016)
23. D Kahneman, JL Knetsch, RH Thaler, Fairness and the assumptions of
economics. J. Bus. 59(4), 285–300 (1986)
24. W Zhu, C Lee, A New Approach to Web Data Mining Based on Cloud
Computing. JCSE 8(4), 181–186 (2014)
25. Y Liu, Y Sun, J Ryoo, S Rizvi, AV Vasilakos, A Survey of Security and Privacy
Challenges in Cloud Computing: Solutions and Future Directions. JCSE 9(3),
119–133 (2015)
26. K Lee, I Shin, User Mobility Model Based Computation Offloading Decision
for Mobile Cloud. JCSE 9(3), 155–162 (2015)
-----
| 14,164
|
{
"disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.1186/S13638-017-0900-9?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1186/S13638-017-0900-9, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "CCBY",
"status": "GOLD",
"url": "https://jwcn-eurasipjournals.springeropen.com/track/pdf/10.1186/s13638-017-0900-9"
}
| 2,017
|
[
"JournalArticle"
] | true
| 2017-12-01T00:00:00
|
[
{
"paperId": "9c181910bace778f53f66e6d66ba9e009b5d59fb",
"title": "Online edge caching in fog-aided wireless networks"
},
{
"paperId": "334e254163266e0ee2759acb34e8be0b1e780965",
"title": "On the energy efficiency of NOMA for wireless backhaul in multi-tier heterogeneous CRAN"
},
{
"paperId": "de1265bf1ed119509ca78aeee75c007b21488083",
"title": "Harnessing cloud and edge synergies: toward an information theory of fog radio access networks"
},
{
"paperId": "5d4e1f3b09811c1b820f70053898fed523888c4a",
"title": "Game theoretic approaches for wireless proactive caching"
},
{
"paperId": "778b90dbd4edcb1df6201978c20aa860a26eab7c",
"title": "Fog Computing: Helping the Internet of Things Realize Its Potential"
},
{
"paperId": "b7118f8a54124b8ee1b381fe1df042b272d59cb0",
"title": "Cloud-aided wireless networks with edge caching: Fundamental latency trade-offs in fog Radio Access Networks"
},
{
"paperId": "0ab5d9fc2dd06468978ef06d24331b97f80677d3",
"title": "News-vendor game-based resource allocation scheme for next-generation C-RAN systems"
},
{
"paperId": "fa3b473f821bb5313d214137d8d4a04e0a3c51f9",
"title": "A Commercial Video-Caching System for Small-Cell Cellular Networks Using Game Theory"
},
{
"paperId": "60f50dc9e0e17708f9116299eec266ee40b79433",
"title": "Energy-aware fog and cloud interplay supported by wide area software defined networking"
},
{
"paperId": "840752985d53aff3fe70bf2f8026c1d8124f748f",
"title": "Fog-Aided Wireless Networks for Content Delivery: Fundamental Latency Tradeoffs"
},
{
"paperId": "7de15d6b7aaf58cca526ab4b10f2028397475ee6",
"title": "Backhaul traffic balancing and dynamic content-centric clustering for the downlink of Fog Radio Access Network"
},
{
"paperId": "dd495fc7326f870191403f780ab73aae3f9700e7",
"title": "Joint optimization of cloud and edge processing for fog radio access networks"
},
{
"paperId": "251d32722c473a3c752ae3668a0b1a2970a93738",
"title": "Modeling and analyzing interference signal in a complex electromagnetic environment"
},
{
"paperId": "fbb144930b3873312686223b0b0d19f8c32a4f1a",
"title": "Architecture Harmonization Between Cloud Radio Access Networks and Fog Networks"
},
{
"paperId": "3385b800ace9d03f0e8258ab90dcf5687808b0e5",
"title": "A Survey of Security and Privacy Challenges in Cloud Computing: Solutions and Future Directions"
},
{
"paperId": "c99ae14f8013dc6f9be9b2f5dbd5362656019343",
"title": "User Mobility Model Based Computation Offloading Decision for Mobile Cloud"
},
{
"paperId": "70243021f90294acc659d0819f41045a6735deb5",
"title": "Fog-computing-based radio access networks: issues and challenges"
},
{
"paperId": "5e081d2c9cfc133e5293de0ff4df32bedd158e75",
"title": "Non-Orthogonal Multiple Access for Wireless Downlink in Cloud Radio Access Networks"
},
{
"paperId": "512742cb3fb812d8e0e89c256f57986aa23d025f",
"title": "Cloud radio access network (C-RAN): a primer"
},
{
"paperId": "aa7d9adf2beee05777d0806551dcd46e4567bdb9",
"title": "A New Approach to Web Data Mining Based on Cloud Computing"
},
{
"paperId": "454dffd49e9737e176620cc4af0503a2bf9f6da6",
"title": "Heterogeneous cloud radio access networks: a new perspective for enhancing spectral and energy efficiencies"
},
{
"paperId": "3261db134a52239f38da347960fafa0bdfed8ba8",
"title": "FluidNet: A Flexible Cloud-Based Radio Access Network for Small Cells"
},
{
"paperId": "af98f81714b8a44cc209dbabb1827abe1aa0cab7",
"title": "Trust management for the internet of things and its application to service composition"
},
{
"paperId": "2f197b1e8723d861f604f8b19980f8f96f36afd3",
"title": "Multi-Agent Learning Model with Bargaining"
},
{
"paperId": "351acbd93753aa5c8dc3978c6c0e4e0d15865dfb",
"title": "Optimising Energy Efficiency of NOMA for Wireless Backhaul in Heterogeneous CRAN"
},
{
"paperId": "8e7acdc3aacae3e0d5c0ed32bca7385b3fd3be25",
"title": "Game theory: applications"
},
{
"paperId": "a3d2dcfed78248ac3944c6b00ef97ed36807a1c1",
"title": "Fairness and the Assumptions of Economics"
}
] | 14,164
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Medicine",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Medicine",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/00b6cde4ec0e59269d78fda5124148db2fbb71c2
|
[
"Computer Science",
"Medicine"
] | 0.860794
|
Wireless access to a pharmaceutical database: A demonstrator for data driven Wireless Application Protocol applications in medical information processing
|
00b6cde4ec0e59269d78fda5124148db2fbb71c2
|
Journal of Medical Internet Research
|
[
{
"authorId": "2074275914",
"name": "M. S. Hansen"
},
{
"authorId": "152531859",
"name": "J. Dørup"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"J Med Internet Res"
],
"alternate_urls": [
"http://www.jmir.org/",
"https://www.jmir.org/"
],
"id": "2baad992-2268-4c38-9120-e453622f2eeb",
"issn": "1438-8871",
"name": "Journal of Medical Internet Research",
"type": "journal",
"url": "http://www.symposion.com/jmir/index.htm"
}
|
Background The Wireless Application Protocol technology implemented in newer mobile phones has built-in facilities for handling much of the information processing needed in clinical work. Objectives To test a practical approach we ported a relational database of the Danish pharmaceutical catalogue to Wireless Application Protocol using open source freeware at all steps. Methods We used Apache 1.3 web software on a Linux server. Data containing the Danish pharmaceutical catalogue were imported from an ASCII file into a MySQL 3.22.32 database using a Practical Extraction and Report Language script for easy update of the database. Data were distributed in 35 interrelated tables. Each pharmaceutical brand name was given its own card with links to general information about the drug, active substances, contraindications etc. Access was available through 1) browsing therapeutic groups and 2) searching for a brand name. The database interface was programmed in the server-side scripting language PHP3. Results A free, open source Wireless Application Protocol gateway to a pharmaceutical catalogue was established to allow dial-in access independent of commercial Wireless Application Protocol service providers. The application was tested on the Nokia 7110 and Ericsson R320s cellular phones. Conclusions We have demonstrated that Wireless Application Protocol-based access to a dynamic clinical database can be established using open source freeware. The project opens perspectives for a further integration of Wireless Application Protocol phone functions in clinical information processing: Global System for Mobile communication telephony for bilateral communication, asynchronous unilateral communication via e-mail and Short Message Service, built-in calculator, calendar, personal organizer, phone number catalogue and Dictaphone function via answering machine technology. An independent Wireless Application Protocol gateway may be placed within hospital firewalls, which may be an advantage with respect to security. However, if Wireless Application Protocol phones are to become effective tools for physicians, special attention must be paid to the limitations of the devices. Input tools of Wireless Application Protocol phones should be improved, for instance by increased use of speech control.
|
JOURNAL OF MEDICAL INTERNET RESEARCH Hansen & Dørup
##### Original Paper
# Wireless access to a pharmaceutical database: A demonstrator for data driven Wireless Application Protocol applications in medical information processing
##### Michael Schacht Hansen; Jens Dørup, MD, PhD
Section for Health Informatics, Institute of Biostatistics, University of Aarhus, Denmark
**Corresponding Author:**
Jens Dørup, MD, PhD
Section for Health Informatics
Institute of Biostatistics
University of Aarhus
Vennelyst Boulevard 6
DK 8000 Aarhus C
Denmark
Phone: +45 8942 6123
Fax: +45 8942 6140
[Email: [email protected]](mailto:[email protected])
### Abstract
**Background:** The Wireless Application Protocol technology implemented in newer mobile phones has built-in facilities for
handling much of the information processing needed in clinical work.
**Objectives:** To test a practical approach we ported a relational database of the Danish pharmaceutical catalogue to Wireless
Application Protocol using open source freeware at all steps.
**Methods:** We used Apache 1.3 web software on a Linux server. Data containing the Danish pharmaceutical catalogue were
imported from an ASCII file into a MySQL 3.22.32 database using a Practical Extraction and Report Language script for easy
update of the database. Data were distributed in 35 interrelated tables. Each pharmaceutical brand name was given its own card
with links to general information about the drug, active substances, contraindications etc. Access was available through 1) browsing
therapeutic groups and 2) searching for a brand name. The database interface was programmed in the server-side scripting language
PHP3.
**Results:** A free, open source Wireless Application Protocol gateway to a pharmaceutical catalogue was established to allow
dial-in access independent of commercial Wireless Application Protocol service providers. The application was tested on the
Nokia 7110 and Ericsson R320s cellular phones.
**Conclusions:** We have demonstrated that Wireless Application Protocol-based access to a dynamic clinical database can be
established using open source freeware. The project opens perspectives for a further integration of Wireless Application Protocol
phone functions in clinical information processing: Global System for Mobile communication telephony for bilateral communication,
asynchronous unilateral communication via e-mail and Short Message Service, built-in calculator, calendar, personal organizer,
phone number catalogue and Dictaphone function via answering machine technology. An independent Wireless Application
Protocol gateway may be placed within hospital firewalls, which may be an advantage with respect to security. However, if
Wireless Application Protocol phones are to become effective tools for physicians, special attention must be paid to the limitations
of the devices. Input tools of Wireless Application Protocol phones should be improved, for instance by increased use of speech
control.
**_(J Med Internet Res 2001;3(1):e4)_** [doi: 10.2196/jmir.3.1.e4](http://dx.doi.org/10.2196/jmir.3.1.e4)
**KEYWORDS**
Medical Informatics Applications; Database Management Systems; Dictionaries, Pharmaceutical; Wireless Application Protocol;
Open source software
-----
JOURNAL OF MEDICAL INTERNET RESEARCH Hansen & Dørup
### Introduction
The Global System for Mobile communication (GSM) digital
wireless network that is used to transmit audio communication
in cellular phones may also be used to transmit data at rates that
are typically limited to 9600 bits/s. However, for access to the
Internet a mobile phone needs connection to a computing device,
i.e. either a portable or stationary computer or a Personal Digital
Assistant (PDA) with an appropriate interface connection. The
Wireless Application Protocol (WAP) is a specification for a
communication protocol used to standardize the way wireless
devices, such as cellular telephones and radio transceivers, can
be used for Internet access, including e-mail and the World
Wide Web. The aim of using a standard protocol is to enable
devices and service systems that use WAP to operate together.
The advantage of WAP phones is that connection to the Internet
can be obtained using a modem, a small computer, and a
dedicated browser all of which are built into the WAP device.
On the other hand, the small screen size, keyboard size, lack of
pointing device and especially the low bandwidth made it
necessary to develop a standard for design of web pages aimed
at WAP devices and a modified markup language, the Wireless
Markup Language (WML), had to be developed, taking the
limitations of the device into consideration. Cellular phones
using WAP for Internet access offer the potential to assist in handling many clinical information needs [1]:
- Conventional GSM telephony for synchronous, two-way
voice telephony
- Asynchronous unilateral communication via e-mail and
Short Message Service (SMS)
- Dictaphone function using answering machine technology
or built-in speech message facilities
- Built-in calendar and personal organizer functions
- Phone number catalogue and other smaller databases built
into the device
- Calculator and other dedicated built-in applications
In addition WAP technology allows access to databases on
Internet servers - e.g. pharmaceutical information, laboratory
data, educational materials, and access can be gained to Internet
**Table 1.** WAP MIME types
based Electronic Patient Records [2]. Reference materials (pocket manuals) are often used by physicians in their daily work, but printed reference books are rarely updated and may thus become outdated. Many doctors carry some sort of paging or communication device, such as a PDA, with varying capacity to store clinical databases. There are a number of advantages to be gained by incorporating reference manuals and other clinical information into handheld devices through the WAP standard [3]: it would allow easy access to several reference manuals through a single device, and the manuals would be updated centrally and dynamically. Although many of the functions mentioned are already available in today's cellular phones, they have been exploited only to a limited extent. This paper describes our first experiences with porting a pharmaceutical database to a WAP-accessible database, involving the following steps:

a) A pharmaceutical relational database was interfaced with server-side scripting and deployed to a WAP device
b) The information was formatted in a way suited for small handheld devices
c) The project was implemented using a standard personal computer without purchase of any new software
### Methods
##### Web Server
Establishing a data-driven online resource available to WAP
devices requires a modified web server, with a database engine
and a programming interface to the database. If the server needs
to work as a dial-in interface for the WAP device, a WAP
gateway must also be established. All of these features were
implemented using free, open source software. Documents
served from a web server are associated with a Multi-Purpose
Internet Mail Extension (MIME) type. The MIME type is needed
by the browser to determine how the file should be processed
(e.g. rendered like a normal hypertext markup language (HTML)
file or handled by a helper application). The file types used for
WAP devices have a new set of MIME types (Table 1) unknown
to most web servers and the web server must have these types
added.
| MIME type | File extension | Content |
|---|---|---|
| text/vnd.wap.wml | .wml | WML source code |
| application/vnd.wap.wmlc | .wmlc | Compiled WML |
| text/vnd.wap.wmlscript | .wmls | WML script source code |
| application/vnd.wap.wmlscriptc | .wmlsc | Compiled WML script |
| image/vnd.wap.wbmp | .wbmp | Wireless bitmap |
We used an Apache 1.3 web server installed on a Linux server. The MIME types were registered by adding the following lines (cf. Table 1) to the configuration file "httpd.conf":
-----
JOURNAL OF MEDICAL INTERNET RESEARCH Hansen & Dørup
```
AddType text/vnd.wap.wml .wml
AddType application/vnd.wap.wmlc .wmlc
AddType text/vnd.wap.wmlscript .wmls
AddType application/vnd.wap.wmlscriptc .wmlsc
AddType image/vnd.wap.wbmp .wbmp
```
##### Database
Data containing the Danish pharmaceutical catalogue were imported from an ASCII file received every two weeks from the Danish Medical Association. The data were distributed in 35 interrelated tables with easy access to the hierarchy of the pharmaceutical directory, facilitating browsing through the pharmaceutical classes. The database structure also facilitated searches for specific brand names or active substances. Import into a MySQL 3.22.32 database was done with a dedicated Practical Extraction and Report Language (Perl) script designed for easy updating of the database. The program structure was designed around the brand names: each brand name was given its own WML page (card) with links to general information about the drug, active substances, contraindications, etc. Access to these cards was available by browsing the therapeutic groups or searching for a specific brand name. Text entry was kept as simple as possible; typically, only the first three characters of the brand name need to be entered before activating the search.
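As an illustration of this prefix search (a stand-in: the paper used MySQL queried from PHP3, whereas this sketch uses Python's built-in sqlite3, and the table and column names are our own):

```
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE brand (name TEXT, card_id INTEGER)")
conn.executemany("INSERT INTO brand VALUES (?, ?)",
                 [("Ibumetin", 1), ("Ibuprofen", 2), ("Imodium", 3)])

def search_brand(prefix):
    """Return brand-name cards matching the first few typed characters."""
    return conn.execute(
        "SELECT name, card_id FROM brand WHERE name LIKE ? ORDER BY name",
        (prefix + "%",),
    ).fetchall()

print(search_brand("Ibu"))   # [('Ibumetin', 1), ('Ibuprofen', 2)]
```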
##### Programming
A server-side scripting layer was used to interface the database.
The scripting layer is used to a) send SQL queries to the
database and b) format the data from the database as WML for
interpretation by the WAP gateway. The database interface was
programmed in the server-side scripting language PHP3. PHP
is designed as a scripting language embedded in HTML and it
is designed to generate HTML. To ensure that the content
returned by the script was WML the document MIME type was
sent explicitly with the "header" function. An example of a PHP
script that returns a WML page is shown in Figure 1.
-----
JOURNAL OF MEDICAL INTERNET RESEARCH Hansen & Dørup
**Figure 1.** An example of the code to be entered in the header of the WML document for the web
This example does not send any queries to the database, but it illustrates how http headers can be formed with the correct MIME type using PHP. Database queries were handled through structured query language (SQL) access to the database, and the contents of the database were sent to the WAP-enabled device. The choice of scripting language is somewhat arbitrary: other popular scripting languages, such as Active Server Pages (ASP) or Perl, could also have been used. The communication between the cellular phone and the database could also have been implemented through an executable application on the web server (e.g., C/C++ programming); however, the overhead involved in starting a process for each database request makes such a solution less feasible. Regardless of the implementation strategy, special care should be taken to ensure that the content-type header field is formed correctly.
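To make the pattern concrete (a sketch, not the paper's PHP3 code from Figure 1; a Python CGI script serves as the stand-in here), the essential step is emitting the WML MIME type before the deck:

```
#!/usr/bin/env python3
# Stand-in for the PHP example: return a minimal WML deck with the
# text/vnd.wap.wml MIME type so the WAP gateway can encode it.
print("Content-type: text/vnd.wap.wml")
print()
print('<?xml version="1.0"?>')
print('<!DOCTYPE wml PUBLIC "-//WAPFORUM//DTD WML 1.1//EN" '
      '"http://www.wapforum.org/DTD/wml_1.1.xml">')
print("<wml>")
print('  <card id="hello" title="Demo">')
print("    <p>Hello from the database layer.</p>")
print("  </card>")
print("</wml>")
```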
##### Dataflow
The communication between a handheld device and a database
passes through several different layers and different
communication protocols are used (Figure 2). The individual
layers have restrictions, some of which are crucial to the implementation of the WAP application.
The handheld device connects to an Internet Service Provider (ISP) with a standard Point-to-Point protocol (like connecting to the Internet with a standard modem). The ISP is in contact with a WAP gateway; the ISP often provides both the Internet access and the WAP gateway. The gateway may be public, provided by one of the mobile telecommunication companies (see a list of public gateways at www.wapdrive.com/DOCS/wap_gateway/index.html [4]), or it may be private (see below).
The role of the ISP is to transmit data between the handheld
device and the gateway. The gateway sends requests from the
phone to web-servers on the Internet and it encodes the results
received from the web-servers to a more compact form to
facilitate the communication across the low bandwidth
connection. The encoded data is sent to the handheld device
using the WAP. The amount of data that can be sent to the
handheld device depends on the device. The Nokia 7110 has a
limitation of 1397 bytes in the compressed form sent by the
gateway [5]. An uncompressed WML document should be kept
below 1500 bytes to ensure that handheld devices can receive
it. When the handheld device sends a request for a Uniform
Resource Locator (URL), the gateway passes the request to the
web-server using the standard http-protocol. The web-server
handles the request as it would a normal request for a web page. However, if the requested URL is a WML document, the result is returned to the gateway for further processing. If the
URL refers to a script (in this case a PHP script), the PHP
interpreter will process the script (handle database queries,
format the output and return it to the gateway). The gateway
will subsequently encode and compress the data for transmission
with the WAP protocol.
-----
JOURNAL OF MEDICAL INTERNET RESEARCH Hansen & Dørup
**Figure 2.** The flow of data during a request from the WAP device
-----
JOURNAL OF MEDICAL INTERNET RESEARCH Hansen & Dørup
##### WAP Gateway
A WAP Gateway was established for direct dial-in access to
the pharmaceutical catalogue. A free and open source gateway
was downloaded from www.kannel.org [6] and installed on a
Linux server. The gateway is still being developed and the latest
stable version is 0.11.2 (September 29th 2000). The gateway
relies on an Extensible Markup Language (XML) parser to
interpret the WML pages and the Linux server should have the
library: libxml-1.8.7 or higher installed to compile the gateway.
For dial-in, a modem (ISDN or analogue) was connected and
controlled by dial-in software on the server.
##### Phone set-up
The WAP enabled phone must be configured to access the
appropriate gateway. Phone number, username and password
(for the dial-in connection) and IP-address of the gateway (the
IP-address of the server running the gateway) must be entered
in the phone.
-----
JOURNAL OF MEDICAL INTERNET RESEARCH Hansen & Dørup
**Figure 3.** Sequence of screen dumps illustrating the search for the dosage of Ibumetin on a Nokia 7110
-----
JOURNAL OF MEDICAL INTERNET RESEARCH Hansen & Dørup
### Results
A data-driven, interactive WAP-based pharmaceutical catalogue was established. Access to the individual brand names was available through free-text search or by browsing the therapeutic groups. The application can be tested at http://hilist.au.dk/wap.php by using a public gateway or a WAP emulator on the web (Figure 3). The response time for accessing a new level in the catalogue hierarchy or completing a search was usually less than three seconds. Searching for a brand name, which in most cases could be completed in only a few steps (Figure 3), was found to be faster than browsing the content hierarchy. The application was tested on Nokia 7110 and Ericsson R320s WAP phones, and several device-specific limitations were revealed. The display resolution is 95 x 45 pixels for the Nokia 7110 and 101 x 52 pixels for the Ericsson R320s, allowing four (Nokia) or five (Ericsson) lines of text to be displayed. The maximum amount of data per card (the maximum card size) was 1397 bytes for the Nokia and 3000 bytes for the Ericsson. These limitations must be considered when designing the WML pages (split data into a sequence of cards).
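A simple way to respect these card-size limits when serving database text (a sketch; the byte budget follows the uncompressed guideline mentioned above, the word-based splitting strategy is our own, and single words longer than the budget are not split further):

```
def split_into_cards(text, max_bytes=1397):
    """Split long text into WML-card-sized chunks, breaking on word boundaries."""
    cards, current = [], ""
    for word in text.split():
        candidate = (current + " " + word).strip()
        if len(candidate.encode("utf-8")) > max_bytes and current:
            cards.append(current)
            current = word
        else:
            current = candidate
    if current:
        cards.append(current)
    return cards
```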
### Discussion
With the present project we have demonstrated that an open
source freeware WAP gateway to a complex database can be
established with information of clinical relevance. However, a
number of practical and technological problems still have to be
solved before WAP devices can effectively substitute or
supplement other devices for processing clinical information.
Because of the high energy transmitted while communicating
with GSM phones, their use is still prohibited within many
hospital wards and the security is under debate [7,8]. Yet there
seem to be several solutions to this problem. Handheld WAP
devices, using a comparable communication technology, but
transmitting significantly less energy may be used. The
##### Conflicts of Interest
None declared.
##### References
development of medical electronic devices for use on hospital
wards is towards protection of individual devices that allows
use of regular GSM communication without interference. The
small screen and relatively ineffective input tools of the WAP
phone should be improved. The first steps towards speech
control have been taken on some newer WAP phones. Further
development in this direction will significantly improve usability
[9]. Doctors may connect to databases and even call for data on
a specific patient by use of speech control. Further, the present
speech message technology found in, for instance, the Ericsson
R320s could be further developed to allow functions that are
traditionally found in dictaphones. This would allow the
physician to edit and finish a full dictation before sending the
note for entry into the patient record. This technology will offer
many advantages compared with present technologies; for
example the secretary will have the dictated note directly
available without a risk of audiotapes being mislaid and possibly
the speech message could be stored on a central server for
temporary access by others before it has been entered into the
patient record. Testing the use of WAP phones for information
processing in a clinical ward was not part of the present project.
However, this project has shown that even with the small screen
and scrolling text, once connection to the server is established,
it is possible to fetch text from the database with a speed that
comes close to normal reading speed. Entering larger amounts of text, however, is time-consuming on a cellular phone keyboard, so we conclude that text input is a bigger problem than output. New technologies are constantly being developed in an extremely dynamic market for handheld communication devices. Bandwidth is being increased using, e.g., the GPRS or UMTS services, in conjunction with Bluetooth and other local wireless communication technologies, and functions found in PDA devices are being incorporated into cellular phones. Technology, however, needs to be adapted to the clinical reality before we can expect widespread use by physicians.
1. [Coiera E. When conversation is better than computation. J Am Med Inform Assoc 2000;7(3):277-286. [PMC: 10833164 ]](http://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pubmed&pubmedid=10833164)
[[Medline: 20290761]](http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&list_uids=20290761&dopt=Abstract)
2. Bunschoten B, Deming B. Hardware issues in the movement to computer-based patient records. Health Data Manag 1995
[Feb;3(2):45-8, 50, 54. [Medline: 95346545]](http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&list_uids=95346545&dopt=Abstract)
3. Buchauer A, Werner R, Haux R. Cooperative problem solving with personal mobile information tools in hospitals. Methods
[Inf Med 1998 Jan;37(1):8-15. [Medline: 98212198]](http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&list_uids=98212198&dopt=Abstract)
4. [; www.wapdrive.com. WAP Gateways around the world. URL: http://www.wapdrive.com [accessed Feb 2001]](http://www.wapdrive.com)
5. [; African Cellular. WAP Phone Developer Specifications. URL: http://www.cellular.co.za/wap_browser_spec.htm [accessed](http://www.cellular.co.za/wap_browser_spec.htm)
Feb 2001]
6. [Kannel. Kannel, WAP gateway development team. Kannel: Open Source WAP and SMS gateway. URL: http://www.](http://www.kannel.org/)
[kannel.org/ [accessed Feb 2001]](http://www.kannel.org/)
7. [Bludau HB. Secure & Mobile Communication technology in a hospital environment. URL: http://www.ukl.uni-heidelberg.de/](http://www.ukl.uni-heidelberg.de/med/innereII/mitarb/hbbludau/flyer.html)
[med/innereII/mitarb/hbbludau/flyer.html [accessed Feb 2001]](http://www.ukl.uni-heidelberg.de/med/innereII/mitarb/hbbludau/flyer.html)
8. Tan KS, Hinberg I. Effects of a wireless local area network (LAN) system, a telemetry system, and electrosurgical devices
[on medical devices in a hospital environment. Biomed Instrum Technol 2000;34(2):115-118. [Medline: 20280274]](http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&list_uids=20280274&dopt=Abstract)
9. Praissman JL, Sutherland JC. Laboratory voice data entry system. Biotechniques 1999 Dec;27(6):1202-6, 1208. [Medline:
[20097074]](http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&list_uids=20097074&dopt=Abstract)
-----
JOURNAL OF MEDICAL INTERNET RESEARCH Hansen & Dørup
##### Abbreviations
**Apache:** is a freely available Web server that is distributed under an "open source" license. It runs on most
UNIX-based operating systems
**ASCII:** American Standard Code for Information Interchange is the most common format for text files in computers and on the Internet.
**Bluetooth:** is a specification that describes how mobile phones, computers, and personal digital assistants can easily interconnect with each other and with home and business phones and computers using a short-range wireless connection.
**Gateway:** is a network point that acts as an entrance to another network.
**GPRS:** General Packet Radio Service is a packet-based wireless communication service that promises data rates
from 56 up to 114 Kbps and continuous connection to the Internet for mobile phone and computer users
**GSM:** Global System for Mobile communication is the most widely used of the three digital wireless telephone
technologies (TDMA, GSM, and CDMA). GSM digitizes and compresses data, then sends it down a channel with
two other streams of user data, each in its own time slot. It operates at either the 900 MHz or 1800 MHz frequency
band.
**Http:** Hypertext Transfer Protocol is the set of rules for exchanging files (text, graphic images, sound, video, and
other multimedia files) on the World Wide Web
**HTML:** Hypertext Markup Language is the set of mark-up symbols or codes inserted in a file intended for display
on a World Wide Web browser page.
**ISP:** An Internet Service Provider is a company that provides access to the Internet and other related services
**IP:** The Internet Protocol is the method or protocol by which data is sent from one computer to another on the Internet.
**Linux:** is a UNIX-like operating system that was designed to provide personal computer users a free or very low-cost operating system comparable to traditional and usually more expensive UNIX systems.
**MIME:** Multi-Purpose Internet Mail Extensions is an extension of the original Internet e-mail protocol that lets
people use the protocol to exchange different kinds of data files on the Internet.
**MySQL:** is an open source relational database management system that uses Structured Query Language (SQL),
for adding, accessing, and processing data in a database.
**Perl:** Practical Extraction and Reporting Language is a script programming language that is similar in syntax to
the C language. It was invented by Larry Wall.
**PDA:** Personal Digital Assistant is a term for any small mobile hand-held device that provides computing and
information storage and retrieval capabilities
**PHP:** is a script language and interpreter that is freely available and used primarily on Linux Web servers. The
initials come from the earliest version of the program, which was called "Personal Home Page Tools"
**SMS:** Short Message Service is a service for sending messages of up to 160 characters to mobile phones that use
Global System for Mobile (GSM) communication
**UMTS:** Universal Mobile Telecommunications System is a broadband, packet-based transmission of text, digitized
voice, video, and multimedia at data rates up to and possibly higher than 2 megabits per second (Mbps)
**URL:** Uniform Resource Locator is the address of a file (resource) accessible on the Internet
**WML:** Wireless Markup Language, allows the text portions of Web pages to be presented on cellular telephone
and personal digital assistants (PDA) via wireless access.
_submitted 01.10.00; peer-reviewed by E Coiera; comments to author 16.01.01; revised version received 08.02.01; accepted 22.02.01;_
_published 17.03.01_
_Please cite as:_
_Hansen MS, Dørup J_
_Wireless access to a pharmaceutical database: A demonstrator for data driven Wireless Application Protocol applications in medical_
_information processing_
_J Med Internet Res 2001;3(1):e4_
_[URL: http://www.jmir.org/2001/1/e4/](http://www.jmir.org/2001/1/e4/)_
_[doi: 10.2196/jmir.3.1.e4](http://dx.doi.org/10.2196/jmir.3.1.e4)_
_[PMID: 11720946](http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&list_uids=11720946&dopt=Abstract)_
© Michael Schacht Hansen, Jens Dørup. Originally published in the Journal of Medical Internet Research (http://www.jmir.org),
17.3.2001. Except where otherwise noted, articles published in the Journal of Medical Internet Research are distributed under
the terms of the Creative Commons Attribution License (http://www.creativecommons.org/licenses/by/2.0/), which permits
unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited, including full
bibliographic details and the URL (see "please cite as" above), and this statement is included.
-----
| 6,199
|
{
"disclaimer": "Notice: Paper or abstract available at https://pmc.ncbi.nlm.nih.gov/articles/PMC1761886, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "CCBY",
"status": "GOLD",
"url": "https://www.jmir.org/2001/1/e4/PDF"
}
| 2,001
|
[
"JournalArticle"
] | true
| 2001-03-17T00:00:00
|
[
{
"paperId": "d290cc8a15ec1e1cc9c32ff1b900f12d705f07f8",
"title": "Viewpoint: When Conversation Is Better Than Computation"
},
{
"paperId": "50ab83e3e884f1d223d55ec8d9e6d0070aed7dc3",
"title": "Effects of a wireless local area network (LAN) system, a telemetry system, and electrosurgical devices on medical devices in a hospital environment."
},
{
"paperId": "a448a4cc73efa36a7298edfb74c6d009295b3471",
"title": "Laboratory voice data entry system."
},
{
"paperId": "40c226da6ec14b04d9c4be83fbcb2a46b821787c",
"title": "Cooperative Problem Solving with Personal Mobile Information Tools in Hospitals"
}
] | 6,199
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/00b748b74fc51ade9e62c29ccf08060af3fe9d54
|
[
"Computer Science"
] | 0.867135
|
Homogeneous Learning: Self-Attention Decentralized Deep Learning
|
00b748b74fc51ade9e62c29ccf08060af3fe9d54
|
IEEE Access
|
[
{
"authorId": "2412555",
"name": "Yuwei Sun"
},
{
"authorId": "40274057",
"name": "H. Ochiai"
}
] |
{
"alternate_issns": null,
"alternate_names": null,
"alternate_urls": [
"http://ieeexplore.ieee.org/servlet/opac?punumber=6287639"
],
"id": "2633f5b2-c15c-49fe-80f5-07523e770c26",
"issn": "2169-3536",
"name": "IEEE Access",
"type": "journal",
"url": "http://www.ieee.org/publications_standards/publications/ieee_access.html"
}
|
Federated learning (FL) has been facilitating privacy-preserving deep learning in many walks of life such as medical image classification, network intrusion detection, and so forth. Whereas it necessitates a central parameter server for model aggregation, which brings about delayed model communication and vulnerability to adversarial attacks. A fully decentralized architecture like Swarm Learning allows peer-to-peer communication among distributed nodes, without the central server. One of the most challenging issues in decentralized deep learning is that data owned by each node are usually non-independent and identically distributed (non-IID), causing time-consuming convergence of model training. To this end, we propose a decentralized learning model called Homogeneous Learning (HL) for tackling non-IID data with a self-attention mechanism. In HL, training performs on each round’s selected node, and the trained model of a node is sent to the next selected node at the end of each round. Notably, for the selection, the self-attention mechanism leverages reinforcement learning to observe a node’s inner state and its surrounding environment’s state, and find out which node should be selected to optimize the training. We evaluate our method with various scenarios for two different image classification tasks. The result suggests that HL can achieve a better performance compared with standalone learning and greatly reduce both the total training rounds by 50.8% and the communication cost by 74.6% for decentralized learning with non-IID data.
|
Received December 3, 2021, accepted January 10, 2022, date of publication January 13, 2022, date of current version January 21, 2022.
_Digital Object Identifier 10.1109/ACCESS.2022.3142899_
# Homogeneous Learning: Self-Attention Decentralized Deep Learning
YUWEI SUN 1,2, (Member, IEEE), AND HIDEYA OCHIAI 1, (Member, IEEE)
1Graduate School of Information Science and Technology, University of Tokyo, Tokyo 1138654, Japan
2RIKEN AIP, Tokyo 1030027, Japan
Corresponding author: Yuwei Sun ([email protected])
This work was supported in part by the JRA Program at RIKEN AIP.
**ABSTRACT Federated learning (FL) has been facilitating privacy-preserving deep learning in many walks of**
life such as medical image classification, network intrusion detection, and so forth. However, it necessitates
a central parameter server for model aggregation, which brings about delayed model communication and
vulnerability to adversarial attacks. A fully decentralized architecture like Swarm Learning allows peer-to-peer communication among distributed nodes, without the central server. One of the most challenging
issues in decentralized deep learning is that data owned by each node are usually non-independent and
identically distributed (non-IID), causing time-consuming convergence of model training. To this end,
we propose a decentralized learning model called Homogeneous Learning (HL) for tackling non-IID data
with a self-attention mechanism. In HL, training performs on each round’s selected node, and the trained
model of a node is sent to the next selected node at the end of each round. Notably, for the selection, the
self-attention mechanism leverages reinforcement learning to observe a node’s inner state and its surrounding
environment’s state, and find out which node should be selected to optimize the training. We evaluate our
method with various scenarios for two different image classification tasks. The result suggests that HL can
achieve a better performance compared with standalone learning and greatly reduce both the total training
rounds by 50.8% and the communication cost by 74.6% for decentralized learning with non-IID data.
**INDEX TERMS** Collective intelligence, distributed computing, knowledge transfer, multi-layer neural
network, supervised learning.
**I. INTRODUCTION**
Centralized deep learning in high-performance computing (HPC) environments has been facilitating advances in various
areas such as drug discovery, disease diagnosis, cybersecurity, and so on. Despite its broad applications in many walks
of life, the associated risks of exposing training data and violating privacy regulations have greatly decreased the
practicality of such a centralized learning architecture. In particular, with the introduction of the GDPR [1], data
collection for centralized model training has become more and more difficult.
Decentralized deep learning (DDL) is a concept to bring
together distributed data sources and computing resources
while taking the full advantage of deep learning models.
Nowadays, DDL such as Federated Learning (FL) [2] has
been offering promising solutions to social issues of data
privacy, especially in large-scale multi-agent learning. These
massively distributed nodes can facilitate diverse use cases,
such as industrial IoT [3], environment monitoring with smart
sensors [4], human behavior recognition with surveillance
cameras [5], connected autonomous vehicles control [6], [7],
network intrusion detection [8], [9], and so forth.
Though FL has been attracting great attention due to its privacy-preserving architecture, recent years' upticks in
adversarial attacks mean that its trustworthiness is hardly guaranteed. FL encounters various threats, such as
backdoor attacks [10]–[12], information stealing attacks [13], and so on. In contrast, fully decentralized
architectures like Swarm Learning (SL) [14] leverage the blockchain, smart contracts, and other state-of-the-art
decentralization technologies to offer a more practical solution. However, a great challenge for them has been the
deteriorated performance of model training on non-independent and identically distributed (non-IID) data, which
drastically increases the time to model convergence.
-----
_A. OUR CONTRIBUTIONS_
We propose a self-attention decentralized deep learning model called Homogeneous Learning (HL). HL leverages a shared
communication policy for adaptive model sharing among nodes. A starter node initiates a training task; its model is
then updated toward the training goal by iteratively sending the trained model to, and performing training on, each
round's selected node. Notably, the node selection decision is made by a reinforcement learning agent based on the
currently selected node's inner state and the outer state of its surrounding environment, so as to maximize a reward
for moving towards the training goal. Finally, comprehensive experiments and evaluation results suggest that HL can
accelerate model training on non-IID data with 50.8% fewer training rounds and reduce the communication cost by 74.6%.
_B. PAPER OUTLINE_
This paper is organized as follows. Section II reviews the
most recent work about DDL and methodologies for tackling
data heterogeneity problems in model training. Section III
discusses assumptions and definitions used in this research.
Section IV presents the technical underpinnings of Homogeneous Learning, including the local machine learning (ML)
task model, the reinforcement learning model, and the
self-attention mechanism to learn an optimized communication policy. Section V demonstrates experimental evaluations
for tackling various image classification tasks with three
baseline models applied. Section VI concludes the paper and outlines a future direction for this work.
**II. RELATED WORK**
_A. DECENTRALIZED DEEP LEARNING_
In recent years, many DDL architectures have been proposed that leverage decentralization technologies such as the
blockchain and ad hoc networks. For instance, Li et al. [15]
presented a blockchain-based decentralized learning framework based on the FISCO blockchain system. They applied
the architecture to train AlexNet models on the FEMNIST dataset. Similarly, Lu et al. [16] demonstrated a
blockchain empowered secure data sharing architecture
for FL in industrial IoT. Furthermore, Mowla et al. [17]
proposed a client group prioritization technique leveraging the Dempster-Shafer theory for unmanned aerial vehicles (UAVs) in flying ad-hoc networks. HL is a fully decentralized machine learning model sharing architecture based
on decentralization technology such as token exchanges.
_B. CONVERGENCE OPTIMIZATION WITH SKEWED DATA_
In a real-life application, usually data owned by different
clients in such a decentralized system are skewed. For this
reason, the model training is slow and even diverges. Methodologies for tackling such data heterogeneity such as FL,
have been studied for a long time. For example, Sener and
Savarese [18] presented the K-Center clustering algorithm
which aims to find a representative subset of data from a very
large collection such that the performance of the model based
on the small subset and that based on the whole collection
will be as close as possible. Moreover, Wang et al. [19]
demonstrated reinforcement learning-based client selection
in FL, which counterbalances the bias introduced by non-IID
data thus speeding up the global model’s convergence.
Sun et al. [8] proposed the Segmented-FL to tackle heterogeneity in massively distributed network intrusion traffic
data, where clients with highly skewed training data are
dynamically divided into different groups for model aggregation respectively at each round. Furthermore, Zhao et al. [20]
presented a data-sharing strategy in FL by creating a small
data subset globally shared between all the clients. Likewise,
Jeong et al. [21] proposed the federated augmentation where
each client augments its local training data using a generative neural network. Different from the aforementioned
approaches, HL leverages a self-attention mechanism that
optimizes the communication policy in DDL using reinforcement learning models. It aims to reduce the computational and
communication cost of decentralized training on skewed data.
**III. PRELIMINARIES**
_A. CLASSIFICATION TASK_
We specifically consider supervised learning with C categories in the entire dataset D. Let x ∈ R^D be a sample and
y ∈ {1, 2, . . ., C} = Y a label. D consists of a collection of N samples, D = {(x_i, y_i)}_{i=1}^{N}. Suppose that f
denotes a neural network classifier taking an input x_i and outputting a C-dimensional real-valued vector whose jth
element represents the probability that x_i is recognized as class j. Given f(x), the prediction is
ŷ = argmax_j f(x)_j, where f(x)_j denotes the jth element of f(x). The training of the neural network is attained by
minimizing the following loss function with respect to the model parameters θ:

$$J(\theta, D) = \frac{1}{N} \sum_{i=1}^{N} \ell\big(y_i, f(x_i; \theta)\big). \qquad (1)$$
_B. DECENTRALIZED DEEP LEARNING_
We assume there are K clients. The kth client has its own dataset D^(k) := {(x_i, y_i)}_{i=1}^{N^(k)}, where N^(k) is
the sample size of dataset D^(k). Here, ∪_{k=1}^{K} D^(k) = D and N = Σ_{k=1}^{K} N^(k). We also suppose that the
clients cannot share data with each other, mainly due to data confidentiality. Decentralized deep learning (DDL) is a
framework to obtain a global model that is trained over the entire data without sharing the distributed samples. For
instance, federated learning (FL) [2] consists of a parameter server (PS) and many clients. Let G_t be the global
model of the PS and L_t^(k) be the local model of client k at round t. In each training round t, a subset of clients
K_selected is selected for model training with the latest global model parameters G_t based on their own datasets
D^(k ∈ K_selected). Then, the trained models L_{t+1}^(k ∈ K_selected) are sent back to the PS for aggregation, thus
improving the joint global model G_{t+1}.
-----
Moreover, a peer-to-peer DDL system consists of distributed nodes functioning as both the server and the client
based on decentralization technologies such as blockchain
[14]–[16], token-exchange [17], and so on. For example, the
token-exchange validates and issues security tokens to enable
nodes to obtain appropriate access credentials for exchanging
resources without the central server. This is different from
FL where the parameter server plays the key role in learning
process control of model sharing.
_C. DATA HETEROGENEITY_
The challenges related to the heterogeneity of nodes in DDL fall into two categories, i.e., data heterogeneity and
hardware heterogeneity. Notably, data heterogeneity results in time-consuming convergence, or even divergence, of
model learning. Let p(x|y) be the common data distribution of the entire dataset D. We assume the common distribution
p(x|y) is shared by all nodes, while node k has its own label distribution p_k(y). We first consider an independent
and identically distributed (IID) setting, i.e., p_i(x, y) = p(x|y) p_i(y) s.t. p_i(y) = p_j(y) for all i ≠ j. Under
this assumption, the data distribution of the entire dataset can be represented by any node's local data distribution.
Unfortunately, in real-life applications, samples held by clients are usually skewed, with various data distributions,
i.e., p_i(x, y) = p(x|y) p_i(y) s.t. p_i(y) ≠ p_j(y) for all i ≠ j: node_1 follows p_1(x, y) and node_2 follows
p_2(x, y). We further define such data heterogeneity as follows: given the samples {(x_i, y_i)}_{i=1}^{N^(k)} in
node k's local dataset D^(k), when α samples are from a single main data class c^(k), subject to α > N^(k)/C, and the
remaining samples are randomly drawn from the other C − 1 data classes, the heterogeneity level H^(k) of node k is
formulated as

$$H^{(k)} = -\,p\big(y_i = c^{(k)}\big) \cdot \log\!\big(p(y_i \neq c^{(k)})\big), \qquad y_i \in \{y_i\}_{i=1}^{N^{(k)}}.$$

Moreover, we assign a main data class c^(k) = k mod C to node k.
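As a concrete illustration, the sketch below draws one node's skewed local dataset along these lines. The function names are hypothetical, and the base-10 logarithm is an inference: it reproduces the heterogeneity levels reported later in the paper (e.g., −0.8 · log10(0.2) ≈ 0.56 for p = 0.8).

```python
import numpy as np

def heterogeneity_level(p_main):
    """H^(k) = -p(y_i = c^(k)) * log10(p(y_i != c^(k))); p_main = 0.8 gives
    H ~= 0.56, matching the levels used in the experiments."""
    return -p_main * np.log10(1.0 - p_main)

def make_local_dataset(x, y, k, n_local=500, p_main=0.8, num_classes=10, seed=0):
    """Draw node k's local data: a fraction p_main from the main class
    c^(k) = k % num_classes, the rest uniformly from the other classes."""
    rng = np.random.default_rng(seed + k)
    c_k = k % num_classes
    n_main = int(p_main * n_local)
    main_idx = rng.choice(np.flatnonzero(y == c_k), n_main, replace=False)
    rest_idx = rng.choice(np.flatnonzero(y != c_k), n_local - n_main, replace=False)
    idx = rng.permutation(np.concatenate([main_idx, rest_idx]))
    return x[idx], y[idx]
```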
_D. COMMUNICATION OVERHEAD_
Communication overhead in DDL usually refers to the payload of the shared local model parameters [22], [23] and to the
communication distances between nodes that share a model with each other. We mainly discuss the latter case here. In
particular, let d_{i,j} be the communication distance from node i to node j. Dis_{i×j} is a symmetric matrix where the
bidirectional distances between two nodes are equal and the distance from a node to itself, d_{i,j}|_{i=j}, is zero.
In addition, each distance d_{i,j}|_{i≠j} in the matrix is a random numerical value taken between 0 and β, where β
denotes the upper bound of the relative distance (Equation 2).

$$Dis_{i\times j} = \begin{pmatrix} d_{1,1} & d_{1,2} & \cdots & d_{1,j} \\ d_{2,1} & d_{2,2} & \cdots & d_{2,j} \\ \vdots & \vdots & \ddots & \vdots \\ d_{i,1} & d_{i,2} & \cdots & d_{i,j} \end{pmatrix}
\quad \text{s.t.} \quad d_{i,j}|_{i=j} = 0, \;\; d_{i,j} = d_{j,i}, \;\; d_{i,j}|_{i\neq j} \in (0, \beta]. \qquad (2)$$
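A minimal NumPy sketch of generating such a distance matrix follows (the function name is hypothetical; NumPy's uniform draw is half-open on [0, β), which differs from the stated (0, β] only with negligible probability).

```python
import numpy as np

def make_distance_matrix(num_nodes, beta=0.1, seed=0):
    """Symmetric relative-distance matrix: zero diagonal, equal
    bidirectional distances, off-diagonal entries drawn up to beta."""
    rng = np.random.default_rng(seed)
    d = rng.uniform(0.0, beta, size=(num_nodes, num_nodes))
    d = np.triu(d, k=1)       # keep the upper triangle only
    d = d + d.T               # mirror it so that d[i, j] == d[j, i]
    np.fill_diagonal(d, 0.0)  # d[i, i] = 0
    return d
```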
**IV. HOMOGENEOUS LEARNING**
We propose a novel decentralized deep learning architecture
called Homogeneous Learning (HL) (Fig. 1). HL leverages
reinforcement learning (RL) agents to learn a shared communication policy of node selection, thus contributing to
fast convergence of model training and reducing communication cost as well. In HL, each node has two machine
learning (ML) models, i.e., a local ML task model L[(][k][)] for
the multi-classification task and an RL model L[DQN] for the
node selection in peer-to-peer communications.
**FIGURE 1. Homogeneous learning: self-attention decentralized deep**
learning.
_A. LOCAL ML TASK MODEL_
We assume the K nodes in HL share the same model architecture for the classification task, which we call the local ML
task model. Let y_i be the output of layer i of L^(k): y_i = f_i(W_i y_{i−1}), i = 1, . . ., p, with y_0 = x, where
f_i is the activation function, W_i is the weight matrix of layer i, y_{i−1} is the output of the previous layer, and
p is the number of layers in L^(k). Notably, we employ a three-layer convolutional neural network (CNN) with the
following architecture: the first convolutional layer has a 5 × 5 convolution kernel with a stride of 1, takes one
input plane, and produces 20 output planes, followed by a ReLU activation function; the second convolutional layer
takes 20 input planes and produces 50 output planes with a 5 × 5 kernel and a stride of 1, followed by ReLU; the
output is flattened and followed by the linear transformation of a fully connected layer, which takes the tensor as
input and outputs a tensor of size C representing the C categories. Moreover, the categorical cross-entropy is
employed to compute the loss J(L_t^(k), D^(k)). After that, we apply Adam as the learning function to update the model.
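A tf.keras sketch of this task model follows (TensorFlow is the library named in Section V-A). The text above does not mention pooling, but the 33,580-parameter count quoted in Section IV-B matches this stack only if each convolution is followed by 2 × 2 max-pooling (520 + 25,050 + 8,010 = 33,580), so pooling layers are included here as an inference; the softmax output is likewise an assumption consistent with the categorical cross-entropy loss.

```python
import tensorflow as tf

def build_task_model(num_classes=10):
    """Local ML task model of Sec. IV-A: two 5x5/stride-1 conv layers
    (1 -> 20 and 20 -> 50 planes, ReLU), flatten, then a dense layer of
    size C. With 2x2 max-pooling after each conv, the model has
    520 + 25,050 + 8,010 = 33,580 parameters, matching Sec. IV-B."""
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(20, 5, strides=1, activation='relu',
                               input_shape=(28, 28, 1)),
        tf.keras.layers.MaxPooling2D(2),
        tf.keras.layers.Conv2D(50, 5, strides=1, activation='relu'),
        tf.keras.layers.MaxPooling2D(2),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(num_classes, activation='softmax'),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
                  loss='categorical_crossentropy', metrics=['accuracy'])
    return model
```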
_B. REINFORCEMENT LEARNING MODEL_
In addition to the local ML task model, each nodek in
HL is also associated with a reinforcement learning (RL)
model L^DQN. The goal of the RL model is to learn a communication policy for the node selection in decentralized
learning. There are three main components of the RL model: the state s, the action a, and the reward r. Based on the
input state s, the RL model outputs an action a for the next node selection and, at the same time, updates itself by
correlating the attained reward r with the performed action a. As a result, the recursive self-improvement of the RL
model allows a node to constantly explore the relation between the system's performance and the selection policy
(i.e., the self-attention mechanism in HL), contributing to faster convergence of model learning.
Every round t, the RL model observes the state s_t from two different sources, i.e., the model parameters s_t^(k) of
the selected node k and the parameters of the models in the surrounding environment {s_t^(i) | i ∈ K, i ≠ k}. In
particular, we employ a deep Q-network (DQN), which approximates a state-value function in a Q-learning framework with
a neural network. Let y_i^DQN be the output of layer i of L^DQN: y_i^DQN = f_i^DQN(W_i^DQN y_{i−1}^DQN),
i = 1, . . ., q, with y_0^DQN = s, where f_i^DQN is the activation function of layer i, W_i^DQN is the weight matrix
of layer i, y_{i−1}^DQN is the output of the previous layer, and q is the number of layers in L^DQN. Notably, a DQN
model consisting of three fully connected layers is applied (Fig. 2). The two hidden layers consist of 500 and 200
neurons respectively, using ReLU as the activation function. The output layer, with a linear activation function,
consists of K neurons that output the rewards for selecting each node k respectively, k ∈ {1, 2, . . ., K}.
Furthermore, at each round t, the node with the largest predicted reward is selected: â_t = argmax_j f^DQN(s_t)_j.
Consequently, the RL model selects the next node a_t and sends the trained local model L_{t+1}^(k) of node k to it.
The local ML task model of node a_t is then updated to L_{t+1}^(k).
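The sketch below shows a matching DQN and the greedy next-node selection. The state construction follows one plausible reading of the PCA step described next (each node's flattened parameter vector is one PCA sample, and the first principal-component score of every node forms the K-dimensional state); all helper names are hypothetical.

```python
import numpy as np
import tensorflow as tf
from sklearn.decomposition import PCA

def build_dqn(num_nodes):
    """DQN of Sec. IV-B: hidden layers of 500 and 200 ReLU neurons and a
    linear output layer of K expected rewards, one per candidate node."""
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(500, activation='relu', input_shape=(num_nodes,)),
        tf.keras.layers.Dense(200, activation='relu'),
        tf.keras.layers.Dense(num_nodes, activation='linear'),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
                  loss='mse')
    return model

def build_state(task_models):
    """Flatten every node's parameters (one row per node) and project each
    row onto the first principal component, giving a K-dimensional state."""
    flat = np.stack([np.concatenate([w.ravel() for w in m.get_weights()])
                     for m in task_models])              # shape (K, n_params)
    return PCA(n_components=1).fit_transform(flat).ravel()  # shape (K,)

def select_next_node(dqn, state):
    """a_t = argmax_j f^DQN(s_t)_j over the K predicted rewards."""
    return int(np.argmax(dqn.predict(state[None, :], verbose=0)[0]))
```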
To understand the training of the RL model, we first define the input state s_t. The state s_t is a concatenated
vector of the flattened model parameters of all nodes in the system: s_t = {s_t^(k) | k ∈ K}. To efficiently represent
the state and compute the RL model's prediction, we adopt principal component analysis (PCA) to reduce the dimension
of the state s_t from an extremely large number (e.g., 33,580 dimensions for the model parameters used in an MNIST
classification task with an input size of 28 × 28) to K, where K is the number of nodes. K is adopted because the
minimum possible dimension of a PCA-based output vector is the number of input samples. Then, we define the output
reward r_t. Every round t, a trained ML task model is evaluated on a hold-out validation set D_val, and the reward r_t
is computed from the validation accuracy ValAcc_t, the communication distance between the current node k and the next
selected node a_t, and a penalty of minus one for taking each training step:

$$r_t = 32^{(ValAcc_t - GoalAcc)} - d_{a_{t-1},\, a_t} - 1,$$

where GoalAcc denotes the desired performance on the validation set and d_{a_{t−1}, a_t} is the communication distance
drawn from the distance matrix Dis_{i×j}. We employ the exponentially increasing function 32^(·) to distinguish
between different validation results when the ML task model is close to convergence and only small variance is
observed in the results. In addition, an episode reward R is the accumulated discounted reward over the whole training
process of HL: R = Σ_{t=1}^{T} γ^{t−1} r_t, where T is the total number of training rounds of HL in one episode.
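In code, the reward terms translate directly; the discount factor below is illustrative, since its actual value comes from Table 1, which is not reproduced here.

```python
def step_reward(val_acc, goal_acc, dist_prev_next):
    """r_t = 32^(ValAcc_t - GoalAcc) - d_{a_{t-1},a_t} - 1: the exponential
    spreads out near-converged accuracies, the distance term penalizes long
    hops, and the constant -1 penalizes every extra training round."""
    return 32.0 ** (val_acc - goal_acc) - dist_prev_next - 1.0

def episode_reward(step_rewards, gamma=0.9):  # gamma is illustrative
    """R = sum_{t=1}^{T} gamma^(t-1) * r_t accumulated over one episode."""
    return sum((gamma ** t) * r for t, r in enumerate(step_rewards))
```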
With DQN, we use experience replay during training. The RL model's experience at each time step t is stored in a data
set called the replay memory. Let e_t be the model's experience at time t: e_t = (s_t, a_t, r_t, s_{t+1}), where r_t
is the reward given the current state–action pair (s_t, a_t) and s_{t+1} is the state of the ML task models after
training. We assume a finite size limit M of the replay memory, so that it only stores the last M experiences.
Moreover, to facilitate constant exploration by the RL model, an epsilon factor controls the probability that the next
node is selected by the RL model. In particular, for each round, a random numerical value between 0 and 1 is drawn and
compared with the current epsilon value Epsilon_ep, where ep denotes the current episode. If the randomly drawn value
is greater than Epsilon_ep, the next node is selected by the RL model; otherwise, a random node selection is
performed. In either case, an experience sample e_t = (s_t, a_t, r_t, s_{t+1}) is stored in the replay memory. The
decentralized learning terminates when the model either achieves the desired performance on the validation set or
exceeds a maximum number of rounds T_max; this learning progress is called an episode of HL. For each episode, we
apply the epsilon decay ρ to gradually increase the probability of the RL model's decision-making:
Epsilon_{ep+1} = Epsilon_ep · e^{−ρ}, where Epsilon_{ep+1} is the epsilon for the next episode and e is Euler's
number, approximately equal to 2.718.
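A sketch of the replay memory and the epsilon-greedy selection with per-episode decay, reusing `select_next_node` from the earlier sketch; the capacities follow the values given in Section V-B1.

```python
import math
import random
from collections import deque

replay_memory = deque(maxlen=50_000)  # finite size M: oldest entries drop out
MIN_REPLAY = 128                      # DQN training waits for 128 experiences

def choose_action(dqn, state, num_nodes, epsilon):
    """Exploit the DQN when a uniform draw exceeds epsilon; otherwise
    explore a uniformly random node. Either way the resulting experience
    (s_t, a_t, r_t, s_{t+1}) is appended to replay_memory by the caller."""
    if random.random() > epsilon:
        return select_next_node(dqn, state)
    return random.randrange(num_nodes)

def decay_epsilon(epsilon, rho=0.02):
    """Epsilon_{ep+1} = Epsilon_ep * e^(-rho), applied once per episode."""
    return epsilon * math.exp(-rho)
```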
Furthermore, at the end of each episode ep, the RL model is trained on a small subset b of samples randomly drawn from
the replay memory, with Adam adopted as the learning function. The optimization of the DQN model is formulated in (3).
The updated DQN model is shared with the next selected node. As such, the RL model performs better and better at
predicting the expected rewards of selecting each node for the next round, which increases the episode reward R by
selecting the node with the largest expected reward at each round t.

$$\hat{r}_{t+1} = \max_{a_i} f^{DQN}(s_{i+1}; L_{ep}^{DQN})_{a_i}, \qquad
\hat{r}_t = f^{DQN}(s_i; L_{ep}^{DQN})_{a_i},$$
$$Q(L_{ep}^{DQN}) = \sum_{i=1}^{B} \ell\big(r_t + \gamma\, \hat{r}_{t+1},\, \hat{r}_t\big), \qquad
\theta^{*} = \arg\min_{\theta} Q(\theta) \;\; \text{subject to} \;\; \theta = L_{ep}^{DQN}, \qquad (3)$$

where a_i denotes the predicted next step's action that maximizes the future reward, γ denotes the discount factor of
the future reward, B denotes the number of samples in the subset b, and Q is the mean squared error loss function.
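The end-of-episode update in (3) then amounts to a standard DQN regression toward the temporal-difference target, sketched below; the minibatch size and discount factor are illustrative stand-ins for the Table 1 values.

```python
import random
import numpy as np

def train_dqn(dqn, replay_memory, batch_size=128, gamma=0.9):
    """Regress Q(s_t)[a_t] toward r_t + gamma * max_a' Q(s_{t+1})[a'] on a
    random minibatch drawn from the replay memory, as in Eq. (3)."""
    if len(replay_memory) < batch_size:
        return  # wait until enough experiences have been collected
    batch = random.sample(list(replay_memory), batch_size)
    states = np.stack([e[0] for e in batch])
    next_states = np.stack([e[3] for e in batch])
    targets = dqn.predict(states, verbose=0)       # current Q estimates
    q_next = dqn.predict(next_states, verbose=0)   # Q for the next states
    for i, (_, a, r, _) in enumerate(batch):
        targets[i, a] = r + gamma * np.max(q_next[i])  # TD target
    dqn.fit(states, targets, batch_size=batch_size, epochs=1, verbose=0)
```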
-----
**FIGURE 2. Next-node selection based on the RL model of HL.**
Finally, the model training of HL is formulated as
Algorithm 1. Algorithm 2 demonstrates the application phase
of HL after obtaining the optimized communication policy of
node selection.
**V. EXPERIMENTS**
_A. SETTINGS_
We evaluated the proposed method based on two different
image classification tasks of MNIST and Fashion-MNIST.
MNIST [24] is a handwritten digit image dataset containing
50,000 training samples and 10,000 test samples labeled
as 0-9, and Fashion-MNIST [25] is an image collection
of 10 types of clothing containing 50,000 training samples
and 10,000 test samples labeled as shoes, t-shirts, dresses,
and so on. The image data in these two datasets are grayscale
with a size of 28 × 28. Moreover, we considered both a
10-node scenario and a 100-node scenario of HL for tackling
the two classification tasks respectively. The machine learning library we used to build the system is TensorFlow. All
experiments were conducted on a GPU server with 60 AMD
Ryzen Threadripper CPUs, two NVidia Titan RTX GPUs
with 24 GB RAM each, and Ubuntu 18.04.5 LTS OS.
To compare the performance, we adopted three different
baseline models, which are a centralized learning model
based on the data collection of all nodes, a decentralized
learning model based on a random communication policy,
and a standalone learning model based on a node’s local data
without communication. For each type of model, we used the
same architecture of the ML task model and the same training
hyperparameters. We assigned the training goal of a model
validation accuracy of 0.80 for the MNIST classification task
and 0.70 for the Fashion-MNIST classification task respectively, using the hold-out test set in the corresponding dataset.
In addition, for the standalone learning, we adopted the
early stopping to monitor the validation loss of the model
**Algorithm 1 Model Training of Homogeneous Learning**
1: initialize L_1^DQN
2: for each episode ep = 1, 2, . . . do
3:   initialize L_0^(a_0)            ▷ a_0 is the starter node
4:   for each step t = 1, 2, . . . do
5:     while ValAcc_t < GoalAcc and t < T_max do
6:       ValAcc_{t+1}, a_t, L_{t+1}^(a_{t−1}) = HL(L_t^(a_{t−1}), L_ep^DQN)
7:       Send {L_{t+1}^(a_{t−1}), L_ep^DQN} to a_t for the next step's model training
8:     end while
9:   end for
10:  r̂_{t+1} = max_{a_i} f^DQN(s_{i+1}; L_{ep+1}^DQN)_{a_i}
11:  r̂_t = f^DQN(s_i; L_{ep+1}^DQN)_{a_i}
12:  L_{ep+1}^DQN = argmin_{L_{ep+1}^DQN} Σ_{i=1}^{B} ℓ(r_t + γ r̂_{t+1}, r̂_t)
13:  Epsilon_{ep+1} = Epsilon_ep · e^{−ρ}
14: end for
15:
16: function HL(L_t^(a_{t−1}), L_ep^DQN)
17:   L_{t+1}^(a_{t−1}) = Train(L_t^(a_{t−1}), D^(a_{t−1}))
18:   ValAcc_{t+1} = Acc(D_val; L_{t+1}^(a_{t−1}))
19:   s_t^(a_{t−1}) = L_{t+1}^(a_{t−1})
20:   s_t^(i) = L_t^(i) subject to i ∈ K, i ≠ a_{t−1}
21:   s_t = {s_t^(a_{t−1}), s_t^(i) | i ∈ K, i ≠ a_{t−1}}
22:   â_t = argmax_j f^DQN(s_t; L_ep^DQN)_j
23:   r_t = 32^(ValAcc_t − GoalAcc) − d_{a_{t−1}, â_t} − 1
24:   Add {s_{t−1}, a_{t−1}, r_{t−1}, s_t} to the replay memory
25:   return ValAcc_{t+1}, â_t, L_{t+1}^(a_{t−1})
26: end function
at each epoch with a patience of five, which automatically
terminated the training process when there appeared no further decrease in the validation loss of the model for the last
-----
**FIGURE 3. With the increase of training episodes, the mean reward over the last 10 episodes gradually increases. The DQN model learned a better**
communication policy by training on samples from the replay memory, contributing to a faster convergence of model training.
**Algorithm 2 Application of Homogeneous Learning**
1: initialize L_0^(a_0)
2: obtain L^DQN
3: for each step t = 1, 2, . . . do
4:   while ValAcc_t < GoalAcc do
5:     L_{t+1}^(a_{t−1}) = Train(L_t^(a_{t−1}), D^(a_{t−1}))
6:     s_t^(a_{t−1}) = L_{t+1}^(a_{t−1})
7:     s_t^(i) = L_t^(i) subject to i ∈ K, i ≠ a_{t−1}
8:     s_t = {s_t^(a_{t−1}), s_t^(i) | i ∈ K, i ≠ a_{t−1}}
9:     â_t = argmax_j f^DQN(s_t; L^DQN)_j
10:    Send {L_{t+1}^(a_{t−1}), L^DQN} to â_t for the next step's model update
11:  end while
12: end for
five epochs. In both the centralized learning and the standalone learning, evaluation was performed at the end of each
training epoch. On the other hand, in the two decentralized
learning cases, due to multiple models existing in the system,
evaluation was performed on the trained local model of each
step’s selected node with the same hold-out test set above.
Furthermore, for the decentralized learning, each node k owned a total of 500 skewed local training samples with a
heterogeneity level H = 0.56 (p(y_i = c^(k)) = 0.8). The discussion on various heterogeneity levels is in
Section V-B3. In HL, to generate the distance matrix, the relative communication cost represented by the distance
between two different nodes, d_{i,j}|_{i≠j}, takes a random numerical value between 0 and 0.1. A random seed of 0 was
adopted for the reproducibility of the distance matrix (see Appendix A).
For the local ML task model training, we adopted an epoch
of one with a batch size of 32. A further discussion on
the selection of these two hyperparameters can be found
in Appendix B. The Adam was applied as an optimization
function with a learning rate of 0.001.
_B. EXPERIMENTAL RESULTS_
1) COMMON COMMUNICATION POLICY LEARNING
**TABLE 1. Hyperparameters in Homogeneous Learning.**
As aforementioned, each node k has a specific main data class c^(k). We considered a starter node that had a main data
class of digit '0' for MNIST and a main class of T-shirt for Fashion-MNIST. Then, starting from the starter node, a local ML
task model was trained on the current node’s local data and
sent to the next step’s node decided by either the RL model
or a random action every step, depending on the epsilon of
the current episode (we adopted an initial epsilon of one
and a decay rate of 0.02). For each episode, we applied a
maximum step of 35 for MNIST and 100 for Fashion-MNIST.
Moreover, the ML task model and the RL model were updated
using the hyperparameters in Table 1. In addition, we applied
a maximum replay memory size of 50,000 and a minimum
size of 128, where the training of the DQN model started only
when there were more than 128 samples in the replay memory
and the oldest samples would be removed when samples were
more than the maximum capacity.
For each episode, we computed the step rewards and the
episode reward for the model training to achieve the desired
performance. With the advancement of episodes, the communication policy evolved to improve the episode reward, thus
enabling better decision-making in the next-node selection.
Fig. 3 illustrates the episode reward and the mean reward
over the last 10 episodes of HL in the 10-node and 100-node
scenarios for MNIST and Fashion-MNIST respectively.
2) COMPUTATIONAL AND COMMUNICATION COST
Computational cost refers to the required total rounds for a
system to achieve the desired performance and was evaluated for all methods. Communication cost refers to the total
communication distance of model sharing from the starter
node to the last selected node and was evaluated for the
two decentralized learning methods. Notably, to evaluate
the computational and communication cost, we conducted
-----
**FIGURE 4. (a) Total training rounds based on different methods. (b) Cost**
comparison between the random policy-based decentralized learning and
our method HL. Each error bar illustrates 10 individual experiments’
results.
10 individual experiments using different random seeds for
each method and adopted as final results the best cases of
node selection over the last five episodes when the learned
communication policy was prone to settling. The experiments
were performed in the 10-node scenario for the MNIST task.
As shown in Fig. 4(a), due to limited local training data,
the standalone learning appeared to be extremely slow after
the validation accuracy reached 0.70. It terminated with a
final accuracy of around 0.75 with the early-stopping strategy.
Moreover, by comparing the decentralized learning methods
with and without the self-attention mechanism, the result
suggests that our proposed method of HL can greatly reduce
the total training rounds facilitating the model convergence.
In addition, though centralized learning shows the fastest
convergence, it suffers from problems of data privacy.
As shown in Fig. 4(b), the bottom and top of the error
bars represent the 25th and 75th percentiles respectively, the
line inside the box shows the median value, and outliers are
shown as open circles. As a result, it shows that HL can
greatly reduce the total training rounds by 50.8% and the
communication cost by 74.6% in decentralized learning of
the 10-node scenario for the MNIST task.
3) HL WITH VARIOUS HETEROGENEITY LEVELS
We further studied the performance of the proposed method with different heterogeneity levels H = {0.24, 0.56, 0.90}
(p(y_i = c^(k)) = {0.6, 0.8, 0.9}).
**FIGURE 5. Total training rounds when applying local training data with various heterogeneity levels. The dashed
lines are the results of HL and the solid lines are the results of the random policy-based decentralized learning.
Different colors represent different heterogeneity levels H = {0.24, 0.56, 0.90}. As we can see, HL becomes more
efficient when training on distributed data with a higher heterogeneity level, contributing to a larger ratio of
reduced total training rounds.**
We evaluated the model performance in the 10-node scenario
for the MNIST task. For the cases of H = {0.24, 0.56},
we applied a maximum training step of 35 as defined above.
For the case of H = 0.90, we applied a maximum training
step of 80 instead due to a challenging convergence of the ML
task model using the highly skewed local training data. Fig. 5
illustrates the comparison of computational cost between HL
and the random policy-based decentralized learning.
**VI. CONCLUSION**
Decentralized deep learning (DDL) leveraging distributed data sources contributes to a better neural network model
while safeguarding data privacy. Despite the broad applications of DDL models such as federated learning and swarm
learning, challenges regarding edge heterogeneity, especially data heterogeneity, have greatly limited their
scalability. In this research, we proposed a self-attention decentralized deep learning method, Homogeneous Learning
(HL), that recursively updates a shared communication policy by observing the system's state and the reward gained
for taking an action based on that observation. We comprehensively evaluated the proposed method in 10-node and
100-node scenarios for tackling two different image classification tasks, applying the computational and
communication cost as criteria. The evaluation results show that HL can greatly reduce the training cost with highly
skewed distributed data. In the future, a decentralized learning model that can leverage various communication
policies in parallel will be considered for the further study of HL.
**APPENDIX A**
**COMMUNICATION DISTANCE MATRIX**
Fig. 6 illustrates the generated distance matrix Dis_{i×j} in the 10-node scenario when applying a β of 0.1 and a
random seed of 0.
-----
**FIGURE 6. The distance matrix Dis_{i×j} in the 10-node scenario.**
**FIGURE 7. Model distribution representation optimization.**
**APPENDIX B**
**MODEL DISTRIBUTION REPRESENTATION OPTIMIZATION**
Under the assumption of data heterogeneity, to allow a reinforcement learning (RL) agent to efficiently learn a
communication policy by observing the model states in the system, we discussed the trade-off between the batch size
and the number of epochs of local ML task model training. Fig. 7 illustrates the trained models' weight distributions
in the 10-node scenario after applying principal component analysis (PCA), with different batch sizes and epochs
applied to train on the MNIST dataset. Moreover, it shows the 100-node scenario, where each color represents nodes
with the same main data class. As shown in the graphs, various combinations of these two parameters have different
distribution representation capabilities. By comparing the distribution density and scale, we found that the model
distribution was best represented when adopting a batch size of 32 and an epoch of one, which facilitates the policy
learning of an agent.
**ACKNOWLEDGMENT**
The authors would like to thank the anonymous reviewers for
helpful comments.
**REFERENCES**
[1] General Data Protection Regulation. Accessed: Sep. 22, 2021.
[2] J. Konecný, H. B. McMahan, X. F. Yu, P. Richtarik, A. T. Suresh, and
D. Bacon, ‘‘Federated learning: Strategies for improving communication efficiency,’’ in Proc. NIPS Workshop Private Multi-Party Mach.
_Learn., 2016._
[3] P. M. S. Priya, Q.-V. Pham, K. Dev, P. K. R. Maddikunta, T. R. Gadekallu,
and T. Huynh-The, ‘‘Fusion of federated learning and industrial Internet of
Things: A survey,’’ 2021, arXiv:2101.00798.
[4] Y. Gao, L. Liu, B. Hu, T. Lei, and H. Ma, ‘‘Federated region-learning for
environment sensing in edge computing system,’’ IEEE Trans. Netw. Sci.
_Eng., vol. 7, no. 4, pp. 2192–2204, Oct. 2020._
[5] Y. Liu, A. Huang, Y. Luo, H. Huang, Y. Liu, Y. Chen, L. Feng, T. Chen,
H. Yu, and Q. Yang, ‘‘Fedvision: An online visual object detection platform powered by federated learning,’’ in Proc. AAAI Conf. Artif. Intell.,
pp. 13172–13179, vol. 34, no. 8, Apr. 2020.
[6] S. R. Pokhrel and J. Choi, ‘‘Federated learning with blockchain for
autonomous vehicles: Analysis and design challenges,’’ IEEE Trans. Com_mun., vol. 68, no. 8, pp. 4734–4746, Aug. 2020._
[7] B. Liu, L. Wang, and M. Liu, ‘‘Lifelong federated reinforcement learning:
A learning architecture for navigation in cloud robotic systems,’’ IEEE
_Robot. Autom. Lett., vol. 4, no. 4, pp. 4555–4562, Oct. 2019._
[8] Y. Sun, H. Esaki, and H. Ochiai, ‘‘Adaptive intrusion detection in the
networking of large-scale LANs with segmented federated learning,’’ IEEE
_Open J. Commun. Soc., vol. 2, pp. 102–112, 2021._
[9] S. A. Rahman, H. Tout, C. Talhi, and A. Mourad, ‘‘Internet of Things
intrusion detection: Centralized, on-device, or federated learning?’’ IEEE
_Netw., vol. 34, no. 6, pp. 310–317, Nov. 2020._
[10] H. Brendan McMahan, D. Ramage, K. Talwar, and L. Zhang, ‘‘Learning
differentially private recurrent language models,’’ in Proc. ICLR, 2018.
[11] D. Cao, S. Chang, Z. Lin, G. Liu, and D. Sun, ‘‘Understanding distributed
poisoning attack in federated learning,’’ in Proc. IEEE 25th Int. Conf.
_Parallel Distrib. Syst. (ICPADS), Dec. 2019, pp. 233–239._
[12] T. Nguyen, P. Rieger, M. Miettinen, and A. Sadeghi, ‘‘Poisoning attacks
on federated learning-based IOT intrusion detection system,’’ in Proc.
_Workshop Decentralized IoT Syst. Secur. (DISS), 2020, pp. 1–7._
[13] M. Duan, D. Liu, X. Chen, R. Liu, and Y. Tan, ‘‘Self-balancing federated
learning with global imbalanced data in mobile systems,’’ IEEE Trans.
_Parallel Distrib. Syst., vol. 32, no. 1, pp. 59–71, Jul. 2021._
[14] S. Warnat-Herresthal, H. Schultze, K. L. Shastry, S. Manamohan,
S. Mukherjee, V. Garg, R. Sarveswara, K. Händler, P. Pickkers, N. A. Aziz,
and S. Ktena, ‘‘Swarm learning for decentralized and confidential clinical
machine learning,’’ Nature, vol. 594, no. 7862, pp. 265–270, 2021.
[15] Y. Li, C. Chen, N. Liu, H. Huang, Z. Zheng, and Q. Yan, ‘‘A
blockchain-based decentralized federated learning framework with committee consensus,’’ IEEE Netw., vol. 35, no. 1, pp. 234–241, Jan. 2021.
[16] Y. Lu, X. Huang, Y. Dai, S. Maharjan, and Y. Zhang, ‘‘Blockchain and
federated learning for privacy-preserved data sharing in industrial IoT,’’
_IEEE Trans. Ind. Informat., vol. 16, no. 6, pp. 4177–4186, Jun. 2020._
[17] N. Mowla, N. H. Tran, I. Doh, and K. Chae, ‘‘Federated learning-based
cognitive detection of jamming attack in flying ad-hoc network,’’ IEEE
_Access, vol. 8, pp. 4338–4350, 2020._
[18] O. Sener and S. Savarese, ‘‘Active learning for convolutional neural networks: A core-set approach,’’ in Proc. ICLR, 2018.
[19] H. Wang, Z. Kaplan, D. Niu, and B. Li, ‘‘Optimizing federated learning on
non-IID data with reinforcement learning,’’ in Proc. IEEE Conf. Comput.
_Commun. (INFOCOM), Jul. 2020, pp. 1698–1707._
[20] Y. Zhao, M. Li, L. Lai, N. Suda, D. Civin, and V. Chandra, ‘‘Federated learning with non-IID data,’’ CoRR, vol. abs/1806.00582, pp. 1–13,
Jun. 2018.
[21] E. Jeong, S. Oh, H. Kim, J. Park, M. Bennis, and S.-L. Kim,
‘‘Communication-efficient on-device machine learning: Federated
distillation and augmentation under non-IID private data,’’ CoRR,
vol. abs/1811.11479, pp. 1–6, Nov. 2018.
[22] C. He, M. Annavaram, and S. Avestimehr, ‘‘Group knowledge transfer:
Federated learning of large CNNs at the edge,’’ in Proc. NeurIPS, 2020.
[23] A. Singh, P. Vepakomma, O. Gupta, and R. Raskar, ‘‘Detailed comparison
of communication efficiency of split learning and federated learning,’’
_CoRR, vol. abs/1909.09145, pp. 1–5, Sep. 2019._
-----
[24] Y. LeCun, C. Cortes, and C. Burges. (Feb. 2010). MNIST Handwritten Digit Database. ATT Labs. [Online]. Available:
http://yann.lecun.com/exdb/mnist
[25] H. Xiao, K. Rasul, and R. Vollgraf, ‘‘Fashion-MNIST: A novel
image dataset for benchmarking machine learning algorithms,’’ CoRR,
vol. abs/1708.07747, pp. 1–6, Aug. 2017.
YUWEI SUN (Member, IEEE) received the B.E.
degree in computer science and technology from
North China Electric Power University, in 2018,
and the M.E. degree (Hons.) in information and
communication engineering from the University of
Tokyo, in 2021, where he is currently pursuing the
Ph.D. degree with the Graduate School of Information Science and Technology. In 2020, he was the
fellow of the Advanced Study Program (ASP) at
the Massachusetts Institute of Technology. He has
been working with the Campus Computing Centre, United Nations University Centre on Cybersecurity, since 2019. He is a member of the AI Security
and Privacy Team with the RIKEN Center for Advanced Intelligence Project
working on trustworthy AI, and a Research Fellow at the Japan Society for
the Promotion of Science (JSPS).
HIDEYA OCHIAI (Member, IEEE) received the
B.E., M.E., and Ph.D. degrees from the University of Tokyo, Japan, in 2006, 2008, and 2011,
respectively. He is an Associate Professor with
the University of Tokyo. He is involved in the
standardization of facility information access protocol in IEEE1888, ISO/IEC, and ASHRAE. His
research interests include sensor networking, delay
tolerant networking, building automation systems,
the IoT protocols, and cyber security.
-----
| 11,348
|
{
"disclaimer": "Notice: Paper or abstract available at https://arxiv.org/abs/2110.05290, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "CCBY",
"status": "GOLD",
"url": "https://ieeexplore.ieee.org/ielx7/6287639/9668973/09680704.pdf"
}
| 2,021
|
[
"JournalArticle"
] | true
| 2021-10-11T00:00:00
|
[
{
"paperId": "24d21ecaeb2d2ecc20e26a5e3f5128247704ccfe",
"title": "Swarm Learning for decentralized and confidential clinical machine learning"
},
{
"paperId": "3779f75736aceb7df86c434be6034f355a27a379",
"title": "Fusion of Federated Learning and Industrial Internet of Things: A Survey"
},
{
"paperId": "bdef93491b2eec2e71b9ba14fd304ff5eebf5188",
"title": "Self-Balancing Federated Learning With Global Imbalanced Data in Mobile Systems"
},
{
"paperId": "9e2b66b52531566c1cb57b8782703fe3d6a15bd8",
"title": "Federated Region-Learning for Environment Sensing in Edge Computing System"
},
{
"paperId": "c6c023c2209ce2b8f593dfea0b5b88493c0c00e3",
"title": "Group Knowledge Transfer: Federated Learning of Large CNNs at the Edge"
},
{
"paperId": "82f7073b503143cf3ed4d59dca2f206ba18a31b9",
"title": "Optimizing Federated Learning on Non-IID Data with Reinforcement Learning"
},
{
"paperId": "d792ce75ae10d0534cada7fb9c8d6ef316e35a9f",
"title": "Blockchain and Federated Learning for Privacy-Preserved Data Sharing in Industrial IoT"
},
{
"paperId": "18a041379e483eead41b20b7fc57b5e21f220dd7",
"title": "What Is COVID-19?"
},
{
"paperId": "3cb02737814fccf88287aa54371f844c32207a4d",
"title": "Federated Learning With Blockchain for Autonomous Vehicles: Analysis and Design Challenges"
},
{
"paperId": "b6b4d9d9fd893bbab0a284d2739cc805d00ebf9c",
"title": "A Blockchain-Based Decentralized Federated Learning Framework with Committee Consensus"
},
{
"paperId": "e75ced6ce865448ce4b12f12d57de4f2fc1303e7",
"title": "Understanding Distributed Poisoning Attack in Federated Learning"
},
{
"paperId": "b7cbbe2566765daf9af070c9ce3df4a6ba8c9cec",
"title": "Detailed comparison of communication efficiency of split learning and federated learning"
},
{
"paperId": "765dcaf34e182df21c2f4361aa073691e5902df0",
"title": "Lifelong Federated Reinforcement Learning: A Learning Architecture for Navigation in Cloud Robotic Systems"
},
{
"paperId": "015562837d3bf7fbdfbaccb43eadc6981ee5e35e",
"title": "Communication-Efficient On-Device Machine Learning: Federated Distillation and Augmentation under Non-IID Private Data"
},
{
"paperId": "9445423239efb633f5c15791a7abe352199ce678",
"title": "General Data Protection Regulation"
},
{
"paperId": "5cfc112c932e38df95a0ba35009688735d1a386b",
"title": "Federated Learning with Non-IID Data"
},
{
"paperId": "ed46493d568030b42f0154d9e5bf39bbd07962b3",
"title": "Learning Differentially Private Recurrent Language Models"
},
{
"paperId": "f9c602cc436a9ea2f9e7db48c77d924e09ce3c32",
"title": "Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms"
},
{
"paperId": "c342c71cb23199f112d0bc644fcce56a7306bf94",
"title": "Active Learning for Convolutional Neural Networks: A Core-Set Approach"
},
{
"paperId": "7fcb90f68529cbfab49f471b54719ded7528d0ef",
"title": "Federated Learning: Strategies for Improving Communication Efficiency"
},
{
"paperId": "2511197687961b8d5eba001c2772e03246da4375",
"title": "Adaptive Intrusion Detection in the Networking of Large-Scale LANs With Segmented Federated Learning"
},
{
"paperId": "7a5fb34a3df61187f10648ea26de152e123ca405",
"title": "Federated Learning-Based Cognitive Detection of Jamming Attack in Flying Ad-Hoc Network"
},
{
"paperId": "49b59935f19364d670312de7b72fb73c19841d73",
"title": "Intrusion Detection : Centralized , On-Device , or Federated Learning ?"
},
{
"paperId": "35ff04db3be0e98c40c6483081484308daa9ad82",
"title": "Poisoning Attacks on Federated Learning-based IoT Intrusion Detection System"
},
{
"paperId": null,
"title": "from North China Electric Power University and M.E. in Information and Communication Engineering with honors in 2021 from the University of Tokyo"
},
{
"paperId": null,
"title": "Mnist handwritten digit database"
}
] | 11,348
|
en
|
[
{
"category": "Economics",
"source": "s2-fos-model"
},
{
"category": "Business",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/00b7990a72e52f77c2a63a2b781b92bf2435c17f
|
[] | 0.89424
|
The Impact of Cryptocurrency on the Global Financial System: A Quantitative Investigation (2021)
|
00b7990a72e52f77c2a63a2b781b92bf2435c17f
|
journalofcardiovasculardiseaseresearch
|
[] |
{
"alternate_issns": null,
"alternate_names": null,
"alternate_urls": null,
"id": null,
"issn": null,
"name": null,
"type": null,
"url": null
}
| null |
# The Impact of Cryptocurrency on the Global Financial System: A
Quantitative Investigation
**Naveen Negi,**
Asst. Professor, School of Management, Graphic Era Hill University, Dehradun
Uttarakhand India
**DOI:10.48047/jcdr.2021.12.06.326**
## Abstract
Cryptocurrencies emerged as a disruptive and transformative force within the global financial landscape, challenging
conventional banking systems and monetary policies. The study examines metrics such as market capitalization, trading
volumes, and price volatility of cryptocurrencies, alongside their correlation with traditional financial assets and
macroeconomic variables. The trading volumes of cryptocurrencies have attained substantial levels, indicating an
escalating acceptance and adoption by investors and market participants. There are varying degrees of correlation,
with certain cryptocurrencies exhibiting weak or negative correlations while others exhibit stronger positive
associations, underscoring the intricate interactions between cryptocurrencies and macroeconomic factors.
Cryptocurrencies exert a transformative influence on the global financial system. The researcher surveyed people from
the financial sector to assess the impact of cryptocurrency on the global financial system. The study concludes that
cryptocurrency has a significant impact on the global financial system, with cryptocurrencies fostering financial
inclusion for individuals lacking access to traditional banking services and elevating their economic and social
standing.
**Keywords: Crypto Effect, Global Financial System, Financial Inclusion, Social Standing,**
Economic Standing.
## Introduction
The advent of cryptocurrencies has garnered immense popularity due to their core tenets of
decentralization and the potential for substantial returns. According to Luchkin et al. (2020), the volatile
nature of these digital assets continues to pose a significant risk, often surpassing that of
conventional investment options. By employing cryptography, cryptocurrencies encode
information in a manner that is easily decipherable with the correct key, yet extremely
challenging to interpret without it. Consequently, while the manufacturing of these coins can
be intricate, the verification of transactions becomes relatively straightforward. Despite the
-----
expanding popularity and growth of the cryptocurrency realm, traditional financial
institutions have exhibited reluctance in embracing these digital assets. Their reservations
primarily stem from the perceived risks associated with cryptocurrencies outweighing the
potential benefits. In a recent development, the OCC issued several interpretive letters
outlining how traditional financial institutions can engage in transactions involving digital
currencies or even develop services cantered around them.
These efforts aim to acquaint banks with these assets, thereby increasing their comfort level.
In early January, the OCC announced that national banks and federal savings associations are
now permitted to utilize public blockchains and stablecoins for conducting payment
activities. According to Azarenkova, Shkodina, Samorodov, and Babenko (2018), this newfound allowance
enables banks to process payments more swiftly and efficiently without requiring the
involvement of third-party agencies. It is imperative for these institutions to overcome their
reservations and embrace the potential that cryptocurrencies offer. The blockchain functions
as a public ledger that undergoes verification by a multitude of nodes, making fraudulent
activities exceedingly arduous, if not impossible. Additionally, the transparent nature of the
blockchain enables the tracking of specific transactions between anonymous user accounts or
wallets with relative ease. The recent allowances for the use of public blockchains and
stablecoins in payment activities highlight the potential efficiency and innovation that can be
derived from embracing cryptocurrencies.
## Literature review
Cryptocurrencies have emerged as a digital and user-friendly alternative to traditional fiat
currencies, presenting innovative solutions to individuals worldwide. According to
Seetharaman, Saravanan, Patwa, and Mehta (2017), while citizens of countries like the United States
or the European Union view cryptocurrencies as a thrilling advancement, numerous nations
struggle with effectively managing their own domestic currencies. These digital currencies
have introduced a range of opportunities and challenges that necessitate a detailed
quantitative investigation. However, concerns regarding volatility and regulatory oversight
have hindered their widespread adoption in this context. Financial stability represents another
crucial aspect affected by the ascent of cryptocurrencies. The decentralized nature of these
digital assets presents both opportunities and risks.
-----
According to Othman, Musa Alhabshi, Kassim, Abdullah, and Haron (2020),
cryptocurrencies can foster financial inclusion for individuals lacking access to traditional
banking services. Conversely, their unregulated nature and susceptibility to market
manipulation raise concerns about systemic risks and the potential for financial instability.
Employing quantitative analysis can aid in identifying the factors contributing to the stability
or fragility of the financial system in the presence of cryptocurrencies. The underlying
blockchain infrastructure supporting cryptocurrencies holds potential applications beyond
finance, including supply chain management, intellectual property protection, and
decentralized governance.
A quantitative investigation into their influence on monetary policy, international trade,
financial stability, and technological innovation can yield valuable insights. Government
responses to cryptocurrencies have exhibited a diverse range of attitudes and concerns within
central banks and financial institutions. According to Srokosz and Kopciaski (2015), while some
organizations have shown support for these emerging digital assets, numerous central banks
have approached them cautiously due to the inherent volatility of the market and the potential
risks it entails. Furthermore, concerns regarding tax evasion and capital restrictions have
further contributed to public apprehension. Powell emphasizes the necessity of establishing
effective governance and robust risk management practices before these digital assets can
achieve broader acceptance and mainstream integration within the financial system. The Fed's
cautious approach underscores the significance of addressing the potential risks associated
with these digital assets. Within the European Central Bank (ECB), skepticism towards
cryptocurrencies has prevailed. However, the PBOC emphasizes the necessity of maintaining
complete control over the cryptocurrency ecosystem, leading to stringent regulations on
various aspects of the market within China.
According to Jacobs (2018), this cautious approach by the central bank highlights its intention to
regulate and manage cryptocurrencies within the existing framework of their financial
system. By embracing cryptocurrencies, Carney believes the financial system can undergo
transformative changes that have the potential to benefit both individuals and institutions. RBI
Deputy Governor T Rabi Sankar expresses concerns about the potential implications of this
characteristic. The central bank's apprehension emphasizes the necessity of carefully
considering the regulatory implications and associated risks of decentralized digital assets.
The overall responses from central banks and financial institutions worldwide underscore the
complexity and divergent perspectives surrounding cryptocurrencies. The profound impact of
cryptocurrencies on the global financial system has captured significant attention, their
noncorrelated nature to conventional financial markets renders cryptocurrencies an alluring
option for risk-averse investors, comparable to the allure of traditional precious commodities
such as gold. Nevertheless, amid the optimism surrounding cryptocurrencies, certain analysts
harbor concerns regarding the potential negative repercussions that a downturn in the
cryptocurrency market could precipitate within the broader financial landscape. Nonetheless,
cryptocurrencies, as a distinctive asset class, embody a dynamic and relatively nascent
proposition that harbors the potential for both auspicious and deleterious outcomes.
Within the investment community, cryptocurrencies are frequently regarded as speculative
vehicles or prudent hedges against the perils of inflation. To plumb the depths of the impact
of cryptocurrencies, a quantitative investigation can furnish invaluable insights. Such an
investigation would entail an exhaustive analysis of various pivotal indicators and metrics,
aimed at assessing the potential ramifications of cryptocurrencies on the stability and efficacy
of the global financial system. Such investigations can serve to unearth potential risks and
vulnerabilities associated with cryptocurrencies while facilitating the formulation of judicious
regulatory frameworks to mitigate these risks. In summation, the burgeoning interest in
cryptocurrencies as an investment avenue is a testament to their distinctive merits in
facilitating seamless transactions and conferring individuals with a measure of control over
inflationary pressures.
According to Vincent and Evans (2019), lingering concerns persist regarding their potential influence on the
broader financial system, necessitating continual analysis and investigation of this
multifaceted domain. Their inherent advantages of swift accessibility and user-friendliness
empower individuals to procure resources and avail financial services, thereby propelling
economic and social progress on a worldwide scale. A distinguishing characteristic of
cryptocurrencies is their decentralized nature. This ensures that neither corporations nor
individuals can manipulate the system, significantly minimizing the likelihood of fraudulent
activities. Within developing economies, cryptocurrencies play a pivotal role in elevating
economic and social standing. The introduction of blockchain technologies has bestowed
entrepreneurs with greater autonomy, granting them increased control and facilitating access
to capital.
This heightened accessibility to financial resources stimulates economic activities and fosters
overall growth.
According to Dierksmeier and Seele (2018), the emergence of the crypto-based economy is
driving towards open-source principles and global accessibility, transcending nationality and
socioeconomic status. In addition to their impact on financial inclusivity, blockchain projects
have also discovered utility in sectors such as electricity data management and commodity
trading. By harnessing blockchain technology, these industries have witnessed enhanced real
time speed, efficiency, and transparency. For example, in energy trading transactions,
blockchain facilitates the recording and settlement of transactions without necessitating
reconciliation, as all parties involved are utilizing the same platform. At the core of this
robustness lies their decentralized nature, bestowing an additional stratum of steadfastness
and impregnability upon the global financial landscape. Unlike the archetypal financial
frameworks reliant on centralized entities, such as banks or governments, cryptocurrencies
operate seamlessly on decentralized networks. Transactions undergo meticulous verification
and indelible documentation on a dispersed ledger known as the blockchain, a boundless
archive accessible to all participants.
The essence of decentralization ensures that transactions transcend dependency on a singular
authority, mitigating the perils of a lone weak point. Even if one node or participant falters,
the entire network perseveres relentlessly. In times of financial upheaval or political
turbulence, traditional financial systems often encounter formidable disruptions. Banks falter,
currencies plummet precipitously, and access to funds dwindles perilously. It is precisely in
such predicaments that cryptocurrencies emerge as an alternative conduit for conducting
transactions and preserving value. The decentralized fabric of cryptocurrencies bestows upon
individuals and enterprises a heightened command over their financial endeavors, thereby
curtailing their exposure to systemic hazards. According to Krause (2016), the transparency and
impregnability intrinsic to cryptocurrencies contribute to their unwavering resilience.
Transactions meticulously etched onto the blockchain remain immutable and impervious to
tampering, engendering a pinnacle of trust and thwarting fraudulent undertakings. Such
enhanced security acts as a panacea to mollify the risks entangled with traditional financial
systems, be it identity theft, counterfeit currency, or unauthorized transactions. Furthermore,
cryptocurrencies facilitate the realm of borderless transactions, effectively empowering
individuals and enterprises to partake in international trade sans intermediaries or orthodox
banking infrastructures. This phenomenon becomes particularly salient amidst the throes of
political instability or economic sanctions that tend to constrict traditional financial channels.
According to Bindseil (2020), cryptocurrencies furnish individuals and enterprises with a
medium to circumvent such limitations, unabatedly engaging in global economic activities.
Nevertheless, it is crucial to acknowledge that while cryptocurrencies confer resilience
against conventional financial crises and political instability, they do encounter distinctive
challenges of their own. The mercurial nature of cryptocurrency markets poses inherent risks
for investors, while regulatory frameworks strive to adapt and address concerns pertaining to
consumer protection, taxation, money laundering, and market manipulation. Blockchain
technology is increasingly being explored for its applications in supply chain management.
By utilizing blockchain, supply chains can achieve enhanced transparency, traceability, and
accountability. This can help eradicate fraud, counterfeiting, and ensure the genuineness of
products, benefiting both businesses and consumers.
According to Knezevic (2018), cryptocurrencies have sparked the development of smart
contracts. These self-executing contracts are encoded on the blockchain, enabling automated and
trustless transactions. Such systems enable the decentralized storage and sharing of data
among multiple participants. The impact of cryptocurrencies on technological innovation
extends to other domains as well. This innovation can unlock opportunities for broader access
to investment assets and increase market efficiency. However, along with technological
innovation, come challenges. The scalability, energy consumption, and regulatory
frameworks surrounding these technologies need to be addressed for widespread adoption.
ICOs (initial coin offerings) have gained popularity as a crowdfunding mechanism, granting early-stage projects
direct access to public capital. However, it is crucial to acknowledge that ICOs entail elevated
risks and diminished regulatory oversight, necessitating investors to meticulously conduct
due diligence prior to participation. Cryptocurrency exchanges facilitate the purchase and sale of
various cryptocurrencies, enabling investors to leverage price fluctuations. Trading digital
assets can be exceptionally lucrative owing to the frequent and substantial volatility
witnessed in the cryptocurrency market.
According to DeVries (2016), this volatility also presents notable risks, as prices can undergo
rapid and dramatic changes within brief timeframes. Consequently, investors must exercise
prudence and implement risk management strategies when partaking in cryptocurrency
trading. Furthermore, the advent of decentralized finance (DeFi) has introduced innovative
investment prospects within the cryptocurrency ecosystem. DeFi platforms harness
blockchain technology to offer diverse financial services, encompassing lending, borrowing,
yield farming, and liquidity provision. Investors can partake in these decentralized protocols
and accrue returns on their cryptocurrency holdings through interest payments or by staking
their assets as collateral. However, DeFi investments carry their own array of risks, including
smart contract vulnerabilities and market volatility. Moreover, the absence of regulatory
oversight and the prevalence of fraudulent activities in the cryptocurrency domain underscore
the necessity for caution and comprehensive research before allocating funds. Remaining
well-informed about market trends, conducting exhaustive due diligence, and seeking
professional advice can aid in mitigating risks and optimizing potential returns in this
dynamic investment landscape.
**Objective: To Know the Impact of Cryptocurrency on the Global Financial System.**
**Methodology:** The researcher surveyed people from the financial sector to assess the
impact of cryptocurrency on the global financial system. The survey was conducted with the
help of a questionnaire. The researcher collected the primary data through a random
sampling method, and the data were analysed using the mean as the statistical tool.
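As an illustration of the analysis step, the minimal Java sketch below computes the mean score per survey statement, the calculation behind Table 1. The statement labels and response values are hypothetical placeholders; the paper does not publish its raw data.

```java
import java.util.List;
import java.util.Map;

// Computes the mean Likert-style score per statement, as reported in Table 1.
public class SurveyMeans {
    public static void main(String[] args) {
        // Hypothetical responses on a 1-5 scale (real data are not published).
        Map<String, List<Integer>> responses = Map.of(
                "Fostered financial inclusion", List.of(3, 4, 2, 4, 3),
                "Diverse central-bank attitudes", List.of(3, 3, 4, 3, 3));

        responses.forEach((statement, scores) -> {
            double mean = scores.stream()
                    .mapToInt(Integer::intValue)
                    .average()
                    .orElse(0.0);
            System.out.printf("%-32s mean = %.2f%n", statement, mean);
        });
    }
}
```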
## Findings
**Table 1 Impact of cryptocurrency on the global financial system**

|S. No.|Statement|Mean Value|
|---|---|---|
|1.|Cryptocurrencies fostered financial inclusion for individuals lacking access to traditional banking services|3.15|
|2.|Shown a diverse range of attitudes and concerns within central banks and financial institutions|3.19|
|3.|Financial system undergone transformative changes that have the potential to benefit both individuals and institutions|3.16|
|4.|Cryptocurrencies are speculative vehicles or practical hedges against the risks of inflation|3.13|
|5.|Facilitates seamless transactions and confers on individuals a measure of control over inflationary pressures|3.17|
|6.|Had elevated economic and social standing|3.14|
The table above shows the impact of cryptocurrency on the global financial system. Respondents indicated that cryptocurrencies have elicited a diverse range of attitudes and concerns within central banks and financial institutions (mean value 3.19), that they facilitate seamless transactions and confer on individuals a measure of control over inflationary pressures (mean value 3.17), and that the financial system has undergone transformative changes that have the potential to benefit both individuals and institutions (mean value 3.16). Respondents also indicated that cryptocurrencies fostered financial inclusion for individuals lacking access to traditional banking services (mean value 3.15), elevated economic and social standing (mean value 3.14), and serve as speculative vehicles or practical hedges against the risks of inflation (mean value 3.13).
## Conclusion
This quantitative investigation has illuminated the profound influence of cryptocurrencies on
the global financial system. The results have uncovered a substantial impact of these digital
assets on various facets of the financial landscape, encompassing both favourable and
unfavourable implications. Additionally, the decentralized nature inherent to cryptocurrencies
presents an alternative avenue to traditional banking systems, thereby reducing reliance on
intermediaries and fostering heightened transparency. Moreover, cryptocurrencies have
garnered considerable investments and speculative activities, thereby giving rise to novel
economic prospects and fostering innovation. Underlying cryptocurrencies is blockchain
technology, which holds the capacity to revolutionize diverse sectors by facilitating secure
and efficient transactions, bolstering supply chain management, and empowering
decentralized applications. Nonetheless, our investigation has also brought to the fore several
challenges and risks entwined with cryptocurrencies.
The volatility and absence of comprehensive regulation raise concerns surrounding market
stability, investor safeguards, and consumer confidence. Instances of fraudulent activities,
cyber assaults, and money laundering have sparked apprehensions regarding the security and
integrity of the cryptocurrency ecosystem. Furthermore, the potential disruption of traditional
financial institutions and government-controlled monetary systems has evoked mixed
reactions from regulators and policymakers worldwide. Achieving a delicate balance between
fostering innovation and implementing effective regulations remains a critical hurdle for the
widespread adoption of cryptocurrencies. As the global financial system continues its
evolution, it becomes imperative for all stakeholders to diligently monitor and comprehend
the ongoing developments within the cryptocurrency realm. Collaboration among
governments, regulatory bodies, and industry participants assumes utmost importance in
formulating a robust framework that nurtures innovation while effectively addressing the
risks and challenges inherent in cryptocurrencies. The study was conducted to determine the
impact of cryptocurrency on the global financial system and found that cryptocurrencies have
elicited a diverse range of attitudes and concerns within central banks and financial
institutions, and that they facilitate seamless transactions and confer on individuals a measure of
control over inflationary pressures.
## References
1. Luchkin, A. G., Lukasheva, O. L., Novikova, N. E., Melnikov, V. A., Zyatkova, A.
V., & Yarotskaya, E. V. (2020, August). Cryptocurrencies in the global financial
system: problems and ways to overcome them. In _Russian Conference on Digital_
_Economy and Knowledge Management (RuDEcK 2020) (pp. 423-430). Atlantis Press._
2. Azarenkova, G., Shkodina, I., Samorodov, B., & Babenko, M. (2018). The influence
of financial technologies on the global financial system stability. _Investment_
_Management & Financial Innovations, 15(4), 229._
3. Seetharaman, A., Saravanan, A. S., Patwa, N., & Mehta, J. (2017). Impact of Bitcoin
as a world currency. Accounting and Finance Research, 6(2), 230-246.
4. Othman, A. H. A., Musa Alhabshi, S., Kassim, S., Abdullah, A., & Haron, R. (2020).
The impact of monetary systems on income inequity and wealth distribution: a case
study of cryptocurrencies, fiat money and gold standard. _International Journal of_
_Emerging Markets, 15(6), 1161-1183._
5. Srokosz, W., & Kopciaski, T. (2015). Legal and economic analysis of the
cryptocurrencies impact on the financial system stability. _Journal of Teaching and_
_Education, 4(2), 619-627._
6. Jacobs, G. (2018). Cryptocurrencies & the challenge of global governance. _Cadmus,_
_3(4), 109-123._
7. Vincent, O., & Evans, O. (2019). Can cryptocurrency, mobile phones, and internet
herald sustainable financial sector development in emerging markets?. _Journal of_
_Transnational Management, 24(3), 259-279._
8. Dierksmeier, C., & Seele, P. (2018). Cryptocurrencies and business ethics. Journal of
_Business Ethics, 152, 1-14._
9. Krause, M. (2016). Bitcoin: Implications for the developing world.
10. Bindseil, U. (2020). Tiered CBDC and the financial system. _Available at SSRN_
_3513422._
11. Knezevic, D. (2018). Impact of blockchain technology platform in changing the
financial sector and other industries. _Montenegrin Journal of Economics, 14(1), 109-120._
12. DeVries, P. D. (2016). An analysis of cryptocurrency, bitcoin, and the future.
_International Journal of Business Management and Commerce, 1(2), 1-9._
-----
###### Design and Implementation of the MESH Services Platform
**Harold J. Batteram**, Lucent Technologies, P.O. Box 18, 1270 AA Huizen NL, batteram@lucent.com
**John-Luc Bakker**, Lucent Technologies, P.O. Box 18, 1270 AA Huizen NL, jlbakker@lucent.com
**Jack P.C. Verhoosel**, Telematics Institute, P.O. Box 58, 7500 AN Enschede NL, J.Verhoosel@telin.nl
**Nikolay K. Diakov**, CTIT, P.O. Box 217, 7500 AE Enschede NL, Diakov@ctit.utwente.nl
**Abstract**—Industry acceptance of TINA (Telecommunications Information Networking Architecture) will depend heavily on both the evaluation of working systems that implement this architecture, and on the experiences obtained during the design and implementation of these systems.
During the MESH (Multimedia services on the Electronic Super Highway) project, a TINA based platform for networked multimedia services has been developed and evaluated. This platform, referred to as the MESH platform, implements major parts of the TINA Service Architecture version 5.0 and the TINA Network Architecture version 3.0. In addition, several demonstration services such as multiparty high-quality audio and video conferencing, shared database access and subscription management services have been created.
To support the design and implementation of the MESH platform a DSC (Distributed Software Component) framework has been developed. This framework is a generalization and implementation of the TINA computational object model and can also be applied outside the TINA domain. The DSC framework acts as a middleware layer, which shields component designers from many communication level details. A DSC can be mapped to a computational object or object group. DSCs can be grouped to form compound components from sub-components that also can consist of multiple components, etc. In addition, the DSC framework addresses flexible configuration, dynamic component construction from (downloadable) sub-components, and dynamic interface instantiation.
The MESH platform not only demonstrates the potential of TINA, but also reveals several weak areas. This paper describes the DSC approach, which we used to design and implement major parts of TINA, and our experiences with TINA.

I. INTRODUCTION

During the MESH [1] (Multimedia services on the Electronic Super Highway) project, a TINA [2] (Telecommunications Information Networking Architecture) based platform for networked multimedia, multiparty services has been designed, implemented and evaluated. (The MESH project was sponsored financially by the Dutch Ministry of Economic Affairs.)
In the MESH project a number of companies and knowledge institutes have developed pragmatic ways of working together on the electronic highway. The project focused particularly on multiparty, multimedia applications such as desktop video conferencing and co-authoring (electronic teamwork on the same document). The goal of MESH is to support teamwork in such a way that the natural, dynamic communication process between individuals remains intact. Several groups of people can work on a joint product at different locations and, if necessary, even at different times. They should feel as if they are all gathered in the same meeting room, with all the necessary facilities. During the MESH project, a number of services have been developed for the following application domains:
- tele-consultation in the health care sector, for use by specialists at rehabilitation clinics,
- teamwork between lecturers at different universities,
- tele-learning for students at different universities,
- tele-meeting in a distributed organization.

MESH aims to bring the needs of future users and the opportunities of the marketplace together. To achieve this, all the important players are represented in this project: suppliers of hardware and network services, such as Lucent Technologies, KPN Research and SURFnet; users, such as the Academic Hospital of the University of Amsterdam and Roessingh Research and Development; and research institutes, such as the Telematica Instituut and the Centre for Telematics and Information Technology of the University of Twente.
An important objective of MESH was to design a platform architecture which would be supported by open industry standards. This led to the choice of TINA, which is an open software architecture supported by the world's leading network operators and telecommunications and computer equipment manufacturers.
In this paper, we describe our approach and experiences with the design and implementation of the MESH services platform. In Section II we first give a description of the DSC [3] (Distributed Software Component) framework which has been developed within the MESH project to support the design and implementation of the MESH platform. This framework can be seen as the foundation on top of which the MESH implementation of the TINA service and network architecture has been built. In Section III we describe the MESH platform architecture and our implementation of various parts of the TINA service and network architecture. Finally, in Section IV, we draw some conclusions and give an overview of future work.

II. THE DSC FRAMEWORK

The Distributed Software Component framework has been developed within the MESH project with the goal to accelerate the design and implementation of the MESH platform. Frameworks have been described as a technology for reifying
proven software designs and implementations in order to improve the quality of software [4]. The DSC framework is a concrete part of the platform and can be seen as the infrastructure that allows components to interact and collaborate. The DSC framework:
- supports a Component Oriented Programming [5] (COP) paradigm,
- provides a runtime support environment,
- implements TINA's engineering viewpoint,
- supports TINA's computational viewpoint,
- provides development support tools with source code generation,
- provides runtime monitoring, tracing and debugging facilities.

COP is an increasingly popular modeling and implementation paradigm. COP has been described as a natural extension of object oriented programming to cater for the needs of independently extensible systems [5, 6]. COP allows developers to concentrate on high-level application content, which improves the rate and productivity of development. Examples of this technology can be found in ActiveX components [7], OpenDoc [8], and JavaBeans [9].
An important benefit of the component model is that it provides a higher level of abstraction compared to the object model and that it enables flexible software construction by combining and connecting individual components. The goal is to create a repository of multi-usable components that can be used for component-based software development. Software development then becomes the selection, adaptation, and composition of components.
The TINA architecture is also modeled as a set of collaborating components using the Reference Model for Open Distributed Processing (RM-ODP). RM-ODP was a joint effort by the ISO and ITU-T to develop a coordinating framework for the standardization of open distributed processing (ODP) [10]. RM-ODP aims to achieve portability between heterogeneous platforms, interworking between ODP systems, and distribution transparency, i.e. to hide the consequences of distribution from both the application programmer and user. An excellent introduction to RM-ODP can be found in [11]. The DSC framework is a generalization and implementation of the TINA computational object model. Runtime support for the DSC framework provides a middleware layer, which shields component designers from many communication level details. A development environment supports the DSC framework and assists developers by generating component implementation templates from a formal component specification. The DSC framework also provides runtime testing, monitoring and debugging facilities. CORBA [12] (Common Object Request Broker Architecture) was used as the underlying Distributed Processing Environment (DPE) and the implementation was done using the Java programming language.
During the implementation of the MESH platform, the use of this framework has resulted in a significant productivity increase. The framework provides many lower level services, which allows developers to focus on the high level tasks and responsibilities of the numerous TINA service components from which the MESH platform is built.

_A. Distributed Software Components_

To fit into the framework each component must support a set of common features. Components in the DSC framework are the basic building blocks from which complex systems can be constructed. Compound components can be constructed by aggregating sub-components (see Section B) with arbitrary levels of nesting. Non-compound components form a unit of distribution. They can operate on any physical node within the network provided it is accessible through the DPE. The distributed components can be located through a naming server or through references held within other components.
Just as in the ODP model, components in the DSC framework have one or more operational interfaces, which allow access to the services the component offers. However, in the DSC framework all operational interfaces also inherit operations from a common `i-Operational` interface. In addition, each DSC must also provide a single `i-Component` interface. This interface acts as the root access point to the component, giving the component a unique identity and through which references to other interfaces can be obtained.
Each component must support a set of common services. These services are available through the `i-Component` and the `i-Operational` interfaces. The `i-Component` interface provides a standard interface for the following common services:
- property services, with operations to read and define them,
- component life cycle services, with operations such as create, delete, suspend, and resume,
- transaction services, which allow the component to operate within the context of a transaction, with operations to commit or abort changes made,
- configuration services, which provide operations to support dynamic construction of compound components,
- debugging facilities, with which all invocations on an interface can be monitored.

**Fig. 1. Component symbology.** (Legend: container; core of distributed software component; control interface (`i-Component`); operational interface (`i-Operational`).)
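To make the access pattern concrete, the hedged Java sketch below shows how a client might locate a component's `i-Component` interface through the CORBA naming service and then retrieve an operational interface with `getOperational()`, as described above. The CORBA calls are standard; the registered name "MyService" and the IDL-generated stub types shown in the comments are hypothetical stand-ins for whatever the MESH IDL compiler would actually generate.

```java
import org.omg.CORBA.ORB;
import org.omg.CosNaming.NamingContextExt;
import org.omg.CosNaming.NamingContextExtHelper;

// Sketch: resolving a DSC's root i-Component reference via the CORBA
// naming service, then asking it for an operational interface.
public class DscClient {
    public static void main(String[] args) throws Exception {
        ORB orb = ORB.init(args, null);

        // Resolve the standard CORBA naming service.
        NamingContextExt naming = NamingContextExtHelper.narrow(
                orb.resolve_initial_references("NameService"));

        // "MyService" is a hypothetical registered component name.
        org.omg.CORBA.Object ref = naming.resolve_str("MyService");

        // With the framework's IDL-generated stubs one would now narrow the
        // reference and fetch an operational interface, e.g. (hypothetical):
        //   iComponent component = iComponentHelper.narrow(ref);
        //   iOperational mgmt = component.getOperational("i-Management");

        orb.shutdown(false);
    }
}
```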
In addition, components share common behavior, for example the ability to be notified of or to subscribe to events. Each component has a standard property named _EventList_, which contains the list of events that can be generated by the component. A client can register with the component as an observer for specified events. Clients also maintain, as a property, the list of all the events for which they can act as an observer.
The `i-Component` interface is the root access point to the component. A client component, which wishes to use the services of another component, must first obtain a reference to its `i-Component` interface. In the DSC framework, clients may obtain `i-Component` interface references through the CORBA naming service. Once the `i-Component` interface is obtained, operational interface references can be retrieved using the `getOperational()` operation.
The operational interfaces provide the service specific operations implemented by a component. In the TINA computational object model a component may have several operational interfaces, where each operational interface is a group of closely related operations. This allows different clients to have a different perspective of a component. For example, a component might support interfaces for management services and interfaces for control services.
Fig. 1 shows a component as used in the DSC framework, including the mandatory `i-Component` interface and one operational interface.

_B. Compound Components_

A compound component is a (possibly nested) aggregate of sub-components which, from an external view, is similar to a single component, i.e. presenting the same set of operational interfaces, properties etc. One of the main strengths of the DSC framework is the ability to dynamically create such compound components. This allows the dynamic composition of complex components from simpler ones and stimulates the reuse of basic building blocks.
Compound components can be used to extend the functionality of existing components. For example, new operational interfaces can be added to an existing component by creating a compound component which contains the original existing component plus a new component which implements the additional interfaces.
Compound components present a single `i-Component` interface and a single identity to the external world. The `i-Component` interface is provided by the top level component. Client components obtain operational interface references for any of the sub-components through the `i-Component` interface of the top level component. The top level component also defines which properties, interfaces, or events from sub-components are exported and visible at the compound level. Fig. 2 shows an example of a compound component containing two components.

**Fig. 2. Compound component.** (Legend: interface visible for compound component peers; contained component; invocation.)

Compound components can be dynamically created or destroyed through operations provided by the `i-Component` interface of the top level component. These operations in the `i-Component` interface are (in OMG IDL notation):

```
void addComponent(
    in i-Component c
);
void removeComponent(
    in i-Component c
);
void exportOperational(
    in i-Component c,
    in string type
);
i-Operational getOperational(
    in string type
);
```

A compound component maintains a list of sub-components. The operation `addComponent` will add the given sub-component reference "c" to this list and the operation `removeComponent` will remove it from the list. The operation `exportOperational` will add the given interface "type", from the given sub-component "c", to a list of externally visible interface types, which is also maintained by the control component. Both these lists are available as properties and can be queried by other components.
In general, a property is an attribute of a component or an interface, which can be used to provide detailed information about the component or interface. A property has a name, type, and value. Within the DSC framework properties can have a component wide scope or a scope which is limited to an operational interface that contains the property. Component wide properties are accessed through the `i-Component` interface.
Properties can be used as configuration variables, for example to specify engineering attributes such as concurrency policies, interface names, event generation, component composition etc.
**_252_**
-----
quest Broker (ORB). The container also controls concurrency The following fiagment is an example of the component
policies, for example creating a thread pool of configurable specification language. It specifies a compound component
size to allow concurrent access to the components within the named mycomponent, which contains a sub-component my-
container. The container itself is also a component with a Subcomponent, which exports a single interface, an Exporte-
`i-Component` interface and one operational interface dhterface. MyComponent further includes one operational
`i-Container. This interface is accessible by all components` interface, i-interfacel. This interface accepts an event named
within the container. The container component allows new myEvent of type short. The component itself can also fire and
components dynamically to be added or removed. The accept one event.
```
1-Container interface provides an operation to create a
```
**component** `mycomponent {`
new component instance of a given type within the container.
**contains** `mySubcomponent {`
To be able to create a new component instance all neces-
```
anExportedInterface
```
sary byte codes (in case of Java) must be available on the
**1**
local machine. In our implementation component byte codes **interface** `i-interface1` {
are packaged and distributed in Java jar archive files. To cre- **accepts** `myEvent as` `short`
ate a new component the container will first examine a local **_1_**
component repository for the availability of the requested **accepts**
component and instantiate the component fiom this reposi- `acceptedEvent as` `string`
**fires**
tory if present. If the component is not present in the local
```
firedEvent as octet
```
repository, the container will contact the service provider and
**1**
request all missing component packages to be downloaded
into the local repository (see Fig. 3). After the download is The component specification, combined with interface IDL
completed the components can be instantiated. This process specifications is used to generate source code implementation
is completely transparent to the end user. The download proc- templates, which a developer must further complete. This
ess must take place within a secure context in which a trusted process is explained in Section E.
relationship exists between the end user and the service pro-
vider fiom which the components are downloaded. **_E._** **_Component Development Emironment_**
Fig. 4 shows the component development process. During
```
D. Component SpecSfication Language
```
component development three separate stages can be identi-
The component specification language can be used to fied: (1) specification of interfaces and events in OMG IDL,
specifL components. It can specify the initial topology of (2) specification of components in the component specifica-
compound components and list per sub-component which of tion language, and (3) implementing the components behav-
its interfaces are exported to the compound component. The
ior in any language for which there exists IDL bindings. We
specification includes properties and events that can be ac-
used Java [ 131 as an implementation language.
cepted or that fired fiom an interface or component. Inter- During the first stage, all required and supported (both
faces can be specified to be **_dynamic,_** in which case a new static and dynamic) interfaces and all emitted and accepted
object instance is created per `getoperational ( )` request, or event types are specified in OMG IDL.
**_static in which case a single object instance is associated with_**
the interface. Interfaces operations are not specified in the
component specification language; they are specified sepa-
rately in OMG IDL.
User domain **Provider domain**
**Developer modifies**
**repository**
###### U
**Fig. 3. Component downloading**
**Fig. 4. Component development process.**
**_253_**
-----
During the second stage, the component specification lan-
guage is used. It relates interfaces and event types together to
form a component. In addition, information about component
composition, interfaces imported fiom sub-components,
properties, incoming and outgoing event can be specified. A
component skeleton generation tool is developed which proc-
esses OMG IDL files together with component specification
files to generate a set of implementation skeleton files. The
generated Java files contain code to start the static interfaces,
to encapsulate components, export static interfaces, and to set
the specified properties. Also, implementation skeletons,
code needed to map events to JavaBean events, and interme-
diate debug source is generated per interface. All generated
and modified files are compiled using a Java compiler and the
resulting classes are collected in a JAR (Java Archive) file. A
generated JAR file contains all required classes and resources
needed at run-time.
The last stage consists of implementing and testing the be-
havior of the components. Except for the previously gener-
ated implementation skeletons, and debugging facilities, this
stage is not further automated. The debugging facilities can
be optionally activated per interface. They can be used to
gradually monitor all invocations on an interface or to trace a
sequence of invocations on subsequent interfaces. The later
information can be graphically presented. It is especially use-
ful to verify the dynamic behavior of the components with the
message sequence diagrams found in the design documents.
111. MESH SERVICES PLATFORM
The DSC fi-amework described in Section I1 has been used
to implement the MESH platform. The MESH platform im- **_A._** **_Access Level_**
plements a large part of the TINA service architecture [14] In the TINA architecture, all interactions between a user
version 5.0 and the TINA Network Architecture [15] version and a provider are executed within the context of a session.
3.0. In the current version of the MESH platform, the TINA The architecture distinguishes between an access session and
roles of the Retailer and third party service provider have a service session. The access session is used for the identifi-
been combined into one service provider role. The service cation of the user and the establishment of the terminal used
provider implements the full retailer reference point interface during a service session. After the access session is success-
needed for both access and usage sessions, but not the retailer hily completed, the user can start a service session in which
to retailer reference points. Future work may expand the im- he can select one or multiple services to use.
plementation to include these reference points as well.
Fig. 5 gives on overview of the TINA service components
that have been implemented in the MESH platform. The
components within Fig. 5 are grouped in several domains.
The consumer domain contains all components that ca be Component w
instantiated at the end-user terminal. The service provider **rn**
domain contains all components that are instantiated at one or Interface
multiple service provider nodes within the network. The con-
nectivity provider domain contains the components that are Instantiation
used to set-up streambindings between end-users.
The architecture also consists of four distinct levels. The
access session level contains all the components that play a
SS-UAP USM
role during an access session. The access session level is de-
scribed in subsection A subsection B describes the service
level components, subsection C describes the communication
level components and finally, in subsection D the connec- **Fig. 6. Access level architecture components.**
tivity level components are described
**254**
-----
Fig. 6 shows the components that play a role during the ac-
cess session. These are the Access Session User Application
(AS-UAP), the Initial Agent (IA), the Provider Agent (PA),
Subscription Management Component **(SUB)** and the User
Agent (UA). The AS-UAP contains a graphical user inter-
face, which will prompt the user for identification and **@ E** **S** **H**
authentication. The PA is used by the AS-UAP to communi- **working together**
cate with the provider through the IA. The IA authenticates in a world
**without distances**
the user using the SUB to obtain subscription information and
personalized access session that allows the user to select and starts a UA. Together the PA and UA establish a secure and start any service for which he or she has a subscription. **USB~ID PBclnmld Login (batteram I" to MESH S~rvlcbs**
Before a user can start an access session he **or** she must `(4 199) ~ L u m n t T e c h m o I q l k` **Exlt** 1
first have all necessary software installed on his or her local
machine. In the **MESH** project, this bootstrap process is **Fig. 8. Access session login dialog.**
solved using an installation procedure that can be started
through a common web browser. In this scenario, the service
provider runs a web server with a home page through which After the user has created an account, he or she can start to
an installation process can be started. When the end-user ac- use the services for which he or she has a subscribtion. First a
cesses this home page, he may choose to start the MESH in- new access session must be started using the new account.
stallation. The home page contains a Java applet, which Once the initial software has been downloaded to the user's
downloads all necessary software to an installation directory terminal, subsequent access sessions can be started as stand-
of the users choice (and for which he must have granted secu- alone applications or within a browser context, whichever the
**rity permissions).** user prefers.
Once the software has been downloaded, an access session When the user starts a new access session, the user may see
will be started. The user can now login as an anonymous user a list of previously suspended sessions, which the user may
and start a subscription service. The subscription service will choose to resume. In addition, a list of active sessions to
allow the user to fill in personal account data such as a login which the user has been invited can be shown. The user can
name and password. The subscription service also allows the accept `or decline each` of the invitations. If an invitation is
user to subscribe to a set of services that the service provider accepted the service specific user application (SS-UAP) for
offers. The subscription service can also be used later to that service will be started and the user is joined to the ses-
change the selections. sion. During an active session, new invitations may arrive
which the PA handles by popping up a dialog window giving
the user the choice to accept or decline the invitation as
shown in Fig. 9.
**You are invited to join a session:**
**Session** I NAMED-UA
**Invitee** **batteram**
**Purpose** **Sharedwhiteboard** **session**
**Reason** **Lek** **draw something**
i **Accept** 1 **Decline** I
###### I d 1 Fig. 9. Invitation dialog.
**Fig. 7. Browser activated installation.**
**_255_**
-----
**Fig. 10. Subscription service.**

_Subscription Management_

The Subscription Management Component (SUB) in MESH provides functionality to manage the subscription information model for the whole set of services in the Service Provider domain as defined in [14]. It is implemented in compliance with the suggested TINA model for a subscription management component.
The SUB interacts with other components mainly during the access session (see Fig. 6). The IA contacts the SUB to retrieve user subscription information during the user authentication process. The UA also interacts with the SUB component to retrieve user information, obtaining or storing user properties, etc. Since the SUB also contains the description of the services, the Service Factory (SF) contacts the SUB during the usage session and retrieves all the information needed for the proper instantiation of service specific components.
The SUB component is a compound component, consisting of two loosely coupled sub-components, a SUB and a Database Management component (see Fig. 11). This separation serves two purposes: (1) to ensure independence from a particular DBMS and (2) to allow distribution of the workload; e.g., the Database Management component can be run on a dedicated machine. Since the Database Management component interacts only with the SUB, it is treated as an encapsulated part of the compound SUB.
The SUB allows the management of subscribers, service contracts between subscribers and services, and entities such as users, terminals and network assignment points (NAP) for the complete set of services provided by a provider. The main features provided by this component are:
- creation, modification, deletion and query of subscribers,
- creation, modification, deletion and query of subscriber related information (associated end users, end user groups, etc.),
- creation, modification, deletion and query of service contracts (definition of subscribed service profiles),
- retrieval of the list of services, either the ones available in the provider domain or the subscribed ones,
- retrieval of the service profile (SAGServiceProfile) for a specific user (or terminal or NAP).

All interfaces of the SUB component as proposed by TINA have been implemented. Several interfaces required modifications to support missing, essential functionality. Several inconsistencies, which were discovered during the implementation, are summarized in the next section. The internal architecture has been implemented as suggested by the TINA documentation with only minor changes.
The Subscription Coordinator (SCoo) sub-component is responsible for the management of the other sub-components as well as being a main control point for the functionality of the whole SUB. It coordinates the subscriber management and the service contract management. The SCoo also implements interfaces that are exported/visible outside the SUB and through which clients of the SUB can initiate interaction with the SUB, create new subscribers, contract services to a subscriber, list services, etc. The SCoo uses the Subscriber Management (SubM) sub-component for managing the subscribers and the Service Contract Managers (SCM) sub-components to manage the service contracts.
The Subscriber Management sub-component (SubM) is responsible for the management of a pool of Subscriber Objects (SubO), one per subscriber, that implement interfaces for managing entities (users, terminals, NAPs) and subscription assignment groups within a subscriber.
There is one Service Contract Management (SCM) sub-component per service in the provider domain. An SCM is responsible for managing a pool of Service Contract Objects (SCO), one per subscriber contracted to the particular service. Each SCO implements interfaces for manipulating service contracts and service profiles.

**Fig. 11. Internal structure of the SUB component.**
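Given the DSC composition operations from Section II, the internal SUB structure just described might be wired up roughly as in the hedged Java sketch below. The SubPart interface is a hypothetical stand-in for the framework's component type, and the exported interface name borrows I-SubscriberInfoMgmt from the interfaces discussed in the next section; none of this is actual MESH source.

```java
// Hypothetical Java mapping of the composition operations (cf. Section II).
interface SubPart {
    void addComponent(SubPart c);
    void exportOperational(SubPart c, String type);
}

// Sketch: assembling the compound SUB from its described sub-components.
public class SubAssembly {
    static void assemble(SubPart sub, SubPart scoo, SubPart subM,
                         SubPart scm, SubPart database) {
        sub.addComponent(scoo);      // Subscription Coordinator
        sub.addComponent(subM);      // Subscriber Management
        sub.addComponent(scm);       // one SCM per offered service
        sub.addComponent(database);  // encapsulated; nothing exported

        // Only the coordinator's interfaces become visible outside the SUB.
        sub.exportOperational(scoo, "I-SubscriberInfoMgmt");
    }
}
```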
In our experience we found the model as suggested by TINA to be quite usable. It is a flexible model that allows an easy and straightforward approach to the management of a user oriented subscription information model. The suggested software component decomposition allows a dynamic implementation that, once instantiated, can be easily controlled.
During the implementation of the SUB component, several problems and inconsistencies in the TINA documentation were encountered. The following list summarizes the most important ones:
- The description of the service profiles scheme is not consistent. Special subscription assignment groups can be created to group entities (users and terminals) and to assign service profiles to groups. The documentation also describes that the entities could be assigned service profiles directly. However, on p. 244 of Annex 3, it is explained that there is a third way, a user profile assigned to a user, which does the same as the service profiles. Here we had to make a decision in order to be able to implement a good service profile model.
- Incomplete interfaces: we had to make changes to the original TINA interface definitions. The `I-SubscriberLCMgmt` and `I-ServiceContractLCMgmt` interfaces were only described but not prescribed. We had to add a number of operations that we needed for the communication between the internal sub-components within the Subscription Component. The `I-SubscriberInfoMgmt` interface was extended with several operations since the original TINA IDL specification was not expressive enough. An additional `I-ServiceMgmt` interface was defined to provide additional service manipulation features to the SUB. Originally, this role had to be performed by the Service Life Cycle Management component (SLCM). However, since this component was not implemented, the SUB was extended to meet the requirements.
- For some structures (for example `t-SAE`) in the described information model only operations for creation/deletion but no access operations were provided.
- Some operation definitions lead to poor performance, for example operations which copy big structures from the remote objects. It is preferable to decompose such operations into several which fetch small parts of the structures, since the most frequent need is not the whole structure.

_B. Service Level_

At the service level of the architecture, single or multiparty service sessions can be started and stopped, and stream bindings for continuous data streams can be set up so that participants can communicate with each other.
Fig. 12 depicts the service level TINA components in a two-party service session. At the service provider, the Service Factory (SF) creates, upon request by the UA, the Service Session Manager (SSM) and the User Session Manager (USM). The MESH platform supports only a single SF that creates service components for each service provided via the platform. The SF contacts the SUB component to obtain the names of the service specific SSM and USM components to be created for a given ServiceId. The reason for using a single SF is simply that there was no need for multiple SFs in our implementation. When the number of services used on the MESH platform becomes difficult to handle by a single SF, extra SFs can easily be added.
In the end-user domain, the Service Session User Application (SS-UAP) is created by the PA. The SS-UAP presents the service to the end-user.
The SSM maintains the global view of the session and contains the entire session model of parties, stream bindings and resources in the session. Thus, the session model is not distributed over the SSM and all the USMs. The reason behind this design decision is that consistency of the session model is much easier to maintain and that the SSM is the single point of control of, and access to, the session model information.
The MESH platform only supports the TINA session model. Thus, a session consists of parties, stream bindings, control session relationships and so on. Consequently, the session model is fixed and there is no negotiation about session model support by a service during the start service scenario. The USM only serves as a security guard for controlled access to the SSM and as a service hatch to the proper SS-UAP.
The service level components within the MESH platform support all the feature sets described in the TINA Ret Reference Point Specifications 0.7, as far as IDL specifications of the interfaces were made available by TINA-C. These interfaces are:
- BasicFS: to support end and suspend session requests. Allows the party domain to discover interfaces supported by the session.
- BasicExtFS: to allow the provider domain to discover interfaces supported by the party domain components.
- MultipartyFS: to allow the session to support multiparty services, such as information on other parties, ending/suspending a party in the session, and inviting a user to join the session.
- MultipartyIndFS: to allow the session to indicate requests that are to be processed to the party components.
- VotingFS: to allow parties to vote in order to determine if a request should be accepted and executed.

**Fig. 12. Service level architecture components.**
- ControlSFWS: to support parties having ownership and read/write rights on session entities (i.e., parties, resources, stream bindings, etc.).
- ParticipantSBFS: to provide high-level support for setting up stream bindings in terms of session members' participation.
- ParticipantSBIndFS: to provide participant-type stream bindings with indications.

Announcement of service sessions is not yet supported by the MESH platform, and thus all parties have to be explicitly invited to a service session. Adding and removing resources to a service session is done using a non-TINA interface at the SSM, because at the time of writing the ResourceFS was not yet standardized by TINA-C.

Within the MESH platform a stream binding consists of a number of uni-directional Stream Flow Connections (SFCs) to which some or all of the participants are bound. An SFC consists of a set of Stream Flow End Points (SFEPs), one per participant in the SFC. All SFEPs in an SFC have the same binding tag. Consequently, the binding algorithm executed by the SSM can be kept relatively simple: it only has to match SFEPs with equal binding tags. When the SSM has bound all the SFEPs to SFCs, it interacts with components at the communication level to actually set up the SFCs.
In our development approach, specific services are built on top of the service level components of the TINA architecture, in particular the SS-UAP, SSM and USM. These components provide generic service session management functionality that is necessary in each service. In particular, with this generic service session management functionality, service sessions can be started and deleted, and participants and stream bindings can be added to a service session, modified and deleted from it. In our approach, any service is built by extending the generic SS-UAP, SSM and USM components with service-specific functionality. A service then consists of service-specific compound components of the SS-UAP, SSM and USM that encapsulate the generic SS-UAP, SSM and USM as sub-components. In Fig. 13, an example service-specific SSM is depicted.

**Fig. 13. A service-specific SSM.**

The DSC framework enables a component to export an interface of one of its sub-components. Thus, the service-specific SS-UAP, SSM and USM can export some or all of the interfaces of the generic SS-UAP, SSM and USM, depending on which interfaces or feature sets are required by the service. On the other hand, the service-specific components can overload or extend operations of their generic sub-components in order to perform service-specific actions. For example, for a database service, the initialize operation of the `i-Init` interface of the specific SSM might extend the generic initialize operation to open a database that is used during the service. Obviously, the service developer has to use this feature with care in order not to disable generic functionality that is vital for proper service behavior.

Besides exporting/extending interfaces of the generic sub-components, the service-specific components can additionally provide interfaces with operations that implement service-specific functionality. To allow these operations to fully use the functionality of the generic sub-components, extra internal interfaces at the generic SS-UAP, SSM and USM have been defined. In addition to the TINA specified interfaces we had to define two new interfaces:

- Interface `i-SessionModel`, which allows the specific SSM to query the session model that is maintained in the generic SSM. This interface also allows the specific SSM to modify the session model in case that is not possible via the TINA interfaces. For example, the `i-SessionModel` interface of the SSM allows for the addition and removal of resources to the service session, because this is not part of the TINA Service Architecture 5.0.
- Interface `i-GenericSSM`, which allows the specific SSM to apply for globally unique identifiers, to obtain references to interfaces of other components, and to register a callback interface `i-SpecificSSM` of the specific SSM.

The generic USM has one extra interface:

- Interface `i-GenericUSM`, which allows the specific USM to obtain references to interfaces of other components, to check the secretID provided by the specific SS-UAP, and to register a callback interface `i-SpecificUSM` of the specific USM.

The SS-UAP has two extra interfaces:

- Interface `i-SessionModel`, which allows the specific SS-UAP to get session model information that is maintained in the generic SS-UAP.
- Interface `i-GenericSS-UAP`, which allows the specific SS-UAP to obtain references to interfaces of other components, and to register a callback interface `i-SpecificSS-UAP` of the specific SS-UAP.

The callback interfaces of the specific components provide operations that can be called by the generic sub-components upon initialize, suspend, resume and end of a service session.
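As an illustration of this callback pattern, the sketch below shows a generic component invoking an optionally registered service-specific callback on session events. It is a schematic Python rendering under our own naming, not MESH code.

```python
class GenericSSUAP:
    """Generic SS-UAP that forwards session events to an
    optionally registered service-specific callback."""
    def __init__(self):
        self._specific = None  # the i-SpecificSS-UAP callback, if any

    def register_callback(self, specific):
        self._specific = specific

    def _notify(self, event):
        # Invoke the specific component only if a callback was registered.
        if self._specific is not None:
            getattr(self._specific, "on_" + event)()

    def start_session(self):
        self._notify("initialize")

    def end_session(self):
        self._notify("end")

class WhiteboardSSUAP:
    """Hypothetical service-specific part reacting to session events."""
    def on_initialize(self):
        print("open whiteboard window")

    def on_end(self):
        print("close whiteboard window")

generic = GenericSSUAP()
generic.register_callback(WhiteboardSSUAP())
generic.start_session()
generic.end_session()
```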
Although registering a callback interface is not obligatory, there is one requirement that each specific service has to satisfy: the `i-SpecificSS-UAP` interface must be registered with the generic SS-UAP, and this interface must implement a `startservice` operation in which the service-specific SS-UAP is started.
Besides callback interfaces, a specific component and its generic sub-component can interact via an event-listener mechanism. In particular, the specific component can register itself for certain events within the generic sub-component. Especially within the generic SS-UAP, various events occur in which the specific SS-UAP might be interested. These events occur as a result of indication and information messages from the USM/SSM. They include invitations; the addition, modification and deletion of participants and stream bindings; and indications on which a vote is required.

**_C. Communication Level_**

The components within the communication level manage the communication network and control the communication sessions. A communication session provides a service-oriented view on the stream bindings between the participants of the session. Typically, the service session specifies a stream binding to be set up between parties in terms of QoS parameters and abstract medium descriptions. The communication session encapsulates the details involved in matching the terminal-specific communication characteristics (such as codecs, audio and video capabilities, etc.) to the requested quality of service (QoS).

There are three components that play a role in a communication session: the Terminal Communication Session Manager (TCSM), the Communication Session Manager Factory (CSMF), and the Communication Session Manager (CSM); see Fig. 14. The TCSM is part of a user's terminal; it manages the communication characteristics of the terminal and controls the bindings within the customer premises. The CSMF is a factory for CSMs. The CSM controls the network part of the communication session, while a TCSM controls the terminal part of the communication session for one party only.

**Fig. 14. Components involved in the communication session.**

The CSM controls the individual elements of the network communication session. The terminal-specific part of the communication session is controlled by the involved TCSMs. The CSM is responsible for combining the stream flow end point capabilities into a stream flow connection; all capabilities have to match, or a special resource that can translate the capabilities needs to be available. Inspecting the QoS parameters and the capabilities results in constraints for mapping logical stream flows into physical network flows.

The TCSM manages the communication characteristics of the terminal. It maps general medium descriptions with QoS parameters into stream flow end points that match the request of the SS-UAP. Each stream flow end point has additional communication capabilities. Besides codec configuration, a communication capability might state connectivity requirements. For example, it might state that a stream flow end point is based upon RTP (Real-time Transport Protocol) and is best used upon a UDP (User Datagram Protocol) binding which is, in turn, best used upon an IP (Internet Protocol) layer network. These requirements are input for the CSM to choose a connectivity provider with whom the service provider has a contract profile that allows the control of the specified physical network flows.

A full specification of the interface between the SS-UAP and the TCSM was not available; this interface is part of the terminal intra-domain reference point. Through this interface the SS-UAP queries the TCSM for available stream flow end point descriptions based upon a high-level medium description. A simple interface supporting our target scenarios has been specified in this project. Also, the TCSM supports the `i-TerminalComSSetup` and the `i-TerminalComSCtrl` interfaces. The former interface supports querying for the capabilities of the stream flow end point descriptions. Yet, in the specified operations it is not clear which capability is related to which stream flow end point description. Consequently, we slightly modified the operations to reflect the relation between capabilities and stream flow end point descriptions.

**_D. Connectivity Level_**

The components within the connectivity level manage the connectivity network and control the connectivity sessions. The connectivity session hides the network technology related details from the communication session and the service session. An example of a connectivity detail is whether a network supports uni-directional or bi-directional bindings. The communication session always models stream flow connections as unidirectional, but the connectivity session can support multiple stream flow connections using only one network flow connection, depending on the network capabilities.

One connectivity session might span multiple connectivity networks, provided special resources that map connectivity details between different connectivity networks are available [16]. Neither the service session nor the communication session is aware of this.

Fig. 15 shows four components that are part of the connectivity level: the CCF (Connection Coordinator Factory), the CC (Connection Coordinator), the FCC (Flow Connection Controller), and the LNC (Layer Network Controller). The CCF is a factory for CCs. The CC sets up and controls the entire connectivity session.
A connectivity session consists of network flow end points and network flow connections. Each network flow connection is set up and controlled by a separate FCC. The CC instantiates an FCC per request for a network flow connection. The FCC contacts LNCs to claim and use resources in their layer networks that make up the actual bindings. A layer network can contain multiple administrative domains and is typed by the supported types of bindings. An LNC sets up and controls bindings through one administrative domain of a layer network.

Before compiling the prescriptive connectivity level interfaces, we had to change the NFEP definitions. NFEPs are maintained by TLAs. A TLA is layer-network specific. Layer network type specific components contact the corresponding TLA that offers the NFEPs. Therefore, the TLA's interface reference has to be available. Originally, there were two NFEP definitions, i.e., `t-ANfep` and `t-NfepDesc`, where `t-NfepDesc` contains a `t-ANfep` and where `t-ANfep` supports recursive NFEP specification. Neither of them contained a TLA interface reference. We combined both definitions into a new `t-NfepDesc` and we created a separate `t-NfepPoolDesc`. The former contains a TLA interface reference field and the latter contains a sequence of `t-NfepDescs`. Therefore, our NFEP definition did not support recursion. We had to drop the recursion requirement since the used IDL compiler did not support this.
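The effect of this restructuring can be sketched with Python dataclasses standing in for the IDL structs; the field names beyond those mentioned above, and the example values, are our own invention.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class NfepDesc:
    """Non-recursive NFEP description that now carries the interface
    reference of the TLA offering the NFEP."""
    name: str
    tla_ref: str  # TLA interface reference (the newly added field)

@dataclass
class NfepPoolDesc:
    """Flat sequence of NFEP descriptions, replacing the former
    recursive t-ANfep definition."""
    nfeps: List[NfepDesc] = field(default_factory=list)

pool = NfepPoolDesc([NfepDesc("nfep-1", "tla://atm-layer"),
                     NfepDesc("nfep-2", "tla://atm-layer")])
print([n.name for n in pool.nfeps])
```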
Additionally, the prescriptive connectivity level interfaces contained a security parameter per method, a `t-SecHandle`. This `t-SecHandle` was defined as a `long`. Clearly, a `long` is neither a flexible security parameter nor a future-proof solution. It is even questionable whether it is good practice to enforce a security procedure that demands per-call authentication. Rather than solving such issues at the connectivity service level, they should be solved by procedures running in parallel with a connectivity session. Establishing and maintaining a secure context between stakeholders is the responsibility of the connectivity access session that is executed before requesting a connectivity service.

To get the proposed descriptive components running we had to add interfaces to the CCF, CC, and FCC. Unlike the CSMF, which controls the life cycle of the CSM, the CCF and CC are not explicitly involved if one of their spawned components is released; the administration of the CCF and CC might not be consistent. We have extended the CCF and CC with a notification interface that accepts messages of a child component that is about to be released. In addition, the CC and FCC components are extended with component constructor interfaces that enable their parent components to construct and initialize them.

**Fig. 15. Network resource architecture components. The layer network-type specific components were omitted.**

**IV. CONCLUSIONS AND FUTURE WORK**

In our experience, the TINA architecture is complex, extensive, and still immature. Several TINA reference points are incomplete and others are not yet specified, such as the LNFed and CSLN reference points. However, during the design and implementation efforts it proved to be conceptually sound. In our opinion, the TINA access and service levels are the most mature.

During January 1998 we obtained the prescriptive IDL specifications for the descriptive components from [17] and [18]. We noticed three IDL coding styles: one for the service and access level interfaces, one for the communication level interfaces, and one for the network level interfaces. Each style differed in module policy, in the distribution of interfaces over modules, and in the include file approach. Consequently, the interfaces were hard to read. Not only were there cosmetic problems to overcome; the IDL code per style also featured different naming conventions. Integers (`unsigned long`), single strings, and sequences of strings (`t-TinaName`) were mixed. In order to stick to one coding style and naming convention, we modified the IDL files appropriately.

A general coding experience is that much time was lost coding the processing of the complex arguments of the specified operations. We recommend generating object-oriented code for the processing of complex arguments like `t-CapabilitySet` or `t-TinaName`.

The DSC framework and support tools have played a significant role in the implementation of the MESH platform. It has accelerated the implementation process through template generation and by providing a comprehensive runtime environment which offers many common services such as software downloading, dynamic component composition, component configuration, and distribution transparencies. It also accelerated the testing and debugging process through automated generation of test components and runtime diagnostic services such as interface analysis and call flow analysis.

Future work will expand the implementation of the MESH platform in the following areas:

- large scale deployment with scalability, load balancing and fault tolerance,
- accounting and billing services,
- service creation through component composition and specialization with graphical software tool support,
- new services for the electronic commerce, medical and educational sectors.

These activities will be done in a new project named FRIENDS (FRamework for Integrated Engineering and Deployment of Services), starting January 1999. The project
partners are Lucent Technologies, the research arm of the Dutch telecom operator KPN, the Dutch Telematic Institute, the Dutch National Organization for Applied Scientific Research (TNO), and the University of Twente (CTIT).

**ACKNOWLEDGMENT**

We thank all contributors who made the MESH project a success. The views expressed in this paper are those of the authors, and not necessarily those of the other MESH project partners.

**REFERENCES**

[1] Java, see: http://java.sunsoft.com/
[2] MESH, see: http://www.mesh.nl/
[3] Bakker, J.L., and H.J. Batteram, "Design and evaluation of the Distributed Software Component Framework for Distributed Communication Architectures", Proceedings of the 2nd International Workshop on Enterprise Distributed Object Computing (EDOC'98), 98EX244, IEEE, pp. 282-288, San Diego (USA), November 3-5, 1998. (ISBN: 0-7803-5158-4)
[4] Fayad, M.E., and D.C. Schmidt, "Object Oriented Application Frameworks", Communications of the ACM, October 1997, volume 40, number 10, pp. 32-38.
[5] International Workshop on Component-Oriented Programming, Jyvaskyla, Finland, 1997, see: http://www.ide.hkr.se/~bosch/WCOP97/papers.html
[6] International Workshop on Component-Oriented Programming, University of Linz, Linz, Austria, 1996, see: http://www.ide.hkr.se/~bosch/WCOP97/WCOP.96.report.ps
[7] ActiveX, see: http://www.microsoft.com/
[8] Orfali, R., D. Harkey, and J. Edwards, The essential distributed objects survival guide, Wiley, New York, NY (USA), 1996.
[9] JavaBeans, see: http://java.sun.com/
[10] International Standards Organisation, Basic reference model of Open Distributed Processing - Part 1: Overview and guide to use, Standard ISO/IEC 10746-1, 1995.
[11] Raymond, K., "Reference Model of Open Distributed Processing: introduction", Proceedings of the 3rd IFIP TC6/WG6.1 International Conference on Open Distributed Processing, pp. 3-14, Brisbane (Australia), February 20-24, 1995.
[12] OMG/CORBA, see: http://www.omg.org/
[13] TINA-C, see: http://www.tinac.com/
[14] TINA-C, Service Architecture, Kristiansen, L. (ed.), TINA-C, Red Bank, NJ (USA), June 1997.
[15] TINA-C, Network Resource Architecture, Steegmans, F. (ed.), TINA-C, Red Bank, NJ (USA), February 1997.
[16] Bakker, J.L., and F.J. Pattenier, "The Layer Network Federation Reference Point: definition and implementation", The Application of Distributed Computing Technologies to Telecommunications Solutions (TINA '99), Kahuku-Oahu, Hawaii (USA), April 17-20, 1999. In press.
[17] TINA-C, Network Components Specification, http://tinac.com/l/97/resources/network/docs/ncs/v2.2/idl/modules.
[18] TINA-C, Service Component Specification: Computational Model and Dynamics, http://tinac.com/l/97/services/docs/scs/compmod/final/idl/.
Received February 9, 2022, accepted March 4, 2022, date of publication March 10, 2022, date of current version March 21, 2022.
_Digital Object Identifier 10.1109/ACCESS.2022.3158753_
# Mapping Applications Intents to Programmable NDN Data-Planes via Event-B Machines
OUASSIM KARRAKCHOU, (Graduate Student Member, IEEE),
NANCY SAMAAN, (Member, IEEE), AND AHMED KARMOUCH, (Member, IEEE)
School of Electrical and Computer Engineering, University of Ottawa, Ottawa, ON K1N 6N5, Canada
Corresponding author: Ouassim Karrakchou ([email protected])
This work was supported by a Discovery Grant from the Natural Sciences and Engineering Research Council of Canada.
**ABSTRACT** Location-agnostic content delivery, in-network caching, and native support for multicast,
mobility, and security are key features of the novel named data networks (NDN) paradigm. NDNs are
ideal for hosting content-centric next-generation applications such as Internet of things (IoT) and virtual
reality. Intent-driven management is poised to enhance the performance of the offered NDN services to
these applications while reducing its management complexity. This article proposes I2DN, intent-driven
NDN, a novel architecture that aims at realizing the first step towards intent modeling and mapping to
data-plane configurations for NDNs. In I2DN, network operators and application developers express their
abstract and declarative content delivery and network service goals and constraints using uttered or written
intents. The intents are classified using built-in intent templates, and a slot filling procedure identifies the
semantics of the intent. We then employ Event-B machine (EBM) language modeling to represent these
intents and their semantics. The resulting EBMs are then gradually refined to represent configurations at the
NDN programmable data-plane. The advantages of the proposed adoption of EBM modeling are twofold.
First, EBMs accurately capture the desired behavior of the network in response to the specified intents and
automatically refine it into concrete configurations. Second, EBM’s formal verification property, referred to
as its proof obligation, ensures that the desired properties of the network or its services, as defined by the
intent, remain satisfied by the refined EBM representing the final data-plane configurations. Experimental
evaluation results demonstrate the feasibility and efficiency of our proposed work.
**INDEX TERMS** Event-B machines, intent-driven networking, named data networks, programmable
data-planes.
**I. INTRODUCTION**
Named data networks (NDNs) [1], [2] and intent-driven
networking (IDN) [3], [4] are two orthogonal research
paradigms that aim at revolutionizing the current use of networks from conventional communication services into integral components of next-generation applications. Examples
of these applications include time-sensitive, content-centric,
and dispersed applications that allow humans to interact
seamlessly with virtual objects within the context of virtual
and augmented reality. Industrial automation functionalities
built on top of sensors and connected machines represent
another example of these applications. On the one hand,
NDNs facilitate building advanced applications by shifting
the application developers’ focus away from address- and
location-centric communication and towards a simplified
The associate editor coordinating the review of this manuscript and
approving it for publication was Salekul Islam.
content-centric one. On the other hand, IDN allows network
operators and hosted application developers to describe what
is required from the network at a high level of abstraction without being concerned about how these requirements
should be implemented at the network data-plane [4].
NDNs are designed to deliver contents that are uniquely
identified using hierarchical naming structures such as the
Uniform Resource Identifiers (URIs) [2]. Contents can be
conventional data components such as files, video clip
chunks, or books but can also represent sensor readings
or exchanged commands between machines. NDNs operate
using two packet types: interest packets (Ipkts) and data
packets (Dpkts). A content consumer (e.g., a user device)
sends an Ipkt containing the name of the required content
in the network. Each switch then serves the Ipkt by either
forwarding it along a path to the content producer or to a
nearby router that is already storing the requested contents in
its cache. A content producer or a router storing the content
then replies with a Dpkt containing the requested content. The
Dpkt follows the reverse path of the Ipkt until it reaches the
Ipkt sender.
NDN’s simplified mechanism natively supports multicast
services while eliminating the well-known IP addressing
problems, such as address scalability and user mobility.
Its location-agnostic communication also facilitates hosting
distributed applications executing on virtual resources [5].
These advanced functionalities are attributed to a more complex NDN data-plane. NDN switches must manage forwarding information bases (FIBs) storing content name prefixes,
special pending interest tables (PITs) logging unsatisfied
requests, as well as content stores (CSs) that can cache
received contents. These tables transform NDN switches into
state-aware devices that can make adaptive packet processing, forwarding, and content caching decisions in addition to
traditional traffic engineering and network services such as
traffic shaping and monitoring.
Network operators are envisioned to take advantage of
NDN switch functionalities to offer novel per-application,
per-content, or per-consumer highly customized network services such as time-sensitive delivery using prefetching and
caching, semantics-based forwarding, and content encryption and decryption [6]. This vision is motivated by emerging technologies, such as software-defined networks [7],
that succeeded in separating the network control functionality from that of the data-plane packet forwarding process. More recently, the emergence of switch programming
languages, such as P4 [8], has enabled the notion of
programmable data-planes (PDPs). Using these languages,
the control-plane can continuously configure and fine-tune
the switch behavior with respect to packet parsing and
processing [9].
Despite these advances, operators are still limited by the
current network management tools to direct the installation of
per-flow or per-path switch configurations. These tools may
require error-prone manual configuration and policy validation [4]. In addition, control-plane functionalities (e.g., routing, traffic engineering, and congestion control) still require
manual parameter setting and have a network-wide service
focus that lacks the needed per-application customization.
Finally, these tools provide no direct means for application
developers or users to directly define their network service
requirements in a declarative manner.
The emerging concept of IDN attempts to bridge the gap
between network management complexity and the emerging network service demands on one side and advances
in data-plane programmability on the other [3]. The main
premise of IDN is to allow operators and application developers to describe what is expected from the network serving the
applications but not how that behavior is implemented using
intents [10]. IDN tools can then automatically “convert, verify, deploy, configure and optimize” [4] the network to satisfy
these intents. The realization of IDN necessitates addressing
three main challenges: first, the development of expressive
intent and network state models. Second, the realization of
new mechanisms to automate intent validation and mapping
to data-plane configurations. Third, novel intelligent machine
learning-based techniques must be developed to allow the
network to continuously self-adapt and self-heal to maintain
the satisfaction of these intents [11].
This article addresses the first two challenges described
above. We consider a single domain NDN with programmable switches that each can process packets using a
chain of stateful match-action tables (MATs) (e.g., switches
based on P4 [9] or those supporting program-based forwarding strategies [12]). We propose a novel intent-driven
NDN (I2DN) architecture that models and captures high-level
intents and transforms them into configurations for the programmable NDN data-plane. In I2DN, intents are first captured as uttered or written sentences. These are tokenized
and classified using preexisting intent templates. A slot filling
procedure is then employed to extract a set of intent parameters from the uttered words. The output from this phase is
then translated using Event-B modeling into abstract Event-B
machines (EBM) [13] which provide abstract descriptions of
the desired network behavior to satisfy the given intents. Each
EBM describes a desired behavior as a set of events acting on
an abstract state representing the network. Abstract EBMs are
then refined using existing tools, such as Rodin [14], to gradually introduce network-specific configurations implementing
the desired behavior until a concrete EBM is developed.
The concrete EBMs closely resemble the structure of the
programmable MATs in the data-plane. Hence, they are transformed or compiled into an equivalent data-plane behavior
satisfying the intent.
The adoption of EBM modeling serves two main purposes. First, the highly abstract model of the EBMs describing the intents represents an ideal means to capture the intent
goals. Meanwhile, refinement, a key feature of EBM modeling, allows for the gradual mapping of these hardware and
software-independent abstract EBMs towards the concrete
EBMs representing the corresponding data-plane configurations. Second, Event-B is also a formal method to design
EBMs that are correct by construction. I2DN benefits from
this feature by formally representing an intent's requirements
and constraints on the network states by defining strict rules
referred to in the EBM as invariants. For a machine to be
correct, i.e., performing as intended, these invariants must
always be preserved after every event and refinement operation. These verification steps are referred to as proof obligations and are carried out using automated tools such as
Rodin [14]. To this end, the main contributions of this article
can be summarized as follows:
1) We develop a general framework for the lifecycle management of intents within the context of NDNs and
analyze the main challenges for its realization. We then
propose I2DN, a novel architecture that focuses on
modeling and mapping NDN intents into data-plane
configurations.
2) Within I2DN, we define a novel networking intent
model that is inspired by existing virtual assistants.
3) We propose a novel intent-to-data-plane configuration
mapping process using Event-B modeling. The proposed work demonstrates how EBM modeling language and refinement tools can be used efficiently to
automate the steps of intent processing, validation, and
translation to correct network and domain-dependent
configurations.
The remainder of this article is organized as follows;
Section II presents the main concepts of NDNs and discusses how programmable data-planes are realized in the
context of NDN. Section III introduces IDNs, explains their
relevance, and surveys the related Literature. In Section IV,
we provide an overview of our proposed mapping architecture. Section V is then dedicated to describing the adopted
models and their mapping steps. Simulation results are presented in Section VI. Section VII discusses some open
research issues for I2DN. Finally, Section VIII concludes the
article and presents planned future work.
**II. NDN BACKGROUND AND RELATED WORK**
In this section, we first provide a brief review of NDNs’
data-plane functionalities and discuss current progress with
respect to achieving programmability at that plane.
_A. NDNs AND SWITCH CONFIGURATIONS_
NDNs are centered around the delivery of contents that
are uniquely identified using a hierarchical content naming
format (e.g., /com/youtube [2]). Rather than IP addresses,
NDN components, including network switches, servers, connected sensors, machines, and user devices, are identified by
semantically meaningful names. In turn, any device in an
NDN network can act as a content producer, a consumer, or a
packet forwarder simultaneously [15].
As shown in Fig.1, to request contents, a consumer generates an Ipkt that is sent to an NDN switch. An Ipkt contains
the requested content name as well as optional metadata to
specify any additional constraints on the delivered content,
such as its version, freshness, or publisher. Each Ipkt is
also uniquely identified using a randomly generated nonce
value that must be added before the Ipkt is dispatched to the
network. Additionally, an Ipkt can include forwarding hints
instructions specifying a particular routing path as well as
any arbitrary metadata that can be used to parameterize the
delivered contents. To forward an Ipkt, each device, including
the user’s device and the switches, looks up its forwarding
information base (FIB) to find the longest prefix match to the
Ipkt content name and a corresponding list of candidate ports
for Ipkt forwarding. Finally, configured forwarding strategies
define additional rules (e.g., all ports, least occupied port,
or first port in the list) controlling the final forwarding action
for the Ipkt.
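A minimal sketch of this longest-prefix lookup on hierarchical names follows; the FIB contents and the function name are our own illustration rather than code from an NDN implementation. A forwarding strategy would then pick among the returned candidate ports.

```python
def longest_prefix_match(fib, content_name):
    """Return the candidate ports of the FIB entry whose prefix
    matches the most leading components of the content name."""
    components = content_name.strip("/").split("/")
    best_ports, best_len = None, -1
    for prefix, ports in fib.items():
        parts = prefix.strip("/").split("/")
        if components[:len(parts)] == parts and len(parts) > best_len:
            best_ports, best_len = ports, len(parts)
    return best_ports

fib = {"/com": [1], "/com/youtube": [2, 3]}  # hypothetical FIB
print(longest_prefix_match(fib, "/com/youtube/video42/chunk7"))  # [2, 3]
```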
In contrast to IP-based forwarding, the forwarding of Ipkts
is stateful: when a switch forwards an Ipkt, it is stored in a
pending interest table (PIT) along with its source port until
the interest expires or is satisfied. If another Ipkt with the
same content name is received, the switch adds the source
**FIGURE 1. NDN communication model and switch components.**
port of the new Ipkt to the matched entry in the PIT. This
allows the switch to store the states of all currently served
interest requests while avoiding overloading the network with
redundant requests of the same content. In addition, PITs
facilitate multicast services and loop-free multipaths. Loss
recovery with minimal latency is also easily achieved by
controlling timeouts for the PIT entries.
As shown in Fig.1, when a content producer receives an
Ipkt, it replies with a Dpkt sent on the Ipkt source port. A Dpkt
contains the requested content and its name. The publisher
ensures authentication of the data by adding a signature field
along with any additional tags (e.g., published information,
content version, and creation time) that can be stored in the
metadata. When a switch receives a Dpkt, it looks up its PIT to
forward the Dpkt along the reverse paths of the corresponding
Ipkt and then erases that entry from the PIT. When a host
receives a Dpkt, it uses its PIT to forward it to the correct application interface. Mobility of consumers or producers is inherently treated in NDNs since location-dependent
IP addresses do not identify packets. For instance, a moving
consumer can resubmit a request to desired contents for an
expired Ipkt. NDN switches can also add forwarding hints to
Ipkts to guide them to a new producer location.
An NDN switch also contains a content store (CS) to cache
forwarded Dpkts according to a specific caching strategy. For
example, the switch adjacent to the first consumer in Fig. 1
caches the received Dpkt and sends it to the second consumer
in response to a new request. Thus, cached Dpkts used to reply
to multiple Ipkts can significantly reduce content delivery
latency in NDNs.
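Putting the three tables together, the sketch below simulates the switch behavior just described. It is a didactic model under our own simplifications (string names, integer ports, prefix matching via startswith, no nonces or timeouts), not an NDN implementation.

```python
class NdnSwitch:
    def __init__(self, fib):
        self.fib = fib  # name prefix -> single output port (simplified)
        self.pit = {}   # content name -> set of requesting ports
        self.cs = {}    # content name -> cached content

    def on_interest(self, name, in_port):
        if name in self.cs:              # reply straight from the cache
            return ("data", name, self.cs[name], in_port)
        if name in self.pit:             # aggregate the pending request
            self.pit[name].add(in_port)
            return None
        self.pit[name] = {in_port}       # record state, then forward
        prefix = max((p for p in self.fib if name.startswith(p)),
                     key=len, default=None)
        return ("interest", name, self.fib[prefix]) if prefix else None

    def on_data(self, name, content):
        self.cs[name] = content            # cache the forwarded Dpkt
        ports = self.pit.pop(name, set())  # reverse-path forwarding
        return [("data", name, content, p) for p in ports]

sw = NdnSwitch({"/com/youtube": 9})
print(sw.on_interest("/com/youtube/v1", 1))   # forwarded on port 9
print(sw.on_interest("/com/youtube/v1", 2))   # aggregated in the PIT
print(sw.on_data("/com/youtube/v1", b"..."))  # sent back to ports 1 and 2
```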
_B. DATA-PLANE PROGRAMMABILITY_
The data-plane layer of NDN is stateful by design since
records of all pending Ipkts are stored in the PITs. Furthermore, with CSs, switches can employ different caching strategies to reduce content delivery latency. In addition, switches
parse content names and employ them for routing. Finally, the
original NDN design [1] envisioned fully programmable and
adaptive forwarding strategies that can be implemented using
programming algorithms. These design features allow NDNs
to support the development of new network services
(e.g., content ordering, freshness guarantees, semantics-based forwarding, authentication, and/or publish/subscribe
related services). However, most of these features remain
conceptual at the design level, with little progress towards
their realization on current switches. Meanwhile, existing
Literature has focused on the design issues of various NDN
functionalities mainly such as routing [16], forwarding [2],
[17], and caching [18], [19].
Recently, several research efforts have focused on the
adoption of the novel paradigm of software-defined networking (SDN) for the efficient management of NDNs [7],
[20], [21]. In SDN, network control logic and algorithms
are executed at the control-plane, which then communicates
a set of forwarding rules directly to the switch data-plane
using protocols such as OpenFlow. However, existing solutions have mostly focused on implementing specific services
such as routing [22], traffic management [23] and adaptive
caching [24]. In these approaches, the controller achieves
limited reconfigurability of the NDN switch data-plane using
OpenFlow [25].
In previous work [9], the authors developed a novel
NDN programmable data-plane (PDP) architecture that takes
advantage of P4, a switch behavior programming language [8]. The proposed work allows a controller at the
control-plane to define, install and update, at run-time, customized P4 programs to realize a suite of network services.
In the proposed work, each programmed switch first parses
Ipkt and Dpkt headers and collects additional metadata such
as its source port or the size of its ingress queue. The switch
then processes the packet according to a sequence of Match-Action Tables (MATs) that are programmed according to the
controller’s specified P4 programs. Programmed instructions
may send the packet on specific ports looked up using its FIB
or PIT, as well as drop, recirculate, or clone the packet. The
switch may also collect and store different statistics about the
packet. The packet may also be sent to the CS to be stored
if it is a Dpkt or replied to if it is an Ipkt. The proposed
work then demonstrated how these functionalities are used to
offer different services in the data-plane. Examples of these
services include traditional ones such as admission control,
load balancing, security using firewalls, and differentiated
content-delivery services. Other examples include novel services such as geographical gating and caching.
In this article, we take advantage of the developed
PDP architecture and focus on the problem of translating
high-level intents to these customized PDP configurations.
**III. INTENT-DRIVEN NETWORKING**
The main premise of IDN is to have networks that are easier
and simpler to manage and customize to individual applications and/or industries [11]. IDN allows operators to describe,
at a high level of abstraction, the desired business goals as
well as how customized network services should behave to
serve different applications. IDN can also be employed by
application developers to interact directly with the hosted
network to specify their required service customization. This
section describes the main intent lifecycle management functionalities and discusses the main contributions towards their
realization based on the model defined by the IRTF Network
Management Research Group of the IETF [11].
_A. INTENT LIFECYCLE FUNCTIONALITIES_
Within the context of IDN, an intent describes a goal,
a constraint, or a desired outcome to be met by the network [26]. The authors in [26] define three main intent types:
(i) customer- or application-service intents that describe the
desired service quality for a given customer or application
(e.g., customers should receive application A videos with
high quality and a staleness not exceeding one minute);
(ii) network-service intents describe services that are offered
by the network (e.g., content delivery services should have
a maximum latency of 30 ms); (iii) strategy intents describe
a desired goal from the perspective of the overall network
operation (e.g., reduce overall energy consumption or maintain bandwidth utilization levels and cache occupancy below
a given threshold). Intents can also be classified according to
their lifecycle as either persistent (e.g., all users of a given
application receive the highest video quality) or transient
(e.g., remove all cached contents of a given application from
the network).
Fig. 2 depicts the main processing functionalities during
the lifecycle of an intent. The figure builds on the IETF
standard model [11] and includes two main phases: pre-production and production. During the first phase, the network operator defines the set of intents that the users can
employ. Then, depending on the level of automation in the
IDN [27], the operator may optionally associate with each
intent an intent handler to define the abstract actions that
are taken by the network to fulfill the given intent. These
handlers can range in complexity from predefined rules to
self-reasoning agents that learn and refine the intent handling
using feedback from the network. These handlers/rules will
aid the intent translation process during the production phase.
The first functionality in the production phase involves
ingesting the intents from the users. These users can be
network administrators, application developers, or end-users.
This step takes place using different text- or voice-based
interfaces to type or utter the intents, respectively. Advances
in speech recognition and natural language processing allow
for the realization of this step [28]. Moreover, the authors
in [11] envision this process to eventually include an open
dialog between the user and the IDN system in order to aid
the user to articulate and clarify the intent gradually.
Once ingested, the intent lifecycle management involves
the realization of functionalities that belong to one of two
categories, namely, intent fulfillment and assurance [11].
Functions in the first category ensure the realization of the
required network configurations to satisfy the intent. Meanwhile, assurance functionalities validate the intents, identify
any potential conflicts with already existing ones, and ensure
**FIGURE 2. Intent life cycle.**
that the corresponding switch configurations realize the goals
of the intents and do not drift away from these goals over time.
The first step in intent fulfillment involves identifying the
ingested intent. In this step, the intent is rendered in a format
that the IDN system can process. This step includes identifying the type of the intent, its application scope, its goals,
and/or desired outcomes. It also parses the intent to identify
any semantics that the user has provided within the ingested
intent (e.g., a specific content, time, or service name). The
outcome of the identification process is fed to the translation
module which maps the intent into actions, management
operations or services, as well as network configurations. Any
predefined intent rules or handlers that were defined in the
pre-production phase can be used as aids to this step. The final
stage in the fulfillment of an intent is to translate that intermediate representation into device-specific configurations.
The orchestration of the configurations of different devices
in the network to respond to different intents also represents
an important component of this final stage.
Intent assurance functionalities ensure that the applied
network configurations comply with the user intents. These
functionalities include intents conflict detection and resolution as well as the assurance that the implemented configurations satisfy the intents. The first step of the intent conflict
detection process takes place before the network configurations are deployed. Then during network operation, the traffic
is monitored and analyzed to ensure the intents goals are
satisfied.
IDN systems are anticipated to be augmented with machine
learning (ML) capabilities [4] that can enhance the performance of various IDN functionalities using learnt experience.
For example, as will be shown in our proposed work, intent
identification tools may employ ML algorithms to enhance
the process of understanding the user input. Similarly,
a ML-based translation module can refine its mapping decisions based on the network feedback concerning previous
configurations. Finally, ML can be used to monitor and
analyze the network feedback and take appropriate actions
to correct the data-plane configuration when the network
performance shifts away from the intent goals [29].
Using the above framework, we can identify three main
areas of research. First, the development of formal models
for representing intents and intent handlers is a key step
towards the automation of IDN systems. Second, the development of efficient mechanisms for intent translation into
network configurations as well as intent conflict detection and
configuration validation before deployment is another challenge. Finally, the last challenge concerns the addition of the
necessary intelligence for each IDN system functionalities
to ensure its full automation. In our proposed architecture,
we focus on the first two of these challenges. Hence, in the
following section, we review the Literature with respect to
intent modeling, translation, and validation.
_B. RELATED APPROACHES FOR INTENT MODELING AND_
_TRANSLATION_
Existing network data models such as the management
information bases (MIBs) and YANG (yet another next-generation) were developed specifically for low-level device
configuration [30]. They are accompanied by a suite of
client-server protocols such as the simple network management protocol (SNMP) and the network configuration
protocol (NETCONF) to interact with and configure devices.
While they provide a good abstraction for device configurations, they are not suitable for representing the high-level
abstraction of network intents.
While recent research efforts have proposed several novel
intent models, they have been mostly focused on defining
intents that directly capture desired network or service configurations rather than abstract or declarative user or operator goals. For example, one of the earliest approaches for
intent modeling is the model built within the SDN-based
Open Network Operating System (ONOS) [31]. The model
defines a set of predefined connection-oriented intents
(e.g., topology, end-points connection, or service chain
intents) and then provides a one-to-one mapping of these
intents to network policies. Similarly, the IETF NEMO
project and its extension defined in [32] focus on intents
relating to network operations, such as selecting or changing
a routing path. Other approaches utilize intent models built as
extensions of the Topology and Orchestration Specifications
for Cloud Applications (TOSCA) model [33]. However, they
are also limited to direct mapping of network-oriented low-level intents into policies. Chopin [34] is another framework
for specifying intents for cloud resource usage between endpoints. It uses a fixed intent template that defines the desired
traffic source and destination as well as the required resources
between these end-points. The authors in [35] develop a
novel intent definition language for applications hosted on
IP networks. In their model, intents must clearly identify the
two communicating end-points and the desired data-plane
service (e.g., drop heavy hitters), which is then configured
statically in the data-plane. In a similar manner, an intent
model was developed in [36] to describe flow-rule intents
**TABLE 1. Summary of the analysis of intent-based solutions in the Literature.**
for vehicular networks. The authors in [10] provide a more
expressive model of service-oriented intents that allows an
application to identify a service (e.g., caching or resource
provisioning). However, the intents are also pre-associated
with a set of policies that describe the required behavior of
the service in more detail.
In summary, existing network intent models are limited to
describing communication-oriented requirements rather than
aiming at capturing the operator or application goals from the
underlying network. The majority of these existing models
assume that the served applications have a detailed knowledge of the network topology and the exact configurations of
the resource demands for their traffic flows. In other words,
they identify network configurations using low-level vocabulary (e.g., allocated bandwidth between two end-points).
A detailed comparison of existing IDN models and their
limitations is presented in [10].
In contrast to the aforementioned models, highly expressive and well-developed intent models were developed for
software applications such as those used by personal assistants [37], [38]. Moreover, intents capture and interpretation
using these models have been addressed extensively in the
field of natural language processing [39].
The majority of existing solutions in the Literature for
intent to network configuration focus on the direct mapping
of intents into policies [11], [32]. However, one of the main
limitations of this approach is that the rigid modeling of policies as events-condition-actions fails to capture intent goals
except in the context of predefined services such as network
slicing [40]. A different approach is used in [34] where
intents are translated directly into optimization problems for
resource assignment and allocation. Overall, the Literature is
limited to approaches that map intents to policies or limited
direct network configurations.
Table 1 presents a summary of the intent models and
domains of applicability of the major existing solutions in the
Literature. Most of these solutions are domain-specific, and,
hence, provide an intent model that captures requirements
specific to a certain use case. Additionally, these solutions
all apply only to a topology-centric IP-based network. To the
best of the authors’ knowledge, the proposed work is the
**FIGURE 3. I2DN network architecture.**
first attempt to build an intent model and an intent-to-data-plane mapping mechanism with a particular focus on NDNs.
As NDN names are generic and can identify both contents
and network resources, an NDN-based intent model offers
a higher-level of abstraction compared to IP-based models.
Thus, application developers can define high-level custom
network services applied to their contents and flows without
any prior knowledge of the underlying network topology or
endpoints.
**IV. PROPOSED I2DN ARCHITECTURE**
_A. I2DN NETWORK MODEL_
As shown in Fig. 3, the goal of I2DN is to receive intents from
network operators or application developers and then translate them into a programmable NDN data-plane configuration. The target network contains a single domain managed by
a single controller. We further require that the switches in the
NDN data-plane implement stateful programmable Match-Action Tables (MATs) that can process packets according to
custom rules. These MATs can be semantically represented as
a set of rules of the form if (conditions) then actions, where
the conditions and actions apply to packet fields (e.g., content
name), switch metadata (e.g., queue length or output port),
or custom saved states. Furthermore, we assume that access
to the CS is controlled by the MATs as shown in Fig. 3.
The Literature contains several data-plane architectures that
meet these requirements and can thus be used with I2DN.
We can cite P4 switches [8], OpenState [41], or our proposed
ENDN architecture [9]. Traditional NDN switches can also
be used if they allow the creation of new custom stateful
forwarding strategies. The stateful programmable data-plane
allows highly dynamic per-packet forwarding decisions to
be executed directly at the data-plane with little involvement
from the controller. As a result, communication between
switches and the controller for data-plane configuration is
carried out only when a new intent is requested: every intent
is translated to stateful MAT entries in the data-plane.
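To make the if (conditions) then actions semantics concrete, here is a toy Python rendering of a MAT with first-match priority; the packet fields, conditions, and actions are illustrative assumptions, not entries from an actual I2DN deployment.

```python
# Each MAT entry pairs a predicate over packet fields and switch
# metadata with an action; the first matching entry wins.
mat = [
    (lambda pkt: pkt["type"] == "Ipkt"
                 and pkt["name"].startswith("/iot/alarms"),
     lambda pkt: {"action": "forward", "port": 1}),  # priority traffic
    (lambda pkt: pkt["meta"]["queue_len"] > 100,
     lambda pkt: {"action": "drop"}),                # congestion guard
    (lambda pkt: True,
     lambda pkt: {"action": "forward", "port": 2}),  # default rule
]

def process(pkt):
    for condition, action in mat:
        if condition(pkt):
            return action(pkt)

pkt = {"type": "Ipkt", "name": "/iot/alarms/door",
       "meta": {"queue_len": 3}}
print(process(pkt))  # {'action': 'forward', 'port': 1}
```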
_B. OVERVIEW OF I2DN_
Fig. 4 provides a schematic description of the main components of our proposed I2DN architecture. As per the model
described in Sec. III, the processes of I2DN operate in two
phases: production and pre-production. The production phase
corresponds to the intent to data-plane configuration mapping
process that is executed each time a new intent is uttered.
On the other hand, the pre-production phase consists of defining all the different mapping rules used during the production
phase. For instance, the different types of intents that the
users can request is defined by the operator in a library
of intents templates during the pre-production phase. These
intent templates are related to a service or a network strategy.
Examples of service intents are: to forward a given list of
_contents to certain subscribers, to cache contents belonging_
_to a particular namespace for a specific duration or to dis-_
_tribute requests equally among several producers. Examples_
of strategy intents are: to maintain average utilization of a
_server to a certain level or to create three classes of ser-_
_vices for contents. Intents also have parameters called slots_
(e.g., a content namespace or a traffic threshold).
The production phase consists of an intent processing
workflow containing three main steps: identification, translation, and configuration. These steps are closely related to the
stages of a generic IDN intent lifecycle, as shown in Fig. 2.
The validation process is done in parallel with the translation
and configuration steps using the proof engine of the Event-B
formal method.
During the identification step, intents are captured using
a chat interface [42] or with the help of a smart assistant
similar to Amazon’s Alexa [37]. The intent detection and
slot filling [43] operations are then performed. In this step,
an intent is identified by contrasting it against the built-in
intents from the intent library, and a list of label-value pairs
representing intent slot parameters is generated (e.g., time
intervals, content names, or producer IDs).
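As an illustration of what a built-in template and the resulting label-value pairs could look like, consider the toy sketch below. The template wording, slot names, and the regular-expression matcher are our own assumptions; they merely stand in for the learned intent detection and slot filling used in I2DN.

```python
import re

# Hypothetical built-in template from the intent library.
TEMPLATES = {
    "cache_namespace": {
        "pattern": re.compile(
            r"cache contents? (?:belonging to|under) (?P<namespace>\S+)"
            r" for (?P<duration>\d+ \w+)"),
        "slots": ["namespace", "duration"],
    },
}

def identify(utterance):
    """Return (intent name, slot label -> value pairs) for the first
    matching template, mimicking intent detection plus slot filling."""
    for name, template in TEMPLATES.items():
        match = template["pattern"].search(utterance.lower())
        if match:
            return name, match.groupdict()
    return None, {}

print(identify("Please cache contents under /com/youtube for 10 minutes"))
# ('cache_namespace', {'namespace': '/com/youtube', 'duration': '10 minutes'})
```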
Once the slot labels and values are obtained, they are
fed into the first module of intent translation: the abstract
Event-B machine (EBM) generation. Every intent template
is associated with an abstract EBM during the pre-production
phase. This EBM contains an abstract implementation of the
**FIGURE 4. Proposed intent mapping architecture.**
desired network behavior to fulfill the intent. Event-B [13]
is a formal method that allows developers to model a discrete system control problem using a set of state variables
in an EBM. Constraints, called invariants, are then added
to the possible values of the state variables to represent
the expected system behavior when the problem is solved
(e.g., a counter can never reach a certain threshold). Events
acting on the state variables are then created and proven
to be compliant with the constraints, thus resulting in
an event-based algorithm that solves the control problem.
Event-B can thus create programs that are proven to be correct
by construction using its proof engine [14]. Our architecture
uses Event-B to model the programmable network behavior
in response to each desired intent. EBM events follow an _if (condition) then action_ semantic. This representation facilitates the refinement of the abstract machines into corresponding Match-Action Table (MAT) rules in the data-plane. In this case, EBM state variables correspond to packet headers, traffic statistics, switch values (e.g., queue size), packet metadata (e.g., packet source and destination), and in-network custom saved states (e.g., the last measured RTT), and thus correspond to the different inputs and outputs of the network.
In the abstract EBM, intent slot values are mapped to EBM
parameters, and the semantics of the intent result in several
invariants that ensure that the EBM implements the required
intent behavior. Once the abstract EBM is instantiated with
the slot values, it is refined using several refinement patterns [44] defined in the pre-production phase until a final
EBM, called the concrete EBM, is reached. EBM refinement is
an essential part of the Event-B method: it gradually adds
more details to the EBM while ensuring the invariants are
always met until the problem is completely solved. The
main goal of the refinement step is to transition between
two different EBM representations. The abstract EBM representation is high-level and allows the intent requirements to
be defined conveniently using abstract variables. On the other
hand, the concrete EBM representation is switch-dependent
and thus close to the data-plane MAT structures. As a result,
the refinement patterns map abstract EBM variables and
events into concrete EBM constructs to adapt to the network
capabilities. For instance, a load balancer intent can balance
the load between two producers using a specific load distribution algorithm (e.g., round-robin, congestion-aware, or based
on the source region of the packets). The abstract EBM would
then contain the generic load balancing algorithm and an
abstract variable specifying the load distribution algorithm
to use. On the other hand, the concrete EBM would contain
the full implementation of the load balancer with the load
distribution algorithm in the case of a P4 network, or an
action to forward the packets to a load balancer middlebox
implementing the required load distribution algorithm in a
more traditional network. The proof engine is executed during
every refinement to ensure that the refined EBMs do not violate the invariants of the abstract EBM. Hence, the concrete
EBM is proved to be compliant with the intent requirements
set at the abstract EBM level.
Once the concrete EBM corresponding to the intent has
been generated, it is processed by the EBM analyzer module.
The main goal of this module is to translate the concrete
EBM into programmable MAT entries. However, as multiple
intents can be configured in the network, we first need to
check that these intents do not result in conflicting data-plane
configurations. Therefore, the EBM analyzer first performs
consistency checks among multiple intents. More precisely,
through the composition of different EBMs representing
different intents [45], we can ensure that the invariants of
an EBM are not violated by the processing done in another
EBM. Hence, we can verify that a new intent does not conflict
with existing ones. Once the concrete EBM passes the consistency checks, it is translated into a stateful MAT program
represented in a model that is compatible with the underlying
network, such as a custom forwarding strategy [12] or a
P4 program [9], [46]. Finally, it is worth noting that some
EBM variables are mapped into the execution of generic
control-plane functionalities (e.g., a routing scheme to find
the shortest path, or an optimal network function placement
algorithm).
The following sections provide a brief description of our
proposed models and intent lifecycle functionalities.
**V. PROPOSED INTENT LIFECYCLE**
In this section, we describe in detail the different steps of the
intent lifecycle of our I2DN architecture. Table 2 contains a
summary of the different mathematical variables used in this
section.
**TABLE 2. Summary of the different variables.**
_A. INTENT CREATION AND IDENTIFICATION_
In our model, at an abstract level, an NDN can be
regarded as a blackbox that provides end-points (e.g., users,
devices, and applications) with customizable contents.
Customization includes various delivery patterns (request/receive, publish/subscribe, notifications, etc.), content processing services (e.g., encryption, filtering, and synchronization of multiple streams) as well as quality guarantees
(e.g., reliability, delivery speed, and latency). Furthermore,
it provides additional delivery services (e.g., access control,
caching, request filtering, load balancing, geo-gating, and
delivery quality assurance). The network blackbox also provides monitoring (e.g., reporting the number of requests from
a certain user) and event-reporting (e.g., reporting an alarm
when the number of content requests in a geographical area
exceeds a given threshold) services. From the perspective of
the network operator, the network blackbox is composed of
a number of abstract services (e.g., content request/response
handlers, content filtering, firewalls, and access control) that
act on resources (e.g., consumers and producers lists, content
namespaces, abstract communication channels to consumers,
producers, and contents or caches) that must be configured in
order to satisfy the requirements of the offered services.
These requirements are defined as intents that are instances
of intent templates. The latter are created by the network operator and are stored in an intent library during
the pre-production phase. They are defined using semantic
frames [39], [47]. Each frame, or intent template, contains
a unique intent name n and a set of entities, referred to as
slots that are placeholders for the values of attributes needed
to describe the intent. The intent template also provides a
set of different example utterances that the intent owner can
use. These samples can be communicated to the application
developer as hints.
Formally, an intent template is identified by its name n and
defines different sequences s1, s2, · · · of slot labels from a
set L such that si = (li1, li2, · · · ). Each slot label l ∈ L
describes an object that the users may mention in the intent.
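As a concrete rendering of this definition, the following Python sketch models an intent template; the class and field names are ours, not part of I2DN.

```python
from dataclasses import dataclass

@dataclass
class IntentTemplate:
    name: str                        # unique intent name n
    slot_sequences: list[list[str]]  # admissible sequences s_i of slot labels from L
    sample_utterances: list[str]     # example utterances shown to developers

load_balance = IntentTemplate(
    name="LoadBalanceAction",
    slot_sequences=[["cn", "mechanism", "p1", "p2"], ["p1", "p2", "cn", "t"]],
    sample_utterances=["distribute the received requests for {cn} "
                       "using {mechanism} between {p1} and {p2}"],
)
```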
Fig. 5 depicts three examples of different intent templates.
The first is an intent to describe a load balancing mechanism
that an application developer can request. The template indicates the set of slot labels with their types that can be used
in that intent (e.g., cn, c1, and p1). The possible sequences
of slots are defined by the uttered samples. For example, the first uttered sample, ‘‘distribute the received requests for cn using mechanism between p1 and p2’’, indicates that the expected slot labels are s = {cn, mechanism, p1, p2}.
The second intent template in the figure describes an intent
to cache contents in the network when they satisfy certain
properties (e.g., cache contents generated by producer p1 in
the last hour). Finally, the third template describes an intent
requesting to block or report heavy hitters (i.e., consumers
who send many Ipkts to a given type of content) in a certain
region.
**FIGURE 5. Examples of built-in intent templates.**
At production time, users utter an intent to describe the
desired outcome guided by the samples of uttered intents.
The identification module tokenizes the intent into words w = (w1, w2, · · · ) that are processed in two steps: intent classification, i.e., mapping the uttered words to the correct intent n, and a second phase of slot filling that identifies a corresponding sequence si = (li1, li2, · · · ) and a corresponding subset of the tokenized words stored in the vector vi = (w1, w2, · · · ) holding the corresponding values of the slots. For example, using the first intent template in Fig. 5, when the user utters ‘‘Producer1 and Producer2 should serve Video between 3:00pm to 5:00pm’’, the identification module’s output is the intent template name LoadBalanceAction, the slot label sequence s = {p1, p2, cn, t}, and the slot values v = {‘‘Producer1’’, ‘‘Producer2’’, ‘‘Video’’, ‘‘3:00pm to 5:00pm’’}. It is worth noting that slot values
correspond to abstract values that can later be mapped to
concrete network-specific values. For instance, the Video slot
value corresponds to a content name prefix identifying all
the video contents of a specific application in the previous
example.
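The output of the identification module for this example could thus look as follows; this is a hypothetical Python rendering of the pair (s, v), with illustrative key names.

```python
# Identification output for "Producer1 and Producer2 should serve Video
# between 3:00pm to 5:00pm" (structure and key names are illustrative).
identification_result = {
    "intent": "LoadBalanceAction",                 # selected template name n
    "slots": ["p1", "p2", "cn", "t"],              # matched slot-label sequence s
    "values": ["Producer1", "Producer2",           # corresponding slot values v
               "Video", "3:00pm to 5:00pm"],
}
```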
We adopt open-source machine-learning-based tools, such as DeepPavlov [48], in this phase. Models of the intents are first defined and stored as JSON object data sets. The tool
is then trained using a graphical user interface until it can
correctly identify the intents. When a new intent template is
added, the system is retrained to recognize the intent. The
outcome of the intent identification phase is a selected intent and a list of slot labels and their values that are then passed to the intent translation phase.
_B. EBM TEMPLATES AND INTENT TRANSLATION_
We will first describe the abstract EBM templates that the
operator creates for each intent and slot sequence. As shown
in Fig. 6, we implement an intent behavior in Event-B using two components: a context C and an abstract machine M.
**FIGURE 6. Examples of abstract EBMs.**
The context C defines the relatively static state of the network and is shared by all the machines. On the other hand, every machine implements the behavior of a specific intent.
As shown in Fig. 4, the network context is created during
the pre-production time but can be updated during production. In Event-B, the context is used to define new data types
that are associated with the variables representing the state
of EBMs [13]. In our architecture, we thus use the context
to represent the types of different resources and objects that
are available or can be manipulated in the network. Examples are producers, consumer regions, content namespaces,
or scheduling algorithms. Fig. 7 shows the Event-B code of a network context, which contains three sections: Sets, Constants, and Axioms. Hence, the context C can be modeled by the triple of sets (S, C, A). Here, S lists all the types (i.e., the categories of objects or resources that comprise or interact with the network). The constants set C stores
possible elements of the sets in S (e.g., the possible content producers). Here constants can also refer to names of
algorithms or control-plane mechanisms that can be resolved
during refinement. For example, the LoadBalanceAlgorithms
set stores the constants RoundRobin and WRR that correspond to different scheduling algorithms for a load balancer.
Finally, the axioms set A is used mainly to link constants to their set (e.g., axm1 in Fig. 7). However, axioms can also be used to specify properties of sets and constants (e.g., every content namespace must have at least one producer).
**FIGURE 7. An Event-B context.**
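For readers less familiar with Event-B, the following Python sketch mirrors the structure of the context (S, C, A); the specific sets and the axiom shown are illustrative.

```python
# S: sets (types), with their constants (C) as enumerated elements;
# A: axioms expressed here as predicates over the context.
context = {
    "sets": {
        "Producers": {"P1", "P2"},
        "Namespaces": {"/Video", "/MyApp"},
        "LoadBalanceAlgorithms": {"RoundRobin", "WRR"},
    },
    "axioms": [
        # e.g., the constant RoundRobin belongs to LoadBalanceAlgorithms
        lambda ctx: "RoundRobin" in ctx["sets"]["LoadBalanceAlgorithms"],
    ],
}

assert all(axiom(context) for axiom in context["axioms"])
```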
**TABLE 3. Intent to EBM mapping.**
A machine template M contains the implementation of an intent behavior. Table 3 summarizes how intents are mapped
to EBMs. At the level of the intent, the network is seen as a
blackbox whose expected outcomes are specified. However,
in the EBM, we go inside this blackbox and model how the
network processes packets to satisfy the intent. In NDN, the
network processes two types of packets: Ipkts and Dpkts.
Hence, our EBMs specify the stateful treatment of Ipkts and
Dpkts inside the network. More precisely, an EBM models
an NDN network and its possible packet processing actions
using a set of variables V. The variables have a type that can
either be a native type (e.g., boolean or integer) or one of the new types defined in the context C.
The EBM variables can be classified into four categories:
packet variables, flow variables, abstract variables, and slot
parameters. Packet variables correspond to any data specific to a single packet. Hence, they are used to represent
header fields (e.g., content name), individual packet forwarding actions (e.g., drop or forward to a specific destination), or metadata (e.g., queue priority, received timestamp,
or source region). Packet variables are thus reinitialized each
time the network receives a new packet. On the other hand,
flow variables represent stateful information that is kept in the
network. Examples are data managed by stateful algorithms
(e.g., number of packets sent to a specific destination) or
contents cached in the network. Abstract variables are only
allowed at the level of abstract EBMs and correspond to parts
of the packet processing treatment that have not yet been
specified in detail. For instance, an abstract EBM may have
an abstract variable representing the result of a congestion
detection mechanism without detailing how this mechanism
works. This abstract EBM would then specify how to process packets in case of congestion based on the value of
this abstract variable. The refinement process eliminates the
abstract variables by replacing them with the corresponding algorithms. The operator has the complete freedom to
decide on the abstraction level that is represented by these
abstract variables. A higher level of abstraction will provide
more flexibility to adapt to different network domains and
capabilities at the expense of refinement steps. Finally, slot
parameters are used to make an EBM generic by allowing its
behavior to be parametrized.
Packet processing actions are represented in EBMs by
a set of events E that act on the variables V. The events
have an if (condition) then action semantic, where both the
condition and actions are relative to the variables V. Hence,
an event e ∈ E can be formally modeled as a conditional statement: e := if (Ge(V)) then V := Ae(V). The event
guard Ge contains a list of logical conditions on the values
of the EBM variables V that can trigger the event. On the
other hand, the event action Ae(V ) specifies how variables
are modified when e is executed. Hence, each event that
is triggered brings the network from one state to another
state. The possible states of the machine are restricted by
several conditions on the variables represented by the set of
invariants I. Finally, it is worth noting that each machine
contains an initialization event that is executed as the first
event in the machine. It assigns different values to the
machine variables in order to define the desired initial state
(e.g., the number of received requests for a specific content is
initialized to 0, or the cached contents set is initialized with
the empty set).
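The event semantics can be paraphrased in Python as follows. Note that this is only a runtime analogue of our own making: Event-B does not execute invariant checks but discharges them statically through proof obligations.

```python
from dataclasses import dataclass
from typing import Callable

State = dict  # the machine variables V, as a name-to-value mapping

@dataclass
class Event:
    """e := if (Ge(V)) then V := Ae(V)."""
    name: str
    guard: Callable[[State], bool]    # Ge
    action: Callable[[State], State]  # Ae

def step(state: State, events: list[Event],
         invariants: list[Callable[[State], bool]]) -> State:
    """Fire the first enabled event and check the invariants I afterwards."""
    for e in events:
        if e.guard(state):
            state = e.action(state)
            assert all(inv(state) for inv in invariants), f"{e.name} broke an invariant"
            return state
    return state  # no event is enabled
```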
To better explain how EBMs work, we will present in
detail a simple load balancer intent example. The application developer wants to distribute the load of requests
for a video namespace between two producers using the
round-robin algorithm. The following intent is then uttered: ‘‘Distribute received requests for Video between P1 and P2 using the RoundRobin algorithm’’, and the following slot values are extracted: Video, P1, P2, and RoundRobin. These slot values are then passed to the abstract EBM template shown in Fig. 8: they serve to initialize the slotLoadBalancedNamespace, slotProducer1, slotProducer2, and slotLoadBalancerAlgorithm template parameter variables in the INITIALISATION event. Fig. 8 shows that EBMs have three
main sections: VARIABLES, INVARIANTS, and EVENTS.
The VARIABLES section lists all the variables used by the
EBM while the INVARIANTS section is initially used to
specify the type of these variables. ipktContentName and ipktDestination are examples of packet variables, while this EBM contains no flow or abstract variables. Every event contains a
guards section introduced by the WHERE keyword that contains several conditions on the variables, as well as an actions
section introduced by the THEN keyword where variables are
modified. There are two main events for the Ipkts and Dpkts:
a receive event that initializes the packet variables and a
deliver event that allows the receive event to be triggered
again. The receive event has an event parameters section
introduced by the ANY keyword to represent the possible initialization values of packet variables constrained by a guard
condition. For instance, the event parameter contentName of the receiveIpkt event, alongside the guard ‘‘contentName ∈ Namespaces’’, specifies that the Ipkt content name header field can be any namespace from the Namespaces set defined in the context (cf. Fig. 7). Finally, there are three events
that process Ipkts with the following behavior: if the Ipkt
content name is the same as the load-balanced namespace
specified in the intent, then the packet is either forwarded to
the first or the second producer; otherwise, no action is taken.
As a result, the abstract EBM only describes the details of the
namespace check and the packet forwarding, while the exact
implementation of the load balancing algorithm is left for the
refinement process.
**FIGURE 8. The load balancer abstract EBM.**
It is worth noting here an essential capability of Event-B
that comes from the expressiveness of invariants. While several invariants specify the type of variables, other invariants
are used to put constraints on the values of variables. For
example, inv8 in Fig. 8 imposes that Ipkts requesting content
in the slotLoadBalancedNamespace can only be forwarded either to slotProducer1 or to slotProducer2. This constraint corresponds to one part of the semantics of the load balancer intent. Hence, invariants can also be used to represent the
expected outcomes of an intent behavior using constraints on
variables. Examples of constraints that can be represented as
invariants are: _the currently served request must belong to the set of authorized contents_, _the requesting user location must be within a certain geographical area_, or _the number of responses should not exceed the number of requests in a pull delivery pattern_. All the events of the EBM are then checked
using the Event-B proof engine (cf. Fig. 4) to make sure they
do not violate the constraints set by invariants. As a result,
both the invariants and the proof engine result in the ‘‘correct by construction’’ feature of Event-B.
Abstract EBMs are gradually refined with additional implementation details until the intent behavior is completely
specified. In Event-B, a refinement extends an initial EBM by
adding new variables, invariants, and events [13]. Events of
the abstract EBM can also be refined by adding new guards
and actions, with the restriction that the refined event results
in exactly the same outcome on the variables of the abstract
EBM. This restriction ensures that refined versions of an
event may not violate the invariants of the abstract EBM.
In other words, refinements are syntactical extensions of an
EBM that preserve the invariants. Fig. 9 shows the concrete
machine resulting from the refinement of the abstract load
balancer machine of Fig. 8 when the round-robin algorithm is
used. The currentPosition, numIpktsP1 and numIpktsP2 flow
variables are added alongside three invariants that impose
the round-robin scheduling constraint (inv4, inv5, and inv6).
The processIpktToP1 and processIpktToP2 events are then
refined accordingly by adding new guards and actions. In our
architecture, we use the refinement patterns concept introduced by Iliasov et al. [44]. Refinement patterns allow us
to automate the implementation of refinements by formally
specifying every EBM syntactical modification that is part
of a refinement. Refinement patterns also have applicability
conditions that allow them to be triggered when needed. For
instance, the refinement that led to the concrete machine of
Fig. 9 was triggered by the presence of the value RoundRobin
in the slotLoadBalancerAlgorithm variable.
**FIGURE 9. The resulting concrete EBM of the load balancer with round robin.**
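The behavior captured by this concrete machine can be paraphrased in Python as follows; the variable names follow Fig. 9, but the code and the exact form of the invariant are our sketch of the flavor of inv4–inv6, not the paper's Event-B text.

```python
def process_ipkt(state, ipkt_content_name):
    """Round-robin forwarding between the two producers of the intent."""
    if ipkt_content_name != state["slotLoadBalancedNamespace"]:
        return None  # processIpktOtherNamespace: no action
    if state["currentPosition"] == 0:    # processIpktToP1
        state["numIpktsP1"] += 1
        state["currentPosition"] = 1
        dest = state["slotProducer1"]
    else:                                # processIpktToP2
        state["numIpktsP2"] += 1
        state["currentPosition"] = 0
        dest = state["slotProducer2"]
    # Round-robin constraint: the two counters never differ by more than 1.
    assert abs(state["numIpktsP1"] - state["numIpktsP2"]) <= 1
    return dest
```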
When the concrete machine is created, it is processed by
the EBM analyzer in order to generate a corresponding data-plane configuration. The next section describes the different
processes performed by the EBM analyzer.
_C. EBM ANALYZER_
The EBM analyzer first performs several consistency checks
on the concrete EBM to make sure it does not conflict with other intents already configured in the network.
These consistency checks are based on the fact that every
EBM has invariants that specify the expected outcome of the
corresponding intent behavior. Consequently, we can check
that two EBMs do not conflict with each other by validating
the events of the first EBM against the invariants of the second
EBM and vice-versa. In order to perform these consistency
checks, the two EBMs have to be composed to create a
combined EBM containing the invariants and events of both
machines. The details of EBMs composition are outside the
scope of this paper. However, several efficient schemes exist
in the Literature [45]. The creation of the combined EBM
results in the generation of several invariant preservation
proof obligations. The Event-B proof engine then examines
these proof obligations that require that all events preserve the
invariants. Automated tools like Rodin [14] can automatically
process most if not all proof obligations; any remaining ones
may be proved manually. If a proof obligation cannot be
proved, it means that the two intents, or their implementations, are conflicting. The new intent is rejected, and the user
who submitted the intent is notified. Once the concrete EBM is validated, it is converted to a data-plane configuration as described in the remainder of this subsection.
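Before detailing that mapping, the following Python sketch conveys the spirit of the pairwise consistency check: each event of one machine must preserve the invariants of the other. In the actual architecture this is established by Rodin proof obligations; the bounded state-sampling below is only an executable analogy, and the EBM objects are assumed to expose .events and .invariants as in the earlier sketch.

```python
def consistent(ebm_a, ebm_b, sample_states):
    """Cross-validate the events of each EBM against the invariants of
    the other machine, over a finite sample of states."""
    for m1, m2 in ((ebm_a, ebm_b), (ebm_b, ebm_a)):
        for state in sample_states:
            for e in m1.events:
                if e.guard(state):
                    new_state = e.action(dict(state))  # work on a copy
                    if not all(inv(new_state) for inv in m2.invariants):
                        return False  # conflicting intents: reject the new one
    return True
```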
The NDN data-plane contains the FIB and PIT tables used
to forward the Ipkts and Dpkts, as well as the CS used to
cache already served packets. Additionally, we assume that
the data-plane contains programmable MATs as part of both
the Ipkt and Dpkt pipelines. Examples of implementations
of these MATs include our proposed ENDN architecture that
uses P4 functions [9], as well as traditional NDN forwarding
strategies [12]. A MAT can be used to select custom forwarding actions based on values derived from packet header fields,
metadata, or measured statistics. The possible actions include
forwarding the packet to one or more network ports, dropping
it, sending it to the CS, notifying the controller, modifying
header fields, as well as storing a custom state in the switch.
The MAT execution structure can be modeled as a collection
of conditional rules of the form _if (condition on fields) then do action_. The MAT execution structure thus closely resembles the event execution model of EBMs. Hence, we can map EBMs to MATs by following the rules in Table 4.
**TABLE 4. EBM to MAT mapping rules.**
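A minimal sketch of this mapping, assuming a concrete EBM object that exposes its events (with symbolic guard and action expressions) and flow variables, could look as follows; the attribute names are illustrative.

```python
def ebm_to_mat(ebm):
    """Apply the Table 4 rules: events become MAT entries, flow variables
    become stateful registers (packet variables map to header or metadata
    fields elsewhere in the pipeline)."""
    registers = {v.name: v.initial_value for v in ebm.flow_variables}
    rules = [{"condition": e.guard_expr,   # event guard -> rule condition
              "action": e.action_expr}     # event action -> rule action
             for e in ebm.events]
    return {"rules": rules, "registers": registers}
```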
We can classify the EBM components into four categories:
events, variables, context constants, and non-mappable components (e.g., invariants). Events are directly mapped to MAT
rules: event guards are mapped to rule conditions, while event
actions are mapped to rule actions. Only packet and flow
variables can be mapped to a MAT component. Abstract
variables are processed by the different refinement patterns,
and are thus not allowed at the level of the concrete machine,
while slot parameter variables are considered as constants.
Packet variables are standard and have special mapping
rules to MAT fields: they are mapped to packet header
fields (e.g., content name as shown in Fig. 1), function calls
(e.g., execute a meter), or metadata fields (e.g., source and
destination ports). Flow variables are usually custom and are
mapped to stateful variables in the MAT (e.g., P4 registers).
Finally, the context constants are translated to local values
for the switch (e.g., a producer is mapped to an output port
number and a forwarding hint value). It is worth noting that
we can also have special flow variables in EBMs. These can
be used to specify some requirements on the FIB and PIT
rules (e.g., the FIB routes need to be computed using the
shortest path algorithm).
Fig. 10 shows an example of P4 code corresponding to the round-robin load balancer concrete EBM of Fig. 9. The different Event-B components are mapped to the corresponding
P4 structures: flow variables become registers (in blue in the
code), packet variables become metadata fields or function
calls (in green in the code), and context constants become
_define_ statements (in red in the code). In the concrete EBM, a special variable called processingStepIpkt allows the events to be organized as possible alternatives in a specific processing
step of Ipkts. For example, in Fig. 9, the receiveIpkt event corresponds to processing step 0; the processIpktToP1, processIpktToP2, and processIpktOtherNamespace events can happen at processing step 1; and, finally, the deliverIpkt event happens during processing step 2. Events that are on
the same processing step are mutually exclusive, and thus
correspond to different match-action rules in a single MAT.
Every processing step thus results in the creation of a new
P4 table (e.g., processingStepIpkt1 table in Fig. 10), except
for the processing steps of the receive and deliver events. The
actions of the events are then mapped to P4 actions accessible
from their associated processing step table. Finally, the event
guards become entries in the corresponding P4 table. The
resulting P4 code can then be installed in the data-plane.
**FIGURE 10. The resulting P4 code of the round robin load balancer.**
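The grouping of events into per-step tables can be sketched in Python as follows; the attribute names are ours, and the real translation emits P4 source as in Fig. 10.

```python
from collections import defaultdict

def events_to_step_tables(events):
    """Group mutually exclusive events by their processingStepIpkt value;
    every intermediate step becomes one table of match-action entries."""
    steps = defaultdict(list)
    for e in events:
        steps[e.processing_step].append(e)
    first, last = min(steps), max(steps)   # the receive and deliver steps
    return {
        f"processingStepIpkt{step}": [
            {"entry": e.guard_expr, "action": e.action_expr} for e in evts
        ]
        for step, evts in sorted(steps.items())
        if step not in (first, last)       # no table for receive/deliver
    }
```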
**VI. PERFORMANCE EVALUATION**
This section demonstrates the advantages of our proposed
I2DN architecture. More precisely, declarative goals are
expressed as intents, and then translated into data-plane configurations. We then measure the performance gains achieved
by these intents when compared to the performance of a
traditional NDN configured with shortest path routes and
best route strategies [12]. Our experiments employ the Abilene topology [49] built using ENDN switches [9] within
the ndnSIM simulator [50]. The ENDN switches are used
because they allow our intents to be implemented in the
data-plane as P4 functions.
_A. TEST SCENARIO_
Fig. 11 shows the Abilene topology used in our simulation. All links have a rate of 1 Mbps and introduce a propagation delay based on the geographical distance between the cities. We consider a content delivery application with
content geo-gating requirements where access to contents
is restricted based on the geographical region of the users.
More precisely, users from cities on the east coast of the
United States (blue nodes in Fig. 11) can only access content
specific to their region, and similarly for users from west
coast cities (green nodes in Fig. 11). Denver and Indianapolis
are regional producers that cache the content of their region,
and Kansas City is a national producer that can serve requests
from both regions while ensuring the geo-gating restrictions using an application-level logic.
**FIGURE 11. The Abilene topology.**
To configure the network, the
application developer initially defines three intents (words in
italic correspond to slot values, and the application namespace is /MyApp):
- I1: Indianapolis can only serve requests for /MyApp
content coming from the east coast.
- I2: Denver can only serve requests for /MyApp content
coming from the west coast.
- I3: Kansas City can serve all requests for /MyApp
content.
Additionally, the application developer would like to limit
the content requests served by regional producers by automatically offloading any excess requests towards Kansas City.
This results in two additional intents:
- I4: Limit the /MyApp content requests served by _Indianapolis_ to _100 requests/s_ and offload any excess requests to _Kansas City_.
- I5: Limit the /MyApp content requests served by _Denver_ to _100 requests/s_ and offload any excess requests to _Kansas City_.
We also consider a second application that requires content
from the east coast requested by users in the west coast
to be delivered with the lowest delay. The content of this
application is urgent, so the application developer agreed
with the network providers to have the application traffic
forwarded with a higher priority. Additionally, the application
developer requests proactive caching of the contents in the
west coast when the number of requests reaches a certain
threshold. In this case, the reception of a new request triggers
a secondary request initiated by the P4 code to retrieve other
available contents from the east coast to cache them locally.
As a result, the application developer selects two intents:
- I6: Serve /UrgentContent traffic with high priority.
- I7: Proactively cache /UrgentContent contents from the _east coast_ if the number of requests reaches _20 requests per day_.
Finally, the network operator selects a strategy intent that
locally avoids congestion in the network by always providing
two alternative paths to any destination in every switch. The
shortest path is used unless the link utilization reaches 90%.
In that case, an alternative path is used. This results in the
following intent:
- I8: Avoid congestion in the network by keeping the link
utilization below 90%.
Intents I1 and I2 correspond to the same intent template
with different slot values (and similarly for intents I4 and I5).
The different intents are then processed by our architecture
and result in several P4 functions that are placed in the
switches as follows:
- The P4 functions corresponding to intents I1 and I2 are placed in the east coast and west coast nodes, respectively (i.e., the blue and green nodes in Fig. 11). These P4
functions add a forwarding hint towards Indianapolis or
Denver to the /MyApp Ipkts originating from the east or
west coasts respectively.
- I3 is translated to a P4 function placed in Kansas City
that automatically sends Ipkts to the central producer
even if a forwarding hint to Denver or Indianapolis is
present.
- I4 and I5 are mapped to a rate-limiting P4 function
placed in Denver and Indianapolis. It measures the
rate of /MyApp requests and offloads any traffic over
100 requests/s to Kansas City.
- I6 is implemented as a P4 function that requires all
_/UrgentContent packets to be processed with a high_
queue priority. This P4 function is installed in all the
switches along the path followed by /UrgentContent
packets.
- I7 is translated to a P4 function placed in Denver that
proactively caches the /UrgentContent in the local CS.
- The P4 function generated by I8 is placed in all the
switches. It processes all Ipkts containing forwarding
hints towards specific destinations (e.g., Denver or Indianapolis in our scenario) by sending them to a secondary
path in case of congestion. The algorithm also makes
sure to check the source port from which packets are
received to avoid creating forwarding loops by sending
the packet back through the face from where it was
received.
At t = 0 s, consumers from every city of the east and west coasts (i.e., the blue and green nodes in Fig. 11) start requesting /MyApp content at a rate proportional to the size of their population. From t = 100 s to t = 150 s, there is a rush period where additional traffic is added, resulting in congestion on
the east coast. Finally, the /UrgentContent located in Atlanta
is requested by a consumer in Seattle at a slow exponentially
distributed rate with a mean of 1 request/s during the entire
simulation time. The RTT (including transmission, propagation, and queuing delays), packet loss rate, and received Dpkt
throughput are measured for the /MyApp traffic originating
from every city as well as for the /UrgentContent traffic.
We then compare the performance of an NDN network configured using the intents described above against that of a
standard NDN network with no intents. The latter forwards
all /MyApp requests to Kansas City using the shortest path as
geo-gating can only be guaranteed by the central producer.
_B. EXPERIMENTAL RESULTS_
Fig. 12 shows the measured RTT for the /MyApp traffic
originating from Los Angeles, Houston, and New York.
**FIGURE 12. Measured RTT for the /MyApp traffic coming from different cities and for the /UrgentContent traffic.**
The effects of satisfying intents I1 and I2 are visible in
Figs. 12a and 12b: the RTT increases by around 30ms when
no intents are used because the packets are served by the central producer in Kansas City instead of the regional producers.
This additional delay is consistent with the propagation delay
of 10ms between the regional producers and Kansas City
added twice to the transmission delay of a 1KB Dpkt over
the 1Mbps links. On the other hand, I3 allows the requests
originating from Houston to be processed directly by the
central producer, which is closer than the regional producers.
During the rush period, the traffic is increased, which causes
the Indianapolis rate-limiting threshold defined by I4 to be
reached. The excess traffic is thus offloaded to Kansas City
which causes the RTT of the New York traffic to increase
slightly in Fig. 12a.
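The quoted ~30 ms gap can be checked with back-of-the-envelope arithmetic:

```python
# Two extra 10 ms propagation legs (Ipkt to Kansas City, Dpkt back) plus
# the transmission delay of a 1 KB Dpkt on a 1 Mbps link.
extra_propagation_s = 2 * 10e-3          # 20 ms
transmission_s = (1024 * 8) / 1e6        # ~8.2 ms
print((extra_propagation_s + transmission_s) * 1e3)  # ~28.2 ms, close to 30 ms
```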
Fig. 12d clearly shows the effects of intents I6 and I7.
At around t = 20 s, the Denver switch proactively caches the /UrgentContent Dpkts, which causes a significant decrease of the RTT. Additionally, the delay remains unchanged during
the rush period when intents are used as the /UrgentContent
traffic is treated with a high queue priority.
The effect of the congestion avoidance intent I8 is mainly
visible during the rush period. During this time, the traffic increases, as shown in the throughput plots of Fig. 14,
which causes congestion in the east coast. This causes an
increase in delay for the New York and /UrgentContent traffic
(cf. Figs. 12a and 12d) when no intents are used.
The congestion also causes an increase in packet loss
and a decrease in received throughput as shown in
Figs. 13a, 13d, 14a, and 14d. On the other hand, the
P4 function that was generated from I8 has successfully
avoided the congestion. Hence, there is no degradation of
performance when intents are used.
Our proposed architecture has allowed the network to adapt
to the needs of the application and network operators while
improving network manageability and configuration through
intents. This, in turn, resulted in better network performance
compared to traditional non-intent-based networks.
_C. COMPUTATIONAL COST_
Finally, we analyze the computational cost that is introduced
by the intents on both the control- and data-planes.
At the control-plane, our architecture processes intents
asynchronously from the data-plane operations: data-plane
configurations are generated once when an intent is processed but are not modified later during packet processing.
More precisely, every intent is completely translated into an
autonomous data-plane configuration/program that does not
interact with the control-plane. Hence, the communication
overhead between the control- and data-planes takes place
once. The operator can limit data-plane updates to batch
processes. The main control-plane cost is incurred during the
installation of a new configuration in the data-plane. However, several programmable data-plane architectures allow
fast runtime reconfigurability of P4 programs, which makes the impact of data-plane reconfigurations on switch operation minimal [9].
**FIGURE 13. Measured packet loss for the /MyApp traffic coming from different cities and for the /UrgentContent traffic.**
**FIGURE 14. Received Dpkt throughput measured for the /MyApp traffic coming from different cities and for the /UrgentContent traffic.**
At the data-plane level, P4 programs introduce a processing
delay dependent on the switch hardware or software implementation [51]. Several high-performance P4 switch implementations were proposed in the literature to significantly reduce this processing delay, especially using FPGAs [52] or GPUs [53]. It is worth noting, however, that most high-performance P4 switches are limited in the number of P4
programs that can be executed in parallel (e.g., P4VBox
can execute up to 13 P4 programs in parallel [54]). Hence,
the main data-plane cost overhead can be characterized by
the number of P4 functions that are needed in the network for a specific set of intents. In our test scenario,
we notice that some intents are mapped to a single P4 function
(e.g., I3 or I4), while other intents are implemented as a P4
function placed in every switch (e.g., I6 or I8). It is worth noting though that several intents correspond to the same intent
template with different slot values. These intents can thus be
shared at the data-plane level by calling the same P4 function
using different parameters. The number of P4 functions at
the data-plane level can thus be reduced using P4-function
sharing. In previous work [9], the authors have discussed the trade-off between scalability, intent customizability, and performance, which depends on the available MAT resources at the data-plane level. This trade-off is decided by the network
operator and embedded in a control-plane logic at the level of
the EBM analyzer. The details of this logic consist in solving
constrained optimization problems and are outside the scope
of this paper.
**VII. OPEN ISSUES AND FUTURE RESEARCH WORK**
This section discusses several assumptions and limitations of our proposed work and highlights future research
opportunities.
- Intent model: The proposed intent model takes a
major step forward towards representing intents that
can capture the operator and developer goals in a
much more declarative way compared to the traditional
event-condition-action models [3]. However, the model relies on predefined classes of intents where users must utter one of the predefined intents, i.e., it is a closed-world model. An open-world intent model can accept and identify unknown, not previously seen, intents from the users. In the literature, several open-world and multiple-intent models have been developed in other contexts, such as chatbots [42], but they remain a challenge for IDN.
- Single vs. multiple network domains: In the proposed
work, we considered a single subnetwork with a single
control domain. Extending the proposed work to multiple independent domains that necessitate the collaboration and orchestration between several controllers is left
as future work.
- Learning and run-time adaptation: Thus far, our
work has focused on the mapping of user intents to
PDP configurations while assuring conflict resolution
and validation before they are installed. We believe
that producing an efficient intent model and intent to
data-plane translation methodology represents a first
step towards realizing self-configuring and healing IDN.
Hence, the challenge of monitoring and analyzing the
network behavior and adapting it at run-time remains a
future work.
- Trust and security: Allowing application developers
to configure the network data-plane indirectly through
intents introduces additional trust and security issues.
**VIII. CONCLUSION AND FUTURE WORK**
This paper proposed a novel architecture to capture
high-level named-data network (NDN) service intents and
translate them into data-plane configurations. Our architecture employs the Event-B modeling and refinement concepts to represent high-level intents using abstract Event-B
Machines (EBMs) and then refine them to machines that
can be used to configure the data-plane. We have provided
a detailed description of the modeling and mapping steps
for translating intents to EBMs and refining these machines.
Finally, we showed how these produced EBMs could be translated to instructions on the data-plane match action tables.
Experimental evaluation results demonstrate the feasibility
and efficiency of the various functionalities of the architecture. Currently, we are investigating the feasibility of employing deep learning to replace some of the statically defined
mapping rules.
**REFERENCES**
[1] L. Zhang, A. Afanasyev, J. Burke, V. Jacobson, K. Claffy, P. Crowley,
C. Papadopoulos, L. Wang, and B. Zhang, ‘‘Named data networking,’’ SIGCOMM Comput. Commun. Rev., vol. 44, no. 3, pp. 66–73,
Jul. 2014.
[2] Z. Li, Y. Xu, B. Zhang, L. Yan, and K. Liu, ‘‘Packet forwarding in
named data networking requirements and survey of solutions,’’ IEEE
_Commun. Surveys Tuts., vol. 21, no. 2, pp. 1950–1987, 2nd Quart.,_
2019.
[3] E. Zeydan and Y. Turk, ‘‘Recent advances in intent-based networking: A
survey,’’ in Proc. IEEE 91st Veh. Technol. Conf. (VTC-Spring), May 2020,
pp. 1–5.
[4] L. Pang, C. Yang, D. Chen, Y. Song, and M. Guizani, ‘‘A survey on intentdriven networks,’’ IEEE Access, vol. 8, pp. 22862–22873, 2020.
[5] R. Ullah, M. A. U. Rehman, and B.-S. Kim, ‘‘Design and implementation of an open source framework and prototype for named data
networking-based edge cloud computing system,’’ IEEE Access, vol. 7,
pp. 57741–57759, 2019.
[6] O. Karrakchou, N. Samaan, and A. Karmouch, ‘‘EP4: An applicationaware network architecture with a customizable data plane,’’ in Proc.
_IEEE 22nd Int. Conf. High Perform. Switching Routing (HPSR), Jun. 2021,_
pp. 1–6.
[7] N. Anerousis, P. Chemouil, A. A. Lazar, N. Mihai, and S. B. Weinstein,
‘‘The origin and evolution of open programmable networks and
SDN,’’ IEEE Commun. Surveys Tuts., vol. 23, no. 3, pp. 1956–1971,
3rd Quart., 2021.
[8] P. Bosshart, G. Varghese, D. Walker, D. Daly, G. Gibb, M. Izzard,
N. McKeown, J. Rexford, C. Schlesinger, D. Talayco, and A. Vahdat,
‘‘P4: Programming protocol-independent packet processors,’’ SIGCOMM
_Comput. Commun. Rev., vol. 44, pp. 87–95, Jul. 2014._
[9] O. Karrakchou, N. Samaan, and A. Karmouch, ‘‘ENDN: An enhanced
NDN architecture with a P4-programmabIe data plane,’’ in Proc. 7th ACM
_Conf. Information-Centric Netw., Sep. 2020, pp. 1–11._
[10] S. Alalmaei, Y. Elkhatib, M. Bezahaf, M. Broadbent, and N. Race, ‘‘SDN
heading north: Towards a declarative intent-based northbound interface,’’
in Proc. 16th Int. Conf. Netw. Service Manage. (CNSM), Nov. 2020,
pp. 1–5.
[11] A. Clemm, L. Ciavaglia, L. Z. Granville, and J. Tantsura. Intent-Based Networking - Concepts and Definitions. Accessed: Mar. 29, 2021. [Online]. Available: https://datatracker.ietf.org/doc/html/draft-irtf-nmrg-ibn-concepts-definitions-05
[12] A. Afanasyev, J. Shi, B. Zhang, L. Zhang, I. Moiseenko, Y. Yu, W. Shang,
Y. Huang, J. P. Abraham, and S. DiBenedetto, ‘‘NFD developer’s guide,’’
NDN, Shanghai, China, Tech. Rep. NDN-0021, 2018.
[13] J.-R. Abrial, Modeling in Event-B: System and Software Engineering.
Cambridge, U.K.: Cambridge Univ. Press, 2010.
[14] Event-B.Org. Accessed: Apr. 21, 2021. [Online]. Available:
http://www.event-b.org/install.html
[15] H. Zhang, Y. Li, Z. Zhang, A. Afanasyev, and L. Zhang, ‘‘NDN
host model,’’ ACM SIGCOMM Comput. Commun. Rev., vol. 48, no. 3,
pp. 35–41, Sep. 2018.
[16] R. Ahmed, F. Bari, S. R. Chowdhury, G. Rabbani, R. Boutaba, and B. Mathieu, ‘‘αRoute: Routing on names,’’ IEEE/ACM Trans. Netw., vol. 24, no. 5, pp. 3070–3083, Oct. 2016.
[17] A. Ndikumana, S. Ullah, K. Thar, N. H. Tran, B. Ju Park, and C. S. Hong,
‘‘Novel cooperative and fully-distributed congestion control mechanism
for content centric networking,’’ IEEE Access, vol. 5, pp. 27691–27706,
2017.
[18] M. A. Naeem, M. A. U. Rehman, R. Ullah, and B.-S. Kim, ‘‘A comparative performance analysis of popularity-based caching strategies
in named data networking,’’ IEEE Access, vol. 8, pp. 50057–50077,
2020.
[19] K. Thar, N. H. Tran, S. Ullah, T. Z. Oo, and C. S. Hong, ‘‘Online caching
and cooperative forwarding in information centric networking,’’ IEEE
_Access, vol. 6, pp. 59679–59694, 2018._
[20] A. Kalghoum and S. M. Gammar, ‘‘Towards new information centric networking strategy based on software defined networking,’’ in
_Proc. IEEE Wireless Commun. Netw. Conf. (WCNC), Mar. 2017,_
pp. 1–6.
[21] Q.-Y. Zhang, X.-W. Wang, M. Huang, K.-Q. Li, and S. K. Das, ‘‘Software
defined networking meets information centric networking: A survey,’’
_IEEE Access, vol. 6, pp. 39547–39563, 2018._
[22] E. Aubry, T. Silverston, and I. Chrismen, ‘‘Implementation and evaluation
of a controller-based forwarding scheme for NDN,’’ in Proc. IEEE 31st Int.
_Conf. Adv. Inf. Netw. Appl. (AINA), Mar. 2017, pp. 144–151._
[23] Q. Sun, W. Wendong, Y. Hu, X. Que, and G. Xiangyang, ‘‘SDN-based
autonomic CCN traffic management,’’ in Proc. IEEE Globecom Workshops
_(GC Wkshps), Dec. 2014, pp. 183–187._
[24] R. Jmal and L. C. Fourati, ‘‘An OpenFlow architecture for
managing content-centric-network (OFAM-CCN) based on popularity
caching strategy,’’ _Comput._ _Stand._ _Interface,_ vol. 51, pp. 22–29,
Mar. 2017.
[25] P. Zuraniewski, N. van Adrichem, D. Ravesteijn, W. IJntema,
C. Papadopoulos, and C. Fan, ‘‘Facilitating ICN deployment with an
extended openflow protocol,’’ in Proc. 4th ACM Conf. Inf.-Centric Netw.,
Sep. 2017, pp. 123–133.
[26] L. Chen, O. Havel, A. Olariu, P. Martinez-Julia, J. Nobre, and
D. Lopez. Intent Classification. Accessed: Nov. 23, 2021. [Online].
Available: https://datatracker.ietf.org/doc/html/draft-irtf-nmrg-ibn-intent-classification-05
[27] Y. Wang, R. Forbes, U. Elzur, J. Strassner, A. Gamelas, H. Wang, S. Liu,
L. Pesando, X. Yuan, and S. Cai, ‘‘From design to practice: ETSI ENI reference architecture and instantiation for network management and orchestration using artificial intelligence,’’ IEEE Commun. Standards Mag., vol. 4,
no. 3, pp. 38–45, Sep. 2020.
[28] S. Perez-Soler, S. Juarez-Puerta, E. Guerra, and J. de Lara, ‘‘Choosing
a chatbot development tool,’’ IEEE Softw., vol. 38, no. 4, pp. 94–103,
Jul. 2021.
[29] X. Li, N. Samaan, and A. Karmouch, ‘‘An automated VNF manager based
on parameterized action MDP and reinforcement learning,’’ in Proc. IEEE
_Int. Conf. Commun., Jun. 2021, pp. 1–6._
[30] M. Paliwal, D. Shrimankar, and O. Tembhurne, ‘‘Controllers in SDN: A
review report,’’ IEEE Access, vol. 6, pp. 36256–36270, 2018.
[31] Y. Han, J. Li, D. Hoang, J.-H. Yoo, and J. W.-K. Hong, ‘‘An intent-based
network virtualization platform for SDN,’’ in Proc. 12th Int. Conf. Netw.
_Service Manage. (CNSM), Oct. 2016, pp. 353–358._
[32] Y. Tsuzaki and Y. Okabe, ‘‘Reactive configuration updating for intentbased networking,’’ in Proc. Int. Conf. Inf. Netw. (ICOIN), 2017,
pp. 97–102.
[33] T. A. Khan, A. Muhammad, K. Abbas, and W.-C. Song, ‘‘Intent-based
networking platform: An automated approach for policy and configuration
of next-generation networks,’’ in Proc. 36th Annu. ACM Symp. Appl.
_Comput. (SAC), Mar. 2021, pp. 1921–1930._
[34] V. Heorhiadi, S. Chandrasekaran, M. K. Reiter, and V. Sekar, ‘‘Intentdriven composition of resource-management SDN applications,’’
in Proc. 14th Int. Conf. Emerg. Netw. Exp. Technol., Dec. 2018,
pp. 86–97.
[35] M. Riftadi and F. Kuipers, ‘‘P4I/O: Intent-based networking with
P4,’’ in Proc. IEEE Conf. Netw. Softwarization (NetSoft), Jun. 2019,
pp. 438–443.
[36] A. Singh, G. S. Aujla, and R. S. Bali, ‘‘Intent-based network for data dissemination in software-defined vehicular edge computing,’’ IEEE Trans.
_Intell. Transp. Syst., vol. 22, no. 8, pp. 5310–5318, Aug. 2021._
[37] T. Kollar, D. Berry, L. Stuart, K. Owczarzak, T. Chung, L. Mathias,
M. Kayser, B. Snow, and S. Matsoukas, ‘‘The Alexa meaning representation language,’’ in Proc. Conf. North Amer. Chapter Assoc. Com_put. Linguistics, Hum. Lang. Technol., (Industry Papers), vol. 3, 2018,_
pp. 177–184.
[38] A. de Barcelos Silva, M. M. Gomes, C. A. da Costa, R. da Rosa Righi,
J. L. V. Barbosa, G. Pessin, G. D. Doncker, and G. Federizzi, ‘‘Intelligent
personal assistants: A systematic literature review,’’ Expert Syst. Appl.,
vol. 147, Jun. 2020, Art. no. 113193.
[39] Z. Zhang, Z. Zhang, H. Chen, and Z. Zhang, ‘‘A joint learning framework
with BERT for spoken language understanding,’’ IEEE Access, vol. 7,
pp. 168849–168858, 2019.
[40] K. Abbas, T. A. Khan, M. Afaq, and W.-C. Song, ‘‘Network slice lifecycle management for 5G mobile networks: An intent-based networking
approach,’’ IEEE Access, vol. 9, pp. 80128–80146, 2021.
[41] G. Bianchi, M. Bonola, A. Capone, and C. Cascone, ‘‘Openstate: Programming platform-independent stateful openflow applications inside
the switch,’’ ACM SIGCOMM Comput. Commun. Rev., vol. 44, no. 2,
pp. 44–51, 2014.
[42] H. E. Z. Zhan and M. Song, ‘‘Table-to-dialog: Building dialog assistants to chat with people on behalf of you,’’ IEEE Access, vol. 8,
pp. 102313–102320, 2020.
[43] S. Chen and S. Yu, ‘‘WAIS: Word attention for joint intent detection and slot filling,’’ in Proc. AAAI Conf Artif. Intell., vol. 33, 2019,
pp. 9927–9928.
[44] A. Iliasov, E. Troubitsyna, L. Laibinis, and A. Romanovsky, ‘‘Patterns for
refinement automation,’’ in Formal Methods for Components and Objects
(Lecture Notes in Computer Science), S. de Boer Frank, M. Marcello,
S. H. Bonsangue, and M. Leuschel, Eds. Berlin, Germany: Springer, 2010,
pp. 70–88.
[45] R. Silva and M. Butler, ‘‘Shared event composition/decomposition
in event-B,’’ in _Formal_ _Methods_ _for_ _Components_ _and_ _Objects_
(Lecture Notes in Computer Science), B. K. Aichernig, S. de Boer
Frank, and M. M. Bonsangue, Eds. Berlin, Germany: Springer, 2012,
pp. 122–141.
[46] A. L. R. Madureira, F. R. C. Araujo, G. B. Araujo, and L. N. Sampaio,
‘‘NDN fabric: Where the software-defined networking meets the contentcentric model,’’ IEEE Trans. Netw. Service Manage., vol. 18, no. 1,
pp. 374–387, Mar. 2021.
[47] J. C. Fillmore and C. Baker, A Frames Approach to Semantic Analysis.
London, U.K.: Oxford Univ. Press, Dec. 2009.
[48] O. Sattarov, ‘‘Natural language processing with DeepPavlov library and
additional semantic features,’’ in Artificial Intelligence (Lecture Notes in
Computer Science), G. S. Osipov, A. I. Panov, and K. S. Yakovlev, Eds.
Cham, Switzerland: Springer, 2019, pp. 146–159.
[49] Abilene Core Topology | University IT. Accessed: Nov. 10, 2021. [Online].
Available: https://uit.stanford.edu/service/network/internet2/abilene
[50] S. Mastorakis, A. Afanasyev, I. Moiseenko, and L. Zhang, ‘‘ndnSIM
2: An updated NDN simulator for NS-3,’’ NDN, Shanghai, China,
Tech. Rep., NDN-0028, 2016.
[51] H. Harkous, M. Jarschel, M. He, R. Priest, and W. Kellerer, ‘‘Towards
understanding the performance of p4 programmable hardware,’’ in Proc.
_ACM/IEEE Symp. Architectures Netw. Commun. Syst. (ANCS), Sep. 2019,_
pp. 1–6.
[52] S. Ibanez, G. Brebner, N. McKeown, and N. Zilberman, ‘‘The
P4->NetFPGA workflow for line-rate packet processing,’’ in Proc.
_ACM/SIGDA Int. Symp. Field-Program. Gate Arrays, Feb. 2019,_
pp. 1–9.
[53] P. Li and Y. Luo, ‘‘P4GPU: Accelerate packet processing of a p4 program
with a CPU-GPU heterogeneous architecture,’’ in Proc. Symp. Architec_tures Netw. Commun. Syst., Mar. 2016, pp. 125–126._
[54] M. Saquetti, G. Bueno, W. Cordeiro, and J. R. Azambuja, ‘‘P4 VBox:
Enabling P4-based switch virtualization,’’ IEEE Commun. Lett., vol. 24,
no. 1, pp. 146–149, Jan. 2020.
OUASSIM KARRAKCHOU (Graduate Student
Member, IEEE) received the master’s degree in
telecommunications engineering from the Institut
National des Sciences Appliquées (INSA), Lyon,
France, in 2014. He is currently pursuing the Ph.D.
degree with the School of Electrical and Computer
Engineering, University of Ottawa. He has worked
as a Technical Consultant for a leading financial
software vendor in France, from 2014 to 2017.
His research interests include future internet architectures, information-centric networks, software-defined networks, and
cloud computing.
NANCY SAMAAN (Member, IEEE) received the
B.Sc. and M.Sc. degrees from the Department of
Computer Science, Alexandria University, Egypt,
and the Ph.D. degree in computer science from
the University of Ottawa, Canada, in 2007. She is
currently a Professor with the School of Electrical Engineering and Computer Science, University
of Ottawa. Her current research interests include
network resource management, wireless communications, quality-of-service issues, and autonomic
communications. In 2008, she received the Natural Sciences and Engineering
Research Council of Canada University Faculty Award.
AHMED KARMOUCH (Member, IEEE) received
the M.S. and Ph.D. degrees in computer science
from the University of Paul Sabatier, Toulouse,
France, in 1976 and 1979, respectively. He was an
Industrial Research Chair at the Ottawa Carleton
Research Institute and the Natural Sciences and
Engineering Research Council. He has been the
Director of the Ottawa Carleton Institute for Electrical and Computer Engineering. He is currently
a Professor at the School of Electrical Engineering
and Computer Science, University of Ottawa. He is involved in several
projects with industry and government laboratories in Canada and Europe.
His current research interests include mobile computing, autonomic overlay
networks, software defined networks, cloud computing, and network virtualization. He has organized several conferences and workshops, edited several
books, and served as a Guest Editor for IEEE Communications Magazine, Computer Communications, and others.
},
{
"paperId": "2d690b999a1132cf62cb644e4d3165650eb0d9eb",
"title": "Natural Language Processing with DeepPavlov Library and Additional Semantic Features"
},
{
"paperId": "67309b0970fcf6345dbd589dce4951608fb0076c",
"title": "Controllers in SDN: A Review Report"
},
{
"paperId": "867a3a0a667e68715645e510fada008683091ed1",
"title": "Online Caching and Cooperative Forwarding in Information Centric Networking"
},
{
"paperId": "6823c70696bb00b6cc0978c6a7d7459deded722f",
"title": "Reactive configuration updating for Intent-Based Networking"
},
{
"paperId": "36ddffd76c43a3106ec01b71a39da9a813bb5c07",
"title": "ndnSIM 2 : An updated NDN simulator for NS-3"
},
{
"paperId": null,
"title": "‘‘ α Route: Routing on names,’’"
},
{
"paperId": "7534387f5394e693617baaeae143c9435ff711f2",
"title": "NFD Developer's Guide"
},
{
"paperId": null,
"title": "He has worked as a Technical Consultant for a leading financial software editor in France"
},
{
"paperId": null,
"title": "Mapping Applications Intents to Programmable NDN Data-Planes via Event-B Machines"
},
{
"paperId": null,
"title": "Abilene Core Topology | University IT"
}
] | 21,398
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/00bb9e53447b7deb7a90315805e848fc70ac9748
|
[
"Computer Science"
] | 0.882559
|
A Cooperative Partial Snapshot Algorithm for Checkpoint-Rollback Recovery of Large-Scale and Dynamic Distributed Systems
|
00bb9e53447b7deb7a90315805e848fc70ac9748
|
2018 Sixth International Symposium on Computing and Networking Workshops (CANDARW)
|
[
{
"authorId": "29812443",
"name": "Yonghwan Kim"
},
{
"authorId": "2058113362",
"name": "Junya Nakamura"
},
{
"authorId": "1696725",
"name": "Y. Katayama"
},
{
"authorId": "1697557",
"name": "T. Masuzawa"
}
] |
{
"alternate_issns": null,
"alternate_names": null,
"alternate_urls": null,
"id": null,
"issn": null,
"name": null,
"type": null,
"url": null
}
|
A distributed system consisting of a huge number of computational entities is prone to faults because some nodes' faults cause the entire system to fail. Therefore, fault tolerance of distributed systems is one of the most important issues. Checkpoint-rollback recovery is a universal and representative technique for fault tolerance; it periodically records the whole system state (configuration) to non-volatile storage, and the system restores itself using the recorded configuration when the system fails. To record a configuration of a distributed system, a specific algorithm named a snapshot algorithm is required. However, many snapshot algorithms require coordination among all nodes in the system, thus frequent executions of snapshot algorithms require unacceptable communication cost especially if the systems are large-scale. As a sophisticated snapshot algorithm, a partial snapshot algorithm has been introduced that takes a partial snapshot (instead of a global snapshot). However if two or more partial snapshot algorithms are concurrently executed and their snapshot domains are overlapped, they should coordinate so that the partial snapshots (taken by the algorithms) should be consistent. In this paper, we propose a new efficient partial snapshot algorithm which uses leader election for the coordination but not frequently.
|
## A cooperative partial snapshot algorithm for checkpoint-rollback recovery of large-scale and dynamic distributed systems and experimental evaluations[∗]
#### Junya Nakamura[†][1], Yonghwan Kim[2], Yoshiaki Katayama[2], and Toshimitsu Masuzawa[3]
1Toyohashi University of Technology, Japan
2Nagoya Institute of Technology, Japan
3Osaka University, Japan
#### March 30, 2021
**Abstract**
A distributed system consisting of a huge number of computational entities is prone
to faults, because faults in a few nodes cause the entire system to fail. Consequently,
fault tolerance of distributed systems is a critical issue. Checkpoint-rollback recovery is a
universal and representative technique for fault tolerance; it periodically records the entire
system state (configuration) to non-volatile storage, and the system restores itself using the
recorded configuration when the system fails. To record a configuration of a distributed
system, a specific algorithm known as a snapshot algorithm is required. However, many
snapshot algorithms require coordination among all nodes in the system; thus, frequent
executions of snapshot algorithms require unacceptable communication cost, especially if
the systems are large. As a sophisticated snapshot algorithm, a partial snapshot algorithm
has been introduced that takes a partial snapshot (instead of a global snapshot). However,
if two or more partial snapshot algorithms are concurrently executed, and their snapshot
domains overlap, they should coordinate, so that the partial snapshots (taken by the
algorithms) are consistent. In this paper, we propose a new efficient partial snapshot
algorithm with the aim of reducing communication for the coordination. In a simulation,
we show that the proposed algorithm drastically outperforms the existing partial snapshot
algorithm, in terms of message and time complexity.
### 1 Introduction
A distributed system consists of computational entities (i.e., computers), usually called nodes,
which are connected to each other by (communication) links. Each node can communicate
with the other nodes by exchanging messages through these links. In large-scale distributed
systems, node faults are inevitable, and the faults of only a few nodes (probably a single node)
may cause the entire system to fail. Therefore, the fault tolerance of distributed systems is a
critical issue to ensure system dependability.
_Checkpoint-rollback recovery [3] is a universal and representative method for realizing the_
fault tolerance of distributed systems. Each node periodically (or when necessary) records its
∗A preliminary version of this paper appeared in the proceedings of the Sixth International Symposium on
Computing and Networking Workshops (CANDARW) [1].
This is the peer reviewed version of the following article [2], which has been published in final form at
https://doi.org/10.1002/cpe.5647. This article may be used for non-commercial purposes in accordance with
Wiley Terms and Conditions for Use of Self-Archived Versions.
†Corresponding author: junya[at]imc.tut.ac.jp
-----
local state in non-volatile storage, from which the node recovers its past non-faulty state when
faults occur. This recorded state is called a checkpoint and restoring the node state using
its checkpoint is called a rollback. However, in distributed systems, to guarantee consistency
after a rollback (i.e., a global state constructed from the checkpoints), nodes must cooperate
with each other to record their checkpoints. A configuration is inconsistent [4,5] if it contains
an orphan message, i.e., a message that is recorded as received but not as sent in the configuration. To resolve the
inconsistency, the receiver of the orphan message must restore an older checkpoint. This
may cause a domino effect [6] of rollbacks, an unbounded chain of local state restorations needed
to attain a consistent global state. A consistent global state is formed by mutually concurrent
local states of all nodes (meaning that no causal relationship exists between any two local states
in the global state) together with all in-transit messages. A snapshot algorithm records a
consistent global configuration, called a snapshot, by ensuring that all nodes record their
checkpoints cooperatively. Checkpoint-rollback recovery inherently contains a snapshot
algorithm to record the checkpoints of the nodes, forming a consistent global state, and its
efficiency strongly depends on that of the snapshot algorithm.
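To make the orphan-message condition concrete, the following minimal sketch is our illustration, not part of the paper; the cut encoding and the name find_orphans are hypothetical. It flags a recorded cut as inconsistent when some message is recorded as received although its send event lies outside the cut:

```python
# Minimal sketch: detecting orphan messages in a recorded cut.
# A cut assigns each node a checkpoint index; events before the index
# are "inside" the cut. Names and the event encoding are illustrative.

def find_orphans(cut, messages):
    """Return messages received inside the cut but sent outside it.

    cut:      dict node -> checkpoint index (events < index are recorded)
    messages: list of (sender, send_idx, receiver, recv_idx) tuples
    """
    orphans = []
    for sender, send_idx, receiver, recv_idx in messages:
        sent_inside = send_idx < cut[sender]
        received_inside = recv_idx < cut[receiver]
        if received_inside and not sent_inside:
            orphans.append((sender, send_idx, receiver, recv_idx))
    return orphans

# Example: p1 sends at its 3rd event; p2 receives it as its 1st event.
# If each checkpoint covers only the first 2 events, the message is
# recorded as received but not as sent, so the cut is inconsistent.
cut = {"p1": 2, "p2": 2}
messages = [("p1", 2, "p2", 0)]
assert find_orphans(cut, messages) == [("p1", 2, "p2", 0)]
```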
Many sophisticated snapshot algorithms have been proposed [7–11]. As the scale (the
number of nodes) of a distributed system increases, the efficiency of the snapshot algorithm
becomes more important. Especially in a large-scale distributed system, frequent captures of
global snapshots incur an unacceptable communication cost. To resolve the problem of global
snapshot algorithms, partial snapshot algorithms have been proposed, which take a snapshot of
some portion of a distributed system, rather than the entire system. Most snapshot algorithms
(whether global or partial) cannot deal with dynamic distributed systems where nodes can
freely join and leave the system at any time.
In this paper, we propose a new cooperative partial snapshot algorithm which (a) takes
a partial snapshot of the communication-related subsystem (called a snapshot group), so its
message complexity does not depend on the total number of nodes in the system; (b) allows
concurrent initiations of the algorithm by two or more nodes, and takes a consistent snapshot
using elaborate coordination among the nodes at a low communication cost; and (c) is applicable to dynamic distributed systems. Our simulation results show that the proposed algorithm
drastically decreases the message complexity of the coordination compared with
previous work.
The rest of this paper is organized as follows: Section 2 introduces related work. Section 3
presents the system model and details of a previous work on which our algorithm is based. The
proposed algorithm designed to take concurrent partial snapshots and detect the termination is
described in Section 4. Section 5 discusses the correctness of the algorithm. The performance
of the algorithm is experimentally evaluated in comparison with that of an existing algorithm
in Section 6. Finally, Section 7 concludes the paper.
### 2 Related Work
Chandy and Lamport [12] proposed a distributed snapshot algorithm that takes a global snapshot of an entire distributed system. This global snapshot algorithm ensures its correctness
when a distributed system is static: No node joins or leaves, and no (communication) link
is added or removed. Moreover, the algorithm assumes that all links guarantee the First in
First out (FIFO) property, and each node knows its neighbor nodes. Chandy and Lamport’s
snapshot algorithm uses a special message named Marker, and each node can determine the
timing to record its own local state using the Marker message. Some snapshot algorithms for
distributed systems with non-FIFO links have also been proposed [13]. These global snapshot
algorithms are easy to implement and take a snapshot of the distributed system. However,
the algorithms require O(m) messages (where m is the number of links), because every pair
of neighboring nodes has to exchange Marker messages. Therefore, these algorithms are not
practically applicable to large-scale distributed systems which consist of a huge number of
nodes.
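As a rough, sequential illustration of the Marker rule described above (our own sketch with assumed names; real executions deliver Markers asynchronously and also record in-transit channel messages, which this sketch omits), each node records its state on the first Marker only and relays a Marker over every link, so the message count grows with the number of links:

```python
# Sequential sketch of marker propagation (illustrative; delivery is
# really asynchronous, and recording of in-transit channel messages
# is omitted here).
from collections import deque

def take_global_snapshot(links, states, initiator):
    """links: dict node -> list of neighbors (FIFO channels assumed).
    states: dict node -> current local state. Returns (snapshot, count)."""
    snapshot, markers_sent = {}, 0
    queue = deque([initiator])
    while queue:
        node = queue.popleft()
        if node in snapshot:
            continue                   # only the first Marker records
        snapshot[node] = states[node]  # record the local state
        for neighbor in links[node]:   # relay a Marker on every link
            markers_sent += 1
            queue.append(neighbor)
    return snapshot, markers_sent      # cost grows with the link count

links = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b"]}
snap, sent = take_global_snapshot(links, {"a": 1, "b": 2, "c": 3}, "a")
assert snap == {"a": 1, "b": 2, "c": 3} and sent == 6
```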
Some researchers have tried to reduce the number of messages of snapshot algorithms
[14–16], e.g., to O(n log n), but the complexity still depends on n, the number of nodes in the entire
-----
system. This implies that the scalability of snapshot algorithms remains critical. Not only the
scalability problem but also applicability to dynamic distributed systems (where nodes can join
and leave the distributed system at any time) are important for global snapshot algorithms.
An alternative approach to scalable snapshot algorithms called communication-induced
checkpointing has been studied [9,17–19]. In this approach, not all nodes are requested to record
their local states (as their checkpoints), but some are, depending on the communication pattern.
For distributed applications mainly based on local coordination among nodes, communication-induced checkpoint algorithms can reduce the communication and time required for recording
the nodes’ checkpoints. However, these algorithms cannot guarantee that the latest checkpoints
of the nodes form a consistent global state. This forces each node to keep multiple checkpoints
in the node’s non-volatile storage, and requires an appropriate method to find a set of node
checkpoints that forms a consistent global state. Thus, from a practical viewpoint, these
snapshot algorithms cannot solve the scalability problem.
Moriya and Araragi [10,20] introduced a partial snapshot [1] algorithm, which takes a snapshot of the subsystem consisting only of communication-related nodes, named Sub-SnapShot
(SSS) algorithm. They also proved that the entire system can be restored from faults, using
the latest checkpoint of each node. A communication-related subsystem can be transitively
determined by the communication-relation, which is dynamically created by (application layer)
communications (exchanging messages) among the nodes. In practical distributed systems, the
number of nodes in a communication-related subsystem is expected to be much smaller than
the total number of nodes in the distributed system. This implies that the number of messages
required for SSS algorithm does not depend on the total number of nodes. Therefore, SSS algorithm can create checkpoints efficiently, making checkpoint-rollback
recovery applicable to large-scale distributed systems. However, SSS algorithm cannot guarantee the consistency of the (combined) partial snapshot if two or more nodes concurrently
initiate SSS algorithm instances, and their snapshot groups (communication-related subsystems) overlap.
Spezialetti [7] presented snapshot algorithms to allow concurrent initiation of two or more
snapshot algorithms, and an improved variant was proposed by Prakash [8]. However, their
algorithms still target the creation of a global snapshot, and their algorithms are not applicable to dynamic distributed systems. SSS algorithm is applicable to dynamic distributed
systems, where nodes can join and leave the system freely, because the algorithm uses only the
communication-relation, which changes dynamically, and requires no a priori knowledge about
the topology of the entire system.
Another snapshot algorithm for dynamic distributed systems was introduced by Koo and
Toueg [3]. However, this communication-induced checkpoint algorithm has to suspend executions of all applications while taking a snapshot, to guarantee the snapshot’s consistency. In
contrast, SSS algorithm allows execution of any applications while a snapshot is taken, with
some elaborate operations based on the communication-relation.
Kim et al., proposed a new partial snapshot algorithm, named Concurrent Sub-Snapshot
(CSS) algorithm [11, 21], based on SSS algorithm. They called the problematic situation
caused by the overlap of the subsystems a collision and presented an algorithm that can
resolve collisions by combining colliding SSS algorithm instances. In CSS algorithm, to resolve
the collision, leader election among the initiating nodes of the collided subsystems is executed,
and only one leader node becomes a coordinator. The coordinator and the other initiators
are called the main-initiator and sub-initiators, respectively. This leader election is executed
repeatedly, to elect a new coordinator when a new collision occurs. All sub-initiators forward
all information collected about the subsystems to the main-initiator, so that all the snapshot
algorithm instances are coordinated to behave as a single snapshot algorithm which is initiated
by the main-initiator.
CSS algorithm successfully realizes an efficient solution for the collision problem, by consistently combining two or more concurrent SSS algorithm executions. However, if a large number
of nodes concurrently initiate CSS algorithm instances, and the nodes collide with each other
1In [7], they called a portion of a global snapshot a partial snapshot; however, the notion of a partial snapshot
is different from that in our algorithm, SSS algorithm [10, 20], and CSS algorithm [11, 21]. In this paper, a
partial snapshot is not a part of a global snapshot, but a snapshot of a subsystem.
-----
many times, leader elections are executed concurrently and repeatedly, and an enormous number of messages are forwarded to the main-initiator. This overhead for combining snapshot
groups and forwarding snapshot information for coordination is the most critical drawback of
CSS algorithm.
### 3 Preliminaries
#### 3.1 System model
Here, we describe the system model assumed in this paper. The model definition follows
that of SSS algorithm [10,20]. We consider distributed systems consisting of nodes that share
no common (shared) memory or storage. Nodes in the system can communicate with each
other asynchronously, by exchanging messages (known as the message-passing model). We
assume that each node can send messages to any other node if the node knows the destination
node’s ID: It can be realized if its underlying network supports appropriate multi-hop routing,
even though the network is not completely connected. Each node is a state machine and has
a unique identifier (ID) drawn from a totally ordered set. We assume that a large but finite
number of nodes can exist in the system.
We consider dynamic distributed systems, where nodes can frequently join and leave the
distributed system. This implies that the network topology of the system can change, and each
node never recognizes the entire system’s configurations in real time. In our assumption, each
node can join or leave the system freely, but to guarantee the consistency of the checkpoints,
the node can leave the system only after taking a snapshot. This implies that to leave, the
node must initiate a snapshot algorithm. If a message is sent to a node that has already left
the system, the system informs the sender of the transmission failure. On the other hand, a
new coming node can join the system anytime.
Every (communication) link between nodes is reliable, which ensures that all the messages
sent through the same link in the same direction are received, each exactly once, in the order
they were sent (FIFO). A message is received only if it was actually sent. Because we assume an
asynchronous distributed system, all messages are received in finite time (as long as the receiver
exists), but with unpredictable delay.
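A minimal model of the assumed links may help fix ideas; the sketch below is our illustration, with the hypothetical class name FifoLink. It captures reliable FIFO delivery, the property that the Marker-based arguments in the following sections rely on:

```python
# Minimal model of the assumed communication link (illustrative):
# reliable delivery, exactly once, in sending order (FIFO).
from collections import deque

class FifoLink:
    def __init__(self):
        self._queue = deque()

    def send(self, msg):
        self._queue.append(msg)    # never lost, duplicated, or reordered

    def deliver(self):
        # Delivery order equals sending order; None if nothing in transit.
        return self._queue.popleft() if self._queue else None

link = FifoLink()
link.send("m1"); link.send("Marker")
assert link.deliver() == "m1" and link.deliver() == "Marker"
```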
#### 3.2 SSS algorithm
In this subsection, we briefly introduce SSS algorithm [10, 20] which takes a partial snapshot
of a subsystem consisting of nodes communication-related to a single initiator. This implies
that SSS algorithm efficiently takes a partial snapshot; that is, the algorithm’s message and
time complexities do not depend on the total number of nodes in the distributed system. SSS
algorithm is also applicable to dynamic distributed systems, where nodes join and leave freely,
because it does not require knowledge of the number of nodes or the topology of the system,
but requires only a dynamically changing communication-relation among nodes.
In SSS algorithm, every node records its dependency set (DS ), which consists of the IDs
of nodes with which it has communicated (sent or received messages). SSS algorithm assumes
that only a single node (called an initiator ) can initiate the algorithm, and to determine the
subsystem, an initiator traces the communication-relation as follows: When a node pi initiates
SSS algorithm, the node records its current local state (as its checkpoint) and sends Markers
with its ID to all nodes in its dependency set DSi. When a node pj receives a Marker message
with the ID of pi for the first time, the node also records its current local state. After that,
_pj forwards the Markers with the ID of pi to all nodes in its dependency set DSj and sends_
_DSj to the initiator pi. The initiator can trace the communication-relation by referring to the_
dependency sets received from other nodes: The initiator maintains the union of the received
dependency sets, including its own dependency set, and the set of the senders of the dependency
sets. When these two sets become the same, the nodes in the sets constitute the subsystem
communication-related to the initiator. The initiator informs each node pj in the determined
subsystem of the node set of the subsystem; pj should receive Markers from every node in the
set.
-----
Figure 1: An (overlay) initiator network consisting of initiators
Recording in-transit messages in SSS algorithm is basically the same as in traditional distributed snapshot algorithms (Chandy and Lamport’s manner). Each node joining the partial
snapshot algorithm records messages which are received before receipt of the Marker in each
link.
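The initiator-side bookkeeping of SSS algorithm can be summarized in a few lines. The following sketch is our illustration under assumed names (SssInitiator, on_my_ds): the initiator maintains the union of the received dependency sets and the set of their senders, and the snapshot group is determined once the union is covered by the senders:

```python
# Illustrative sketch of the SSS initiator's bookkeeping. The initiator
# collects (sender, DS) reports and determines the snapshot group once
# every node appearing in some dependency set has itself reported.

class SssInitiator:
    def __init__(self, my_id, my_ds):
        self.senders = {my_id}          # nodes whose DS has been received
        self.union_ds = set(my_ds) | {my_id}

    def on_my_ds(self, sender, ds):
        self.senders.add(sender)
        self.union_ds |= set(ds)

    def group_determined(self):
        # All communication-related nodes have reported their DS.
        return self.union_ds <= self.senders

init = SssInitiator("p0", my_ds={"p3", "p7"})
init.on_my_ds("p3", {"p0", "p8"})
init.on_my_ds("p7", {"p0", "p8"})
assert not init.group_determined()      # p8 has not reported yet
init.on_my_ds("p8", {"p3", "p7"})
assert init.group_determined()
```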
### 4 CPS Algorithm: The Proposed Algorithm
#### 4.1 Overview
When two or more nodes concurrently initiate SSS algorithm instances, the subsystems (called
snapshot groups) may overlap, which is called a collision. CSS algorithm has been proposed
with the aim of resolving this collision. This algorithm combines the collided snapshot groups,
using leader election repeatedly. This allows concurrent initiations by two or more initiators;
however, it causes a huge amount of communication cost for leader elections, if collisions
occur frequently. Moreover, to guarantee the consistency of the combined partial snapshot,
every initiator must forward all information, e.g., the node list, the dependency set, and the
collision-related information, to the leader. This forwarding causes additional communication
cost.
To reduce the communication cost, we propose a new partial snapshot algorithm, CPS
_algorithm, which stands for Concurrent Partial Snapshot. This algorithm does not execute_
leader election to resolve a collision every time a collision is detected. Instead, CPS algorithm
creates a virtual link between the two initiators of the two collided groups, which is realized
by making each initiator just store the other’s ID as its neighbor’s. These links construct the
overlay network which consists only of initiators. We called this overlay network an initiator
_network, and no information is forwarded among initiators in this network. Figure 1 illustrates_
an example of an initiator network for a case where three snapshot groups collide with each
other.
CPS algorithm consists of two phases: Concurrent Partial Snapshot Phase (Phase 1) and
Termination Detection Phase (Phase 2). In Phase 1, an initiator sends Marker messages to its
communication-related nodes to determine its snapshot group. If the snapshot group collides
with another group, the initiator and the collided initiator create a virtual link between them
for their initiator network. When the snapshot group is determined, the initiator of the group
proceeds to Phase 2 to guarantee the consistency of the checkpoints in all (overlapped) snapshot
groups. In Phase 2, to achieve the guarantee, each initiator communicates with each other in
the initiator network to check all the initiators have already determined their snapshot groups.
After this check is completed, an initiator informs each node in its snapshot group of the
termination condition and returns to Phase 1 to finish the algorithm. Note that all
nodes in the snapshot groups execute Phase 1 on the real network, and only initiators execute
Phase 2 on the initiator network that is constructed in Phase 1.
In this section, we describe the proposed CPS algorithm. First, Section 4.2 explains how
the proposed algorithm handles events of sending/receiving an application message. Then,
Section 4.3 and Section 4.4 provide details of the two phases of the algorithm, i.e., Concurrent
Partial Snapshot Phase and Termination Detection Phase.
-----
Figure 2: Orphan message mij
**Algorithm 1 Basic actions of Phase 1**
1: procedure Before pi sends a message to pj
2: **if** init ≠ null ∧ pj ∉ pDS ∪ DS ∧ InPhase2 = false **then**
3: // Send Marker before sending a message
4: Send ⟨Marker, init⟩ to pj
5: **end if**
6: DS ← DS ∪ {pj} // Add pj to its DS
7: end procedure
8: procedure Before pi receives a message from pj
9: DS ← DS ∪ {pj} // Add pj to its DS
10: **if** init ≠ null ∧ pj ∉ RcvMk **then**
11: Add (pj, message) to MsgQ
12: **end if**
13: end procedure
#### 4.2 Basic operation
To take a snapshot safely, CPS algorithm must handle events of sending or receiving an application message (as other snapshot algorithms do). Algorithm 1 shows the operations that
each node executes before sending (lines 1–7) or receiving (lines 8–13) an application message.
When node pi currently executing CPS algorithm (initi ≠ null) sends a message to node pj
which is not in DSi, pi has to send Marker to pj before sending the message. Variable
_pDS stores DS when a node receives the first Marker to restore the content of DS when a_
snapshot algorithm is canceled.
Figure 2 depicts why this operation is necessary: Let pk be the node which is communication-related to pi and pj (pi and pj are not communication-related with each other). When each
node receives Marker for the first time, the node broadcasts Marker to all the nodes in its
_DS. Therefore, pi already sent Marker to pk, and pk sends Marker to pj when these nodes_
receive the Markers. However, if pi sends a message mij to pj without sending Marker to
_pj, the message might be received before the Marker from pk, and it makes mij an orphan_
message. Let us consider another case in Fig. 2 where pj sends mji to pi before pj stores its
checkpoint. When pi receives mji, pi adds mji into MsgQ as defined in Algorithm 1 because
_pi is executing CPS algorithm and has not received a Marker message from pj. After finishing_
CPS algorithm, mji is stored as one of the in-transit messages with the checkpoint. Therefore,
_mji never becomes an orphan message._
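For readers who prefer an executable form, here is a direct Python rendering of Algorithm 1's two hooks (a sketch; the send callable and the class name CpsNode are our own abstractions for message transport):

```python
# Python rendering of Algorithm 1's hooks (a sketch; the send callable
# abstracts message transport and is our own addition).

class CpsNode:
    def __init__(self, send):
        self.send = send          # callable: (dest, msg) -> None
        self.init = None          # current snapshot initiator, or None
        self.DS, self.pDS = set(), set()
        self.RcvMk = set()        # nodes whose Marker was received
        self.MsgQ = []            # candidate in-transit messages
        self.in_phase2 = False

    def before_send(self, dest):
        # Lines 1-7: send a Marker first when messaging a node outside
        # pDS ∪ DS during a snapshot, so the message cannot become an
        # orphan; then record dest as communication-related.
        if self.init is not None and not self.in_phase2 \
                and dest not in self.pDS | self.DS:
            self.send(dest, ("Marker", self.init))
        self.DS.add(dest)

    def before_receive(self, sender, message):
        # Lines 8-13: record messages arriving before the sender's
        # Marker; they are candidates for the in-transit message log.
        self.DS.add(sender)
        if self.init is not None and sender not in self.RcvMk:
            self.MsgQ.append((sender, message))
```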
#### 4.3 Phase 1: Concurrent Partial Snapshot Phase
This phase is basically the same as that in SSS algorithm, except for the collision-handling
process. Each node can initiate a snapshot algorithm at any time, by sending a special message
_Marker to the node’s communication-related nodes, and the other nodes record their local_
states when they receive Marker for the first time. An initiator of CPS algorithm traces the
communication-relation to determine its partial snapshot group.
In Phase 1, each node pi maintains the following variables:
- initi: Initiator’s ID. An initiator sets this variable as its own ID. A normal node (not
initiator) sets this variable to the initiator ID of the first Marker message it receives.
Initially null.
-----
- DSi: A set of the IDs of the (directly) communication-related nodes. This set is updated
when pi sends/receives an application message as described in Section 4.2.
- pDSi: A set variable that stores the DSi temporarily. Initially ∅.
- RcvMki: A set of the IDs of the nodes from which pi (already) received Marker messages.
Initially ∅.
- MkListi: A set of the IDs of the nodes from which pi has to receive Marker messages
to terminate the algorithm. Initially ∅.
- fini: A boolean variable that denotes whether the partial snapshot group is determined
or not. Initially false. An initiator updates this variable to true when Phase 1 terminates,
while a non-initiator node updates this when the node receives a Fin message.
- MsgQi: A message queue that stores a sequence of the messages for checkpoints, as the
pairs of the ID of the sender node and the message. Initially null.
- CollidedNodesi: A set of the IDs of the nodes from which pi received collision Marker
messages. Initially ∅.
- MkFromi (Initiator only): A set of the IDs of the nodes that send Marker to its DS.
Initially ∅.
- MkToi (Initiator only): The union set of the DSes of the nodes in MkFrom. Initially ∅.
- DSInfoi (Initiator only): A set of the pairs of a node ID and its DS. Initially ∅.
- Waiti (Initiator only): A set of the IDs of the nodes from which pi is waiting for a reply
to create a virtual link of the initiator network. Initially ∅.
- Ni (Initiator only): A set of the neighbor nodes’ IDs in the initiator network. Initially ∅.
We use the following message types in Phase 1. We denote the algorithm messages by
_⟨MessageType, arg1, arg2, . . .⟩. Note that some messages have no argument. We assume that_
every message includes the sender ID and the snapshot instance ID, which is a pair of an initiator ID and a sequence number of the snapshot instances the initiator invoked, to distinguish
snapshot algorithm instances that are or were executed.
- ⟨Marker, init⟩: A message which controls the timing of the recording of the local state.
Parameter init denotes the initiator’s ID.
- ⟨MyDS, DS⟩: A message to send its own DS (all nodes communication-related to this
node) to its initiator.
- ⟨Out⟩: A message to cancel the current snapshot algorithm. When a node that has been
an initiator receives a MyDS message of the node’s previous instance, the node sends
this message to cancel the sender’s snapshot algorithm instance.
- ⟨Fin, List⟩: A message to inform that its partial snapshot group is determined. List
consists of the IDs of the nodes from which the node has to receive Marker messages to
terminate the algorithm.
- ⟨NewInit, p, Init⟩: A message to inform that a different initiator has been detected. Init
denotes the ID of the detected initiator, and p denotes the ID of the node which sends
_Marker with Init._
- ⟨Link, p, q⟩: A message sent by an initiator to another initiator to confirm whether a link
(of the overlay network) can be created between the two initiators or not. p denotes the
ID of the node which received a collided Marker, and q denotes the ID of the sender
node.
- ⟨Ack, p, q⟩: A reply message for a ⟨Link, p, q⟩ message when the link can be created.
- ⟨Deny, p, q⟩: A reply message for a ⟨Link, p, q⟩ message when the link cannot be created.
- ⟨Accept, p, Init⟩: A reply message for a ⟨NewInit, p, Init⟩ message when the link between
its initiator and Init is successfully created.
-----
Figure 3: Partial snapshot group example
𝑝"
𝑝" 𝑝& 𝑚"# <𝑀𝑎𝑟𝑘𝑒𝑟, 𝑝"> collision
𝑝#
𝑝% <𝑀𝑎𝑟𝑘𝑒𝑟, 𝑝&>
node𝑝# marker𝑝% 𝑝& 𝑚%& <𝑀𝑎𝑟𝑘𝑒𝑟, 𝑝&>
communication
relation checkpoint message marker
Figure 4: Collision assumption of Algorithm 3
Algorithm 2 presents the pseudo-code of Phase 1. By this algorithm, each node stores, as
a checkpoint, a local application state in line 11 and in-transit messages in line 66.
We briefly present how an initiator determines its partial snapshot group when no collision
occurs. Figure 3 describes an example of a distributed system consisting of 10 nodes, p0 to
_p9, and some pairs are communication-related: For example, p7 has communication-relations_
with p0, p6, and p8; i.e., DS7 = {p0, p6, p8}. In this example, p0 initiates CPS algorithm. p0
initializes all variables, and records its local state; then, p0 sends ⟨Marker, p0⟩ to all nodes in
_DS0 = {p2, p3, p6, p7} (lines 6–13). When p3 receives the first Marker from p0, p3 records its_
local state, and sets p0 as its initiator (variable init3) (lines 6–11). Then, p3 sends its DS3 to
its initiator p0 using the ⟨MyDS, DS3⟩ message (line 12). After that, p3 sends ⟨Marker, p0⟩ to
all nodes in DS3 = {p0, p8} (line 13). Note that node p8, which is not directly communication-related to p0, also receives ⟨Marker, p0⟩ from p3 (or p7) and records its local state. If the initiator
_p0 receives a ⟨MyDS, DSi⟩_ message from pi, it adds the ID pi and DSi to MkFrom0 and MkTo0
respectively, and inserts (i, DSi) into DSInfo0 (lines 33–35). When MkTo0 ⊆ MkFrom0[2]
holds, this means that all nodes which are communication-related to the initiator already
received the Marker. Thus, the initiator determines its partial snapshot group as the nodes in
_MkFrom0, and proceeds to Phase 2 (lines 57–59), named the Termination Detection Phase,_
which is presented in the next subsection. When Phase 2 finishes, the initiator sends the
_⟨Fin, MkListi⟩_ message to each pi ∈ _MkFrom0 (lines 43–46 of Algorithm 4), where MkListi_
is the set of the IDs from which pi has to receive Markers. If node pi has received Marker
messages from all the nodes in MkListi, pi terminates the algorithm (lines 62–72).
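The per-node termination test used at the end of this walkthrough (lines 62–72) is simple enough to state in code; the sketch below is illustrative, with hypothetical names:

```python
# Sketch of a non-initiator's termination test in Phase 1 (illustrative
# names): after receiving <Fin, MkList>, a node terminates once Markers
# from every node in MkList have arrived.

class PhaseOneNode:
    def __init__(self):
        self.RcvMk = set()      # senders of received Markers
        self.MkList = None      # set once the Fin message arrives

    def on_marker(self, sender):
        self.RcvMk.add(sender)
        return self.done()

    def on_fin(self, mk_list):
        self.MkList = set(mk_list)
        return self.done()

    def done(self):
        return self.MkList is not None and self.MkList <= self.RcvMk

n = PhaseOneNode()
n.on_marker("p0")
assert not n.on_fin({"p0", "p8"})   # still waiting for p8's Marker
assert n.on_marker("p8")            # now the node may terminate
```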
Algorithm 3 presents the pseudo-code of the collision-handling procedures in Phase 1. In the
algorithm, we change some notations of node IDs for ease of understanding. Our assumption
is depicted in Figure 4. We assume that a collision occurs between two snapshot groups, and
let px and py be the nodes executing the snapshot algorithm by receiving Marker from the
initiators pa and pb, respectively. Node px receives ⟨Marker, pb⟩ from py, and px informs its
initiator pa of a collision by sending a NewInit message, because initx ≠ pb.
Figure 5 illustrates an example of the message flow when a collision occurs. In the example,
we assume that two initiators, p0 and p6, concurrently initiate CPS algorithm instances, and
_p4 detects a collision as follows. Node p4 receives ⟨Marker, p0⟩_ from p3, and ⟨Marker, p6⟩ from
_p5 in this order. Because p4 receives Marker with initiator p6 different from its initiator p0, p4_
2If DS0 remains unchanged, MkTo0 = MkFrom0 holds. However, each node pi can send a message to a
node not in DSi (which adds the node to DSi) even while CPS algorithm is being executed. This may cause
MkTo0 ⊂ MkFrom0; refer to Algorithm 1 for details.
-----
Figure 5: Collision-handling example in CPS algorithm
sends ⟨NewInit, p5, p6⟩ to its initiator p0 (line 25 of Algorithm 2). When p0 receives the NewInit,
if p0 has not determined the partial snapshot group yet, p0 sends a ⟨Link, p4, p5⟩ message to
opponent initiator p6 (line 6). As a reply to the Link message, p6 sends a ⟨Ack, p4, p5⟩ message
(line 26), if p6 also has not determined its partial snapshot group yet. Otherwise, p6 sends a
_⟨Deny, p4, p5⟩_ message to p0[3] (line 31). Finally, p0 sends ⟨Accept, p5, p6⟩ to p4 which detected the
collision (line 60), and p4 sends ⟨Marker, p6⟩ to p5 (line 50). Note that this Marker is necessary
to decide which messages should be recorded in the checkpoint in p5. In this example, we also
notice the following points: (1) In Figure 5, p5 may also detect a collision by ⟨Marker, p0⟩ from
_p4. This causes additional message communications between p0 and p6; e.g., p6 also sends a_
_Link message to p0. (2) Even if there is no communication-relation between p4 and p5, when_
two initiators invoke CPS algorithm instances, p4 or p5 can send Marker in advance to send
a message (refer to Algorithm 1). In this case, a virtual link between p0 and p6 may not be
created, because either of them may have already determined their partial snapshot groups
(note that p5 and p4 are not included in DS4 and DS5, respectively).
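The handshake walked through above can be condensed into a few event handlers. The sketch below is our illustration (message transport is a send(dest, msg) callable, and the duplicate-link guard is our simplification); the authoritative treatment is Algorithms 2 and 3:

```python
# Sketch of the collision handshake between two initiators (illustrative
# structure; names mirror the paper's messages).

class Initiator:
    def __init__(self, me, send):
        self.me, self.send = me, send
        self.fin = False          # snapshot group already determined?
        self.N = set()            # neighbors in the initiator network
        self.Wait = set()         # initiators we await a Link reply from

    def on_new_init(self, detector, p, other_init):
        # A node ('detector') in our group received <Marker, other_init>
        # from node p and reported the collision to us.
        if not self.fin and other_init not in self.N | self.Wait:
            self.Wait.add(other_init)
            self.send(other_init, ("Link", detector, p, self.me))

    def on_link(self, detector, p, other_init):
        # Agree to a virtual link only while our group is undetermined.
        if not self.fin:
            self.N.add(other_init)
            self.send(other_init, ("Ack", detector, p, self.me))
        else:
            self.send(other_init, ("Deny", detector, p, self.me))

    def on_ack(self, detector, p, other_init):
        self.Wait.discard(other_init)
        self.N.add(other_init)                     # virtual link created
        self.send(detector, ("Accept", p, other_init))

    def on_deny(self, detector, p, other_init):
        self.Wait.discard(other_init)   # other side already finished
```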
#### 4.4 Phase 2: Termination Detection Phase
Only the initiators, which determine their partial snapshot groups, execute Phase 2. Note that
Phase 2 is executed on the initiator network that was constructed in Phase 1. The goal of this
phase is to confirm that all initiators in the initiator network have already determined their
snapshot groups[4]. In other words, all initiators in the initiator network completed Phase 1,
and are executing Phase 2. In this phase, the proposed algorithm elects one initiator as the
leader, and constructs a breadth-first spanning tree rooted at the leader. From the leaves to the
root, each initiator notifies its parent initiator in the tree that it is in Phase 2 (convergecast),
and when the convergecast terminates, the leader broadcasts the termination of Phase 2 to all
other initiators (broadcast).
In Phase 2, each initiator pi maintains the following variables:
- rIDi: The ID of the root initiator the initiator currently knows. Initially, null.
- disti: The distance to the root initiator rIDi. Initially, ∞.
- pIDi: The ID of the parent initiator in the (spanning) tree rooted at the root initiator
_rIDi. Initially, null._
- Childi: A set of the IDs of the child initiators in the (spanning) tree. Initially, ∅.
- LTi: A set of the IDs of the initiators from which the initiator received LocalTerm messages. Initially, ∅.
3In the Deny case, p6 has determined its snapshot group and has sent Fin messages to the nodes in the group
including p5. Node p5 eventually receives the Fin message and terminates the snapshot algorithm. While p4
cannot receive any response for the NewInit message p4 sent, the node also eventually receives its Fin message
from p0. If there exists an application message m54 sent from p5 to p4, it must have been sent after p5 took its
checkpoint for p6's snapshot (otherwise, the two snapshot groups of p0 and p6 would be merged). Node p4 also
receives m54 after taking its checkpoint for p0's snapshot: if p4 received the message before its checkpoint,
then, since p5 sent a Marker to p4 before m54, p4 would have joined p6's snapshot group. The application
message m54 is thus sent and received after the checkpoints of p5 and p4, so it never becomes an orphan. The
same argument applies to an application message sent in the opposite direction.
4If an initiator has not experienced any collision in Phase 1, the initiator terminates Phase 2 immediately
because the initiator does not need to wait for other snapshot groups.
-----
- CKi: A set of the IDs of the initiators from which the initiator received Check messages. Initially, ∅.
- InPhase2i: A boolean variable. This is true if pi is in Phase 2; otherwise, false.
In addition, the following Phase 1 variables are also used. Note that these variables are never
updated in Phase 2.
- MkFromi
- DSInfoi
The following messages are used in Phase 2.
- ⟨Check, rID, dist, pID⟩: A message to inform its neighbors of the smallest ID that the
initiator currently knows. rID is the initiator that has the smallest ID (the initiator
currently knows), dist is the distance to rID, and pID is the parent initiator’s ID to
_rID._
- ⟨LocalTerm⟩: A message for a convergecast.
- ⟨GlobalTerm⟩: The leader initiator (which has the smallest ID) broadcasts this message
to all other initiators when a convergecast is successfully finished.
Algorithm 4 presents the pseudo-code of the proposed algorithm of Phase 2. In Phase 2,
each initiator repeatedly broadcasts a Check message to its neighbor initiators, to find the
leader. The Check message includes the smallest ID (denoted by rID) that the initiator ever
knows and the distance to it. When an initiator starts Phase 2, the initiator sends a Check
message containing its own ID as the minimum ID rID to all its neighbor initiators (line 5). When
the initiator receives Check messages, it updates its root, its distance, and its parent initiator
(line 15), if it finds a smaller root ID, or a shorter distance to the smallest ID it knows. If there
is some update on these variables, it sends the Check message with the updated information to
all its neighbor initiators again (line 16). By repeating these broadcasts and updates, initiators
construct a breadth-first spanning tree rooted at the initiator with the smallest ID.
This naive technique is widely used to find the leader in the distributed system. However,
this technique is hardly applicable when the diameter of the network is unknown, because
the broadcast of the Check message has to be repeated as many times as the diameter of
the network. To resolve this difficulty, in the proposed algorithm, we allow an initiator pi
to stop broadcasting Check and start convergecast toward the leader (the initiator currently
knows), when the following conditions are satisfied (line 25): (1) an initiator pi receives Check
messages from its all neighbor initiators, and (2) there are no child initiators in the neighbors.
This implies that initiator pi is a leaf initiator of the tree rooted at the leader. Even after
an initiator begins the convergecast, the initiator stops it when the initiator receives a Check
message from any neighbor initiator, and the initiator restarts the convergecast when the
conditions above are satisfied.
The convergecast uses a LocalTerm message that is repeatedly sent from a leaf initiator
to the root initiator (the leader) through the tree. When the initiator receives a LocalTerm
message, the initiator adds the sender's ID to its set variable LT (line 29), which stores the IDs of the initiators from which the initiator received LocalTerm messages.
Therefore, the parent initiator (which has one or more child initiators) starts the convergecast
when the initiator receives LocalTerm messages from all its child initiators (line 25). The
convergecast is terminated when the leader receives LocalTerm messages from all its neighbor
initiators (note that all neighbor initiators of the leader eventually become the leader's children), and the leader broadcasts GlobalTerm messages to finish Phase 2 (line 31). This implies
that to terminate the convergecast, all initiators have to start convergecasts, and this means all
initiators have the same rID. If some initiators start convergecasts with wrong information,
e.g., the rID of the initiator is not the smallest ID, these initiators will stop the convergecast,
and send Check messages again when they detect a smaller initiator ID (line 16). This wrong
convergecast can be executed at most d times, where d is the diameter of the initiator network
at the time when all the initiators in the initiator network are in Phase 2.
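The Check-message update rule that drives the tree construction can be stated compactly. The following sketch is our illustration (integer IDs stand in for initiator IDs; the names are hypothetical): a received root candidate is adopted only if it names a smaller ID, or a shorter path to the same smallest ID, and the return value signals whether Check must be re-broadcast:

```python
# Sketch of the Check-message rule that builds the breadth-first
# spanning tree on the initiator network (illustrative structure).

class Phase2State:
    def __init__(self, me):
        self.me = me
        self.rID, self.dist, self.pID = me, 0, None

    def on_check(self, sender, rID, dist):
        """Adopt (rID, dist+1, sender) if it improves on what we know.
        Returns True if our view changed and Check must be re-broadcast."""
        better_root = rID < self.rID
        shorter = rID == self.rID and dist + 1 < self.dist
        if better_root or shorter:
            self.rID, self.dist, self.pID = rID, dist + 1, sender
            return True
        return False

s = Phase2State(me=5)
assert s.on_check(sender=3, rID=1, dist=0)      # smaller root ID: adopt
assert s.rID == 1 and s.dist == 1 and s.pID == 3
assert not s.on_check(sender=4, rID=1, dist=2)  # longer path: ignore
```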
-----
#### 4.5 Rollback Algorithm
Here, we describe the rollback algorithm of CPS algorithm. Actually, the algorithm is the
same as RB algorithm of SSS algorithm [10,20]; thus, we just introduce RB algorithm in our
style below.
First, we give an overview of RB algorithm. The rollback algorithm can be invoked at any time
by any node, even if some nodes in its snapshot group have already left the system. When a rollback
of a snapshot is triggered by a rollback initiator pi, pi first sends a RbMarker message to every
node in pi’s DS to determine its rollback group similar to SSS algorithm described briefly in
Section 3.2. After the rollback group is determined, each node in the group first restores its
state to the latest checkpoint[5] and recovers every link of the node with the stored in-transit
messages. Then, the node resumes the execution of its application.
We enumerate the variables and the message types that RB algorithm uses below. They are
mostly the same as those of CPS algorithm. In the rollback algorithm, each node pi maintains
the following variables.
- RbIniti: Rollback initiator’s ID. An initiator sets this variable as its own ID. A normal
node (not initiator) sets this variable to the initiator ID of the first RbMarker message
it receives. Initially null.
- RbRcvMki: A set of the IDs of the nodes from which pi (already) received RbMarker
messages. Initially ∅.
- RbMkListi: A set of the IDs of the nodes from which pi has to receive RbMarker messages
to terminate the algorithm. Initially ∅.
- RbFini: A boolean variable that denotes whether the rollback group is determined or
not. Initially false.
- RbMkFromi (Initiator only): A set of the IDs of the nodes that send RbMarker to its
DS. Initially ∅.
- RbMkToi (Initiator only): The union set of the DSes of the nodes in RbMkFrom.
Initially ∅.
- RbDSInfoi (Initiator only): A set of the pairs of a node ID and its DS. Initially ∅.
The algorithm also uses the following Phase 1 variables:
- DSi
- MsgQi
We use the following message types for the rollback algorithm.
- ⟨RbMarker, init⟩: A message which controls the timing of a rollback of the local state.
Parameter init denotes the initiator’s ID.
- ⟨RbMyDS, DS⟩: A message to send its own DS (all nodes communication-related to this
node) to its initiator.
- ⟨RbOut⟩: A message to cancel the current rollback algorithm. When a node that has
been an initiator receives a RbMyDS message of the node’s previous instance, the node
sends this message to cancel the sender’s rollback algorithm instance.
- ⟨RbFin, List⟩: A message to inform that its rollback group is determined. List consists of
the IDs of the nodes from which the node has to receive RbMarker messages to terminate
the algorithm.
Algorithm 5 is the pseudo-code of the rollback algorithm. As shown, it is mostly
the same as Algorithm 2, but simpler. This is because the rollback
algorithm does not support concurrent rollbacks of multiple groups, which would require
collision handling among the groups as in CPS algorithm.
5If the node has not stored any checkpoint yet, the node rolls back to its initial state.
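As a node-local illustration of the restore step described above (a sketch under our own data layout; the authoritative pseudo-code is Algorithm 5), a node rolls back to its latest checkpoint, or to its initial state if none exists, and re-injects the recorded in-transit messages:

```python
# Sketch of a node's restore step in RB algorithm (illustrative layout).
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class Checkpoint:
    local_state: dict
    in_transit: List[Tuple[str, str]]   # recorded (sender, message) pairs

@dataclass
class RbNode:
    initial_state: dict
    checkpoint: Optional[Checkpoint] = None
    state: dict = field(default_factory=dict)
    inbox: list = field(default_factory=list)

    def restore(self):
        # Roll back to the latest checkpoint, or to the initial state if
        # no checkpoint was ever stored (footnote 5), and re-inject the
        # recorded in-transit messages so that no message is lost.
        if self.checkpoint is not None:
            self.state = dict(self.checkpoint.local_state)
            self.inbox.extend(self.checkpoint.in_transit)
        else:
            self.state = dict(self.initial_state)

node = RbNode(initial_state={"x": 0},
              checkpoint=Checkpoint({"x": 7}, [("p2", "m1")]))
node.restore()
assert node.state == {"x": 7} and node.inbox == [("p2", "m1")]
```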
-----
### 5 Correctness
In this section, we show the correctness of the proposed algorithm. First, we show the consistency of the recorded checkpoints (the snapshot). The consistency of the snapshot can be
guaranteed by the following conditions: (a) the recorded checkpoints are mutually concurrent,
which means that no causal relation, e.g., message communications, exists between any two
checkpoints, and (b) in-transit messages are correctly recorded.
We denote the k-th event of node pi as e_i^k. Si denotes the recorded checkpoint of node pi.
When a snapshot algorithm correctly terminates, Si is updated to the latest checkpoint, and
the previous recorded checkpoint is discarded. Thus, Si is uniquely defined, if pi recorded its
local state at least once. From the proposed algorithm (and many other snapshot algorithms
using Marker), Si is usually created when the node receives the first Marker.
**Definition 1. (A causal relation)** e_i^n ≺ e_j^m denotes that e_i^n causally precedes e_j^m. This causal
relation is generated in three cases: (1) e_i^n and e_j^m are two internal computations on the same
node (i = j) and n < m. (2) e_i^n and e_j^m are the sending and the receiving events of a message,
respectively. (3) e_i^n ≺ e_k^l and e_k^l ≺ e_j^m (transitivity).
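To make the definition concrete, the following self-contained check is our illustration (the pair-based event encoding is hypothetical): it computes ≺ as the transitive closure of program-order and message edges.

```python
# Tiny illustration of Definition 1 (hypothetical event encoding):
# the causal order ≺ is the transitive closure of program-order edges
# and message (send -> receive) edges.

def causally_precedes(edges, a, b):
    """edges: set of directed event pairs. True iff a ≺ b."""
    closure = set(edges)
    changed = True
    while changed:
        new = {(x, w) for (x, y) in closure for (z, w) in closure if y == z}
        changed = not new <= closure
        closure |= new
    return (a, b) in closure

# p1's 1st event precedes its 2nd (case 1); a message sent at p1's 2nd
# event is received at p2's 1st event (case 2); transitivity (case 3)
# then gives ("p1", 1) ≺ ("p2", 1).
edges = {(("p1", 1), ("p1", 2)), (("p1", 2), ("p2", 1))}
assert causally_precedes(edges, ("p1", 1), ("p2", 1))
```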
Now we show the following lemma using the notation and definition above.
**Lemma 1. For any two checkpoints Si and Sj recorded at distinct nodes pi and pj by the**
_proposed algorithm, neither Si ≺ Sj nor Sj ≺ Si holds (or they are concurrent)._
_Proof. For contradiction, we assume Si ≺ Sj holds without loss of generality. It follows that a_
message chain m1, m2, · · ·, mk (k ≥ 1) exists such that m1 is sent by pi after Si, ml is received
before sending ml+1 (1 ≤ _l < k) at a node, and mk is received by pj before Sj._
If Si and Sj are recorded by Markers from the same initiator, we can show that Marker
is sent along the same link before each ml. This is because Marker is (a) sent to every
communication-related node when a node records a checkpoint, and (b) sent to a communication-unrelated node before a message is sent to the node (which then becomes communication-related).
Therefore, pj records its checkpoint at the latest before it receives mk, which is a contradiction.
Even if Si and Sj are recorded by Markers from two different initiators, px and py, respectively, Marker from px is received by pj before the receipt of mk for the same reason as
above. Thus, pj never records its checkpoint, when Marker from py is received by it (a collision
occurs).
Therefore, Lemma 1 holds.
Next, we present the following lemma about the recorded in-transit messages.
**Lemma 2. A message m sent from pi to pj is recorded as an in-transit message by pj, if and**
_only if m is sent before Si and received after Sj._
_Proof. (only if part) A message m from pi to pj is recorded as an in-transit message by pj_
only when it is received after Sj, but before Marker from pi. Marker is sent from pi to pj
immediately after Si; thus, the above implies from the FIFO property of the communication
link that m is sent before Si. The only if part holds.
**(if part) Let m be the message that is sent from pi before Si, and received by pj after Sj.**
First, we assume that Si and Sj are recorded on receipt of Markers from the same initiator
(i.e., they are in the same partial snapshot group). Because m is sent before Si, pi adds pj to
its DSi, and then pi sends Marker to pj when Si is recorded (i.e., when the first Marker is
received). Node pi sends not only Marker but also its DSi to its initiator. This implies when
the snapshot group is determined, pi is included in MkListj, which is the set of the IDs of the
nodes from which pj has to receive Markers. Therefore, pj cannot terminate the algorithm,
until pj receives Marker from pi. Because m is received by pj before Marker from pi (due to
the FIFO property), m is always recorded as an in-transit message.
Next, we assume that Si and Sj are recorded on receipt of Markers from different initiators
(denoted by px and py, respectively). In this case, when pj receives Marker from pi (pi has to
send Marker to pj when it records Si), it sends NewInit to its initiator py because it detects a
collision. We have to consider the following two cases when py receives NewInit from pj. Note
-----
that, at this time, px has not determined its snapshot group, because pj is included in DSi,
and px has not received DSj yet.
(1) py has not determined its snapshot group: py sends Link to px, and a virtual
link between the two nodes is created in the initiator network. This causes pi to be added to
_MkListj, when py determines its snapshot group. Because pi ∈ MkListj, pj has to wait for
Marker from pi, and records m as an in-transit message._
(2) py already determined its snapshot group: If pi is in the snapshot group of py,
we can show with an argument similar to (1) that m is recorded as an in-transit message. If pi
is not in py’s snapshot group, then the snapshot group is determined using DSj that does not
contain pi. This implies pj never sends Marker to pi, when checkpoint Sj is recorded. In this
case, because py has already sent a Fin message to pj before the receipt of NewInit, pj never
records m in Sj, because pi is not included in MkListj. However, in this case, pj records a
new checkpoint, say Sj′, on receipt of the Marker from pi that was sent when Si was recorded, and
receives m before Sj′. As a result, m is not an in-transit message, and is never recorded in Sj
or Sj′.
Lemmas 1 and 2 guarantee the consistency of the checkpoints and in-transit messages recorded by the proposed algorithm. Now we discuss the termination of Phase 1 using the
following lemma.
**Lemma 3. Every initiator eventually terminates Phase 1 and proceeds to Phase 2.**
_Proof. To terminate Phase 1 (and start Phase 2), each initiator has to execute procedure_
CanDetermineSG() (lines 55 to 61 in Algorithm 2) and satisfy two conditions (line 56 in
Algorithm 2): (1) MkToi is a subset of or equal to MkFromi, and (2) Waiti is an empty set.
Note that whenever MkToi, MkFromi, or Waiti is updated, an initiator executes procedure
CanDetermineSG() (refer to Algorithm 2). Therefore, if an initiator cannot terminate Phase 1,
either the two conditions are not satisfied and the variables in them are never updated again
(i.e., deadlock), or the two conditions are never satisfied even though the variables are
repeatedly updated (i.e., livelock).
(1) Condition MkToi ⊆ MkFromi: Assume for contradiction that MkFromi ⊂ MkToi
and no more update occurs. Let px be the node that is included in its initiator pi’s MkToi,
but not in MkFromi. This means that px received (or will receive) a Marker message from
the node whose DS contains px. When px receives the Marker message, px does one of the
following (lines 4 to 28 in Algorithm 2): (a) If it is the first Marker message (lines 6 to 13),
_px sends its DSx to its initiator pi, which is a contradiction. (b) If it is the second or later_
_Marker message (lines 15 to 19), px already sent its DSx to its initiator pi when px received_
the first Marker message; this is also a contradiction. (c) If a collision happens (lines 21 to 26),
we must consider the MkFrom of two initiators, px's initiator pi and the opponent collided
initiator, say pj. For the initiator pi, when px receives a collided Marker, px sends a NewInit
message to its initiator pi. This implies that px processed the case (a) to recognize pi as its
initiator before, and the case (a) contradicts the assumption as we proved. For the opponent
initiator pj, when pi receives the NewInit message, the initiator sends a Link message, which
leads to px ∈ MkFromj (line 21 of Algorithm 3). This also contradicts the assumption.
(2) Condition Waiti = ∅: Assume for contradiction that there is an element in Waiti, and
the element is never removed from Waiti. Note that an element can be added to Waiti only
when a collision occurs for the first time between two snapshot groups (line 5 in Algorithm 3).
Therefore, when an initiator pi adds an element to Waiti, pi also sends a Link message to the
opponent colliding initiator pj. The initiator pj sends either an Ack message or a Deny message
as its reply (lines 19 to 33 in Algorithm 3). Both of these two messages cause the corresponding
element to remove from Waiti; thus, each element in Waiti is removed eventually. This is a
contradiction. Note that if once two distinct initiators are connected in an initiator network
by exchanging Link and Ack messages, they never add the opponent initiator in their Wait
each other. If a Deny message is sent as the reply, the collision never occurs again between
the two collided nodes. Therefore, an element is added to Waiti only a finite number of times,
because the total number of the nodes in the system is finite.
From Lemmas 1 to 3, the following theorem holds.
**Theorem 1. Phase 1 eventually terminates, and all checkpoints and in-transit messages**
_recorded by the proposed algorithm construct a consistent snapshot of the subsystem._
Now, we prove the following theorem regarding the correctness of Phase 2.
**Theorem 2.** _Every initiator in an initiator network terminates after all of the initiators in the network have determined their snapshot groups._
To prove the theorem, we show that the convergecast in Phase 2 never terminates while
an initiator executing Phase 1 exists. The reason is as follows: an initiator terminates Phase
2 when it receives a GlobalTerm message. The root node of the spanning tree constructed on
the initiator network sends GlobalTerm messages when it has received LocalTerm messages
from all its neighbor nodes (all of which are its children on the tree). LocalTerm messages
are sent by a convergecast from the leaf nodes of the tree to the root, when (1) a node has received
_Check messages from all its neighbor nodes and no neighbor node is a child of the node_
(or the node is a leaf), or (2) a node has received Check messages from all its neighbor nodes and
_LocalTerm messages from all its child nodes. Therefore, for the correctness of Phase 2 it is_
sufficient to prove the following lemma.
**Lemma 4. The convergecast in Phase 2 never terminates, if an initiator node executing Phase**
_1 exists._
_Proof._ We assume that only one node in the initiator network is executing Phase 1, and let pi
be that node. We denote the set of all nodes at distance d from pi as Ni^d; e.g., Ni^3 is the set of all nodes
at distance 3 from pi (trivially, Ni^1 = Ni). Let ps be the node that has the smallest ID in the
initiator network. To terminate the convergecast, ps must receive LocalTerm from all nodes in
Ns and become the root of the spanning tree. Assuming that ps ∈ Ni, the convergecast never
terminates, because pi is executing Phase 1 and never sends LocalTerm to ps. Even if ps ∈ Ni^2,
the convergecast cannot terminate, because a node in Ni that cannot receive LocalTerm from pi
does not send LocalTerm to ps. In the same way, if ps ∈ Ni^x for some x (≥ 1), the convergecast
never terminates.
While an initiator is still executing Phase 1 and has not yet determined its snapshot group,
the convergecast does not terminate; hence no node can terminate Phase 2, because no
GlobalTerm is sent. Therefore, Theorem 2 holds.
### 6 Evaluation
In this section, we evaluate the performance of the proposed algorithm by comparing it with CSS algorithm
[11, 21]. CSS algorithm is a representative partial snapshot algorithm, as described in
Section 2, and the two algorithms share the same properties: (1) they do not suspend
the application execution on a distributed system while taking a snapshot, (2) they
take partial snapshots (not snapshots of the entire system), (3) they can take multiple
snapshots concurrently, and (4) they can handle dynamic network topology changes.
In addition, both algorithms are based on SSS algorithm [10, 20]. For these reasons, CSS
algorithm is a reasonable baseline for CPS algorithm. We also analyze the time and message
complexities of CPS algorithm theoretically in Section 6.4.
#### 6.1 CSS algorithm summary
Before showing the simulation results, we briefly explain CSS algorithm. For details, please
refer to the original paper [21].
The basic operation when no collision happens is almost the same as Phase 1 of CPS
algorithm. An initiator sends Marker messages to the nodes in its DS, and the nodes reply
by sending DSinfo messages containing their DSes. When the initiator has received the DSes from all the nodes in its snapshot group,
it sends Fin messages to let the nodes know the sets of nodes from which they must receive
_Markers before terminating the snapshot algorithm._
Figure 6: A collision-handling example of CSS algorithm: (a) message flow when a collision occurs; (b) initiator network.
In CSS algorithm, when a collision occurs, the two collided initiators merge their snapshot
groups into one group; one of them becomes a main-initiator and the other becomes a
sub-initiator. The main-initiator manages all the DSes of the nodes in the merged snapshot
group and determines when the nodes terminate the snapshot algorithm. The sub-initiator
just forwards any DSinfo and collision-related messages it receives to its main-initiator.
If another collision occurs and the main-initiator’s snapshot group is merged into that of the
merging initiator, the merged initiator resigns as main-initiator and becomes a sub-initiator
of the merging initiator. These relations among a main-initiator and its sub-initiators form a
tree rooted at the main-initiator, which, as in CPS algorithm, we call an initiator network.
Figure 6 (a) illustrates the actual message flow of CSS algorithm when a collision happens.
When a node px receives a collided Marker message from a neighbor node py, px sends a
_NewInit message to its initiator. This NewInit message is forwarded to the initiator’s initiator,_
if one exists. The forwarding repeats until the NewInit message reaches the main-initiator. The
main-initiator pa sends an Accept message to px to allow resolution of this collision. Then, px
sends a Combine message to py, and this Combine message is likewise forwarded to the opponent
main-initiator pb. When the opponent main-initiator pb receives the Combine message, it
compares its ID with that of pa. If pa < pb, pb recognizes pa as its initiator and sends an
_InitInfo message to pa with all of the information about the snapshot algorithm, including the_
set of all DSes that pb has ever received. Otherwise, pb sends a CompInit message to pa and
requests pa to become pb’s sub-initiator, considering pb as its main-initiator. The collision
is resolved with these message exchanges, and finally one of the initiators, pa or pb, manages
both snapshot groups. When pb becomes the main-initiator by sending the CompInit message,
the initiator network of this example is as illustrated in Figure 6 (b).
When another collision happens during this collision handling, the main-initiator stores the
_NewInit message that notifies it of the new collision in a temporary message queue,_
and processes the message after the current collision is resolved. In other words, CSS algorithm
can handle at most one collision at a time. We believe this drawback largely degrades
the performance of CSS algorithm.
In the simulation, we modified CSS algorithm slightly from the original, because we discovered while implementing the simulator that the original algorithm lacked some mechanisms
necessary to take snapshots consistently. First, we introduced Out messages, which
were not described in the CSS algorithm paper [21]. They help a node (not an initiator) shut
down the current snapshot algorithm and join the next one. Second, we altered the algorithm to forward
_CompInit and InitInfo messages to a main-initiator, in addition to DSinfo and Combine. This_
was necessary to avoid deadlock when two or more collisions occur at the same time.
Table 1: Message types. The initiator network-type messages of CSS algorithm (i.e., DSinfo,
_NewInit, etc.) are counted only when these messages are forwarded from a sub-initiator to its_
main-initiator.

| **Type** | **CPS algorithm** | **CSS algorithm** |
|---|---|---|
| Marker | _Marker_ | _Marker_ |
| Normal | _MyDS, Fin, Out_ | _DSinfo, Fin, Out_ |
| Collision | _NewInit, Link, Ack, Deny, Accept_ | _NewInit, Accept, Combine, CompInit, InitInfo_ |
| Initiator network | _Check, LocalTerm, GlobalTerm_ | _DSinfo, NewInit, Combine, CompInit, InitInfo_ |
#### 6.2 Simulation settings
The evaluation is performed by simulating node behaviors on a single computer. Although
both algorithms can take a snapshot on an asynchronous distributed system, for simplicity the
simulation is conducted in synchronous rounds. In a round, all nodes receive messages, process
them, and send new messages, which are delivered in the next round.
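The round mechanics just described can be sketched as a small Python loop; the node interface (a handle method returning (destination, message) pairs) is our own illustrative assumption, not the simulator's actual API.

```python
from collections import defaultdict

def simulate(nodes, initial_messages):
    """Minimal sketch of the synchronous-round loop described above.

    `nodes` maps a node id to an object with a handle(msg) method returning
    a list of (destination, message) pairs; this interface is assumed here
    for illustration only.
    """
    inbox = defaultdict(list)
    for dst, msg in initial_messages:
        inbox[dst].append(msg)
    rounds = 0
    while inbox:
        outbox = defaultdict(list)
        for dst, msgs in inbox.items():           # every node receives its messages,
            for msg in msgs:                      # processes them,
                for nxt, out in nodes[dst].handle(msg):
                    outbox[nxt].append(out)       # and sends new ones for the next round
        inbox = outbox
        rounds += 1
    return rounds
```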
Before each simulation of the algorithms, a communication-relation on the nodes is generated, which influences the performance of the snapshot algorithms. Although actual
communication-relations depend on the distributed applications to which snapshot algorithms are
applied, for simplicity we generate communication-relations randomly with probability C for every pair of
nodes. After generating a communication-relation, we run one simulation of each algorithm. In the first round, each node becomes an initiator
with probability F, and if so, starts executing the snapshot algorithm (by storing its state and sending Markers to its
communication-related nodes). We terminate the simulation when all the initiated snapshot algorithm instances have terminated.
We have three parameters for the simulation: the communication probability C, the initiation probability F, and the number of nodes N. As described, parameters C and F probabilistically
determine the communication-relations and the snapshot algorithm initiations, respectively.
A larger C generates denser communication-relations, so a (partial) snapshot group becomes larger; a larger F makes more nodes
behave as initiators. N is the number of nodes in a simulation. If C or F is large, collisions occur more easily.
We evaluate these snapshot algorithms with three measures. The first measure is the total
number of messages sent in a simulation. As described in Section 3.1, a node can send a
message to any other node if it knows the destination node’s ID. Additionally, in this
simulation, we assume that every node can send messages (including messages sent in Phase 2
of CPS algorithm, e.g., Check) to every other node in one hop. In other words, we do not take
message relaying into account for this measure. The second measure is the total number
of rounds from the initiations of the snapshot algorithms until the termination of all snapshot
algorithm instances. The last measure is the number of messages by type. This complements
the first measure, allowing us to discuss which parts of the algorithms dominate their communication
complexity. For this purpose, we classify the messages of both algorithms into four types,
as shown in Table 1. The normal-type messages are used to decide a snapshot group. The
collision-type messages are sent to resolve collisions that occur during a snapshot algorithm.
The initiator network-type messages are sent between initiators to coordinate their instances. In
CPS algorithm, this type of message is used in Phase 2 to synchronize their termination. In
contrast, CSS algorithm uses this type to forward collision-related messages from a sub-initiator
to its main-initiator.
We run at least 100 simulations for each parameter setting and show the average of the
simulations.
#### 6.3 Simulation results
First, we show the simulation results for different numbers of nodes N in Figure 7. As Figure
7 (a) indicates, CPS algorithm takes snapshots with fewer messages than CSS algorithm.
For instance, when N = 200, CPS algorithm sent 44.1% fewer messages than CSS
algorithm. Figure 7 (b) shows the running time of the algorithms (note that only this graph
uses a logarithmic scale). Although the running time of CPS algorithm was always less than 40
rounds, that of CSS algorithm increased drastically, taking 34,966 rounds when N = 200.
This huge difference comes from the fact that CSS algorithm can handle at most one collision
at a time; thus, collisions must wait until the collision currently being processed (if one exists) is
resolved. In contrast, an initiator of CPS algorithm can handle multiple collisions concurrently,
which drastically reduces the total number of rounds. We discuss later why these huge
differences in the total numbers of messages and rounds exist.
The total numbers of collisions of both algorithms are displayed in Figure 7 (c). Interestingly,
CPS algorithm has more collisions than CSS algorithm, even though CPS algorithm sends fewer
messages. This is because CPS algorithm reprocesses a Marker message when a node receives
_Out, to resolve a collision consistently. If the node is in a different snapshot group from that_
of the Marker message, this reprocessing leads to a collision.
Figure 7 (d) shows the total numbers of partial snapshot groups[6], which are controlled by the
initiation probability F. Both algorithms have the same numbers because we provided the
same seed for the pseudo random number generator (PRNG) in the simulator to each iteration of
both algorithms; we used i as the seed for the i-th iteration of each algorithm. Moreover,
the initiation of each node is calculated with the PRNG in the same manner in both
algorithms; thus, the same set of nodes become initiators in the same iteration.
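The paired-seed methodology can be sketched as follows; the function and parameter names are ours, for illustration only.

```python
import random

def generate_scenario(seed, n_nodes, c, f):
    """Generate the communication-relation and initiator set for one iteration.

    Seeding with the same value (we used the iteration index i) makes
    both algorithms run on identical scenarios.
    """
    rng = random.Random(seed)
    relations = {(u, v) for u in range(n_nodes)
                 for v in range(u + 1, n_nodes) if rng.random() < c}
    initiators = [u for u in range(n_nodes) if rng.random() < f]
    return relations, initiators

# The same seed yields the same scenario for both algorithms:
assert generate_scenario(1, 200, 0.1, 0.1) == generate_scenario(1, 200, 0.1, 0.1)
```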
Figure 7 (e) depicts the sizes of the initiator networks in the simulations. Here, we define
the initiator network size of CPS algorithm as the diameter of the initiator network, and that of
CSS algorithm as the depth of the initiator network tree, because these metrics estimate the
message processing load of the initiator network. We can observe that the growth rate for CSS
algorithm is larger than that for CPS algorithm.
Figures 7 (f) and (g) display the ratios of the message types defined in Section 6.2 for the two
algorithms. The ratios of marker-type messages of the two algorithms are mostly the same,
while those of collision- and initiator network-type messages differ. In CPS algorithm,
initiator network-type messages are sent on the initiator network only to construct a
breadth-first-search (BFS) spanning tree and to synchronize the termination of the initiators’
instances. However, CSS algorithm requires sub-initiators to forward every collision-related
message to their main-initiators, and these forwarded messages are counted as initiator
network-type messages. This forwarding is a very heavy task in terms of message counts. In
fact, 40.9% of the messages of CSS algorithm were sent on the initiator network when
N = 200, although the total numbers of collision-type messages are mostly the same for the
two algorithms.
To explain why such huge differences exist in the total numbers of messages and rounds
between CPS algorithm and CSS algorithm, we examine representative executions and
analyze their details. As the representative for each algorithm, we chose an execution whose
total number of messages is almost the same as that algorithm’s average when N = 200,
_C = 10, and F = 10._
First, we examine the BFS spanning tree on the initiator network of CPS algorithm in the
chosen execution, illustrated in Figure 8. There are 17 initiators in the network, and its
topology is almost a complete graph (the network has a clique of size 16, and its diameter is
two). Therefore, the convergecast in Phase 2 with Check messages terminates at most two
rounds after all the initiators finish Phase 1, and the root node can broadcast GlobalTerm
immediately. We can confirm this in Figure 7 (e), and it is not special to this particular
execution.
The initiator network of CSS algorithm is depicted in Figure 9. The tree has 16 nodes
(initiators), and its depth is five, which means a collision-related message (e.g., Combine or
_NewInit) is forwarded at most four times. To reveal the reason for the large numbers of messages_
and rounds of CSS algorithm, let us assume that a Marker message is sent from the snapshot
group of initiator p173 to the snapshot group of initiator p171, and that this tree has already been
constructed when the collision happens. This is the worst case on the network. First, the collided
node in p171’s snapshot group sends a NewInit message to p171, and this message is forwarded
four times until it reaches p0; then p0 sends an Accept message to p171. When p171 receives this Accept message,
6These are equal to the numbers of initiators.
Figure 7: Simulation results for different numbers of nodes N: (a) total messages, (b) total rounds, (c) total collisions, (d) total partial snapshot groups, (e) initiator network size, (f) ratio of CPS messages, (g) ratio of CSS messages. Communication probability C and initiation probability F are fixed at 10%.
Figure 8: An initiator network example of CPS algorithm (a spanning tree rooted at p0 with children p6, p13, p21, p29, p32, p40, p52, p53, p83, p90, p95, p97, p108, p110, p111, p140).
Figure 9: An initiator network example of CSS algorithm
Figure 10: The total number of processed messages of the top 10 nodes in the simulation.
it sends a Combine message to the colliding node in p173’s snapshot group, and this Combine
message is also forwarded four times until it reaches p0.[7] Then, p0 receives the Combine message
from p0 and replies with an InitInfo message to p0, because p0 ̸< p0. Finally, the collision between
the initiators that share the same parent is resolved, at a cost of 12 messages and 12 rounds
(remember that the simulation is conducted in synchronous rounds, and it always takes one round
to deliver a message). Moreover, CSS algorithm must resolve collisions one by one. Although
this is a worst-case analysis, and CSS algorithm can typically handle a collision with fewer
messages and rounds, this is why CSS algorithm consumes large numbers of messages and
rounds.
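The 12-message figure can be tallied explicitly (one message per round, since each delivery takes one synchronous round):

```latex
\underbrace{1 + 4}_{NewInit\ \text{and its forwards}}
+ \underbrace{1}_{Accept}
+ \underbrace{1 + 4}_{Combine\ \text{and its forwards}}
+ \underbrace{1}_{InitInfo}
= 12\ \text{messages} = 12\ \text{rounds}.
```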
Figure 10 shows the top 10 nodes that processed the largest numbers of messages in the two
representative executions of CSS algorithm and CPS algorithm. Evidently, most of the messages
in CSS algorithm are processed by two nodes (p0 and p33 in Figure 9). This is unfavorable,
because those nodes are exhausted by processing messages and can no longer run the application.
In CPS algorithm, by contrast, these tasks are distributed evenly.
Finally, we observe the results for different communication probabilities C and initiation
probabilities F. These results are shown in Figures 11 and 12. As in the case of varying
N, CPS algorithm outperforms CSS algorithm in terms of the total numbers of messages
and rounds.
#### 6.4 Theoretical Performance
Finally, we analyze the theoretical performance of CPS algorithm in terms of time and message
complexities in the worst-case scenario where there are n nodes in the system and all of them
invoke the algorithm. For simplicity, we also assume that the invocations happen at the same time.
First, we analyze the time complexity in terms of asynchronous rounds. In an asynchronous round,
every node receives the messages sent in the previous round, processes them, and sends
new messages to other nodes. We assume that the communication-relations of all the nodes form a
line graph of n nodes, and that one end of the graph has the smallest ID, for the worst case of time
7Remember that the initiator network in Fig. 9 has been constructed when this collision happens. This
means that p0 is the main-initiator of both p173 and p171. In other words, p0 behaves as the main-initiator of
the collided snapshot group and as that of the colliding snapshot group.
Figure 11: Simulation results for different communication probability C: (a) total messages, (b) total rounds. The number of nodes N and initiation probability F are fixed at 150 and 10%, respectively.
Figure 12: Simulation results for different initiation probability F: (a) total messages, (b) total rounds. The number of nodes N and communication probability C are fixed at 150 and 10%, respectively.
complexity. In this case, each initiator determines its partial snapshot group in five rounds[8]
and enters Phase 2. The leader election of Phase 2 takes n − 1 rounds, because it requires n − 1
rounds to propagate the smallest ID from one end of the line graph to the other. By the same
argument, the relayed transmissions of the LocalTerm and GlobalTerm messages also take
n − 1 rounds each. After the termination of Phase 2, each initiator sends Fin messages and
terminates CPS algorithm in the next round. Therefore, CPS algorithm can take a snapshot
within 3n + 3 rounds.
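Summing the round counts under the stated line-graph assumption (our own tally of the terms above):

```latex
\underbrace{5}_{\text{Phase 1}}
+ \underbrace{(n-1)}_{\text{leader election}}
+ \underbrace{(n-1)}_{LocalTerm}
+ \underbrace{(n-1)}_{GlobalTerm}
+ \underbrace{1}_{Fin}
= 3n + 3.
```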
Next, we consider the message complexity of CPS algorithm. The worst case is a situation
where all the initiators are communication-related to each other. In Phase 1 of this case, each
node sends n Marker messages and one MyDS message before collisions happen. Since a
collision requires four messages and n collisions happen in this situation, 4n messages are sent
in total to resolve the collisions. In the leader election process of Phase 2, m Check messages
are sent per round, and the election finishes within ∆ rounds, where m is the number of edges
in the initiator network, and ∆ is the diameter of the network when Phase 2 terminates.
_LocalTerm and GlobalTerm messages are sent once over every edge, so the total number of_
these messages is m. Since we assume in Phase 1 that collisions happen between every two
initiators, the initiator network is a complete graph on n nodes, that is, m = n(n − 1)/2 and
∆ = 1. Therefore, the message complexity of CPS algorithm is O(n²).
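Collecting these counts (our own tally; the Fin messages, at most one from each initiator to each of its nodes, do not change the order):

```latex
\underbrace{n \cdot n}_{Marker}
+ \underbrace{n}_{MyDS}
+ \underbrace{4n}_{\text{collision handling}}
+ \underbrace{m\Delta}_{Check}
+ \underbrace{2m}_{LocalTerm,\ GlobalTerm}
= n^2 + 5n + 3m
= n^2 + 5n + \frac{3n(n-1)}{2} \in O(n^2).
```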
### 7 Conclusion
We proposed a new partial snapshot algorithm named CPS algorithm to realize efficient
checkpoint-rollback recovery in large-scale and dynamic distributed systems. The proposed
partial snapshot algorithm can be initiated concurrently by two or more initiators, and an overlay network among the initiators is constructed to guarantee the consistency of the snapshot
obtained when some snapshot groups overlap. CPS algorithm realizes termination detection
8Each initiator sends messages in the following order: Marker (round 1), MyDS and NewInit (round 2),
_Link (round 3), Ack (round 4), and Accept (round 5)._
to consistently terminate the algorithm instances that are initiated concurrently.
In simulations, we confirmed that the proposed CPS algorithm outperforms the existing
partial snapshot algorithm CSS in terms of message and time complexities. The simulation
results showed that the message complexity of CPS algorithm is better than that of CSS
algorithm in all the tested situations, e.g., 44.1% better when the number of nodes in the
distributed system is 200. This improvement is mostly due to the effective use of the initiator
network. The time complexity was also drastically improved, because CPS algorithm can
handle multiple collisions concurrently, while CSS algorithm must handle collisions sequentially.
#### Acknowledgements
This work was supported by JSPS KAKENHI Grant Numbers JP16K16035, JP18K18029, and
JP19H04085. All the experiments in the paper were conducted with GNU Parallel [22] on the
supercomputer of ACCMS, Kyoto University.
### References
[1] Y. Kim, J. Nakamura, Y. Katayama, and T. Masuzawa, “A Cooperative Partial Snapshot
Algorithm for Checkpoint-Rollback Recovery of Large-Scale and Dynamic Distributed Systems,” in Proceedings of the 6th International Symposium on Computing and Networking
_Workshops (CANDARW), (Takayama, Japan), pp. 285–291, Nov. 2018._
[2] J. Nakamura, Y. Kim, Y. Katayama, and T. Masuzawa, “A cooperative partial snapshot
algorithm for checkpoint-rollback recovery of large-scale and dynamic distributed systems
and experimental evaluations,” Concurrency and Computation: Practice and Experience,
vol. n/a, p. e5647, Jan. 2020.
[3] R. Koo and S. Toueg, “Checkpointing and rollback-recovery for distributed systems,”
_IEEE Transactions on Software Engineering, vol. SE-13, pp. 23–31, Jan 1987._
[4] R. H. Netzer and J. Xu, “Necessary and sufficient conditions for consistent global snapshots,” IEEE Transactions on Parallel & Distributed Systems, vol. 6, pp. 165–169, Feb. 1995.
[5] M. J. Fischer, N. D. Griffeth, and N. A. Lynch, “Global states of a distributed system,”
_IEEE Transactions on Software Engineering, vol. SE-8, pp. 198–202, May 1982._
[6] D. Briatico, A. Ciuffoletti, and L. Simoncini, “A distributed domino-effect free recovery
algorithm,” in Proceedings of the 4th Symposium on Reliability in Distributed Software
_and Database Systems, pp. 207–215, 1984._
[7] M. Spezialetti and P. Kearns, “Efficient distributed snapshots,” in Proceedings of the 6th
_International Conference on Distributed Computing Systems (ICDCS), pp. 382–388, 1986._
[8] R. Prakash and M. Singhal, “Maximal global snapshot with concurrent initiators,” in
_Proceedings of the 6th IEEE Symposium on Parallel and Distributed Processing, pp. 344–_
351, 1994.
[9] E. N. Elnozahy, L. Alvisi, Y.-M. Wang, and D. B. Johnson, “A survey of rollback-recovery
protocols in message-passing systems,” ACM Computing Surveys, vol. 34, pp. 375–408,
Sept. 2002.
[10] S. Moriya and T. Araragi, “Dynamic snapshot algorithm and partial rollback algorithm
for internet agents,” Electronics and Communications in Japan (Part III: Fundamental
_Electronic Science), vol. 88, no. 12, pp. 43–57, 2005._
[11] Y. Kim, T. Araragi, J. Nakamura, and T. Masuzawa, “Brief announcement: a concurrent
partial snapshot algorithm for large-scale and dynamic distributed systems,” in Proceedings
_of the 13th International Conference on Stabilization, Safety, and Security of Distributed_
_Systems (SSS’11), (Grenoble, France), pp. 445–446, Oct. 2011._
[12] K. M. Chandy and L. Lamport, “Distributed snapshots: Determining global states of
distributed systems,” ACM Trans. Comput. Syst., vol. 3, pp. 63–75, Feb. 1985.
[13] T. H. Lai and T. H. Yang, “On distributed snapshots,” Information Processing Letters,
vol. 25, no. 3, pp. 153–158, 1987.
[14] A. D. Kshemkalyani, “Fast and message-efficient global snapshot algorithms for large-scale
distributed systems,” IEEE Transactions on Parallel and Distributed Systems, vol. 21,
pp. 1281–1289, Sep. 2010.
[15] R. Garg, V. K. Garg, and Y. Sabharwal, “Scalable algorithms for global snapshots in
distributed systems,” in Proceedings of the 20th Annual International Conference on
_Supercomputing, ICS ’06, pp. 269–277, 2006._
[16] R. Garg, V. K. Garg, and Y. Sabharwal, “Efficient algorithms for global snapshots in large
distributed systems,” IEEE Transactions on Parallel and Distributed Systems, vol. 21,
pp. 620–630, May 2010.
[17] J. Helary, A. Mostefaoui, and M. Raynal, “Communication-induced determination of consistent snapshots,” IEEE Transactions on Parallel and Distributed Systems, vol. 10, no. 9,
pp. 865–877, 1999.
[18] R. Baldoni, F. Quaglia, and B. Ciciani, “A vp-accordant checkpointing protocol preventing
useless checkpoints,” in Proceedings of the 17th IEEE Symposium on Reliable Distributed
_Systems, pp. 61–67, 1998._
[19] R. Baldoni, J.-M. Helary, A. Mostefaoui, and M. Raynal, “A communication-induced
checkpointing protocol that ensures rollback-dependency trackability,” in Proceedings of
_IEEE 27th International Symposium on Fault Tolerant Computing, pp. 68–77, 1997._
[20] S. Moriya and T. Araragi, “Dynamic snapshot algorithm and partial rollback algorithm
for internet agents,” in Proceedings of the 15th International Symposium on Distributed
_Computing (DISC 2001), pp. 23–28, 2001._
[21] Y. Kim, T. Araragi, J. Nakamura, and T. Masuzawa, “A Concurrent Partial Snapshot
Algorithm for Large-scale and Dynamic Distributed Systems,” IEICE Transactions on
_Information and Systems, vol. E97-D, pp. 65–76, Jan. 2014._
[22] O. Tange, GNU Parallel 2018. Zenodo, first ed., 2018.
**Algorithm 2 Pseudo code of CPS algorithm for node pi (normal operations of Phase 1)**
1: procedure Initiate( )
2:   OnReceive(⟨Marker, pi⟩)
3: end procedure
4: procedure OnReceive(⟨Marker, px⟩ from pj)
5:   if initi = null then
6:     // This is the first Marker
7:     initi ← px, RcvMki ← RcvMki ∪ {pj}
8:     pDSi ← DSi, DSi ← ∅
9:     MkListi ← ∅, fini ← false
10:    MsgQi ← ∅
11:    Record its own local state
12:    Send ⟨MyDS, pDSi⟩ to initi
13:    Send ⟨Marker, px⟩ to ∀pk ∈ pDSi
14:  else if initi = px then
15:    // Marker from the same snapshot group
16:    RcvMki ← RcvMki ∪ {pj}
17:    if fini = true then
18:      CheckTermination()
19:    end if
20:  else if initi ≠ px then
21:    // A collision occurs
22:    RcvMki ← RcvMki ∪ {pj}
23:    CollidedNodesi ← CollidedNodesi ∪ {(pj, px)}
24:    if fini = false then
25:      Send ⟨NewInit, pj, px⟩ to initi
26:    end if
27:  end if
28: end procedure
29: procedure OnReceive(⟨MyDS, DSj⟩ from pj)
30:  if initi = null ∨ fini = true then
31:    Send ⟨Out⟩ to pj
32:  else
33:    MkFromi ← MkFromi ∪ {pj}
34:    MkToi ← MkToi ∪ DSj
35:    DSInfoi ← DSInfoi ∪ (pj, DSj)
36:    CanDetermineSG()
37:  end if
38: end procedure
39: procedure OnReceive(⟨Out⟩ from pj)
40:  // Cancel its snapshot algorithm
41:  initi ← null
42:  DSi ← DSi ∪ pDSi
43:  Delete recorded local state and received messages in MsgQi
44:  ReProcessMarker()
45: end procedure
46: procedure OnReceive(⟨Fin, List⟩ from pj)
47:  MkListi ← List
48:  // My initiator notifies the determination of its snapshot group
49:  fini ← true
50:  CheckTermination()
51: end procedure
52: procedure OnTermination( )
53:  ReProcessMarker()
54: end procedure
**Algorithm 2 Pseudo code of CPS algorithm for node pi (normal operations of Phase 1) (Cont'd)**
55: procedure CanDetermineSG()
56:  if MkToi ⊆ MkFromi ∧ Waiti = ∅ then
57:    // Initiator pi determines its snapshot group
58:    fini ← true
59:    StartPhase2()
60:  end if
61: end procedure
62: procedure CheckTermination()
63:  if MkListi ⊆ RcvMki then
64:    for each (pj, m) in MsgQi do
65:      if pj ∈ MkListi then
66:        Record m as an in-transit message
67:      end if
68:    end for
69:    Wait until InPhase2i = false
70:    Terminate this snapshot algorithm
71:  end if
72: end procedure
73: procedure ReProcessMarker( )
74:  if CollidedNodesi ≠ ∅ then
75:    // Process Markers again for collisions that are not resolved
76:    for each (py, pb) ∈ CollidedNodesi do
77:      OnReceive(⟨Marker, pb⟩ from py)
78:    end for
79:  end if
80: end procedure
**Algorithm 3 Pseudo code of CPS algorithm (collision handling of Phase 1)**
1: // From the view of pa in Fig. 4
2: procedure OnReceive(⟨NewInit, py, pb⟩ from px)
3:   if fina = false then
4:     if pb ∉ Na then
5:       Waita ← Waita ∪ (px, py, pb)
6:       Send ⟨Link, px, py⟩ to pb
7:     else
8:       MkFroma ← MkFroma ∪ {py}
9:       MkToa ← MkToa ∪ {px}
10:      DSInfoa ← DSInfoa ∪ (py, {px})
11:      Send ⟨Link, px, py⟩ to pb
12:      Send ⟨Accept, py, pb⟩ to px
13:    end if
14:  else if pb ∈ Na then
15:    Send ⟨Link, px, py⟩ to pb
16:  end if
17: end procedure
18: // From the view of pb in Fig. 4
19: procedure OnReceive(⟨Link, px, py⟩ from pa)
20:  if finb = false then
21:    MkFromb ← MkFromb ∪ {px}
22:    if pa ∉ Nb then
23:      Nb ← Nb ∪ {pa}
24:      MkTob ← MkTob ∪ {py}
25:      DSInfob ← DSInfob ∪ (px, {py})
26:      Send ⟨Ack, px, py⟩ to pa
27:      AcceptCollidedNodes(pa)
28:      CanDetermineSG()
29:    end if
30:  else
31:    Send ⟨Deny, px, py⟩ to pa
32:  end if
33: end procedure
34: // From the view of pa in Fig. 4
35: procedure OnReceive(⟨Ack, px, py⟩ from pb)
36:  Na ← Na ∪ {pb}
37:  AcceptCollidedNodes(pb)
38:  CanDetermineSG()
39: end procedure
40: // From the view of pa in Fig. 4
41: procedure OnReceive(⟨Deny, px, py⟩ from pb)
42:  Waita ← Waita \ {(px, py, pb)}
43:  if pb ∉ Na then
44:    CanDetermineSG()
45:  end if
46: end procedure
47: // From the view of px in Fig. 4
48: procedure OnReceive(⟨Accept, py, pb⟩ from pa)
49:  if py ∉ pDSx then
50:    Send ⟨Marker, pb⟩ to py
51:  end if
52:  CollidedNodesx ← CollidedNodesx \ {(py, pb)}
53: end procedure
54: // From the view of pa in Fig. 4
55: procedure AcceptCollidedNodes(pb)
56:  for each (pi, pj, pb) ∈ Wait do
57:    MkFrom ← MkFrom ∪ {pj}
58:    MkTo ← MkTo ∪ {pi}
59:    DSInfo ← DSInfo ∪ (pj, {pi})
60:    Send ⟨Accept, pj, pb⟩ to pi
61:    Wait ← Wait \ {(pi, pj, pb)}
62:  end for
63: end procedure
**Algorithm 4 Pseudo code of CPS algorithm for initiator pi (Phase 2)**
1: procedure StartPhase2()
2:   if Ni ≠ ∅ then
3:     rIDi ← pi, disti ← 0, pIDi ← pi, Childi ← ∅
4:     LTi ← ∅, CKi ← ∅, InPhase2i ← true
5:     Send ⟨Check, rIDi, disti, pIDi⟩ to ∀pj ∈ Ni
6:     Process the messages that arrived before entering Phase 2
7:   else
8:     // There are no neighbors on the initiator network
9:     FinishPhase2()
10:  end if
11: end procedure
12: procedure OnReceive(⟨Check, rIDj, distj, pIDj⟩ from pj ∈ Ni)
13:  CKi ← CKi ∪ {pj}
14:  if rIDj < rIDi ∨ (rIDj = rIDi ∧ distj + 1 < disti) then
15:    rIDi ← rIDj, disti ← distj + 1, pIDi ← pj
16:    Send ⟨Check, rIDi, disti, pIDi⟩ to ∀pj ∈ Ni
17:  end if
18:  if pIDj = pi then
19:    Childi ← Childi ∪ {pj}
20:  else if pj ∈ Childi then
21:    Childi ← Childi \ {pj}
22:    LTi ← LTi \ {pj}
23:  end if
24:  if CKi = Ni ∧ Childi = ∅ then
25:    Send ⟨LocalTerm⟩ to pIDi
26:  end if
27: end procedure
28: procedure OnReceive(⟨LocalTerm⟩ from pj ∈ Ni)
29:  LTi ← LTi ∪ {pj}
30:  if Childi = CKi = LTi = Ni ∧ pIDi = pi then
31:    Send ⟨GlobalTerm⟩ to ∀pj ∈ Childi
32:    FinishPhase2()
33:  else if Childi = LTi ∧ CKi = Ni then
34:    Send ⟨LocalTerm⟩ to pIDi
35:  end if
36: end procedure
37: procedure OnReceive(⟨GlobalTerm⟩ from pj ∈ Ni)
38:  Send ⟨GlobalTerm⟩ to ∀pj ∈ Childi
39:  FinishPhase2()
40: end procedure
41: procedure FinishPhase2()
42:  InPhase2i ← false
43:  for each pk ∈ MkFromi do
44:    MkListk ← {∀px | pk ∈ DSx, (px, DSx) ∈ DSInfoi}
45:    Send ⟨Fin, MkListk⟩ to pk
46:  end for
47: end procedure
**Algorithm 5 Pseudo code of CPS algorithm for node pi (Rollback)**
1: procedure Initiate( )
2:   OnReceive(⟨RbMarker, pi⟩)
3: end procedure
4: procedure OnReceive(⟨RbMarker, px⟩ from pj)
5:   if RbIniti = null then
6:     Stop the execution of its application
7:     RbIniti ← px, RbRcvMki ← RbRcvMki ∪ {pj}
8:     RbMkListi ← ∅, RbFini ← false
9:     Send ⟨RbMyDS, DSi⟩ to RbIniti
10:    Send ⟨RbMarker, px⟩ to ∀pk ∈ DSi
11:  else if RbIniti = px then
12:    RbRcvMki ← RbRcvMki ∪ {pj}
13:    if RbFini = true then
14:      CheckRbTermination()
15:    end if
16:  end if
17: end procedure
18: procedure OnReceive(⟨RbMyDS, DSj⟩ from pj)
19:  if RbIniti = null ∨ RbFini = true then
20:    Send ⟨RbOut⟩ to pj
21:  else
22:    RbMkFromi ← RbMkFromi ∪ {pj}
23:    RbMkToi ← RbMkToi ∪ DSj
24:    RbDSInfoi ← RbDSInfoi ∪ (pj, DSj)
25:    if RbMkToi ⊆ RbMkFromi then
26:      // Initiator pi determines its rollback group
27:      RbFini ← true
28:      for each pk ∈ RbMkFromi do
29:        RbMkListk ← {∀px | pk ∈ DSx, (px, DSx) ∈ RbDSInfoi}
30:        Send ⟨RbFin, RbMkListk⟩ to pk
31:      end for
32:    end if
33:  end if
34: end procedure
35: procedure OnReceive(⟨RbOut⟩ from pj)
36:  // Cancel this rollback algorithm
37:  RbIniti ← null
38: end procedure
39: procedure OnReceive(⟨RbFin, List⟩ from pj)
40:  RbMkListi ← List
41:  // My initiator notifies the determination of its rollback group
42:  RbFini ← true
43:  CheckRbTermination()
44: end procedure
45: procedure CheckRbTermination()
46:  if RbMkListi ⊆ RbRcvMki then
47:    Restore its state to the latest checkpoint
48:    Restore the in-transit messages stored with the checkpoint to its links
49:    for each (pj, m) in MsgQi do
50:      if pj ∉ RbMkListi then
51:        Add m into the corresponding link
52:      end if
53:    end for
54:    Resume the execution of its application
55:    Terminate this rollback algorithm
56:  end if
57: end procedure
## Distributed Event-Based Sliding-Mode Consensus Control in Dynamic Formation for VTOL-UAVs
#### J. U. Alvarez-Muñoz[1], J. Chevalier[1], Jose J. Castillo-Zamora[2] and J. Escareño[3]
**_Abstract— The present work deals with consensus control for a multi-agent system composed of mini Vertical Take-off and Landing (VTOL) rotorcrafts by means of a novel nonlinear event-based control law. First, the VTOL system modeling is presented using the quaternion parametrization to develop an integral sliding-mode control law for attitude stabilization of the aerial robots. Then, the vehicle position dynamics is extended to the multi-agent case, where a cutting-edge event-triggered sliding-mode control is synthesized to fulfill the collective consensus objective within a formation context. With its inherent robustness and reduced computational cost, the aforementioned control strategy guarantees closed-loop stability while driving trajectories to the equilibrium in the presence of time-varying disturbances. Finally, for validation and assessment purposes of the overall consensus strategy, an extensive numerical simulation stage is conducted._**
I. INTRODUCTION
In recent years, the technological surge in Multi-Agent Systems (MAS) and the control theory behind them has made
possible the use of these systems in multiple applications
across different sectors, including transportation, manipulation, and
rescue operations. Since then, swarm control of autonomous
systems has featured various functions, such as consensus,
where an agreement between the agents to recognize the
states of each individual of the system is required [1], [2].
In addition to consensus, formation control ensures the
dynamic viability of large-scale multi-agent systems [3].
In some cases, however, due to communication or processing
limitations, distributed schemes are preferable [4]–[6]. The
distributed formation control approach has been verified under
various conditions, such as the formation of shapes in
3-dimensional space [7] or interconnection graph changes [8].
However, large-scale swarms of autonomous systems face
the problem of limited communication bandwidth. Moreover,
when several tasks such as remote control or video
transmission run at the same time, controller performance can deteriorate. To solve this issue, the
event-based paradigm emerged. The idea is to compute
and update the control signals only when an event occurs.
Mathematical models and simulations of single- and double-integrator agents show the relevance of event-based controllers
regarding communication delays, packet drops, and noise [9]–[11]. Applied to similar systems, this approach also proves
*Corresponding author: [email protected]
1J. U. Alvarez-Muñoz and J. Chevalier are with EXTIA, Sèvres, 92310, France
2Jose J. Castillo-Zamora is with Université Paris-Saclay, Laboratory of Signals and Systems, CNRS-CentraleSupélec, Gif-sur-Yvette, and IPSA Paris, Ivry-sur-Seine, France
3J. Escareño is with ENSIL-ENSCI, Limoges, CNRS, XLIM, UMR 7252, Limoges, France
effective for obstacle avoidance while maintaining
formation, as the simulated results in [12] show. In addition,
experimental validation with a group of mini VTOL-UAVs
confirmed the performance of such control systems regarding
communication bandwidth preservation [13].
These works ensure consensus and formation control with
reduced bandwidth usage. However, for applications where
unknown disturbances may be present, robust controllers need
to be implemented. In this context, the Sliding-Mode Control
(SMC) [14], [15], known for its inherent robustness,
can be implemented in multi-agent systems. On this subject,
an adaptive sliding-mode control law was developed and
validated through simulation results in [16], where the flight
stability of a group of VTOL-UAVs exposed to constant
and bounded disturbances was improved. The results confirm
the relevance of such a control law even when applied to
the nonlinear dynamics of VTOL-UAVs. It is therefore
appealing to combine the two features presented above to
increase the performance of aerial swarm systems.
As an example, [17] deals with the leader-following consensus problem from an event-based sliding-mode controller perspective. The design of the SMC for
finite-time consensus is addressed and extended to an event-based implementation. A nonlinear second-order multi-agent
system is presented in [18], where an integral sliding-mode
surface and an event mechanism for the controller update
are formulated. Both works validate their proposals through
formal mathematical analysis and results on nonlinear
double-integrator multi-agent systems.
Motivated by the aforementioned works, the present paper
proposes a coordinating control scheme for a set of mini
VTOL rotorcrafts based on an event-based and adaptive SMC.
For this, the inner-outer loop control methodology is implemented. First, and contrary to most of
the approaches cited above, which use Euler angles, a robust
control technique consisting of an Integral Sliding-Mode
Control (ISMC) based on the quaternion parametrization
is designed for each VTOL rotorcraft to ensure attitude
stabilization. Then, the present work explains the construction
of a robust collaborative position control scheme for the
outer loop, composed of a sliding-mode surface, an adaptive
term, and a trigger mechanism. The main idea behind this
approach is to combine the features of the research previously
cited into one control algorithm. Practical convergence to the
leader in terms of position and velocity, robustness to bounded
disturbances, and reductions in energy consumption and
inter-vehicle communication are demonstrated throughout this work. The
effectiveness of the proposal is demonstrated through a formal
stability analysis and a detailed simulation scenario with five
mini VTOL-UAVs subjected to continuous and time-varying
disturbances.
The remainder of the paper is structured as follows. Section
II presents some mathematical preliminaries used throughout the
manuscript. Section III is devoted to the
mathematical modeling of the VTOL-UAV system. Section
IV presents the attitude control law for each robot, the
formulation of the event-triggered control law, and the consensus strategy for the set of aerial vehicles. The simulation
scenario and numerical results are presented in Section V.
The conclusions and future work are presented in Section VI.
II. THEORETICAL PREREQUISITES
The current section presents the mathematical concepts of
graph theory, quaternion representation and event-triggered
control used throughout the paper.
_A. Graph Theory_

A MAS can be modeled as a set of dynamic systems
(or agents) in which an information exchange occurs. Such
information flow is mathematically represented by means of
graph theory. In this regard, let a graph G = {V, ξ} be defined by
the sets V = {1, ..., N} and ξ, which represent the vertices
(or nodes) and the edges of the graph, respectively. Adjacency
between two nodes, i and j, exists if there is an edge (i, j)
that connects both nodes. In this sense, such nodes are said
to be adjacent, and the aforementioned relation is formally
represented as:

ξ = {(i, j) : i, j ∈ V} ⊆ V × V

An undirected graph is a graph in which node i can obtain
information about node j and vice versa, i.e.,

(i, j) ∈ ξ ⇔ (j, i) ∈ ξ

The matrix A is called the adjacency matrix, and its
elements aij describe the adjacency between nodes i and j
such that

aij = 1 if i and j are adjacent, and aij = 0 otherwise.

If all pairs of nodes in G are connected, then G is called
connected. The distance d(i, j) is defined by the shortest
path between nodes i and j, and it is equal to the number of
edges that form the path. The degree matrix D of G is the
diagonal matrix with elements di equal to the cardinality of
node i’s neighbor set Ni = {j ∈ V : (i, j) ∈ ξ}. The Laplacian
matrix L of G is defined as L = D − A. For undirected graphs,
L is symmetric and positive semi-definite, i.e., L = L^T ≥ 0.
Moreover, the row sums of L are zero. For connected graphs,
L has exactly one zero eigenvalue, and the eigenvalues can
be listed in increasing order 0 = λ1(G) < λ2(G) ≤ ... ≤
λN(G). The second eigenvalue λ2(G) is called the algebraic
connectivity.

If the system has a leader-following configuration, the
leader is represented by an extra vertex 0, and communication between the leader and the followers is performed.
B is then a diagonal matrix representing this communication,
with entries 1 if there exists an edge between the leader and
the corresponding agent in the group, and 0 otherwise.

_Lemma 2.1:_ The matrix L + B has full rank when G has
a spanning tree with the leader as the root, which implies
non-singularity of L + B.

_Remark 2.2:_ From here on, we shall refer to the matrix L + B
as H, in order to avoid any confusion.
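As an illustrative numerical sketch (ours, not part of the paper's development), the matrices defined above can be computed with NumPy:

```python
import numpy as np

# Undirected graph on V = {0, 1, 2, 3}: a line 0-1-2-3.
edges = [(0, 1), (1, 2), (2, 3)]
N = 4

A = np.zeros((N, N))                      # adjacency matrix
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
D = np.diag(A.sum(axis=1))                # degree matrix
L = D - A                                 # Laplacian, L = D - A

eigvals = np.sort(np.linalg.eigvalsh(L))
assert abs(eigvals[0]) < 1e-9             # one zero eigenvalue (connected graph)
print("algebraic connectivity lambda_2 =", eigvals[1])

B = np.diag([1.0, 0, 0, 0])               # B[i,i] = 1 if agent i hears the leader
H = L + B
print("rank(H) =", np.linalg.matrix_rank(H))  # full rank, as Lemma 2.1 states
```

For a connected graph, λ2 > 0, and pinning a single agent to the leader already renders H nonsingular.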
_B. Unit Quaternion and Attitude Kinematics_

Consider two orthogonal right-handed coordinate
frames: the body coordinate frame B(xb, yb, zb), located at
the center of mass of a rigid body, and the inertial coordinate
frame N(xn, yn, zn), located at some point in space (for
instance, the earth NED frame). The rotation of the body
frame B with respect to the fixed frame N is represented
by the attitude matrix R ∈ SO(3) = {R ∈ R^(3×3) : R^T R =
I3, det R = 1}.

The cross product between two vectors ξ, ϱ ∈ R³ is
represented by a matrix multiplication [ξ^×]ϱ = ξ × ϱ,
where [ξ^×] is the well-known skew-symmetric matrix. The
n-dimensional unit sphere embedded in R^(n+1) is denoted as
S^n = {x ∈ R^(n+1) : x^T x = 1}. Members of SO(3) are often
parameterized in terms of a rotation β ∈ R about a fixed axis
ev ∈ S² by the map U : R × S² → SO(3) such that

U(β, ev) := I3 + sin(β)[ev^×] + (1 − cos(β))[ev^×]²   (1)

Hence, a unit quaternion, q ∈ S³, is defined as

q := (cos(β/2), ev^T sin(β/2))^T = (q0, qv^T)^T ∈ S³   (2)

where qv = (q1 q2 q3)^T ∈ R³ and q0 ∈ R are known as the
vector and scalar parts of the quaternion, respectively. The
quaternion q represents an element of SO(3) through the
map R : S³ → SO(3) defined as

R := I3 + 2q0[qv^×] + 2[qv^×]²   (3)

_Remark 2.3:_ R = R(q) = R(−q) for each q ∈ S³, i.e.,
the quaternions q and −q represent the same physical
attitude.

Denoting by ω = (ω1 ω2 ω3)^T the angular velocity vector of
the body coordinate frame B relative to the inertial coordinate
frame N, expressed in B, the kinematics equation is given by

(q̇0 ; q̇v) = (1/2)(−qv^T ; I3 q0 + [qv^×]) ω = (1/2) Ξ(q) ω   (4)

The attitude error is used to quantify the mismatch between
two attitudes. If q defines the current attitude quaternion and
qd the desired quaternion, i.e., the desired orientation, then
the error quaternion that represents the attitude error between
the current orientation and the desired one is given by

qe := qd^(−1) ∗ q = (qe0 qev^T)^T   (5)
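For illustration, the maps in (2) and (3) can be checked numerically with a short Python (NumPy) sketch of our own; it is not code from the paper:

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix [v^x] such that skew(v) @ w = np.cross(v, w)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def quat_from_axis_angle(beta, ev):
    """Unit quaternion of Eq. (2): q = (cos(beta/2), ev*sin(beta/2))."""
    return np.concatenate(([np.cos(beta / 2)], np.sin(beta / 2) * np.asarray(ev)))

def rotation_from_quat(q):
    """Rotation matrix of Eq. (3): R = I3 + 2*q0*[qv^x] + 2*[qv^x]^2."""
    q0, qv = q[0], q[1:]
    S = skew(qv)
    return np.eye(3) + 2 * q0 * S + 2 * S @ S

q = quat_from_axis_angle(np.pi / 3, [0.0, 0.0, 1.0])
R = rotation_from_quat(q)
assert np.allclose(R.T @ R, np.eye(3)) and np.isclose(np.linalg.det(R), 1.0)
assert np.allclose(rotation_from_quat(-q), R)   # Remark 2.3: R(q) = R(-q)
```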
Fig. 1: Schematic configuration of a VTOL vehicle in the 3D space.
where q^(−1) is the complementary rotation of the quaternion q, given by q^(−1) := (q0 −qv^T)^T, and (∗) denotes the quaternion multiplication.
III. ATTITUDE AND POSITION DYNAMICS OF THE VTOL
MULTI-AGENT SYSTEM
If a group of N VTOL vehicles is considered and each
aerial system is modeled as a rigid body, as in Fig. 3,
then, according to [19], the six-degrees-of-freedom model
(position and orientation) of the system can be separated into
translational and rotational motions, defined respectively by

ΣTi : { ṗi = vi ;  mi v̇i = −mi g + Ri (0 0 UTi)^T + ςi }   (6)

ΣRi : { q̇i = (1/2) Ξ(qi) ωi ;  Ji ω̇i = −ωi^× Ji ωi + Γi }   (7)
where i = 1, ..., N; pi and vi are the linear position and
velocity vectors, mi is the mass of each aerial system,
g is the gravity vector, R is the rotation matrix given in (3),
UTi is the total thrust, and ςi corresponds to an unknown
disturbance, bounded such that ∥ςi∥ ≤ ςmax. Besides,
Ji ∈ R^(3×3) is the inertia matrix of each rigid body expressed
in the body frame B, and Γi ∈ R³ is the vector of applied
torques. Γi depends on the (control) couples generated by the
actuators and on the aerodynamics, such as gyroscopic couples
or the gravity gradient.
Note that the rotation matrix R can also be defined according
to the Euler angles φ, θ, ψ, correspondingly referred to as the
roll, pitch and yaw angles:

R(φ, θ, ψ) =
[ CθCψ   SφSθCψ − CφSψ   CφSθCψ + SφSψ ]
[ CθSψ   SφSθSψ + CφCψ   CφSθSψ − SφCψ ]   (8)
[ −Sθ    CθSφ            CθCφ          ]

where S⋆ and C⋆ stand for sin(⋆) and cos(⋆), respectively.

IV. ATTITUDE AND POSITION CONTROL FOR THE VTOL MAS

The current section is divided into two parts. First, we
introduce the attitude control law that stabilizes the i-th agent’s
attitude, followed by the position control strategy to achieve
convergence to the leader and multi-agent formation.

_A. Attitude Stabilization Method_

The aim of this section is to present the design procedure of
an attitude control law which drives the aerial vehicles to attitude
stabilization, i.e., to the asymptotic conditions below:

qi → [±1 0 0 0]^T,  ωi → 0  as t → ∞   (9)

The angular velocity error of each aerial vehicle in terms of
quaternions is given by the next expression:

ωei = ωi − Ri ωdi   (10)

where ωi is the actual angular velocity of the system
and Ri is the rotation matrix given by (3). Then, by calculating
the time derivatives of the error quaternion given in (5) and of
the angular velocity error, the attitude error dynamics can be
written as

(q̇ei0 ; q̇eiv) = (1/2)(−qeiv^T ; I3 qei0 + [qeiv^×]) ωei   (11)

ω̇ei = −Ji^(−1) ωei^× Ji ωei + Ji^(−1) Γi   (12)

The design of the attitude control law consists of an integral
sliding-mode control, where the sliding surface is proposed
as follows:

si = Ji ωei + λi qeiv + Ki εi   (13)

where si ∈ R³, εi corresponds to the integral of the error
in terms of quaternions, and λi and Ki are constant positive
parameters. The time derivative of the previous equation is
given by

ṡi = Ji ω̇ei + λi q̇eiv + Ki qeiv   (14)

Substituting equation (12) into (14), the next expression is
obtained:

ṡi = λi q̇eiv + Ki qeiv + Ji(ωei^× Ri ωdi − Ri ω̇di) − ωi^× Ji ωi + Γi   (15)

Then, the control law, using the exponential reaching law
ṡ = −a sign(s) − b s with a, b > 0, is given by

Γi = −λi q̇eiv − Ki qeiv − Ji(ωei^× Ri ωdi − Ri ω̇di) + ωi^× Ji ωi − ai sign(si) − bi si   (16)
Finally, in order to reduce the chattering phenomenon, the sign function is replaced by the hyperbolic tangent function sign(s) ≈ tanh(s/α), with α a small constant that controls the shape of the function.
Proof: Let us consider the next candidate Lyapunov function, which is positive-definite:

V = (1/2) s_i^T s_i   (17)
By finding its time derivative and substituting (15) into it, it is possible to obtain

V̇ = s_i^T ( λ_i q̇_eiv + K_i q_eiv + J_i (ω_ei^× R_i ω_di − R_i ω̇_di) − ω_i^× J_i ω_i + Γ_i )   (18)

Then, by substituting the control law given in (16) into (18) and after some manipulations, the next expression is obtained

V̇ = s_i^T ( −a_i sign(s_i) − b_i s_i ) ≤ 0   ∀ t ≥ 0   (19)

which assures the asymptotic stability of the system subjected to the proposed control law.
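To illustrate how (10)–(16) fit together, here is a minimal Python sketch of the torque computation for one vehicle. This is our illustration, not the authors' implementation; the tanh replacement of the sign function is applied as discussed above, and all gains are scalars as in (13).

```python
import numpy as np

def skew(v):
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def attitude_smc(J, qe0, qev, eps, omega, omega_d, omega_d_dot, R,
                 lam, K, a, b, alpha):
    """Integral sliding-mode attitude torque for one vehicle, cf. (10)-(16).

    J: inertia matrix; (qe0, qev): error quaternion (5); eps: integral of qev;
    omega / omega_d / omega_d_dot: actual and desired angular velocities and
    the desired angular acceleration; R: rotation matrix (3); the remaining
    arguments are the positive gains of (13) and (16) plus the tanh slope alpha.
    """
    omega_e = omega - R @ omega_d                                # error (10)
    s = J @ omega_e + lam * qev + K * eps                        # surface (13)
    qev_dot = 0.5 * (qe0 * np.eye(3) + skew(qev)) @ omega_e      # from (11)
    sat = np.tanh(s / alpha)                                     # smoothed sign
    return (-lam * qev_dot - K * qev
            - J @ (skew(omega_e) @ (R @ omega_d) - R @ omega_d_dot)
            + skew(omega) @ (J @ omega)
            - a * sat - b * s)                                   # torque (16)
```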
_B. Position Control Strategy for the VTOL Multi-Agent System_
The control strategy proposed herein for a set of VTOL-UAVs is intended to deal with the consensus problem. In other words, considering a virtual leader, the i-th follower must perform leader-following consensus as follows

lim_{t→∞} (p_i − p_0) = 0   (20)
where pi and p0 are the position vectors of the ith follower
and the virtual leader, respectively.
Let the linear position dynamics of each aerial vehicle in the multi-agent system, expressed by (6), be rewritten as:

ṗ_xi = v_xi,  ṗ_yi = v_yi,  ṗ_zi = v_zi   (21)

v̇_xi = (U_Ti / m_i)(C_ψi S_θi C_φi + S_ψi S_φi) + ς_xi
v̇_yi = (U_Ti / m_i)(S_ψi S_θi C_φi − C_ψi S_φi) + ς_yi   (22)
v̇_zi = (U_Ti / m_i) C_θi C_φi − g + ς_zi
For control purposes, let the virtual control inputs be defined as follows

V_xi = (U_Ti / m_i)(C_ψi S_θi C_φi + S_ψi S_φi)
V_yi = (U_Ti / m_i)(S_ψi S_θi C_φi − C_ψi S_φi)   (23)
V_zi = (U_Ti / m_i) C_θi C_φi − g

Hence, the desired Euler angles (θ_di, φ_di) and the total thrust U_Ti can be obtained as

U_Ti = m_i √( V_xi² + V_yi² + (V_zi + g)² )
φ_di = arctan( C_θdi (V_xi S_ψdi − V_yi C_ψdi) / (V_zi + g) )   (24)
θ_di = arctan( (V_xi C_ψdi + V_yi S_ψdi) / (V_zi + g) )
Thus, it follows that the representation of the system in (22) can be expressed as that of a disturbed system of the form:

ṗ_i(t) = v_i(t),  v̇_i(t) = u_i(t) + ς_i(t)   (25)

where u_i(t) is the control input and ς_i(t) corresponds to the external disturbance.
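The extraction step (24) is purely algebraic; a minimal sketch (ours, assuming the reconstruction of (23)–(24) above) recovers the total thrust and the desired roll and pitch angles from the virtual inputs and the desired yaw.

```python
import numpy as np

def thrust_and_attitude(V, psi_d, m, g=9.81):
    """Total thrust U_T and desired angles (phi_d, theta_d) from the
    virtual inputs V = (Vx, Vy, Vz), cf. (24)."""
    Vx, Vy, Vz = V
    U_T = m * np.sqrt(Vx**2 + Vy**2 + (Vz + g)**2)
    theta_d = np.arctan((Vx * np.cos(psi_d) + Vy * np.sin(psi_d)) / (Vz + g))
    phi_d = np.arctan(np.cos(theta_d)
                      * (Vx * np.sin(psi_d) - Vy * np.cos(psi_d)) / (Vz + g))
    return U_T, phi_d, theta_d
```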
Now, let us define the lumped tracking errors for the i-th aerial vehicle as
e_pi(t) = Σ_{j=1}^{N} a_ij (p_si(t) − p_sj(t)) + b_i (p_si(t) − p_0(t))
e_vi(t) = Σ_{j=1}^{N} a_ij (v_si(t) − v_sj(t)) + b_i (v_si(t) − ṗ_0(t))   (26)
The compact form of the lumped tracking error is given as

e_p(t) = (L + B) ⊗ I_3 p̄(t),  e_v(t) = (L + B) ⊗ I_3 v̄(t)   (27)

where e_p(t) = [e_p1^T(t), ..., e_pN^T(t)]^T, e_v(t) = [e_v1^T(t), ..., e_vN^T(t)]^T, p̄(t) = p(t) − 1_{N×1} ⊗ p_0(t), v̄(t) = v(t) − 1_{N×1} ⊗ ṗ_0(t), p(t) = [p_1^T(t), ..., p_N^T(t)]^T, v(t) = [v_1^T(t), ..., v_N^T(t)]^T, u(t) = [u_1^T(t), ..., u_N^T(t)]^T, ς(t) = [ς_1^T(t), ..., ς_N^T(t)]^T, and the term ⊗ denotes the Kronecker product.
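Numerically, the compact errors (27) are a single Kronecker product away from the stacked states; the sketch below (our illustration, with follower states stacked row-wise in (N, 3) arrays) evaluates them directly.

```python
import numpy as np

def lumped_errors(L, B, p, v, p0, p0_dot):
    """Compact tracking errors (27): e_p = ((L+B) (x) I3) pbar, and
    analogously e_v; p, v are (N, 3), p0, p0_dot are (3,)."""
    HI = np.kron(L + B, np.eye(3))     # (L + B) (x) I3
    pbar = (p - p0).reshape(-1)        # p(t) - 1_N (x) p0(t), stacked
    vbar = (v - p0_dot).reshape(-1)
    return HI @ pbar, HI @ vbar
```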
Then, the time derivative of (27) can be further expressed by

ė_p = e_v
ė_v = H ⊗ I_3 · (u(t) + ς(t) − 1_N ⊗ p̈_0(t))   (28)
In order to meet the consensus control requirements for the VTOL-UAVs, a sliding surface is proposed as

S_i(t) = e_vi(t) + λ_i e_pi(t)   (29)

where λ_i = diag(λ_ix, λ_iy, λ_iz) is a matrix of control gains, and where λ_i > 0. Let S = [S_1^T, ..., S_N^T]^T; then the compact form of (29) is given as

S(t) = e_v(t) + λ e_p(t)   (30)
According to [20], a sliding-mode control law consisting of u_i(t) = u_0i(t) + u_wi(t), where u_0i(t) takes care of the nominal part of the system and u_wi(t) deals with the external disturbances such that S_i Ṡ_i < 0, can be designed. Let the control input be given by

u_i(t) = (l_ii + b_i)^{−1} ( Σ_{j=1}^{N} a_ij u_j(t) + b_i p̈_0(t) − λ_i e_vi(t) − Π_i sign(S_i(t)) )   (31)

where Π_i = diag(γ_ix, γ_iy, γ_iz) is a matrix of adjustable control gains, and where γ_i > 0. Assuming that there exists a gain Π_i^d = diag(γ_ix^d, γ_iy^d, γ_iz^d), the terminal solution for Π_i, and letting ς_max be the vector of lumped uncertainties, bounded as Π_i^d > |ς|, the adaptive law to achieve Π_i^d can be expressed as

Π̇_i = ϱ^{−1} |S_i(t)|   (32)

with ϱ = diag(ρ_ix, ρ_iy, ρ_iz) a matrix of adaptive gains, also defining the adaptation speed, all subject to ρ_i > 0. Then, the compact form of (31) can be expressed by

u(t) = H^{−1} ⊗ I_3 (b ⊗ p̈_0(t) − λ e_v(t) − Π sign(S(t)))   (33)

where Π = [γ_1^T, ..., γ_N^T]^T and S(t) = [S_1^T(t), ..., S_N^T(t)]^T.
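Per agent, the distributed law (31) with the adaptive update (32) reduces to a few lines; the sketch below is our illustration (scalar gains per agent for brevity, a forward-Euler step for (32), and neighbors' inputs taken from the previous iteration).

```python
import numpy as np

def consensus_smc_step(i, A, b, u, p0_ddot, e_p, e_v, lam, Pi, rho, dt):
    """One evaluation of (29), (31) and the adaptive law (32) for agent i.

    A: adjacency matrix; b: leader pinning gains; u: (N, 3) array of the
    agents' latest control inputs; e_p, e_v: (N, 3) lumped errors (26);
    Pi may be a scalar or a length-3 vector of adaptive gains."""
    S_i = e_v[i] + lam * e_p[i]                      # sliding surface (29)
    l_ii = A[i].sum()                                # Laplacian diagonal entry
    u_i = (A[i] @ u + b[i] * p0_ddot
           - lam * e_v[i] - Pi * np.sign(S_i)) / (l_ii + b[i])   # law (31)
    Pi_new = Pi + dt * np.abs(S_i) / rho             # adaptive law (32)
    return u_i, Pi_new
```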
The interest in the usage of event-driven systems is due
to good performance in applications where resources are
constrained. In multi-robot systems connected over a shared
network, where rapid exchange of information is performed
between agents, resources like bandwidth and processor times
are constrained. Then, the event-based control is expected
to offer better results. In this regard, the event-based control
signals are updated only when a specific condition is satisfied,
i.e. an event occurs. In consequence, network traffic is reduced or power consumption is minimized. With this in mind, the control law u_i(t) given in (31) is modified in such a way that, ∀ t ∈ [t_k, t_{k+1}),
u_i(t) = (l_ii + b_i)^{−1} ( Σ_{j=1}^{N} a_ij u_j(t_k) + b_i p̈_0(t_k) − λ_i e_vi(t_k) − Π_i sign(S_i(t_k)) )   (34)
Then, the errors introduced due to the discretization of the control are given by

ϵ̄_p(t) = p(t_k) − p(t)   (35)
ϵ̄_v(t) = v(t_k) − v(t)   (36)

such that at t_k, ϵ̄(t) = 0. Note that t_k^i corresponds to the triggering instant of the i-th agent. Then, ϵ̄_vi(t) and ϵ̄_v0(t) denote the discretization errors of the agents and the leader, respectively. From (26),

e_pi(t_k) = Σ_{j=1}^{N} a_ij (p_si(t_k) − p_sj(t_k)) + b_i (p_si(t_k) − p_0(t_k))
e_vi(t_k) = Σ_{j=1}^{N} a_ij (v_si(t_k) − v_sj(t_k)) + b_i (v_si(t_k) − ṗ_0(t_k))   (37)

Theorem 4.1: Considering the system described by (22) and (25), with error variables (26) and (35)–(37), the sliding manifold S(t) in the notions of sliding mode, and the control law (34):
• The reachability of the sliding surface is confirmed for some reachability constant κ > 0.
• The event-based sliding mode control law (34) provides stability in the sense of Lyapunov if the adaptive gain Π_i accomplishes

Π_i > sup{ Υ + L̄∥ϵ̄_vi(t)∥ − L̄∥ϵ̄_v0(t)∥ + L̄∥ē_v(t_k)∥ − L̄∥e_v(t_k)∥ }   (38)

where Υ = ς_max − H p̈_0(t) + p̈_0(t_k).

Proof: Let a candidate Lyapunov function be given by:

V = (1/2) S(t)^T S(t) + (1/2) Π̃^T ϱ Π̃   (39)

where the adaptation error is defined as Π̃ = Π − Π^d. From (39), the time derivative of V is obtained as follows:

V̇ = S^T(t) ( H ⊗ I_3 (u(t) + ς(t) − 1_N ⊗ p̈_0) + λ e_v(t) ) + Π̃^T ϱ Π̇̃   (40)
Then, by introducing the control law (34) in its compact form, the next expression is obtained

V̇ = S^T(t) [ H ⊗ I_3 ( H^{−1} ⊗ I_3 (b ⊗ p̈_0(t_k) − λ e_v(t_k) − Π sign(S(t_k))) + ς(t) − 1_N ⊗ p̈_0(t) + λ e_v(t) ) ] + Π̃^T ϱ Π̇̃
  = S^T(t) [ b ⊗ p̈_0(t_k) − λ e_v(t_k) − Π sign(S(t_k)) + H ⊗ I_3 ( ς(t) − 1_N ⊗ p̈_0(t) + λ e_v(t) ) ] + S^T(t) [ Π sign(S(t_k)) − Π^d sign(S(t_k)) ]
  = S^T(t) [ b ⊗ p̈_0(t_k) − λ e_v(t_k) − Π^d sign(S(t_k)) + H ⊗ I_3 ( ς(t) − 1_N ⊗ p̈_0(t) + λ e_v(t) ) ]
  ≤ S^T(t) [ b ⊗ p̈_0(t_k) − λ e_v(t_k) − Π^d sign(S(t_k)) + H ⊗ I_3 ( ς_max(t) − 1_N ⊗ p̈_0(t) + λ e_v(t) ) ]
  ≤ S^T(t) [ Υ − λ e_v(t_k) − Π^d sign(S(t_k)) + H ⊗ I_3 λ e_v(t) ]
  ≤ S^T(t) [ Υ − λ e_v(t_k) − Π^d sign(S(t_k)) + H ⊗ I_3 e_v(t) ]
  = S^T(t) [ Υ − λ e_v(t_k) − Π^d sign(S(t_k)) + H ⊗ I_3 ( ϵ̄_vi(t) − ϵ̄_v0(t) + ē_v(t_k) ) ]

Then, by applying the well-known Lipschitz continuity condition, the next expression can be obtained:

V̇ ≤ S^T(t) [ Υ − λ e_v(t_k) − Π^d sign(S(t_k)) + H ⊗ I_3 ( L̄ ϵ̄_vi(t) − L̄ ϵ̄_v0(t) + L̄ ē_v(t_k) ) ]
  ≤ S^T(t) [ Υ − Π^d sign(S(t_k)) − L̄∥e_v(t_k)∥ + H ⊗ I_3 ( L̄∥ϵ̄_vi(t)∥ − L̄∥ϵ̄_v0(t)∥ + ∥L̄ ē_v(t_k)∥ ) ]   (41)

As long as S(t) > 0 or S(t) < 0, the condition sign(S(t)) = sign(S(t_k)) is verified ∀ t ∈ [t_k, t_{k+1}). Then, when the trajectories are outside the sliding manifold, (41) can be expressed as

V̇ ≤ ∥S^T(t)∥ ( Υ − Π^d |S(t_k)| − L̄∥e_v(t_k)∥ + ∥H ⊗ I_3∥ ( L̄∥ϵ̄_vi(t)∥ − L̄∥ϵ̄_v0(t)∥ + ∥L̄ ē_v(t_k)∥ ) )
⇒ V̇ ≤ −κ ∥S^T(t)∥ = −κ ∥S(t)∥   (42)

where κ > 0 and Π_i > sup{ Υ + L̄∥ϵ̄_vi(t)∥ − L̄∥ϵ̄_v0(t)∥ + L̄∥ē_v(t_k)∥ − L̄∥e_v(t_k)∥ }. It follows that the sliding manifold works as an attractor and the state trajectories converge towards it ∀ t ∈ [t_k, t_{k+1}), which completes the proof of reachability.
The rest of the proof is not presented here, but it can be obtained following a similar procedure to that of the seminal work [17].
The time t_k at which an event is triggered is described by a trigger mechanism. In other words, as long as the criterion established by the trigger mechanism is respected, the next event is not triggered and the control signal keeps its previous constant value.
Corollary 4.2: Consider the group of mini aerial vehicles described by (21), with the control law (31). Assume the trigger mechanism is expressed as follows

ξ = ∥ν_1 e_pi + ν_2 e_vi∥ − (r_0 + r_1 e^{−ϕt})   (43)

with ν_1 > 0, ν_2 > 0, r_0 ≥ 0, r_1 ≥ 0, r_0 + r_1 > 0 and ϕ ∈ (0, λ_2(L)), where λ_2(L) is the second eigenvalue of L when all its eigenvalues are arranged in ascending order. Then, the trigger mechanism verifies the desired closed-loop behavior, taking into account the error and its change rate [17].
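In simulation, (43) is evaluated between sampling instants; the following sketch is our reading of the mechanism (the paper gives the function ξ but not the firing inequality explicitly, so the condition ξ ≥ 0 below is an assumption).

```python
import numpy as np

def event_triggered(e_p, e_v, t, nu1, nu2, r0, r1, phi):
    """Trigger function (43) for one agent: fire a control update when the
    weighted error leaves the shrinking threshold r0 + r1*exp(-phi*t)."""
    xi = np.linalg.norm(nu1 * e_p + nu2 * e_v) - (r0 + r1 * np.exp(-phi * t))
    return xi >= 0.0
```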
Remark 4.3: The control law (31) drives the errors between the followers and the leader to zero. However, if the consensus is extended to formation control, with Λ a feasible formation such that Λ = [µ_ij ∈ R | µ_ij > 0; i, j = 1, ..., N], then the tracking errors (26) can be rewritten as
e_pi(t) = Σ_{j=1}^{N} a_ij (p_si(t) − p_sj(t) − µ_ij) + b_i (p_si(t) − p_0(t) − µ_i)
e_vi(t) = Σ_{j=1}^{N} a_ij (v_si(t) − v_sj(t)) + b_i (v_si(t) − ṗ_0(t))   (44)

where µ_ij = ∥χ_i − χ_j∥ and µ_i = ∥χ_i − χ_0∥ describe the inter-agent and leader-follower distances, and where χ_1, ..., χ_N ∈ R^3 are desired points.
An overview of the entire closed-loop system is depicted in Fig. 2.

Fig. 2: Block diagram of the system.

V. SIMULATION RESULTS

This section is devoted to the presentation of numerical simulation results to validate the proposed control strategy for a group of five VTOL aerial vehicles. The set of simulations was performed using the Matlab/Simulink® environment.
_A. Simulation Scenario_
The simulation model features the parameters depicted in
Table I for each VTOL vehicle. Besides, for the case of study
| System | Description | Value | Units |
| Quadcopter | Mass (m) | 650 | g |
| | Distance (d) | 17 | cm |
| | Inertial moment x (J_φ) | 0.0075 | kg·m² |
| | Inertial moment y (J_θ) | 0.0075 | kg·m² |
| | Inertial moment z (J_ψ) | 0.013 | kg·m² |

TABLE I: Physical parameters for the VTOL vehicle
presented in this work, five aerial vehicles are considered (N = 5). The virtual leader (agent 0) shares with its neighbors the information related to the desired position or trajectory. The communication topology used for information exchange between the agents is shown in Fig. 3, where a directed configuration can be remarked. Besides, it can be seen that the information of the leader is acquired by all the agents in the system.
Fig. 3: Multi-VTOL system and communication flow.
The corresponding adjacency matrix A of the graph G, the incidence matrix B describing the connection of the leader with the neighbors, and the matrix H = L + B, corresponding to the closed-loop system, are given respectively as:
A =
| 0 0 0 0 0 |
| 0 0 0 0 0 |
| 1 0 0 0 0 |
| 0 1 0 0 0 |
| 1 0 1 0 0 |
,  B =
| 1 0 0 0 0 |
| 0 1 0 0 0 |
| 0 0 0 0 0 |
| 0 0 0 0 0 |
| 0 0 0 0 0 |   (45)

H =
| 1 0 0 0 0 |
| 0 1 0 0 0 |
| −1 0 1 0 0 |
| 0 −1 0 1 0 |
| −1 0 −1 0 1 |   (46)

The eigenvalues of the matrix H are 1, 1, 1, 1 and 1, i.e. the eigenvalue 1 with multiplicity 5. It is important to say that none of the eigenvalues is 0; hence, the matrix H has full rank and there exists at least one spanning tree in the topology of Fig. 3.
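The full-rank claim is easy to verify numerically; the snippet below (our illustration) takes H exactly as printed in (46).

```python
import numpy as np

# Closed-loop topology matrix H = L + B as printed in (46).
H = np.array([[ 1,  0,  0, 0, 0],
              [ 0,  1,  0, 0, 0],
              [-1,  0,  1, 0, 0],
              [ 0, -1,  0, 1, 0],
              [-1,  0, -1, 0, 1]], dtype=float)
# H is lower triangular, so its eigenvalues are the diagonal entries (all 1);
# none of them is 0, hence H has full rank.
print(np.linalg.eigvals(H))        # -> [1. 1. 1. 1. 1.]
print(np.linalg.matrix_rank(H))    # -> 5
```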
The control and event function parameters used for the simulation can be found in Table II.

| Description | Parameter | Value |
| Attitude controller | λ_φ,θ | 2 |
| | λ_ψ | 1.5 |
| | K_φ,θ,ψ | 0.01 |
| | a_φ,θ | 4.5 |
| | a_ψ | 15 |
| | b_φ,θ,ψ | 2 |
| Position controller | λ_ix,iy | 3 |
| | λ_iz | 4 |
| | ρ_ix,iy | 2 |
| | ρ_iz | 2.2 |
| Trigger mechanism | ν_1i | 1 |
| | ν_2i | 0.5 |
| | r_0i | 0.005 |
| | r_1i | 0.015 |
| | ϕ | 0.2 |

TABLE II: Numerical values for control laws and event function
Fig. 4: Linear positions of the aerial vehicles during the consensus.
For the simulations, two scenarios were considered:
• The behavior for the consensus of the group of multi-rotor aerial vehicles without the influence of a disturbance is studied. First, the multi-robot system is initialized at the orientations and 3D positions given in Table III. Then, the set of vehicles follows the virtual leader to the desired position given as p_0 = [0 0 1]^T m. After that, when the system is stabilized at time t = 20 s, the virtual leader performs a trajectory described as p_0 = [2 sin(2πt/16) 2 cos(2πt/16) 3]^T m.

| VTOL MAS | ψ_i | p_xi | p_yi | p_zi |
| 1 | 1° | 0.05 m | 1.25 m | 0.01 m |
| 2 | 3° | −0.95 m | 0.85 m | 0.01 m |
| 3 | 2° | 0.55 m | −1.3 m | 0.01 m |
| 4 | −1° | −1 m | −1.22 m | 0.01 m |
| 5 | −1° | −0.1 m | −1.15 m | 0.01 m |

TABLE III: Initial conditions for the system
• The behavior of the multi-agent system for the formation control under the influence of an unknown and time-varying disturbance is addressed. Indeed, the desired positions and trajectories given by the virtual leader, as well as the initial conditions for the system, are the same as in the first scenario; however, the multi-robot system performs formation control, where the positions of the agents are intended to form a pentagon on the x−y plane with a distance of 1.5 m between each vertex. The time-varying disturbance is described by ς_i = [0.4 sin(0.1πt) 0.2 cos(0.2πt) 0.1 sin(0.15πt)]^T N and is present during the entire simulation. The simulation for both scenarios runs for 60 s.

Fig. 5: Linear errors of the aerial vehicles during the consensus.
Fig. 6: Evolution of the events vs. continuous-time during the consensus.
Fig. 7: Linear velocities of the aerial vehicles during the consensus.
Fig. 8: Velocity errors of the aerial vehicles during the consensus.
_B. Simulation Results_
1) First Scenario: Fig. 4 depicts the linear positions of the multi-agent system during the consensus. A numerical zoom was performed for the x and y axes during the first 4 s of the simulation, showing the consensus convergence in finite time. Fig. 5 shows the error profile of the follower agents. As with the first curves, a numerical zoom over the first seconds of simulation was performed to give a better perspective on how quickly the error converges to zero. This convergence to zero shows the desirable closed-loop dynamics of the system and proves the effectiveness of the proposed control strategy. Fig. 7 and Fig. 8 show the linear velocities and the corresponding velocity errors of the different aerial vehicles during the simulation. The obtained results confirm the consensus convergence to the leader in terms of velocity. As before, numerical zooms were implemented to show the behavior of the multi-agent system in more detail. Finally, Fig. 6 shows how the events are triggered during the simulation. Using the triggering function given in (43), a minimal number of controller updates is expected, and consequently control effort is required only when necessary. The behavior of the events is clearly nonlinear, and from the results we can see that the number of updates increases during the trajectory-tracking phase.
2) Second Scenario: Fig. 9 depicts the behavior of the multi-VTOL system in the 3D space for the formation control scenario. Fig. 10 shows the linear positions for each VTOL vehicle. The unknown and time-varying disturbance (previously described) acts over the system and, as one can see, behaves as a matched disturbance, since the positions of the agents correspond to the expected ones, i.e. the trajectories are not affected. However, from Fig. 11 one can observe that the number of updates is slightly greater compared to the scenario where no disturbance is present. A comparison of the number of updates, with and without the disturbance affecting the system, is presented in Table IV.

| VTOL agent | 1 | 2 | 3 | 4 | 5 |
| Updates without disturbance | 3552 | 3591 | 4040 | 4099 | 4055 |
| Updates with disturbance | 3702 | 3637 | 4322 | 4284 | 4190 |

TABLE IV: Control updates for scenarios 1 and 2 under the control law (34)

Fig. 9: Behavior of the multi-agent VTOL system in the 3D space.
Fig. 10: Linear positions of the aerial vehicles during the formation.
Fig. 11: Evolution of the events vs. continuous-time during the formation.
VI. CONCLUSIONS
In this study, the consensus problem and formation control of a group of VTOL-UAVs have been addressed by means of a distributed and adaptive event-based sliding-mode control law.
By integrating the robustness of the SMC with the benefits of the event-based scheme, good closed-loop performance and a low computational load were achieved. Due to the underactuated
nature of the aerial vehicles, an inner-outer control loop
methodology was implemented. The proposed attitude and
multi-agent control laws were validated through stability
analysis and numerical simulations. The simulations show
that, even under the influence of unknown disturbances, the
control law allows practical convergence to consensus or
formation.
As future work, the design of an obstacle avoidance algorithm
as well as the experimental implementation of the proposed
strategies will be performed.
REFERENCES

[1] W. Huang, Y. Huang, and S. Chen, "Robust consensus control for a class of second-order multi-agent systems with uncertain topology and disturbances", Neurocomputing, vol. 313, pp. 426-435, 2018.
[2] V. P. Tran, M. Garratt, and I. R. Petersen, "Time-varying formation control of a collaborative multi-agent system using negative-imaginary systems theory", arXiv preprint arXiv:1811.06206, 2018.
[3] X. Dong, Y. Hua, Y. Zhou, Z. Ren, and Y. Zhong, "Theory and experiment on formation-containment control of multiple multirotor unmanned aerial vehicle systems", IEEE Transactions on Automation Science and Engineering, no. 99, pp. 1-12, 2018.
[4] N. A. Lynch, Distributed Algorithms, Morgan Kaufmann, 1996.
[5] V. Borkar and P. Varaiya, "Asymptotic agreement in distributed estimation", IEEE Transactions on Automatic Control, vol. 27, no. 3, pp. 650-655, June 1982.
[6] J. N. Tsitsiklis, "Problems in decentralized decision making and computation", Massachusetts Institute of Technology, Cambridge, Laboratory for Information and Decision Systems, 1984.
[7] Y. Liu, J. M. Montenbruck, D. Zelazo, M. Odelga, S. Rajappa, H. H. Bülthoff, and A. Zell, "A distributed control approach to formation balancing and maneuvering of multiple multirotor UAVs", IEEE Transactions on Robotics, vol. 34, no. 4, pp. 870-882, 2018.
[8] A. Abdessameud, "Formation control of VTOL-UAVs under directed and dynamically-changing topologies", 2019 American Control Conference, Philadelphia, PA, USA, 2019, pp. 2042-2047.
[9] G. S. Seyboth, D. V. Dimarogonas, and K. H. Johansson, "Event-based broadcasting for multi-agent average consensus", Automatica, vol. 49, pp. 245-252, 2013.
[10] C. Nowzari and J. Cortés, "Team-triggered coordination for real-time control of networked cyber-physical systems", IEEE Transactions on Automatic Control, vol. 61, pp. 34-47, 2016.
[11] Y. Dapeng, R. Wei, L. Xiangdong, and C. Weisheng, "Decentralized event-triggered consensus for linear multi-agent systems under general directed graphs", Automatica, vol. 69, pp. 242-249, 2016.
[12] Z. Cai, H. Zhou, J. Zhao, K. Wu, and Y. Wang, "Formation control of multiple unmanned aerial vehicles by event-triggered distributed model predictive control", IEEE Access, vol. 6, 2018.
[13] J. Guerrero-Castellanos, A. Vega-Alonzo, S. Durand, N. Marchand, V. R. González-Díaz, J. Castañeda-Camacho, and W. F. Guerrero-Sánchez, "Leader-following consensus and formation control of VTOL-UAVs with event-triggered communications", Sensors, vol. 19, no. 24, p. 5498, 2019.
[14] J. J. Castillo-Zamora, K. A. Camarillo-Gómez, G. I. Pérez-Soto, and J. Rodríguez-Reséndiz, "Comparison of PD, PID and sliding-mode position controllers for V-tail quadcopter stability", IEEE Access, vol. 6, pp. 38086-38096, 2018.
[15] J. J. Castillo-Zamora, J. Escareño, I. Boussaada, O. Labbani, and K. Camarillo, "Modeling and control of an aerial multi-cargo system: Robust acquiring and transport operations", 2019 18th European Control Conference (ECC), Naples, Italy, 2019, pp. 1708-1713.
[16] W. Zhengyang, F. Qing, and W. Bo, "Distributed adaptive sliding mode formation control for multiple unmanned aerial vehicles", 2020 Chinese Control and Decision Conference (CCDC), 2020, pp. 2105-2110.
[17] R. K. Mishra and A. Sinha, "Event-triggered sliding mode based consensus tracking in second order heterogeneous nonlinear multi-agent systems", European Journal of Control, vol. 45, pp. 30-44, 2019.
[18] Y. Deyin, L. Hongyi, L. Renquan, and S. Yang, "Distributed sliding-mode tracking control of second-order nonlinear multiagent systems: An event-triggered approach", IEEE Transactions on Cybernetics, vol. 50, pp. 3892-3902, 2020.
[19] J. F. Guerrero-Castellanos, N. Marchand, A. Hably, S. Lesecq, and J. Delamare, "Bounded attitude control of rigid bodies: Real-time experimentation to a quadrotor mini-helicopter", Control Engineering Practice, vol. 19, no. 8, pp. 790-797, 2011.
[20] H. Ying-Jeh, K. Tzu-Chun, and C. Shin-Hung, "Adaptive sliding-mode control for nonlinear systems with uncertain parameters", IEEE Transactions on Systems, Man, and Cybernetics, vol. 38, pp. 534-539, 2008.
| 14,591
|
{
"disclaimer": "Notice: The following paper fields have been elided by the publisher: {'references'}. Paper or abstract available at https://api.unpaywall.org/v2/10.1109/ICUAS51884.2021.9476730?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1109/ICUAS51884.2021.9476730, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": null,
"status": "GREEN",
"url": "https://hal.archives-ouvertes.fr/hal-03351323/file/ICUAS_Extia_final.pdf"
}
| 2,021
|
[
"Conference"
] | true
| 2021-06-15T00:00:00
|
[] | 14,591
|
en
|
[
{
"category": "Business",
"source": "external"
},
{
"category": "Environmental Science",
"source": "s2-fos-model"
},
{
"category": "Business",
"source": "s2-fos-model"
},
{
"category": "Agricultural and Food Sciences",
"source": "s2-fos-model"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/00be59bbc5253ed1fe31189b3113a50b7adc7232
|
[
"Business"
] | 0.872574
|
Distributed Ledger Technology Applications in Food Supply Chains: A Review of Challenges and Future Research Directions
|
00be59bbc5253ed1fe31189b3113a50b7adc7232
|
Sustainability
|
[
{
"authorId": "1381977443",
"name": "Jamilya Nurgazina"
},
{
"authorId": "2263925",
"name": "Udsanee Pakdeetrakulwong"
},
{
"authorId": "145140500",
"name": "T. Moser"
},
{
"authorId": "47939696",
"name": "G. Reiner"
}
] |
{
"alternate_issns": null,
"alternate_names": null,
"alternate_urls": [
"http://mdpi.com/journal/sustainability",
"http://nbn-resolving.de/urn/resolver.pl?urn=urn:nbn:ch:bel-172127"
],
"id": "8775599f-4f9a-45f0-900e-7f4de68e6843",
"issn": "2071-1050",
"name": "Sustainability",
"type": "journal",
"url": "http://www.e-helvetica.nb.admin.ch/directAccess?callnumber=bel-172127"
}
|
The lack of transparency and traceability in food supply chains (FSCs) is raising concerns among consumers and stakeholders about food information credibility, food quality, and safety. Insufficient records, a lack of digitalization and standardization of processes, and information exchange are some of the most critical challenges, which can be tackled with disruptive technologies, such as the Internet of Things (IoT), blockchain, and distributed ledger technologies (DLTs). Studies provide evidence that novel technological and sustainable practices in FSCs are necessary. This paper aims to describe current practical applications of DLTs and IoT in FSCs, investigating the challenges of implementation, and potentials for future research directions, thus contributing to achievement of the United Nations’ Sustainable Development Goals (SDGs). Within a systematic literature review, the content of 69 academic publications was analyzed, describing aspects of implementation and measures to address the challenges of scalability, security, and privacy of DLT, and IoT solutions. The challenges of high costs, standardization, regulation, interoperability, and energy consumption of DLT solutions were also classified as highly relevant, but were not widely addressed in literature. The application of DLTs in FSCs can potentially contribute to 6 strategic SDGs, providing synergies and possibilities for more sustainable, traceable, and transparent FSCs.
|
## sustainability
_Review_
# Distributed Ledger Technology Applications in Food Supply Chains: A Review of Challenges and Future Research Directions
**Jamilya Nurgazina** **[1,]*** **, Udsanee Pakdeetrakulwong** **[2,]*** **, Thomas Moser** **[1]** **and Gerald Reiner** **[3]**
1 Department Media and Digital Technologies, St. Pölten University of Applied Sciences,
3100 St. Pölten, Austria; [email protected]
2 Software Engineering Department, Nakhon Pathom Rajabhat University, Nakhon Pathom 73000, Thailand
3 Department of Information Systems and Operations Management, Vienna University of Economics
and Business, 1020 Vienna, Austria; [email protected]
***** Correspondence: [email protected] (J.N.); [email protected] (U.P.)
Citation: Nurgazina, J.; Pakdeetrakulwong, U.; Moser, T.; Reiner, G. Distributed Ledger Technology Applications in Food Supply Chains: A Review of Challenges and Future Research Directions. Sustainability 2021, 13, 4206. https://doi.org/10.3390/su13084206

Academic Editors: Caterina Tricase, Angela Tarabella and Pasquale Giungato

Received: 14 March 2021; Accepted: 8 April 2021; Published: 9 April 2021

Publisher's Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Copyright: © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
**Abstract: The lack of transparency and traceability in food supply chains (FSCs) is raising concerns**
among consumers and stakeholders about food information credibility, food quality, and safety.
Insufficient records, a lack of digitalization and standardization of processes, and information
exchange are some of the most critical challenges, which can be tackled with disruptive technologies,
such as the Internet of Things (IoT), blockchain, and distributed ledger technologies (DLTs). Studies
provide evidence that novel technological and sustainable practices in FSCs are necessary. This paper
aims to describe current practical applications of DLTs and IoT in FSCs, investigating the challenges
of implementation, and potentials for future research directions, thus contributing to achievement of
the United Nations’ Sustainable Development Goals (SDGs). Within a systematic literature review,
the content of 69 academic publications was analyzed, describing aspects of implementation and
measures to address the challenges of scalability, security, and privacy of DLT, and IoT solutions. The
challenges of high costs, standardization, regulation, interoperability, and energy consumption of
DLT solutions were also classified as highly relevant, but were not widely addressed in literature.
The application of DLTs in FSCs can potentially contribute to 6 strategic SDGs, providing synergies
and possibilities for more sustainable, traceable, and transparent FSCs.
**Keywords: distributed ledger technology; Internet of Things; food supply chain; blockchain; sustainability; IoT; review**
**1. Introduction**
Food path traceability and food information credibility are the critical aspects in
agricultural and food supply chains (FSCs) [1–5]. Complex supply chain networks are
comprised of numerous intermediaries, who are often reluctant to share traceability information [4], contributing to a lack of transparency, digitalization, and supporting systems [1].
Various risk factors can influence food quality and safety, such as various hazardous compounds included in stages of packaging, production, processing, or storage, which can
impose serious health risks to consumers [6]. Product quality at each stage in the supply chain depends on the quality of the prior stages and hence the quality of the final
product depends on the proper traceability practices across the entire supply chain [5,6].
Implementation of automatic systems for data capture are costly and diversity of the
systems makes it hard to implement them in practice [2,3]. However, food trade globalization [3] forces stakeholders in supply chains, e.g., farmers, manufacturers, retailers, and
distributors, to adopt traceability standards [2,4], which imposes even more difficulties
for small-scale producers and farmers [1]. This brings another critical challenge in terms
of standardization of processes, data, and information exchange among stakeholders in
supply chains [2–4], as well as digitalization barriers. A lack of digitalization leads to
processes and paperwork done manually resulting in human error [7], a lack of available
records, slow-tracing, and difficulties in retrieving information and sorting products [1].
Food scandals, food fraud [4,7], and food contamination incidents [1–3] lead to rising
concerns regarding food quality, safety, and information credibility among consumers
and stakeholders [1–3]. Hence, the implementation of digital technologies is becoming a
necessity and a competitive advantage [8,9] to sustain operations in the market, to decrease
various supply chain risks [1,2,7], and to regain public confidence in food safety, food
security, and quality [3,7,9,10]. There is a rising trend of digitalization in the food industry
and FSCs with integration of technologies, such as the Internet of Things (IoT), blockchain,
and distributed ledger technologies (DLTs) [8,10]. In particular, there is an increased need
of system management solutions for IoT-integrated blockchain systems for transparency,
security, and traceability of FSCs [1,2,4,7,10].
Sensor technologies, such as IoT and cyber-physical systems (CPS) have been widely
integrated in FSCs to preserve logistics monitoring, product quality tracking and process
control [1,11], and to ensure data-driven decision making [12]. Sensors capture and store
critical food data, such as food conditions, location history, and product life cycle, thereby
improving storage management, stockpiling and allocation prioritization, thus preventing
product losses, contamination, and spoilage [1,2,11,12]. Various sensor technologies, such
as the global positioning system (GPS), geographic information system (GIS), near-field
communication (NFC), radio frequency identification (RFID), and temperature and humidity sensors, can improve monitoring and information capturing in various processes [13],
such as production, processing, storage, distribution, and retail [1,11]. However, there
are several challenges of IoT deployments, such as cyber-security and safety risks [1,8,13],
data confidentiality [4], vulnerability, and data integrity [13]. Integration of blockchain
technology in IoT systems can potentially improve system security and address such
challenges [1,8,13]. For instance, blockchains can help prevent food fraud by retaining
trustworthy product information on biological and geographic origin [1,2]. Additionally,
blockchains can benefit production planning and scheduling across supply chains [14].
The combination of blockchains with IoT can potentially improve FSC transparency, efficiency, and sustainability [5,13], save costs and time [2,8,13], reduce information asymmetry, paperwork, and fraud risks, and increase trust among supply chain stakeholders and end consumers [5,13].
DLT is a term used to represent a digital network of distributed models, consisting of
blockchain-based ledgers, and collaborating on shared tasks and activities. Blockchain technology is a data structure, composed of “blocks”, that are cryptographically linked together
in a chained sequence using cryptographic hashes, secured against manipulations [11,15].
Due to wider functionality, DLT is a commonly used term for a computer-based system
consisting of distributed ledger-based data structures, which can provide increased levels
of trust, service availability, resiliency, and security of digital systems, as well as distributed
storage, computation, and control [15].
The 2030 Agenda for Sustainable Development Goals (SDGs) of the United Nations
(UN) [16] provides solid and important guidelines, with several of them directly affected
by traceability of FSCs: good health and wellbeing (SDG 3) [17,18], decent work and
economic growth (SDG 8) [17–19], industry and infrastructure (SDG 9), clean water and
sanitation (SDG 6) [10], sustainable cities and communities (SDG 11), and responsible consumption and production (SDG 12) [17,18], which need to be addressed on governmental,
organizational and personal levels across societies [12,16,19].
Integration of DLTs across organizations and infrastructures can enhance stability,
resilience, and security of systems [8,15], enabling distributed solutions for industries and
societies. Fostering sustainable innovation, digitalization, and industrialization can potentially contribute to the SDG 9. Real-time and reliable product-related information, such as
temperature, humidity, light or chemical conditions [2,6], shared across FSCs, can prevent
or predict food contamination, food waste, and food spoilage issues [2,6], additionally
providing automation of processes, such as shelf-life management and product recall [13],
tracking of expiry dates, thereby contributing to SDGs 3 and 12. Food fraud [1,2,4], a
lack of transparency [12], trust issues [20,21], and various ethical and labor issues in FSCs
can be addressed with digitized data and information exchange among stakeholders in
FSCs [12,20,21], decreasing the roles of middlemen [21]. Digitalization practices in agriculture and food production processes with DLTs, IoT, and other emerging technologies,
such as artificial intelligence (AI), cloud- and fog computing, and big data analytics, can
additionally contribute to the reduction of food waste, inefficient use of resources, and
data-driven decision-making in FSCs [12,19], contributing to SDGs 6 and 11. Aspects
addressing sustainability and improving the quality of life with the blockchain have been
pointed out, specifically for education, environment, health, local economy, social inclusion,
and improved waste management [17], as well as sustainable water management [10].
Despite the potentials of DLT implementation in FSCs with improved security, provenance, reliability, visibility, and neutrality in supply chain operations [7,9], application
and development of DLTs in supply chains is still in its early stages [8,13]. The lack of
uniform technology standards and regulations [3,10,22], insufficient data, traceability processes, interface standardization [4], the lack of technology understanding [3,10,22], and
digitalization barriers are some of the obstacles that hinder widespread adoption [3,10,22].
There have been initiatives addressing current barriers and applications of blockchain
implementation in supply chains [7,8,10], addressing benefits and challenges of adoption
in FSCs [3,7,22], with content-based analysis [13] and suggestions for future research directions [10] for improved sustainability of FSCs [13,17,22]. In recent publications, the
challenges of scalability, security, and privacy of DLT and IoT solutions were highlighted as
some of the most critical in ongoing research [10,22–26]. This systematic literature review
(SLR) paper provides content-based detailed analysis and systematic review of papers,
addressing technical details of DLT and IoT implementation in FSCs with the following
contributions and objectives:
• The challenges of scalability, security, and privacy, and practices to address them, are described in detail.
• Suggestions for future research directions are provided, with wider interpretation of their relevance to the SDGs [17] and contribution towards more transparent, traceable, and sustainable FSCs.
Based on the highlighted research objectives, the following research questions (RQs) were addressed in this study:
RQ 1: What challenges of DLT and IoT implementation in FSCs were identified and
how were they addressed in literature?
RQ 2: What implications for future research directions were elaborated and how can
they contribute to the SDGs?
The remainder of this SLR paper is structured as follows: Section 2 describes the
research methodology of the SLR. Section 3 discusses the main findings, provides an
overview and summary of analyzed papers, and presents classification of challenges of DLT
and IoT implementation into eight thematic clusters. Section 4 discusses the implications
for future research directions and their relevance to the SDGs. Section 5 discusses the major
findings. Section 6 describes the limitations of the study and summarizes the key findings
and contributions of the SLR.
**2. Research Methodology**
This SLR follows the approach of Tranfield et al. [27], modified and adapted from the
approaches of Queiroz et al. [8] and Roberta Pereira et al. [28].
To address the research questions, we performed an SLR approach, presented in Figure 1. During the stages of the SLR, a summary of the existing academic literature was carried out, covering current issues and trends and assessing scientific contributions with respect to the current and existing knowledge [29].
Figure 1. Systematic literature review (SLR) approach adapted from [8,27].

In Stage 1 of the SLR, the target research topic was identified, defining applications of DLT and IoT in the FSCs domain. At this stage, a research protocol was developed, and search keywords were selected. Search queries were performed in five databases: IEEE Xplore Digital Library, ScienceDirect, Springer Link, Taylor and Francis Online, and Wiley Online Library. The combination of the following keywords was used in the search: "blockchain" OR "distributed ledger" AND "food supply chain". In the search, no duplicates were detected. The details of the research protocol are summarized in Table 1. Based on the keywords and selection criteria used, publications made available online until (and including) December 2020 were selected in the process.

The publications which included the description of DLT and IoT implementation details in FSCs were considered and summarized in this review. The identified publications were screened for validity based on selection criteria, which are specified in the research protocol and outlined in Table 1.

Table 1. Research protocol based on [8,28,30].

| Research Protocol | Details |
| Search in databases | Search queries performed in the following databases: IEEE Xplore Digital Library (IEEE) ¹, ScienceDirect ², Springer Link ³, Taylor and Francis Online ⁴, and Wiley Online Library ⁵. No duplicates were detected |
| Publication type | Peer-reviewed papers |
| Language | All publications in English language |
| Date range | All time span until (including) December 2020 |
| Search fields | Abstract (IEEE); title, terms, abstract, keywords (ScienceDirect); and full text search (Springer, Taylor and Francis, Wiley) |
| Search terms | "blockchain" OR "distributed ledger" AND "food supply chain" |
| Inclusion criteria | Only papers describing relevant blockchain or distributed ledger technologies (DLTs) and IoT (also: sensors, traceability) application in food supply chain (FSC) were included |
| Exclusion criteria | Papers in other domains (e.g., wind energy, healthcare) and papers not presenting research or implementation details were omitted. Repetitive or irrelevant content was omitted |
| Data extraction and monitoring | Papers were screened for validity: describing blockchain or DLT implementation or research. Book chapters, magazines, conference and journal publications were considered |
| Data analysis and synthesis | Shortlisted papers were read through and analyzed, covering current practices of blockchain or DLT and IoT implementation and research in FSCs domain |

¹ IEEE Xplore: https://ieeexplore.ieee.org/search; ² ScienceDirect: https://www.sciencedirect.com/search; ³ Springer Link: https://link.springer.com/; ⁴ Taylor and Francis: https://www.tandfonline.com; ⁵ Wiley Online Library: https://onlinelibrary.wiley.com/ (accessed on 29 January 2021).

In Stage 2, the search terms were selected to shortlist the initial number of publications. Based on identified selection criteria, papers not satisfying the criteria were omitted, e.g., papers in other application domains, such as healthcare, wind energy, etc., or papers not describing implementation or research details of DLT, blockchain, and IoT in FSCs. The search fields were defined differently in different databases, as described in Table 1. After each selection stage, the selected papers were counted and documented in a common spreadsheet during the selection process, adapted from [24], presented in Figure 2. Out of 147 originally found papers, 69 publications were subsequently shortlisted for detailed analysis, among which 25 were conference papers, 40 were journal publications, and 4 book sections, which resulted in a selection rate of 46.94%.

Figure 2. SLR process adapted from [28].
In Stage 3 of the SLR, the main review findings were elaborated, visualizations were
Throughout the SLR process, key findings, implementation details, and challenges
developed, and research questions were finalized and addressed. At this stage, challenges
were summarized.
of blockchain, DLT and IoT applications were identified from selected literature, summarized, and classified into eight thematic clusters. Based on the findings, future research
directions and their relevance for the SDGs were elaborated. Additionally, the papers
were classified based on the research methods used, food domain and publication type,
presented in Section 3.
Throughout the SLR process, key findings, implementation details, and challenges
were summarized.
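As a purely hypothetical illustration of the screening logic of Table 1 (the authors worked with database search interfaces and a shared spreadsheet, not with this code; all field names below are made up), the inclusion and exclusion criteria can be expressed as a simple filter:

```python
# Hypothetical screening filter mirroring the criteria of Table 1.
RELEVANT = ("blockchain", "distributed ledger")
SCOPE = ("food supply chain",)
EXCLUDED_DOMAINS = ("wind energy", "healthcare")

def keep(paper):
    """Keep a paper if it matches the FSC scope, is not in an excluded
    domain, and presents implementation or research details."""
    text = (paper["title"] + " " + paper["abstract"]).lower()
    in_scope = (any(k in text for k in RELEVANT)
                and any(k in text for k in SCOPE))
    excluded = any(d in text for d in EXCLUDED_DOMAINS)
    return in_scope and not excluded and paper["has_implementation_details"]

papers = [{"title": "Blockchain traceability for a food supply chain",
           "abstract": "An IoT-integrated prototype ...",
           "has_implementation_details": True}]
shortlist = [p for p in papers if keep(p)]
print(len(shortlist))  # -> 1
```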
**3. Results and Discussion**

In this section, the classification of selected research papers is presented. The challenges of scalability, security, and privacy were classified as the most relevant and occurring in the analyzed literature [10,22–26,31], along with other highlighted challenges. In this section, the current challenges of DLT and IoT implementation in the food sector are summarized, and the top three classified challenges of scalability, security, and privacy are described in detail. The shortlisted publications mostly covered the experimental stage of development, i.e., proposing a system, a framework design, or a prototype, while only 15 out of 69 publications were case studies, 23 were review papers, and 6 (out of 69) were quantitative simulation-based studies. There were publications which applied more than one research method as well.

_3.1. Classification of Selected Research Papers_

For this SLR, the shortlisted 69 research papers were classified according to several criteria: research methods, food domain, and publication type. Using the adapted approach from [32], the papers were classified into five research methods, depicted in Figure 3.

Figure 3. Research methods of shortlisted papers.

For the classified papers, only papers describing implementation details of blockchain, DLT, and IoT in FSCs were included in the review, including theoretical review papers. The identified research methods in the selected literature are:

1. Review.
2. System (framework) design.
3. Experimental setup/prototype.
4. Case study.
5. Simulation.

The classification of papers was carried out according to the authors' understanding and interpretation of findings, considering relevance and technological contribution of the analyzed publications. The validation of the classifications to research methods was performed by two authors to cross-check the validity of the identified research methods, and to prevent possible bias in allocation. If a publication included more than one research method, both research methods were added into the classification as separate methods. The summary of shortlisted papers, based on the application domain, publication type, publication year, and research method, is depicted in Table A1 in Appendix A.

In our classification, the case study stage includes and assumes that the previous stages of experimental setup (prototype) or system (framework) design were implemented, and a final solution was evaluated in a company setting. Various review papers addressed DLT implementation challenges, providing a summary of areas of application, potentials, and suggestions for further research directions in food and agri-food [1,12,19,21,25,33–41], agriculture and precision agriculture [24,26,31,42–44], and seafood [45] domains.
_3.2. Challenges of DLT and IoT Implementation in FSCs_
To identify the most frequent keywords and to visualize the data set of identified challenges, the ATLAS.ti software was used. The identified challenges were summarized in a spreadsheet file, which was uploaded into the software for further analysis. In total,
196 keywords related to challenges were identified from the selected literature.
Among the challenges identified, the most prominent and frequent occurrences were
the challenges of scalability, security, cost, privacy, storage, energy consumption, latency,
and interoperability. Considering the previous studies [23–25], we provide a comprehensive
description of the scalability, security, and privacy challenges, as well as the measures to
address them, as presented in literature. The 15 most occurring keywords of challenges,
with at least 5 occurrences, are depicted in Table 2.
**Table 2. Top 15 most frequent keywords (challenges).**

| Ranking | Challenge | Count (Frequency) |
| 1 | Scalability | 25 |
| 2 | Security | 22 |
| 3 | Privacy | 20 |
| 4 | Cost | 19 |
| 5 | Interoperability | 18 |
| 6 | Energy consumption | 13 |
| 7 | Latency | 12 |
| 8 | Storage | 12 |
| 9 | Standardization | 10 |
| 10 | Regulations | 8 |
| 11 | Stakeholder involvement | 8 |
| 12 | Confidentiality | 7 |
| 13 | Digitalization | 7 |
| 14 | Technology immaturity | 6 |
| 15 | Data integrity | 5 |
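The ranking in Table 2 amounts to a frequency count over the extracted challenge keywords; a hypothetical re-creation in Python (the authors used ATLAS.ti, and the keyword list below is a made-up fragment for illustration):

```python
from collections import Counter

# Made-up fragment of the extracted challenge keywords.
challenge_keywords = [
    "scalability", "security", "privacy", "cost", "scalability",
    "interoperability", "energy consumption", "latency", "storage",
]
for rank, (keyword, count) in enumerate(
        Counter(challenge_keywords).most_common(), start=1):
    print(rank, keyword, count)
```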
3.2.1. Scalability Challenges
The most frequent and prominent challenge, which was identified in the selected literature, was the scalability issue of blockchain and IoT implementation in FSCs, i.e., the ability
to maintain transactions of a network at scale without business process interruption [41].
The consensus algorithms of blockchains, such as Proof-of-Work and Proof-of-Stake, require competition for computational resources, hence achieving scalability and stability in
blockchain and IoT-based systems is still a challenge [46].
Existing blockchain platforms, such as Hyperledger Sawtooth, are not capable of handling high amounts of data arriving simultaneously, including sensory and IoT data, due to the low maturity of the solutions. The authors of [47] highlighted the scalability issue of Hyperledger Sawtooth and suggested dedicating research efforts towards the improvement of blockchain scalability. Another solution, the Hyperledger Fabric Composer, was investigated by [48], who implemented an experimental study with RFID and IoT for traceability of a halal FSC.
Another blockchain platform, Ethereum, was compared with Hyperledger Sawtooth
with respect to performance by [49]. They presented a fully decentralized IoT-integrated
blockchain-based traceability solution for agri-food supply chains. From a performance
perspective, the Hyperledger Sawtooth performed better than Ethereum with respect to
CPU load, latency, and network traffic. Ethereum had better scalability performance and
reliability with increased number of participants, as well as better software maturity [49].
Another way to address the scalability issue of blockchains was the implementation
of various mechanisms, one of which being the “sharding” mechanism integrated by [50].
They introduced a permissioned 3-tier blockchain framework, with integrated Hazard
Control and Critical Control Point (HACCP), permissioned blockchain, and IoT infrastructure. The “sharding” mechanism used a set of parallel blockchains, called “shards”, to
scale the network by handling large numbers of transactions in multiple shards in parallel. The task of verifying transactions was divided across the shards, and each shard maintained its own synchronized ledger, with shards allocated according to geographic zones. The network performance was evaluated in a simulation and resulted in a query time of just a few milliseconds, even when the data was gathered from multiple shards [50]. The review in [41] also mentioned the "sharding" mechanism to improve scalability by dividing blockchain data into several nodes or shards, thereby spreading computational load among the nodes simultaneously. In that review, private and consortium blockchain solutions were considered more scalable compared to public ones, since in public blockchains all nodes share identical responsibilities, e.g., establishing consensus, interacting with users, and managing the ledger [41]. Consortium blockchains are shared among a consortium of multiple institutions, which have access to the blockchain [43]. Private blockchains, on the other hand, allocate tasks to different nodes, which improves the performance of the network. The public Ethereum blockchain is able to support 15 transactions per second, while private blockchains, such as Hyperledger Fabric, can provide 3500 transactions per second [41]. Efficient "lightweight" consensus strategies were suggested to address the issues of scalability, data integrity, and privacy by performing expensive high-computational tasks off-chain [41].
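To make the sharding idea concrete, the following minimal Python sketch (our illustration, not code from the cited studies) routes transactions to geographically allocated shards, each maintaining its own ledger; names such as `Shard` and `assign_shard` are hypothetical.

```python
import hashlib
from dataclasses import dataclass, field

# Illustrative sketch of geographic sharding: each shard keeps its own
# ledger, so transactions from different zones can be appended in parallel.

@dataclass
class Shard:
    zone: str
    ledger: list = field(default_factory=list)  # simplified per-shard chain

    def append(self, tx: dict) -> str:
        """Append a transaction and return its hash (stand-in for a block)."""
        tx_hash = hashlib.sha256(repr(sorted(tx.items())).encode()).hexdigest()
        self.ledger.append({"tx": tx, "hash": tx_hash})
        return tx_hash

shards = {zone: Shard(zone) for zone in ("EU", "NA", "APAC")}

def assign_shard(tx: dict) -> Shard:
    """Route a transaction to the shard of its geographic zone."""
    return shards[tx["zone"]]

for tx in ({"zone": "EU", "batch": "B-001", "event": "harvest"},
           {"zone": "NA", "batch": "B-002", "event": "shipment"}):
    shard = assign_shard(tx)
    print(shard.zone, shard.append(tx))
```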
Various decentralized storage solutions were investigated to improve the scalability of blockchain solutions. The Interplanetary File System (IPFS) and the Ethereum blockchain were integrated for decentralized storage of IoT data in an automated FSC traceability model [51], in an agri-food prototype [52], and in system design solutions [53,54]. Manufacturer data and various quality inspection details were stored on a centralized server, while IoT data was stored in a so-called table of content (TOC), located both on a central server and on a decentralized IPFS database. This method allowed a faster transaction process and backward traceability, tracking each product by the TOC identifier from each supply chain member [51]. In addition to the IPFS, different hybrid storage solutions were proposed, including lightweight data structures and a Delegated Proof-of-Stake consensus mechanism, which restricts the number of validators to improve the scalability of the blockchain [24]. Hybrid on-chain and off-chain data storage solutions were described [23,55], such as DoubleChain [24], as well as smart contract filtering algorithms, such as a Distributed Time-based Consensus algorithm, to reduce on-chain data [24]. Additionally, grouping nodes into clusters in a Blockchain of Things infrastructure was suggested to improve blockchain scalability [24].
In [56], a decentralized storage solution for blockchain in the FSC domain was also integrated to improve throughput, latency, and capacity by introducing BigchainDB. Real-time IoT sensor data and HACCP were integrated for real-time food tracing. Throughput and latency issues were addressed with the BigchainDB distributed database, which could increase throughput and data storage capacity in a positive linear correlation, while maintaining blockchain properties such as immutability, transparency, a peer-to-peer network, chronological ordering of transactions, and decentralized user governance with a consensus mechanism [56].
Moreover, [57] proposed using lightning network technology with edge computing in a blockchain-based food safety management system to improve transaction and performance efficiency. Real-time transactions were carried out in an off-chain channel without uploading data onto the blockchain, and a dynamic programming algorithm was applied to reduce lightning network fees [57].
Another approach was the introduction of a new consensus algorithm, proposed
by [46], who addressed the issue of blockchain scalability by integrating IoT, IBM cloud
and blockchain in a scalable traceability system. A system prototype was presented with an
integrated consensus mechanism, called the proof of supply chain share, as well as fuzzy
logic to perform shelf-life management for perishable food traceability. The feasibility of
the proposed model was evaluated with a case study in a retail e-commerce sector [46]. A
two-level blockchain solution was additionally proposed by [58], who performed a case
study-based pilot project, combining a permissionless (public) ledger, shared externally,
with a permissioned ledger, available only to licensed stakeholders [58].
A major concern regarding recent blockchain developments is technological immaturity [23], and many approaches highlighted the lack of solid, scalable blockchain solutions. Most blockchain initiatives remain at a small implementation or proof-of-concept phase through small pilot studies, while large-scale implementations and integration into normal operations are usually initiated by companies and are not widely represented in research publications [19]. Blockchain technology is still perceived by organizations as an emerging technology and an "experimental tool" for achieving a potential competitive advantage in the future [19].
3.2.2. Security Challenges
Blockchains can provide numerous benefits, such as enhanced IoT and cloud security [43], reduced data manipulation [43], anonymity, decentralization, and improved customer satisfaction in terms of security and food safety [6,9,59]. However, there is a major concern about the data security of IoT systems and the cyber security of blockchain solutions [34]. A lack of interoperability in regional standards can additionally lead to information asymmetry in supply chains and increased security risks for consumers [60].
To address the security issue, [50] proposed an access restriction-based blockchain framework to keep data about pricing, order details, order frequency, and shipments accessible only to related trading partners. Various client- and network-based attacks and their countermeasures were described, such as the double transfer attack, DoS/DDoS attacks, wallet theft, sniffing attacks, and the Sybil attack [50]. To ensure automated food quality and safety compliance, integration with food quality and safety standards, such as ISO 22000, was suggested when implementing smart contracts [61].
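As a hedged sketch of how smart-contract logic might encode such limits (the thresholds and names below, e.g. `ComplianceContract` and `MAX_TEMP_C`, are invented for illustration and are not taken from ISO 22000 or the cited work):

```python
from dataclasses import dataclass

# Sketch of a compliance check as it might appear in smart-contract logic:
# each sensor reading is validated against pre-agreed limits, and violations
# are recorded so downstream parties can audit the batch.

MAX_TEMP_C = 4.0         # illustrative cold-chain limit, not an ISO 22000 value
MAX_HUMIDITY_PCT = 85.0  # illustrative limit

@dataclass
class Reading:
    batch_id: str
    temp_c: float
    humidity_pct: float

class ComplianceContract:
    def __init__(self):
        self.violations = []  # list of (batch_id, reason) tuples

    def submit(self, r: Reading) -> bool:
        """Return True if the reading is compliant; log violations otherwise."""
        ok = True
        if r.temp_c > MAX_TEMP_C:
            self.violations.append((r.batch_id, f"temp {r.temp_c}C"))
            ok = False
        if r.humidity_pct > MAX_HUMIDITY_PCT:
            self.violations.append((r.batch_id, f"humidity {r.humidity_pct}%"))
            ok = False
        return ok

contract = ComplianceContract()
print(contract.submit(Reading("B-001", 3.5, 80.0)))  # True: within limits
print(contract.submit(Reading("B-002", 7.2, 90.0)))  # False: two violations
print(contract.violations)
```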
The application of asymmetric encryption algorithms [24], such as Elliptic Curve Cryptography, Diffie-Hellman, and RSA, and secure protocols, such as Telehash and Whisper, was proposed to enhance data security in a cross-border trade conceptual blockchain system [23].
Another suggestion was a consensus algorithm called proof of supply chain share, proposed by [46], which mimics the proof of stake algorithm. The hybrid solution comprised a blockchain, IoT technologies, and cloud computing, with minimal data operated on the blockchain to sustain system flexibility and adaptability. To store data efficiently, a mechanism of "blockchain vaporization" was introduced, storing food traceability data, e.g., container ID or batch ID, on the blockchain until the completion of a proof of delivery or point of sale. When the item was sold or delivered, the associated data was "vaporized" from the blockchain and stored only in a cloud database. The IBM cloud solution was integrated to store product data and IoT sensor data [46]. Another solution proposed a cloud-based livestock monitoring system with blockchain and IoT, storing sensor data, such as humidity, movement, and CO2 emissions, to detect abnormal infection-related behavior [62].
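A minimal sketch of the "vaporization" life cycle described above (our reading of the idea in [46], with hypothetical names such as `vaporize`): traceability keys stay on-chain only until delivery is proven, then move to cloud storage.

```python
# Sketch of "blockchain vaporization": traceability identifiers live on-chain
# only while the item is in transit; after proof of delivery they are moved
# to a cloud database and removed from the (simulated) chain state.

on_chain = {}    # active traceability records (batch_id -> data)
cloud_db = {}    # archival storage after delivery

def register_shipment(batch_id: str, data: dict) -> None:
    on_chain[batch_id] = data

def vaporize(batch_id: str, proof_of_delivery: bool) -> None:
    """On confirmed delivery, archive the record off-chain and drop it on-chain."""
    if proof_of_delivery and batch_id in on_chain:
        cloud_db[batch_id] = on_chain.pop(batch_id)

register_shipment("B-001", {"container": "C-42", "origin": "farm-7"})
vaporize("B-001", proof_of_delivery=True)
assert "B-001" not in on_chain and "B-001" in cloud_db
```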
To restrict participant access on the blockchain, [41] described the Proof-of-Authority consensus algorithm with a consortium blockchain solution, approving and determining the number of participants in a trade supply chain. Another consensus algorithm, called proof of object, was introduced by [63], who proposed a new RFID sensor-coupled design with a blockchain solution, encrypting terminals with SSL/TLS protocols and implementing extra security features at the hardware level to prevent security attacks [63]. Other efforts analyzed the smart contract security and vulnerability of an Ethereum blockchain solution with IPFS in a prototypical implementation, addressing the issues of credibility, product authenticity, automated payments, and delivery mechanisms in the blockchain [52]. Other algorithms, such as base-64 encoding, were additionally presented [64] to enhance data security.
The authors of [65] proposed a product serialization method to address blockchain security and scalability in a perishable supply chain. A smaller number of transactions on the blockchain
could improve scalability, and a secure serialization protocol was used to verify the authenticity of serial numbers. A path-based fund transfer protocol was proposed to prevent the sale of expired products [65].
Another approach to enhance DLT security was proposed by [66], who implemented a federated interledger blockchain solution comprising an open-source IoT and DLT platform in a food chain scenario. The interledger blockchain combined private and public blockchains: periodic synchronization of a private blockchain ensured data auditability and security, and a consortium Ethereum blockchain was integrated among the FSC members. Since there are currently no standards for interconnecting DLT solutions, the benefits of interconnecting multiple ledgers were highlighted [66].
3.2.3. Privacy Challenges
The public key infrastructure of DLTs allows users to be identified by their public keys; however, especially in the FSC sector, many actors are competitors in the market, which magnifies the issue of stakeholder and user privacy [19].
Hence, to address the privacy issue, [41] described a Peer Blockchain Protocol solution in an e-commerce trading sector, introducing different block types to address trading privacy concerns. Three types of blocks were used in transactions: peer micro-blocks, peer key-blocks, and global blocks, each with its own bandwidth requirements and validation strategy [41].
Using multiple ledgers was another technique to improve the privacy of blockchain-IoT solutions with a federated interledger approach, i.e., combining several blockchain ledgers [66]. Private and public Ethereum ledgers were integrated, with private ledgers storing participants' confidential data, and a public main ledger storing only limited public data. The privacy issues of public blockchains were highlighted, mentioning the negative implications of immutability and data replication on user privacy, despite the positive effects of auditability and verifiability [66].
To address business privacy requirements, various data and information classification techniques were introduced, segregating roles and access rights to shared data [67]. A privacy protection module was integrated in a blockchain prototype, performing user right control and management, generating keys, and encrypting private information. A two-way traceability coding scheme was applied to identify and track grain products across a supply chain [67]. Hybrid on-chain and off-chain storage mechanisms, such as DoubleChain [24], were additionally described to preserve data privacy by storing sensitive data off-chain [23,24,55].
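The flavor of such a two-way scheme can be sketched as follows (our illustration, not the coding scheme of [67]; it assumes hand-off events are recorded in chronological order): each event links a batch to the previous and next supply chain member, so a product can be traced backward to its origin or forward to its destination.

```python
# Sketch of two-way traceability: each hand-off event links a batch to the
# previous and next supply chain member, enabling backward (origin) and
# forward (destination) tracing. Events are assumed to be in time order.

events = [
    {"batch": "G-7", "from": "farm-1", "to": "mill-2"},
    {"batch": "G-7", "from": "mill-2", "to": "retail-9"},
]

def trace_back(batch, events):
    """Return the chain of custody from the last holder back to the origin."""
    hops = [e for e in events if e["batch"] == batch]
    return [hops[-1]["to"]] + [e["from"] for e in reversed(hops)]

def trace_forward(batch, events, origin):
    """Return every member the batch passed through, starting at the origin."""
    return [origin] + [e["to"] for e in events if e["batch"] == batch]

print(trace_back("G-7", events))               # ['retail-9', 'mill-2', 'farm-1']
print(trace_forward("G-7", events, "farm-1"))  # ['farm-1', 'mill-2', 'retail-9']
```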
Another approach was suggested by [58], who proposed the application of zero-knowledge proof (ZKP) techniques and a permissioned blockchain, providing access only to certified stakeholders and storing limited information on the blockchain. ZKPs, or other cryptographic mechanisms, were shown to enable identity verification and restricted access to the data, based on pre-defined access rights, thereby enhancing user and business data privacy [58]. Data encryption mechanisms, such as a proxy encryption server and an improved partial blind signature algorithm, were suggested to ensure data privacy [24]. Additionally, a hierarchical blockchain-based system for improved data privacy and security was proposed, which ensured chain-to-chain communication while restricting the number of blocks on the shared chain [24]. The Quorum blockchain platform, an Ethereum-based platform providing transaction data encryption and centralized data control enforcement, was also described as a means to preserve data privacy [24].
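Full ZKP systems are beyond a short example, but the flavor of selective disclosure can be sketched with a Merkle tree (our illustration, not the construction in [58]): a party commits to a whole record with one root hash and later reveals a single field, plus a short proof, without exposing the other fields.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def leaf(key: str, value: str) -> bytes:
    return h(f"{key}={value}".encode())

# Commit to a 4-field record with a single Merkle root (stored on-chain).
record = [("batch", "B-001"), ("origin", "farm-7"),
          ("price", "12.40"), ("buyer", "retailer-3")]
leaves = [leaf(k, v) for k, v in record]
l01, l23 = h(leaves[0] + leaves[1]), h(leaves[2] + leaves[3])
root = h(l01 + l23)

# Reveal only "origin" (index 1): the proof is the sibling hashes on the
# path to the root; "price" and "buyer" stay hidden behind their hashes.
proof = [(leaves[0], "left"), (l23, "right")]

def verify(key: str, value: str, proof, root: bytes) -> bool:
    """Recompute the root from one revealed field and its sibling path."""
    node = leaf(key, value)
    for sibling, side in proof:
        node = h(sibling + node) if side == "left" else h(node + sibling)
    return node == root

print(verify("origin", "farm-7", proof, root))  # True
print(verify("origin", "farm-8", proof, root))  # False: wrong value
```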
Despite the initiatives to address the existing issues of blockchain, DLT, and IoT solutions, privacy and security issues still persist, even with private or permissioned blockchains and strong encryption mechanisms [38,46]. Moreover, there is a tension between the concepts of anonymity and decentralization in food traceability systems, especially when handling sensitive personal information [46]. More efforts should be dedicated towards
improving security and scalability aspects of blockchain, DLT, and IoT solutions, ensuring
safe and secure data storage and handling in various business operations [38,46].
_3.3. Classification of Challenges into Thematic Clusters_
The content analysis of the shortlisted papers was carried out, identifying 196 challenge keywords that were mentioned or addressed in the selected literature. The identified challenges were manually classified into the following thematic clusters: technical and infrastructure, organizational, human, financial, physical, environmental, data-related, and intangibles. The clusters were adapted from the classification of supply chain resources in [12], with two additional categories: environmental and data-related.
Challenges were allocated to clusters based on the authors' assessment of their relevance to each cluster. The eight identified clusters and some of their associated keywords are depicted in Table 3. The "technical and infrastructure" cluster included the highest number of detected keywords and represented the technical and infrastructure-related issues of DLT and IoT implementation. The second largest cluster was "organizational", including challenges associated with stakeholder, organizational, regulatory, and policy-making issues. The "data-related" cluster included all issues relating to data and information handling, such as data governance, data accessibility, and ownership. The "human" cluster covered human-related issues, such as human error or resistance. The "financial" cluster included all financial challenges, and the "physical" cluster covered issues occurring at the physical level, such as sensor tampering. The "environmental" cluster considered the challenges related to sustainability and energy consumption, and the "intangibles" cluster included issues such as trust, reputation, and uncertainty.
**Table 3. Classification of identified challenges into 8 thematic clusters.**
| Cluster | Associated Keywords |
|---|---|
| **Technical and Infrastructure** | Infrastructure ownership; transaction delay; connectivity; scalability; computational power; security; system integration; storage; interoperability; digitalization (poor infrastructure); privacy; need of automatic control; heterogeneity of solutions; hardware-software complexity; low throughput; insufficient communication protocols; latency; technology immaturity |
| **Organizational** | Heterogeneity of actors; confidentiality; participant incompetency; stakeholder involvement; authority issues; policy making; digitalization divide; resistance to openness; new business models; stakeholder governance; source of power; unifying requirements; integrity and honesty; certification; standardization |
| **Human** | Training and education; lack of expertise; unclear benefits of blockchains; lack of skills; user society acceptance; cultural adoption; consumer preferences; human error |
| **Financial** | Payment mechanisms; economic models; cost and financial investment; financial risks; resource integration; risk factor evaluation |
| **Physical** | Connecting pre- and postprocessing information; sensor tampering; sensor reliability; bar code tampering; slow trace; manual work; sensor battery life |
| **Environmental** | Sustainability; energy consumption; economic sustainability; energy harvesting |
| **Data-related** | Data governance and ownership; key management; data integrity; transparent data management; auditable information sharing; transparency; data accessibility; sensitive data; information connectivity; traceability coding scheme; data redundancy; data incompleteness |
| **Intangibles** | Uncertainty; volatility; blockchain reputation; DLT potential; trust |
The number of keywords detected in each cluster is depicted in Figure 4. Previous
studies outlined the major challenges related to technical, organizational and regulatory
aspects of blockchain implementation in FSCs [22]. In our analysis, a more detailed classification was elaborated, resulting in eight clusters of challenges overall.
**Figure 4. Classification of challenges into thematic clusters with numbers of keywords in each cluster.**
All clusters and associated keywords are depicted in a mind-map visualization in Figure A1 in Appendix B.

_3.4. Summary and Outlook of Challenges and Enablers of DLT Adoption_

To achieve FSC traceability practically, further improvements and modifications of existing blockchain, DLT and IoT solutions are needed. The most widespread solutions of Hyperledger Sawtooth [47,49,68], Hyperledger Fabric [48,67,69,70], Ethereum [20,49,51,60,71], Multichain [24], R3 Corda [24], and Quorum [24] were presented in the literature, with initiatives on new consensus algorithm development and double-chain and interledger approaches [66].
Various initiatives have been implemented to enhance the scalability and security of blockchain, DLT, and IoT solutions, ensuring food safety in FSCs [61,72], such as sharding, novel smart contract mechanisms, and distributed and off-chain data storage solutions and platforms, such as IPFS and BigchainDB, to store large amounts of data from various origins, including sensor data. Various data access and data manipulation rights have been introduced with encryption schemes, such as ZKP [58], homomorphic encryption, or attribute-based encryption, to improve the security, privacy, and confidentiality of such applications. However, privacy concerns, especially with the introduction of the General Data Protection Regulation (GDPR), are still an ongoing challenge in industrial and research applications [23,26]. The summary of solutions for the challenges of scalability, security, and privacy presented in the analyzed literature is depicted in Table 4.
There are existing challenges regarding process standardization, organizational/infrastructure regulation [4,19,20,28,68], interoperability [12,34,39,40,73], digitalization barriers [38,68,74], and sensor battery life [68]. Integration with GS1 standards, such as Electronic Product Code Information Services (EPCIS), and digital food records were suggested to improve the interoperability of blockchains in FSCs, to increase levels of trust, and to provide evidence of data provenance [36]. It has also been suggested to consider various cross-regional and international food and feed legislation standards, such as EC 178/2002, when developing smart contracts [23].
**Table 4. Summary of solutions to address the scalability, security, and privacy challenges.**
| Challenges | Solutions | References |
|---|---|---|
| Scalability | IPFS for storing data off-chain | food and agri-food [51,52,55], agriculture [24,72], rice [54], food trade [23] |
| | Sharding | food [50], trade [41] |
| | BigchainDB | food [56] |
| | Proof-of-Supply-Chain-Share | e-commerce [46] |
| | Lightning network | food [57] |
| | Lightweight data structures, Delegated Proof-of-Stake, Distributed Time-based Consensus, DoubleChain, grouping nodes into clusters | agriculture [24] |
| | Two-level blockchain | agri-food [58] |
| Security | Data access restriction | food [50], agri-food [58] |
| | Proof-of-Supply-Chain-Share, blockchain vaporization | food [46] |
| | Proof-of-Authority | trade [41] |
| | Proof-of-Object | food [63] |
| | Product serialization, path-based fund transfer protocol | perishable food [65] |
| | Elliptic Curve Cryptography, Diffie-Hellman, RSA, secure protocols (Telehash, Whisper) | food trade [23], agriculture [24] |
| | Lightweight data structures, proxy encryption | agriculture [24] |
| | Interledger, consortium blockchain | food [66] |
| Privacy | Peer Blockchain Protocol | trade [41] |
| | Interledger blockchain | food [66] |
| | Access rights restriction, two-way coding scheme | grain [67] |
| | On-chain and off-chain data storage | food trade [23], food [55] |
| | Improved partial blind signature, proxy encryption | agriculture [24] |
| | Zero-knowledge proof encryption | agri-food [58] |
The issues of high costs and transaction fees for blockchain and IoT infrastructure implementation [12,19,41,48,49,70] were highlighted as some of the critical adoption challenges in FSCs, with several studies describing the cost-reducing impact [73,75] and the effects of DLTs on supply chain transactions [75]. Additionally, various challenges and disputes might arise regarding infrastructure and data ownership [12,21], as well as data and sensor tampering [1,34,44,46,67], and information and data incredibility [73,76,77]. Due to the reluctance among FSC stakeholders [18] to implement DLT and IoT solutions, another major challenge is involving stakeholders in DLT adoption in FSCs [25,59,78–80].
Despite the various challenges and barriers, there are numerous benefits of DLT and IoT implementation in FSCs. Several investigations were carried out to identify enablers and value drivers of blockchain and DLT adoption in FSCs [81,82]. The key enablers identified were customer satisfaction, risk reduction, improvement of food safety and quality [73,81], fraud detection, reduction of paperwork, provenance tracking, real-time transparency/visibility [7,73,81], improved systems, data security, and government regulations [81]. Depending on the sought value, the available resources, the feasibility of implementation [34], and the blockchain maturity level and development stage (e.g., 1.0, 2.0, 3.0) should be considered when deciding on DLT adoption [19,43,82]. Several techno-economic factors, such as disintermediation, traceability, and price, were highlighted as the most important factors that can influence stakeholders' adoption decisions [80]. However, there are issues that blockchains alone cannot address, such as identifying which information should be shared with stakeholders versus which private, confidential, and competitive information should be protected and stored off-chain to achieve fair, trustworthy, and sustainable FSCs [4,25]. Hence, tackling various data, technology, process standardization, and policy-making issues is critical to facilitate blockchain and DLT adoption in FSCs [4,25,35].
**4. Implications for Future Research Directions**
The aim of this review was to consolidate prior studies on blockchain, DLT, and IoT
applications in FSCs using the SLR technique. Most of the studies were published recently,
which demonstrates that the application of blockchain and DLT in FSCs is still in an early development stage. Moreover, despite the explicit benefits of blockchains and DLTs, there are various challenges associated with implementation and suggestions for future research directions to be addressed. Based on the presented findings, future research directions are elaborated for blockchain and DLT research and development in FSCs in a proposed domain scheme, adapted from [83], as shown in Figure 5.
**Figure 5. Classification of the potential future research directions in blockchain-based FSCs, adapted from [83].**
As presented in Figure 5, there are three domains: human, governance, and technical. The human domain includes data-related issues, while the governance domain includes economics, finance, regulation, and organization-related issues. The technical domain includes technology- and infrastructure-associated issues. The potential future research directions for blockchain-based FSCs can be seen as highly interdisciplinary, as most of them overlap with at least two other domains. These future research directions (FRDs) are explained in detail below, with their contributions to the SDGs of the UN, considering the previous studies [17,18,84].
_4.1. Resolution of the Scalability Issue of Blockchains_

The scalability of blockchains is a known challenge and has been an active area of research for several years [60]. The scalability challenge is a major concern of blockchain-based systems for FSCs, because of growing data [51,58] and transaction speed [13,42].
Ongoing research should include the exploration and adoption of decentralized storage solutions, such as IPFS, BigchainDB, Swarm, IOTA, and Algorand, to store data off-chain [23,55,57]. In addition, to improve the scalability of the blockchain, solutions involving fewer interactions with the blockchain should be considered, such as routing protocols and routing algorithms for offline channels [65]. Further research and development of novel mechanisms are still needed to improve the scalability of blockchain-based applications in real business environments [47]. This FRD can be considered in response to the SDG 9, industry, innovation and infrastructure.
_4.2. Data Security, Reliability and Trustworthiness at Machine or Sensor Data Entry Level_
Data security and trustworthiness are among the major challenges of blockchain- and IoT-enabled applications [38]. Therefore, novel consensus algorithms should be further explored to facilitate data access restriction on the blockchain [23,63]. IoT devices are widely integrated in various blockchain deployments, capturing food production data and environmental conditions during distribution processes, thereby decreasing labor costs and improving data entry credibility [44]. Since the data are stored permanently in blockchains and DLTs, such data can be utilized for subsequent processing (e.g., traceability, verification, recommendation, and payment), provided the accuracy of the recorded data is ensured. An additional challenge has been the possibility of mismanagement and tampering of IoT data, which magnifies security and data reliability concerns [7,13,36]. Further research efforts should be targeted at developing fault-tolerant, safe, and reliable architectures and systems for blockchain-IoT-based FSCs [1,63,67]. In addition, the application of fog computing concepts to improve the reliability of IoT devices could be investigated [85]. This FRD can be considered in response to the SDG 9.
_4.3. Protection and Privacy Issues of Blockchains_
One of the major challenges of blockchain-IoT applications is compliance with existing regulations and standards [25,35,51], as well as the harmonization of standards for cross-regional and cross-country FSCs [25]. Regulatory authorities are setting data protection rules, such as the GDPR on data protection and privacy in the European Union and the European Economic Area. Users of blockchain-based FSC solutions should be taught to consider and interpret their rights, obligations, and duties. Smart contracts can potentially ensure compliance with legislation, as well as the protection of participants' privacy. Therefore, future research initiatives should address data protection mechanisms (e.g., homomorphic encryption, attribute-based encryption, etc.) and the privacy issues of blockchains and DLTs [20,40,66]. This FRD can be considered in response to the SDG 9.
_4.4. Interoperability of Blockchains_
Blockchain interoperability refers to the ability to share information across different blockchain networks without restrictions. It can be categorized into the following forms: (1) interoperability between blockchain and legacy systems; (2) interoperability between blockchain platforms; and (3) interoperability between two smart contracts within a single blockchain platform [86].
Even though there are currently several blockchain project initiatives in the FSC domain, most of these projects are isolated and unable to communicate with each other. Blockchain interoperability is particularly important in the FSC domain, which generally involves many stakeholders [36,49,66]; each stakeholder may have their own system that is incompatible with the others'. Therefore, blockchain developments should be flexible enough to accommodate various regulations and platforms [4,22]. The formation of consortia of business partners, supported by governmental institutions, was suggested to drive the standardization of blockchain developments and long-term implementations [4,7]. Consequently, the development of general standards for data collection and exchange, the standardization of processes and interfaces to enhance interoperability across different systems and blockchain solutions, and the integrity of data still require further attention to enable efficient cross-medium and cross-blockchain communication [4,12,23,42]. This FRD can be considered in response to the SDG 9.
_4.5. Integration of other Emerging Technologies_
Blockchain technology is utilized as a solution for trust and security issues among FSC stakeholders [4,5,7]. Additionally, smart contracts can be utilized to detect nearly expired food products; a warning or alert system could therefore be introduced (see the sketch at the end of this section), so
that retail stores could manage, distribute, or sell products before the expiration date. Furthermore, the blockchain becomes the underlying technology that can be integrated with other emerging technologies (e.g., artificial intelligence (AI), big data analytics, digital twins, cloud and fog computing) to realize data-driven FSCs [12,48]. The combination of blockchain, IoT, and machine learning is one of the promising topics to explore. On the one hand, the blockchain is utilized to store data in a permanent and immutable way to guarantee reliability; on the other hand, AI techniques, such as machine learning or deep learning, can examine the existing data and construct algorithms that make predictions, identify patterns, or generate useful recommendations, thereby creating a medium for data-driven decisions [24,81,87]. Therefore, the integration of blockchains with other emerging technologies can contribute to the development of innovative solutions in the agri-food and precision agriculture domains to increase yields while reducing production costs and environmental pollution [26]. This FRD can be considered in response to the SDG 12, responsible consumption and production, and SDG 9.
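The expiry-warning idea referenced above can be illustrated with a short sketch (hypothetical names such as `check_expiry`; not code from the cited works): traceability records carry an expiry date, and a recurring smart-contract-style check flags products approaching it.

```python
from datetime import date, timedelta

# Sketch of an expiry alert: each traceability record carries an expiry
# date; a recurring check flags products nearing expiration so the retailer
# can discount, redistribute, or remove them in time.

inventory = [
    {"batch": "B-001", "product": "milk", "expires": date.today() + timedelta(days=2)},
    {"batch": "B-002", "product": "rice", "expires": date.today() + timedelta(days=90)},
]

def check_expiry(records, warn_days: int = 3):
    """Return records that expire within `warn_days` days."""
    cutoff = date.today() + timedelta(days=warn_days)
    return [r for r in records if r["expires"] <= cutoff]

for r in check_expiry(inventory):
    print(f"ALERT: batch {r['batch']} ({r['product']}) expires {r['expires']}")
```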
_4.6. Blockchain-IoT Solutions for a High Value FSC_
Only a few studies have focused on developing IoT-based blockchain solutions for
organic or premium FSCs, which could sustain consumers’ trust in authentic and organic
product origin in FSCs. Hence, another important dimension for future research is the
application of blockchains in combination with the IoT in FSCs to verify the authenticity of
organic food products [7,19]. IoT-based sensors integrated in FSCs ensure the reliability
and availability of data. The DLT, on the other hand, is a more reliable, credible, and
secure counterpart to a traditional database. Therefore, organic certification processes can
be facilitated and automated with integrated blockchain, DLT and IoT solutions [7,88].
Furthermore, a digital certificate with anti-counterfeit evidence, issued with blockchain, is
much more trustworthy and can be easily verified, compared to a paper-based counterpart.
Hence, further research on IoT-based blockchain solutions for organic FSCs and the subsequent evaluation of FSC performance is worth investigating. This FRD can be considered
in response to the SDG 3, good health and wellbeing, and SDGs 9 and 12.
_4.7. Automated and Direct Payments with Cryptocurrency and Proof-of-Delivery_
Traditional trading methods are time-consuming and rely heavily on manual processes to handle transactions in FSCs. In addition to these complex and inefficient practices, payments are slow and carried out through financial intermediaries [89,90]. For this issue, blockchain technology can provide the medium for automated and direct payment processes. Future research initiatives should consider adopting blockchains for automated payment transaction processes, with cryptocurrencies and proof-of-delivery methods integrated between senders and recipients [20]. Such an automated payment system with cryptocurrencies or currency-like transactions can help eliminate the need for trusted third parties and the unnecessary human interventions that lead to payment delays [12,21,90]. Additionally, initiatives to support small farmers can be introduced, increasing their competitiveness in developing markets, establishing cooperatives [19], and improving their profits [88,90]. Moreover, performance evaluations and cost analyses of such solutions in empirical and case study-based settings should be investigated. Prior studies highlighted the importance of blockchains in addressing labor and decent work conditions, ethical issues, animal welfare, and environmental impact issues related to the SDG 8 [25,84]. Therefore, this FRD can be considered in response to the SDG 8, decent work and economic growth, and SDG 9.
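A minimal escrow sketch of this payment-on-proof-of-delivery pattern (our illustration; `Escrow` and `confirm_delivery` are hypothetical names, not the design of [20]):

```python
# Sketch of an escrow releasing payment on proof of delivery: the buyer's
# funds are locked when the order is placed and transferred to the seller
# only once delivery is confirmed, removing the need for an intermediary.

class Escrow:
    def __init__(self, buyer: str, seller: str, amount: float):
        self.buyer, self.seller, self.amount = buyer, seller, amount
        self.state = "LOCKED"   # funds held by the (simulated) contract

    def confirm_delivery(self, signed_by: str) -> str:
        """Release funds to the seller once the buyer confirms receipt."""
        if self.state == "LOCKED" and signed_by == self.buyer:
            self.state = "RELEASED"
            return f"{self.amount} paid to {self.seller}"
        return "no-op: not locked or signer is not the buyer"

deal = Escrow(buyer="retailer-3", seller="farm-7", amount=120.0)
print(deal.confirm_delivery(signed_by="farm-7"))      # rejected
print(deal.confirm_delivery(signed_by="retailer-3"))  # funds released
```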
_4.8. Sustainable Agri-Food Supply Chain_
Blockchains and other DLTs, in combination with IoT, are considered some of the most promising technologies that could enable connected and traceable supply chains; more decentralized, trusted, and user-centered digital services; and new business models benefiting society and the economy. They could additionally enhance the sustainability of various agri-food supply chains [12,19,33,91].
Previous comprehensive research on blockchain and DLT development was demonstrated with proof-of-concept and prototypical implementations in FSCs. However, there is still a lack of empirical validation to evaluate the impact of blockchains and DLTs on FSC performance [22,37], in particular related to sustainability, i.e., economic, social, and environmental sustainability. For instance, with regard to the economic aspect, future research initiatives should evaluate how blockchains and DLTs can help reduce economic losses and food waste, or how such solutions can enhance the circular economy aspects of FSCs [17,84]. The economic sustainability of blockchain-enabled solutions needs to be evaluated, reflecting on the adoption potential of the blockchain technology in real business environments. However, the barriers to engaging all relevant stakeholders in FSCs in blockchain adoption [13,84] should be considered, as they can hinder the adoption process.
From the point of view of social sustainability, future research initiatives should address the legal and regulatory issues of blockchain-based systems [13,25,58,73,84]. Various initiatives promote global blockchain standards, such as ISO/TC 307 on blockchain, to facilitate industrial and societal acceptance [13]. In addition, further studies to improve working conditions and to monitor forced labor in FSCs should be carried out, measuring the real social sustainability of FSCs through blockchain utilization.
Blockchain-based IoT applications can record permanent data on the entire activity of an FSC, from food production (e.g., cultivated plants, fertilizers, pesticides), transportation, processing, and packaging to the retailing of food products. However, complex consensus mechanisms to validate blockchain transactions require high energy consumption [42]. Therefore, the investigation of "lightweight" distributed consensus mechanisms is necessary to address the energy consumption challenge and to consider an environmental sustainability perspective in blockchain deployments [26]. Providing reliable and detailed product information on food origin, logistics, production, and distribution can empower consumers to make informed and responsible purchasing decisions [12,18,19,21], taking into consideration the sustainability of the food industries involved, thereby enabling sustainable consumption in FSCs [12,19,21]. Ethical and sustainable food production [22], addressing fair income and poverty issues [17,18], responsible consumption and purchasing decisions [17], and global partnerships for sustainable development [17] can potentially support the achievement of the SDGs. Synergies and various trade-offs between targets can be investigated, addressing the issues of decent work, health, economy, social inclusion, sustainable water management [10], and the reduction of industrial, municipal, and agricultural waste with blockchains [17,18]. Once sustainable FSCs encompassing those three dimensions of sustainability (i.e., economic, social, environmental) are established, this can lead to the achievement of the SDGs, particularly SDG 6, clean water and sanitation, SDG 11, sustainable cities and communities, and SDGs 3, 8, 9, and 12.
Various additional SDGs, which can benefit from blockchains, mostly indirectly, were addressed in the literature, such as SDG 1, no poverty [17,18], SDG 4, quality education [17], SDG 5, gender equality [18], SDG 10, reduced inequalities [17], SDG 14, life below water [18], and SDG 17, partnerships for the goals [17]. The abovementioned suggestions for future research demonstrate that there are still numerous topics in blockchain innovation and implementation to investigate in order to establish digitally connected, traceable, and sustainable FSCs, which can potentially contribute towards the achievement of the SDGs.
**5. Discussion**
Apart from the challenges of scalability, security, and privacy, challenges of technical, organizational, and regulatory origin in blockchain-based FSCs [22] were highlighted, including technological immaturity [23] and adoption barriers [4,84], providing implications for research directions [13,17,22,33,84]. The lack of national and international regulations and standards [35], the high costs of blockchain development, gas consumption [53,58,92], and substantial energy and computing power consumption [57,92] can hinder industry-wide adoption in FSCs [22]. Additionally, the interoperability of DLTs should be investigated, including blockchain-to-blockchain and blockchain-to-legacy interoperability [36]. The development of consortia of business partners, supported by governmental institutions, was suggested to drive blockchain standardization and long-term implementation [4]. Various barriers to data entry at the physical level still persist, such as data and sensor tampering, which can be tackled with full digitization, full visibility, and substantial investments [7]. Therefore, cost and benefit factors, consumers' willingness to pay, and product value or volume might play a role in blockchain adoption decisions [7].

The contribution of blockchain technology towards the sustainability of FSCs [22] and the SDGs in the areas of health, economy, decent work, reduction of waste, sustainable water management, and social inclusion was highlighted [17,18,84]. More efforts should be dedicated to making blockchain, DLT, and IoT solutions in FSCs more sustainable, energy-efficient, and cost-efficient. Recent studies provide evidence that the internationalization of various economic activities at a worldwide level is necessary for the development of coherent policies for the SDGs, as well as for understanding the various synergies and trade-offs associated with market instruments to implement the SDGs [84,93]. The impact of disruptions, such as COVID-19, on global markets, economies, and practices is additionally highlighted, since disruptions prompt efforts in policies, strategies, and planning at the international level [93]. Therefore, novel supply chain processes should be designed to address the impact of disruptions on organizations, societies, and FSCs [24].
Apart from the technological and policy initiatives, novel approaches and standardizations to automatically measure food quality and safety should be adopted, such as HACCP, DNA barcoding, DNA profiling [45], food quality index evaluation [94], and combinations of different methodologies [45]. Implementing other emerging technologies, such as AI, digital twins, CPS, cloud and fog computing, and big data analytics, can ensure data-driven decision making in FSCs, as well as enhanced transparency, traceability, and automation [22]. Digitalization initiatives, such as Agriculture 4.0 [42], can enhance the safety and reliability of food chains, as well as reduce food waste and food fraud. Empowering consumers with reliable and sufficient food data can enable responsible consumption and purchasing decisions in FSCs [12,19,84]. More empirical and case study investigations are necessary to evaluate the technological capabilities of blockchains, the long-term benefits, and the quantitative aspects affecting FSC performance and sustainability [5,7,22]. The suggestions for future research directions and the summary of challenges elaborated in this review can benefit the ongoing DLT and IoT initiatives in FSCs.
**6. Conclusions**
This review contributes towards outlining the current practical applications of blockchain, DLTs, and IoT in the FSC domain, describing initiatives to address the most relevant challenges of scalability, security, and privacy, as well as suggestions for future research directions. A detailed analysis of the 69 shortlisted papers was provided, with a comprehensive summary of existing solutions, challenges, and applications. Six strategic SDGs of the UN can potentially be addressed by DLT and IoT implementation in FSCs, thereby enabling more traceable, transparent, and sustainable FSCs. There are several limitations to our study: papers published after December 2020 were not considered in the review, and due to the specifics of the search, some publications may have been missed. Further research should focus on considering other related supply chains, such as textile, e-commerce, food trade, agriculture, perishable and frozen food, processed food,
global and local retail industries, packaging, and grocery networks, as well as investigating the impacts of the other emerging technologies mentioned in this study on the sustainability, transparency, and traceability of FSCs. Furthermore, other challenges highlighted in this study, such as regulatory, standardization, and interoperability issues, should be addressed in more detail.
**Author Contributions: Conceptualization, J.N., U.P., T.M. and G.R.; methodology, J.N., U.P., T.M.**
and G.R.; validation, J.N. and U.P.; investigation, J.N. and U.P.; data curation, J.N. and U.P.; writing—
original draft preparation, J.N. and U.P.; writing—review and editing, J.N., U.P., T.M. and G.R.;
visualization, J.N. and U.P.; supervision, T.M. and G.R.; funding acquisition, J.N., U.P. and T.M. All
authors have read and agreed to the published version of the manuscript.
**Funding: This paper was jointly supported by OeAD-GmbH/ICM in cooperation with ASEA-**
UNINET and funded by the Austrian Federal Ministry of Science and Research (BMBWF), project
number ICM-2019-13796; the Lower Austrian Research and Education Promotion Agency (NFB) dissertation grant number SC18-015; and Austrian Blockchain Center, project number 868625, financed
by The Austrian Research Promotion Agency (FFG).
**Institutional Review Board Statement: Not applicable.**
**Informed Consent Statement: Not applicable.**
**Data Availability Statement: Not applicable.**
**Conflicts of Interest: The authors declare no conflict of interest.**
**Abbreviations**
FSC Food supply chain
ISO International Organization for Standardization
IoT Internet of Things
DLT Distributed ledger technology
CPS Cyber-physical systems
GPS Global positioning system
GIS Geographic information system
NFC Near-field communication
RFID Radio frequency identification
SDGs Sustainable Development Goals
UN United Nations
AI Artificial intelligence
SLR Systematic literature review
CPU Central processing unit
HACCP Hazard analysis and critical control point
IPFS Interplanetary file system
DOS Denial-of-service
DDOS Distributed denial-of-service
RSA Rivest-Shamir-Adleman
CO2 Carbon dioxide
SSL Secure sockets layer
TLS Transport layer security
ZKP Zero knowledge proofs
GDPR General data protection regulation
EPCIS Electronic product code information services
FRD Future Research Direction
COVID-19 Coronavirus disease 2019
DNA Deoxyribonucleic Acid
**Appendix A**
**Table A1. Classification of selected literature by application domain, publication type, year and research method.**
| Application | Publication | Year | Research Method | Reference |
|---|---|---|---|---|
| seafood | conference | 2019 | Case study | [74] |
| seafood | journal | 2019 | Review | [45] |
| agriculture | journal | 2020 | Review | [24,43] |
| agri-food | journal | 2019 | Review | [19,21] |
| agri-food | journal | 2020 | Review, case study | [73] |
| agri-food | journal | 2020 | Case study | [79] |
| agri-food | journal | 2020 | Review | [12,38] |
| agri-food | journal | 2018 | Review | [1] |
| agri-food | journal | 2020 | Experimental setup, simulation | [52] |
| agri-food | journal | 2020 | System design, case study | [58] |
| olive oil | conference | 2019 | Experimental setup, simulation | [69] |
| dairy | book chapter | 2019 | Experimental setup | [95] |
| dairy | journal | 2020 | System (framework) design | [59] |
| dairy | book chapter | 2020 | Review | [6] |
| egg | journal | 2019 | Case study | [68] |
| agri-food | conference | 2018 | Experimental setup | [49] |
| agri-food | conference | 2020 | System design, simulation | [53] |
| food | conference | 2016 | System (framework) design | [76] |
| food | conference | 2019 | System (framework) design | [71] |
| food | conference | 2017 | Case study | [56] |
| food | conference | 2019 | System (framework) design | [47] |
| food | conference | 2019 | Case study | [51,66] |
| food | conference | 2020 | Experimental setup | [61] |
| food | conference | 2019 | Case study (containerized) | [70] |
| food | conference | 2020 | Review | [39] |
| food | conference | 2019 | Experimental setup, simulation (milk chocolate) | [50] |
| food | conference | 2019 | System (framework) design | [44] |
| halal food | conference | 2019 | Experimental setup | [48] |
| halal food | journal | 2020 | Case study | [78] |
| pork meat, restaurant | journal | 2019 | Experimental setup | [94] |
| grain | journal | 2020 | System design, simulation (Australian) | [92] |
| grain | journal | 2020 | Case study | [67] |
| precision agriculture | journal | 2020 | System design (framework) | [85] |
| precision agriculture | journal | 2020 | Review | [26,42] |
| trade/food trade | journal | 2019 | Review | [41] |
| trade/food trade | journal | 2020 | System (framework) design | [23] |
| rice | conference | 2020 | System design (framework) | [54] |
| agriculture | conference | 2020 | Review | [77] |
| agriculture | conference | 2020 | System (framework) design | [72] |
| agriculture | conference | 2020 | System (framework) design | [90] |
| agriculture | conference | 2020 | System (framework) design | [88] |
| food | journal | 2018 | System (framework) design | [60] |
| food | journal | 2019 | Experimental setup | [63] |
| food | journal | 2020 | Experimental setup | [57] |
| food | journal | 2020 | Review | [25,31,33–35] |
| food | journal | 2020 | Simulation (quantitative study) | [81] |
| food | journal | 2019 | Review | [40] |
| food | journal | 2019 | Case study | [46] |
| food | journal | 2019 | Case study (case 2) | [75] |
| food | journal | 2020 | System (framework) design | [55] |
| soybean | conference | 2019 | System (framework) design | [20] |
| wine | journal | 2019 | Case study | [91] |
| food | book chapter | 2020 | Review | [36,37] |
| perishable food | journal | 2020 | System design (method) | [65] |
| seed | conference | 2020 | System (framework) design | [64] |
| retail | journal | 2020 | Experimental setup | [96] |
| grape | journal | 2020 | System (framework) design | [80] |
| fish | journal | 2020 | Case study | [18] |
| livestock | conference | 2020 | System (framework) design | [62] |
**Appendix B**

**Figure A1. Challenges of DLT and IoT implementation: visualization of 8 thematic clusters and their keywords.**
**References**
1. Galvez, J.F.; Mejuto, J.C.; Simal-Gandara, J. Future challenges on the use of blockchain for food traceability analysis. Trends Anal.
_[Chem. 2018, 107, 222–232. [CrossRef]](http://doi.org/10.1016/j.trac.2018.08.011)_
2. Aung, M.M.; Chang, Y.S. Traceability in a food supply chain: Safety and quality perspectives. Food Control 2014, 39, 172–184.
[[CrossRef]](http://doi.org/10.1016/j.foodcont.2013.11.007)
3. Chen, S.; Liu, X.; Yan, J.; Hu, G.; Shi, Y. Processes, benefits, and challenges for adoption of blockchain technologies in food supply
[chains: A thematic analysis. Inf. Syst. E Bus. Manag. 2020, 5, 1. [CrossRef]](http://doi.org/10.1007/s10257-020-00467-3)
4. Behnke, K.; Janssen, M.F.W.H.A. Boundary conditions for traceability in food supply chains using blockchain technology. Int. J.
_[Inf. Manag. 2020, 52, 101969. [CrossRef]](http://doi.org/10.1016/j.ijinfomgt.2019.05.025)_
5. Köhler, S.; Pizzol, M. Technology assessment of blockchain-based technologies in the food supply chain. J. Clean. Prod. 2020, 269,
[122193. [CrossRef]](http://doi.org/10.1016/j.jclepro.2020.122193)
6. Aysha, C.H.; Athira, S. Overcoming the quality challenges across the supply chain. In Dairy Processing: Advanced Research to
_[Applications; Minj, J., Sudhakaran, V.A., Kumari, A., Eds.; Springer: Singapore, 2020; pp. 181–196. [CrossRef]](http://doi.org/10.1007/978-981-15-2608-4_9)_
7. Rogerson, M.; Parry, G.C. Blockchain: Case studies in food supply chain visibility. Int. J. Supply Chain Manag. 2020, 25, 601–614.
[[CrossRef]](http://doi.org/10.1108/SCM-08-2019-0300)
8. Queiroz, M.M.; Telles, R.; Bonilla, S.H. Blockchain and supply chain management integration: A systematic review of the
[literature. SCM 2019, 25, 241–254. [CrossRef]](http://doi.org/10.1108/SCM-03-2018-0143)
9. Bermeo-Almeida, O.; Cardenas-Rodriguez, M.; Samaniego-Cobo, T.; Ferruzola-Gómez, E.; Cabezas-Cabezas, R.; Bazán-Vera, W.
Blockchain in agriculture: A systematic literature review. In Technologies and Innovation; Valencia-García, R., Alcaraz-Mármol, G.,
del Cioppo-Morstadt, J., Vera-Lucio, N., Bucaram-Leverone, M., Eds.; Springer International Publishing: Cham, Switzerland,
[2018; pp. 44–56. [CrossRef]](http://doi.org/10.1007/978-3-030-00940-3_4)
10. Zhao, G.; Liu, S.; Lopez, C.; Lu, H.; Elgueta, S.; Chen, H.; Boshkoska, B.M. Blockchain technology in agri-food value chain
[management: A synthesis of applications, challenges and future research directions. Comput. Ind. 2019, 109, 83–99. [CrossRef]](http://doi.org/10.1016/j.compind.2019.04.002)
11. [Creydt, M.; Fischer, M. Blockchain and more—Algorithm driven food traceability. Food Control 2019, 105, 45–51. [CrossRef]](http://doi.org/10.1016/j.foodcont.2019.05.019)
12. Kamble, S.S.; Gunasekaran, A.; Gawankar, S.A. Achieving sustainable performance in a data-driven agriculture supply chain: A
[review for research and applications. Int. J. Prod. Econ. 2020, 219, 179–194. [CrossRef]](http://doi.org/10.1016/j.ijpe.2019.05.022)
13. Duan, J.; Zhang, C.; Gong, Y.; Brown, S.; Li, Z. A Content-analysis based literature review in blockchain adoption within food
[supply chain. Int. J. Environ. Res. Public Health 2020, 17, 1784. [CrossRef]](http://doi.org/10.3390/ijerph17051784)
14. Ying, K.-C.; Pourhejazy, P.; Cheng, C.-Y.; Syu, R.-S. Supply chain-oriented permutation flowshop scheduling considering flexible
[assembly and setup times. Int. J. Prod. Res. 2020, 8, 1–24. [CrossRef]](http://doi.org/10.1080/00207543.2020.1842938)
15. De Leon, D.C.; Stalick, A.Q.; Jillepalli, A.A.; Haney, M.A.; Sheldon, F.T. Blockchain: Properties and misconceptions. Asia Pac. J.
_[Innov. Entrep. 2017, 11, 286–300. [CrossRef]](http://doi.org/10.1108/APJIE-12-2017-034)_
16. United Nations. Transforming our World: The 2030 Agenda for Sustainable Development | Department of Economic and
[Social Affairs. 2015. Available online: https://sdgs.un.org/publications/transforming-our-world-2030-agenda-sustainable-](https://sdgs.un.org/publications/transforming-our-world-2030-agenda-sustainable-development-17981)
[development-17981 (accessed on 8 March 2021).](https://sdgs.un.org/publications/transforming-our-world-2030-agenda-sustainable-development-17981)
17. França, A.S.L.; Neto, J.A.; Gonçalves, R.F.; Almeida, C.M.V.B. Proposing the use of blockchain to improve the solid waste
[management in small municipalities. J. Clean. Prod. 2020, 244, 118529. [CrossRef]](http://doi.org/10.1016/j.jclepro.2019.118529)
18. Tsolakis, N.; Niedenzu, D.; Simonetto, M.; Dora, M.; Kumar, M. Supply network design to address United Nations Sustainable
[Development Goals: A case study of blockchain implementation in Thai fish industry. J. Bus. Res. 2020, 550, 43. [CrossRef]](http://doi.org/10.1016/j.jbusres.2020.08.003)
19. Kamilaris, A.; Fonts, A.; Prenafeta-Boldú, F.X. The rise of blockchain technology in agriculture and food supply chains. Trends
_[Food Sci. Technol. 2019, 91, 640–652. [CrossRef]](http://doi.org/10.1016/j.tifs.2019.07.034)_
20. Salah, K.; Nizamuddin, N.; Jayaraman, R.; Omar, M. Blockchain-based soybean traceability in agricultural supply chain. IEEE
_[Access 2019, 7, 73295–73305. [CrossRef]](http://doi.org/10.1109/ACCESS.2019.2918000)_
21. Antonucci, F.; Figorilli, S.; Costa, C.; Pallottino, F.; Raso, L.; Menesatti, P. A review on blockchain applications in the agri-food
[sector. J. Sci. Food Agric. 2019, 99, 6129–6138. [CrossRef]](http://doi.org/10.1002/jsfa.9912)
22. Rejeb, A.; Keogh, J.G.; Zailani, S.; Treiblmaier, H.; Rejeb, K. Blockchain technology in the food industry: A review of potentials,
[challenges and future research directions. Logistics 2020, 4, 27. [CrossRef]](http://doi.org/10.3390/logistics4040027)
23. Qian, J.; Dai, B.; Wang, B.; Zha, Y.; Song, Q. Traceability in food processing: Problems, methods, and performance evaluations—A
[review. Crit. Rev. Food Sci. Nutr. 2020, 1–14. [CrossRef]](http://doi.org/10.1080/10408398.2020.1825925)
24. Lin, W.; Huang, X.; Fang, H.; Wang, V.; Hua, Y.; Wang, J.; Yin, H.; Yi, D.; Yau, L. Blockchain technology in current agricultural
[systems: From techniques to applications. IEEE Access 2020, 8, 143920–143937. [CrossRef]](http://doi.org/10.1109/ACCESS.2020.3014522)
25. Katsikouli, P.; Wilde, A.S.; Dragoni, N.; Høgh-Jensen, H. On the benefits and challenges of blockchains for managing food supply
[chains. J. Sci. Food Agric. 2021, 101, 2175–2181. [CrossRef]](http://doi.org/10.1002/jsfa.10883)
26. Torky, M.; Hassanein, A.E. Integrating blockchain and the internet of things in precision agriculture: Analysis, opportunities, and
[challenges. Comput. Electron. Agric. 2020, 178, 105476. [CrossRef]](http://doi.org/10.1016/j.compag.2020.105476)
27. Tranfield, D.; Denyer, D.; Smart, P. Towards a methodology for developing evidence-informed management knowledge by means
[of systematic review. Br. J. Manag. 2003, 14, 207–222. [CrossRef]](http://doi.org/10.1111/1467-8551.00375)
-----
_Sustainability 2021, 13, 4206_ 24 of 26
28. Pereira, C.R.; Christopher, M.; da Silva, A.L. Achieving supply chain resilience: The role of procurement. Supply Chain Manag. Int.
_[J. 2014, 19, 626–642. [CrossRef]](http://doi.org/10.1108/SCM-09-2013-0346)_
29. Kunz, N.; Reiner, G. A meta-analysis of humanitarian logistics research. J. Humanit. Logist. Supply Chain Manag. 2012, 2, 116–147.
[[CrossRef]](http://doi.org/10.1108/20426741211260723)
30. Alexander, A.; Walker, H.; Naim, M. Decision theory in sustainable supply chain management: A literature review. Supply Chain
_[Manag. Int. J. 2014, 19, 504–522. [CrossRef]](http://doi.org/10.1108/SCM-01-2014-0007)_
31. Niknejad, N.; Ismail, W.; Bahari, M.; Hendradi, R.; Salleh, A.Z. Mapping the research trends on blockchain technology in food
[and agriculture industry: A bibliometric analysis. Environ. Technol. Innov. 2021, 21, 101272. [CrossRef]](http://doi.org/10.1016/j.eti.2020.101272)
32. Wieringa, R.; Maiden, N.; Mead, N.; Rolland, C. Requirements engineering paper classification and evaluation criteria: A proposal
[and a discussion. Requir. Eng. 2006, 11, 102–107. [CrossRef]](http://doi.org/10.1007/s00766-005-0021-6)
33. Feng, H.; Wang, X.; Duan, Y.; Zhang, J.; Zhang, X. Applying blockchain technology to improve agri-food traceability: A review of
[development methods, benefits and challenges. J. Clean. Prod. 2020, 260, 121031. [CrossRef]](http://doi.org/10.1016/j.jclepro.2020.121031)
34. Xu, Y.; Li, X.; Zeng, X.; Cao, J.; Jiang, W. Application of blockchain technology in food safety control: Current trends and future
[prospects. Crit. Rev. Food Sci. Nutr. 2020, 1–20. [CrossRef] [PubMed]](http://doi.org/10.1080/10408398.2020.1858752)
35. Kayikci, Y.; Subramanian, N.; Dora, M.; Bhatia, M.S. Food supply chain in the era of Industry 4.0, blockchain technology
implementation opportunities and impediments from the perspective of people, process, performance, and technology. Prod.
_[Plan. Control 2020, 31, 1–21. [CrossRef]](http://doi.org/10.1080/09537287.2020.1810757)_
36. Keogh, J.G.; Rejeb, A.; Khan, N.; Dean, K.; Hand, K.J. Optimizing global food supply chains: The case for blockchain and GSI
[standards. In Building the Future of Food Safety Technology; Elsevier: Amsterdam, The Netherlands, 2020; pp. 171–204. [CrossRef]](http://doi.org/10.1016/B978-0-12-818956-6.00017-8)
37. Subramanian, N.; Chaudhuri, A.; Kayikci, Y. Blockchain applications in food supply chain. In Blockchain and Supply Chain Logistics;
Subramanian, N., Chaudhuri, A., Kayıkcı, Y., Eds.; Springer International Publishing: Cham, Switzerland, 2020; pp. 21–29.
[[CrossRef]](http://doi.org/10.1007/978-3-030-47531-4_3)
38. Lezoche, M.; Hernandez, J.E.; Díaz, M.d.M.E.A.; Panetto, H.; Kacprzyk, J. Agri-food 4.0, a survey of the supply chains and
[technologies for the future agriculture. Comput. Ind. 2020, 117, 103187. [CrossRef]](http://doi.org/10.1016/j.compind.2020.103187)
39. Peña, M.; Llivisaca, J.; Siguenza-Guzman, L. Blockchain and its potential applications in food supply chain management in
ecuador. In Advances in Emerging Trends and Technologies; Botto-Tobar, M., León-Acurio, J., Cadena, A.D., Díaz, P.M., Eds.; Springer
[International Publishing: Cham, Switzerland, 2020; pp. 101–112. [CrossRef]](http://doi.org/10.1007/978-3-030-32022-5_10)
40. Astill, J.; Dara, R.A.; Campbell, M.; Farber, J.M.; Fraser, E.D.G.; Sharif, S.; Yada, R.Y. Transparency in food supply chains: A review
[of enabling technology solutions. Trends Food Sci. Technol. 2019, 91, 240–247. [CrossRef]](http://doi.org/10.1016/j.tifs.2019.07.024)
41. Juma, H.; Shaalan, K.; Kamel, I. A Survey on using blockchain in trade supply chain solutions. IEEE Access 2019, 7, 184115–184132.
[[CrossRef]](http://doi.org/10.1109/ACCESS.2019.2960542)
42. Liu, Y.; Ma, X.; Shu, L.; Hancke, G.P.; Abu-Mahfouz, A.M. From industry 4.0 to agriculture 4.0, current status, enabling
[technologies, and research challenges. IEEE Trans. Ind. Inf. 2020, 1. [CrossRef]](http://doi.org/10.1109/TII.2020.3003910)
43. Akram, S.V.; Malik, P.K.; Singh, R.; Anita, G.; Tanwar, S. Adoption of blockchain technology in various realms: Opportunities and
[challenges. Secur. Priv. 2020, 3, 4. [CrossRef]](http://doi.org/10.1002/spy2.109)
44. Haroon, A.; Basharat, M.; Khattak, A.M.; Ejaz, W. Internet of things platform for transparency and traceability of food supply
chain. In Proceedings of the 2019 IEEE 10th Annual Information Technology, Electronics and Mobile Communication Conference
[(IEMCON), Vancouver, BC, Canada, 17–19 October 2019; pp. 13–19. [CrossRef]](http://doi.org/10.1109/IEMCON.2019.8936158)
45. Gopi, K.; Mazumder, D.; Sammut, J.; Saintilan, N. Determining the provenance and authenticity of seafood: A review of current
[methodologies. Trends Food Sci. Technol. 2019, 91, 294–304. [CrossRef]](http://doi.org/10.1016/j.tifs.2019.07.010)
46. Tsang, Y.P.; Choy, K.L.; Wu, C.H.; Ho, G.T.S.; Lam, H.Y. Blockchain-driven IoT for food traceability with an integrated consensus
[mechanism. IEEE Access 2019, 7, 129000–129017. [CrossRef]](http://doi.org/10.1109/ACCESS.2019.2940227)
47. Baralla, G.; Pinna, A.; Corrias, G. Ensure traceability in European food supply chain by using a blockchain system. In Proceedings
of the 2019 IEEE/ACM 2nd International Workshop on Emerging Trends in Software Engineering for Blockchain (WETSEB),
[Montreal, QC, Canada, 27 May 2019; pp. 40–47. [CrossRef]](http://doi.org/10.1109/WETSEB.2019.00012)
48. Chandra, G.R.; Liaqat, I.A.; Sharma, B. Blockchain redefining: The halal food sector. In Proceedings of the 2019 Amity International
[Conference on Artificial Intelligence (AICAI), Dubai, United Arab Emirates, 2–6 February 2019; pp. 349–354. [CrossRef]](http://doi.org/10.1109/AICAI.2019.8701321)
49. Caro, M.P.; Ali, M.S.; Vecchio, M.; Giaffreda, R. Blockchain-based traceability in agri-food supply chain management: A practical
implementation. In Proceedings of the 2018 IoT Vertical and Topical Summit on Agriculture—Tuscany (IOT Tuscany), Tuscany,
[Italy, 8–9 May 2019; pp. 1–4. [CrossRef]](http://doi.org/10.1109/IOT-TUSCANY.2018.8373021)
50. Malik, S.; Kanhere, S.S.; Jurdak, R. ProductChain: Scalable blockchain framework to support provenance in supply chains. In
Proceedings of the 2018 IEEE 17th International Symposium on Network Computing and Applications (NCA), Cambridge, MA,
[USA, 1–3 November 2018; pp. 1–10. [CrossRef]](http://doi.org/10.1109/NCA.2018.8548322)
51. Casino, F.; Kanakaris, V.; Dasaklis, T.K.; Moschuris, S.; Rachaniotis, N.P. Modeling food supply chain traceability based on
[blockchain technology. IFAC PapersOnLine 2019, 52, 2728–2733. [CrossRef]](http://doi.org/10.1016/j.ifacol.2019.11.620)
52. Shahid, A.; Almogren, A.; Javaid, N.; Al-Zahrani, F.A.; Zuair, M.; Alam, M. Blockchain-based agri-food supply chain: A complete
[solution. IEEE Access 2020, 8, 69230–69243. [CrossRef]](http://doi.org/10.1109/ACCESS.2020.2986257)
-----
_Sustainability 2021, 13, 4206_ 25 of 26
53. Shahid, A.; Sarfraz, U.; Malik, M.W.; Iftikhar, M.S.; Jamal, A.; Javaid, N. Blockchain-based reputation system in agri-food supply
chain. In Advanced Information Networking and Applications; Barolli, L., Amato, F., Moscato, F., Enokido, T., Takizawa, M., Eds.;
[Springer International Publishing: Cham, Switzerland, 2020; pp. 12–21. [CrossRef]](http://doi.org/10.1007/978-3-030-44041-1_2)
54. Kumar, M.V.; Iyengar, N.C.S.N.; Goar, V. Employing blockchain in rice supply chain management. In Advances in Information
_Communication Technology and Computing; Goar, V., Kuri, M., Kumar, R., Senjyu, T., Eds.; Springer: Singapore, 2021; pp. 451–461._
[[CrossRef]](http://doi.org/10.1007/978-981-15-5421-6_45)
55. Singh, S.K.; Jenamani, M.; Dasgupta, D.; Das, S. A conceptual model for Indian public distribution system using consortium
[blockchain with on-chain and off-chain trusted data. Inf. Technol. Dev. 2020, 11, 1–25. [CrossRef]](http://doi.org/10.1080/02681102.2020.1847024)
56. Tian, F. A supply chain traceability system for food safety based on HACCP, blockchain & Internet of things. In Proceedings of
the 2017 14th International Conference on Service Systems and Service Management (ICSSSM), Dalian, China, 16–18 June 2017;
[pp. 1–6. [CrossRef]](http://doi.org/10.1109/ICSSSM.2017.7996119)
57. Gai, K.; Fang, Z.; Wang, R.; Zhu, L.; Jiang, P.; Choo, K.-K.R. Edge computing and lightning network empowered secure food
[supply management. IEEE Internet Things J. 2020, 1. [CrossRef]](http://doi.org/10.1109/JIOT.2020.3024694)
58. Baralla, G.; Pinna, A.; Tonelli, R.; Marchesi, M.; Ibba, S. Ensuring transparency and traceability of food local products: A
[blockchain application to a Smart Tourism Region. Concurr. Comput. Pract. Exp. 2021, 33, e5857. [CrossRef]](http://doi.org/10.1002/cpe.5857)
59. [Tan, A.; Ngan, P.T. A proposed framework model for dairy supply chain traceability. Sustain. Futures 2020, 2, 100034. [CrossRef]](http://doi.org/10.1016/j.sftr.2020.100034)
60. Kim, M.; Hilton, B.; Burks, Z.; Reyes, J. Integrating blockchain, smart contract-tokens, and IoT to design a food traceability
solution. In Proceedings of the 2018 IEEE 9th Annual Information Technology, Electronics and Mobile Communication Conference
[(IEMCON), Vancouver, BC, Canada, 1–3 November 2018; pp. 335–340. [CrossRef]](http://doi.org/10.1109/IEMCON.2018.8615007)
61. Shahzad, A.; Zhang, K. An integrated IoT-blockchain implementation for end-to-end supply chain. In Proceedings of the Future
_Technologies Conference (FTC); Arai, K., Kapoor, S., Bhatia, R., Eds.; Springer International Publishing: Cham, Switzerland, 2020;_
[Volume 2, pp. 987–997. [CrossRef]](http://doi.org/10.1007/978-3-030-63089-8_65)
62. Yang, L.; Liu, X.-Y.; Kim, J.S. Cloud-based livestock monitoring system using RFID and blockchain technology. In Proceedings of
the 2020 7th IEEE International Conference on Cyber Security and Cloud Computing (CSCloud)/2020 6th IEEE International
[Conference on Edge Computing and Scalable Cloud (EdgeCom), New York, NY, USA, 1–3 August 2020; pp. 240–245. [CrossRef]](http://doi.org/10.1109/CSCloud-EdgeCom49738.2020.00049)
63. Mondal, S.; Wijewardena, K.P.; Karuppuswami, S.; Kriti, N.; Kumar, D.; Chahal, P. Blockchain inspired RFID-based information
[architecture for food supply chain. IEEE Internet Things J. 2019, 6, 5803–5813. [CrossRef]](http://doi.org/10.1109/JIOT.2019.2907658)
64. Abdulhussein, A.B.; Hadi, A.K.; Ilyas, M. Design a tracing system for a seed supply chain based on blockchain. In Proceedings of
the 2020 3rd International Conference on Engineering Technology and its Applications (IICETA), Najaf, Iraq, 6–7 September 2020;
[pp. 209–214. [CrossRef]](http://doi.org/10.1109/IICETA50496.2020.9318792)
65. Thakur, S.; Breslin, J.G. Scalable and secure product serialization for multi-party perishable good supply chains using blockchain.
_[Internet Things 2020, 11, 100253. [CrossRef]](http://doi.org/10.1016/j.iot.2020.100253)_
66. Lagutin, D.; Bellesini, F.; Bragatto, T.; Cavadenti, A.; Croce, V.; Kortesniemi, Y.; Leligou, H.C.; Oikonomidis, Y.; Polyzos, G.C.;
Raveduto, G.; et al. Secure open federation of IoT platforms through interledger technologies—The SOFIE Approach. In
Proceedings of the 2019 European Conference on Networks and Communications (EuCNC), Valencia, Spain, 18–21 June 2019;
[pp. 518–522. [CrossRef]](http://doi.org/10.1109/EuCNC.2019.8802017)
67. Zhang, X.; Sun, P.; Xu, J.; Wang, X.; Yu, J.; Zhao, Z.; Dong, Y. Blockchain-based safety management system for the grain supply
[chain. IEEE Access 2020, 8, 36398–36410. [CrossRef]](http://doi.org/10.1109/ACCESS.2020.2975415)
68. Bumblauskas, D.; Mann, A.; Dugan, B.; Rittmer, J. A blockchain use case in food distribution: Do you know where your food has
[been? Int. J. Inf. Manag. 2020, 52, 102008. [CrossRef]](http://doi.org/10.1016/j.ijinfomgt.2019.09.004)
69. Arena, A.; Bianchini, A.; Perazzo, P.; Vallati, C.; Dini, G. BRUSCHETTA: An IoT blockchain-based framework for certifying extra
virgin olive oil supply chain. In Proceedings of the 2019 IEEE International Conference on Smart Computing (SMARTCOMP),
[Washington, DC, USA, 12–15 June 2019; pp. 173–179. [CrossRef]](http://doi.org/10.1109/SMARTCOMP.2019.00049)
70. Bechtsis, D.; Tsolakis, N.; Bizakis, A.; Vlachos, D. A blockchain framework for containerized food supply chains. In 29th European
_[Symposium on Computer Aided Process Engineering; Elsevier: Eindhoven, The Netherlands, 2019; pp. 1369–1374. [CrossRef]](http://doi.org/10.1016/B978-0-12-818634-3.50229-0)_
71. Madumidha, S.; Ranjani, P.S.; Vandhana, U.; Venmuhilan, B. A theoretical implementation: Agriculture-food supply chain
management using blockchain technology. In Proceedings of the 2019 TEQIP III Sponsored International Conference on
Microwave Integrated Circuits, Photonics and Wireless Networks (IMICPW), Tiruchirappalli, India, 22–24 May 2019; pp. 174–178.
[[CrossRef]](http://doi.org/10.1109/IMICPW.2019.8933270)
72. Awan, S.H.; Nawaz, A.; Ahmed, S.; Khattak, H.A.; Zaman, K.; Najam, Z. Blockchain based Smart model for agricultural food
supply chain. In Proceedings of the 2020 International Conference on UK-China Emerging Technologies (UCET), Glasgow, UK,
[20–21 August 2020; pp. 1–5. [CrossRef]](http://doi.org/10.1109/UCET51115.2020.9205477)
73. [Xu, J.; Guo, S.; Xie, D.; Yan, Y. Blockchain: A new safeguard for agri-foods. Artif. Intell. Agric. 2020, 4, 153–161. [CrossRef]](http://doi.org/10.1016/j.aiia.2020.08.002)
74. Aich, S.; Chakraborty, S.; Sain, M.; Lee, H.-I.; Kim, H.-C. A review on benefits of IoT integrated blockchain based supply chain
management implementations across different sectors with case study. In Proceedings of the 2019 21st International Conference
on Advanced Communication Technology (ICACT), PyeongChang Kwangwoon Do, Korea, 17–20 February 2019; pp. 138–141.
[[CrossRef]](http://doi.org/10.23919/ICACT.2019.8701910)
75. Roeck, D.; Sternberg, H.; Hofmann, E. Distributed ledger technology in supply chains: A transaction cost perspective. Int. J. Prod.
_[Res. 2020, 58, 2124–2141. [CrossRef]](http://doi.org/10.1080/00207543.2019.1657247)_
-----
_Sustainability 2021, 13, 4206_ 26 of 26
76. Tian, F. An agri-food supply chain traceability system for China based on RFID & blockchain technology. In Proceedings of the
2016 13th International Conference on Service Systems and Service Management (ICSSSM), Kunming, China, 24–26 June 2016;
[pp. 1–6. [CrossRef]](http://doi.org/10.1109/ICSSSM.2016.7538424)
77. Mirabelli, G.; Solina, V. Blockchain and agricultural supply chains traceability: Research trends and future challenges. Procedia
_[Manuf. 2020, 42, 414–421. [CrossRef]](http://doi.org/10.1016/j.promfg.2020.02.054)_
78. [Tan, A.; Gligor, D.; Ngah, A. Applying Blockchain for Halal food traceability. Int. J. Logist. Res. Appl. 2020, 1, 1–18. [CrossRef]](http://doi.org/10.1080/13675567.2020.1825653)
79. Stranieri, S.; Riccardi, F.; Meuwissen, M.P.M.; Soregaroli, C. Exploring the impact of blockchain on the performance of agri-food
[supply chains. Food Control 2021, 119, 107495. [CrossRef]](http://doi.org/10.1016/j.foodcont.2020.107495)
80. Saurabh, S.; Dey, K. Blockchain technology adoption, architecture, and sustainable agri-food supply chains. J. Clean. Prod. 2021,
_[284, 124731. [CrossRef]](http://doi.org/10.1016/j.jclepro.2020.124731)_
81. Tayal, A.; Solanki, A.; Kondal, R.; Nayyar, A.; Tanwar, S.; Kumar, N. Blockchain-based efficient communication for food supply
[chain industry: Transparency and traceability analysis for sustainable business. Int. J. Commun. Syst. 2021, 34, 71. [CrossRef]](http://doi.org/10.1002/dac.4696)
82. [Angelis, J.; da Silva, E.R. Blockchain adoption: A value driver perspective. Bus. Horiz. 2019, 62, 307–314. [CrossRef]](http://doi.org/10.1016/j.bushor.2018.12.001)
83. Kaiser, C.; Stocker, A.; Festl, A.; Lechner, G.; Fellmann, M. A research agenda for vehicle information systems. In Proceedings of
the 26th European Conference on Information Systems, Portsmouth, UK, 23–28 June 2018.
84. Paliwal, V.; Chandra, S.; Sharma, S. Blockchain technology for sustainable supply chain management: A systematic literature
[review and a classification framework. Sustainability 2020, 12, 7638. [CrossRef]](http://doi.org/10.3390/su12187638)
85. Iqbal, R.; Butt, T.A. Safe farming as a service of blockchain-based supply chain management for improved transparency. Cluster
_[Comput. 2020, 23, 2139–2150. [CrossRef]](http://doi.org/10.1007/s10586-020-03092-4)_
86. [Koens, T.; Poll, E. Assessing interoperability solutions for distributed ledgers. Pervasive Mob. Comput. 2019, 59, 101079. [CrossRef]](http://doi.org/10.1016/j.pmcj.2019.101079)
87. Misra, N.N.; Dixit, Y.; Al-Mallahi, A.; Bhullar, M.S.; Upadhyay, R.; Martynenko, A. IoT, big data and artificial intelligence in
[agriculture and food industry. IEEE Internet Things J. 2020, 1. [CrossRef]](http://doi.org/10.1109/JIOT.2020.2998584)
88. Fernandez, A.; Waghmare, A.; Tripathi, S. Agricultural supply chain using blockchain. In Proceedings of International Conference on
_Intelligent Manufacturing and Automation; Vasudevan, H., Kottur, V.K.N., Raina, A.A., Eds.; Springer: Singapore, 2020; pp. 127–134._
[[CrossRef]](http://doi.org/10.1007/978-981-15-4485-9_14)
89. Tripoli, M.; Schmidhuber, J. Emerging Opportunities for the Application of Blockchain in the Agri-Food Industry; FAO: Rome, Italy;
ICTSD: Geneva, Switzerland, 2018.
90. Al-Amin, S.; Sharkar, S.R.; Kaiser, M.S.; Biswas, M. Towards a blockchain-based supply chain management for e-agro business system. In Proceedings of the International Conference on Trends in Computational and Cognitive Engineering; Kaiser, M.S., Bandyopadhyay,
[A., Mahmud, M., Ray, K., Eds.; Springer: Singapore, 2021; pp. 329–339. [CrossRef]](http://doi.org/10.1007/978-981-33-4673-4_26)
91. Spadoni, R.; Nanetti, M.; Bondanese, A.; Rivaroli, S. Innovative solutions for the wine sector: The role of startups. Wine Econ.
_[Policy 2019, 8, 165–170. [CrossRef]](http://doi.org/10.1016/j.wep.2019.08.001)_
92. Gunasekera, D.; Valenzuela, E. Adoption of blockchain technology in the Australian grains trade: An assessment of potential
[economic effects. Econ Pap. 2020, 39, 152–161. [CrossRef]](http://doi.org/10.1111/1759-3441.12274)
93. Philippidis, G.; Shutes, L.; M’Barek, R.; Ronzon, T.; Tabeau, A.; van Meijl, H. Snakes and ladders: World development pathways’
[synergies and trade-offs through the lens of the Sustainable Development Goals. J. Clean. Prod. 2020, 267, 122147. [CrossRef]](http://doi.org/10.1016/j.jclepro.2020.122147)
[[PubMed]](http://www.ncbi.nlm.nih.gov/pubmed/32921933)
94. George, R.V.; Harsh, H.O.; Ray, P.; Babu, A.K. Food quality traceability prototype for restaurants using blockchain and food
[quality data index. J. Clean. Prod. 2019, 240, 118021. [CrossRef]](http://doi.org/10.1016/j.jclepro.2019.118021)
95. Borah, M.D.; Naik, V.B.; Patgiri, R.; Bhargav, A.; Phukan, B.; Basani, S.G.M. Supply chain management in agriculture using
blockchain and IoT. In Advanced Applications of Blockchain Technology; Kim, S., Deka, G.C., Eds.; Springer: Singapore, 2020;
[pp. 227–242. [CrossRef]](http://doi.org/10.1007/978-981-13-8775-3_11)
96. Latif, R.M.A.; Farhan, M.; Rizwan, O.; Hussain, M.; Jabbar, S.; Khalid, S. Retail level Blockchain transformation for product supply
[chain using truffle development platform. Cluster Comput. 2021, 24, 1–16. [CrossRef]](http://doi.org/10.1007/s10586-020-03165-4)
**Open Access**
# Security Analysis on an Optical Encryption and Authentication Scheme Based on Phase-Truncation and Phase-Retrieval Algorithm
### Volume 11, Number 5, October 2019
#### Yi Xiong, Ravi Kumar, and Chenggen Quan
##### DOI: 10.1109/JPHOT.2019.2936236
Department of Mechanical Engineering, National University of Singapore, Singapore 117576
_This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see_
_https://creativecommons.org/licenses/by/4.0/_
Manuscript received August 8, 2019; accepted August 14, 2019. Date of publication August 20, 2019;
date of current version September 19, 2019. This work was supported by the National University of
Singapore under Research Project R-265-000-589-114. Corresponding author: Chenggen Quan (email:
[email protected]).
**Abstract: In this paper, the security of the cryptosystem based on phase-truncation Fourier**
transform (PTFT) and Gerchberg-Saxton (G-S) algorithm is analyzed. In this cryptosystem,
the phase key generated using phase-truncated (PT) operation is bonded with the phase
key generated in G-S algorithm to form the first private key, which improves the complexity
of the first private key. In addition, since the second private key is generated using the G-S
algorithm, the number of known constraints decreases compared to the traditional PTFT-based cryptosystem, which will lead to the non-convergence of special attacks. However, it
has been found that two private keys generated in the cryptosystem based on PTFT and
G-S algorithm are related to one phase key generated in the G-S algorithm, which provides
an additional constraint to retrieve the other private key when one private key is disclosed.
Based on this analysis, two iterative processes with different constraints are proposed to crack the cryptosystem based on the PTFT and G-S algorithm. This is the first report of the silhouette problem in the cryptosystem based on the PTFT and G-S algorithm.
Numerical simulations are carried out to validate the feasibility and effectiveness of our
analysis and proposed iterative processes.
**Index Terms: Optical image encryption and authentication, security analysis, silhouette**
problem.
#### 1. Introduction
Information security and authentication have attracted increasing attention in recent decades because of the rapid development of computer techniques and the wide use of the internet. Due
to their unique advantages, such as parallel processing and multidimensional capabilities, optical
techniques have been introduced in the field of information security [1]–[3]. A well-known optical
encryption technique named double random phase encoding (DRPE) in which the input image
is encoded into a noise-like image by using two independent random phase-only masks (RPMs)
located at the input (spatial) and Fourier (frequency) planes, respectively, was proposed by Refregier and Javidi [4]. Subsequently, a large number of image encryption systems based on optical
techniques, such as digital holography [5], [6], phase shifting [7], [8], diffractive imaging [9], [10],
interference [11], [12] and polarization [13], [14], have been proposed. Simultaneously, cryptanalysis of existing encryption schemes has also been carried out to disclose their inherent drawbacks [15]–[25] and promote the investigation of advanced and security-enhanced cryptosystems
[26]–[30]. For example, it has been found that the DRPE-based cryptosystem [4] is vulnerable to
some attacks, such as chosen-ciphertext [15], [16] and known-plaintext attacks [17], due to its inherent linearity. To address this issue, some techniques, such as equal modulus decomposition [26],
scrambling algorithms [28] and ghost imaging [29], have been introduced to enhance the system
security; however, the additional algorithms increase the difficulty of fully optical implementation for
the security-enhanced cryptosystems.
Qin and Peng proposed a PTFT-based cryptosystem in which two RPMs regarded as private keys
in the DRPE-based cryptosystem are used as public keys while two private keys are generated
in the encryption process by using PT operations [31]. It seems that PT operations can remove
the linearity of the DRPE-based structure, which makes the PTFT-based cryptosystem immune to the attacks that the DRPE-based structure is vulnerable to; however, it is found that the PTFT-based structure is vulnerable to some special attacks due to the sufficient constraints provided by the two public
keys [32], [33]. In addition, it is also found that most information of the plaintext could be retrieved
when the first private key is known even without any knowledge of the corresponding ciphertext and
the second private key [25]. The silhouette problem caused by the first private key in the PTFT-based
cryptosystem will lead to serious information disclosure, which needs to be further enhanced. Rajput
and Nishchal proposed a nonlinear G-S algorithm based optical image encryption scheme in which
two private keys are generated in the encryption process by using G-S phase retrieval algorithm
twice and the decryption process is performed using conventional DRPE-based architecture [34].
The G-S phase-retrieval algorithm-based cryptosystem has high robustness against most of the
existing attacks, i.e., known-plaintext, chosen-plaintext and special attacks. Subsequently, Rajput
and Nishchal proposed an optical encryption and authentication scheme based on the PTFT and
G-S algorithm [35]. In this cryptosystem, the first private key is formed by combining the phase
key obtained by the PT operation with the phase key obtained by the G-S algorithm. Compared
to the traditional PTFT-based cryptosystem [31] in which the first private key is directly obtained
using the first PT operation, the first private key in the cryptosystem [35] is more complex. Besides,
compared to the conventional PTFT-based scheme in which the second private key is generated by
the second PT operation, the second private key in the cryptosystem [35] is generated directly in the
G-S iterative process, which has higher robustness against most of the existing attacks. It seems
that the security level of the cryptosystem [35] has been improved due to the security enhancement
of the private keys. However, based on our analysis, it is found that the two private keys are related to
one phase key generated in the G-S iterative process; consequently, it appears possible that the
other private key could be retrieved with the knowledge of one private key. Partial information of
plaintexts could be retrieved using the retrieved private keys, which means the silhouette problem
caused by two private keys exists in the cryptosystem based on PTFT and G-S algorithm.
In this paper, the security of the cryptosystem based on the PTFT and G-S algorithm is evaluated.
The rest of this paper is organized as follows. In Section 2, the scheme under study is introduced
briefly. In Section 3, the principle of two iterative processes with different constraints used to crack
the cryptosystem [35] is introduced, and the feasibility and effectiveness of the proposed iterative
processes are validated by numerical simulations. In Section 4, the concluding remarks are given.
#### 2. The Scheme Under Study
The flow chart of the encryption and authentication process in the scheme [35] under study is
shown in Fig. 1. The function fn(x, y) is the nth input image to be encrypted and verified, where
_n = 1, 2, 3 . . .. Functions R1(x, y) and R2(u, v) are two independent RPMs distributed uniformly in_
the interval [0, 2π]. Function R(x, y) is the random mask distributed uniformly in the interval [0, 1].
The phase key P1n(u, v) used to form the first private key is generated in the PTFT-based structure
given by
$$
\begin{cases}
A_{1n}(u,v) = \mathrm{PT}\{\mathrm{FT}[f_n(x,y)\, R_1(x,y)]\}, \\
P_{1n}(u,v) = \mathrm{AT}\{\mathrm{FT}[f_n(x,y)\, R_1(x,y)]\},
\end{cases} \tag{1}
$$
Fig. 1. The schematic diagram of the encryption and authentication process in [35].
where FT{·} denotes the Fourier transform, and PT{·} and AT{·} denote the phase- and amplitude-truncated operators, respectively. A1n(u, v) and P1n(u, v) are the amplitude and phase parts of the Fourier spectrum, respectively. A1n(u, v) is used as the input of the phase-retrieval technique-based iterative process to generate the second private key.
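To make the two truncation operators concrete, the following minimal NumPy sketch implements PT{·}, AT{·} and the key generation of Eq. (1). It is our own illustration rather than the authors' MATLAB code; the 256 × 256 size matches the paper's simulations, while the random seed and the random stand-in image are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 256  # image size used in the paper's simulations

def PT(field):
    """Phase truncation: keep only the amplitude of a complex field."""
    return np.abs(field)

def AT(field):
    """Amplitude truncation: keep only the unit-modulus phase factor."""
    return np.exp(1j * np.angle(field))

# A stand-in plaintext f_n and the public random phase mask R1 (both assumptions)
f_n = rng.random((N, N))
R1 = np.exp(1j * 2 * np.pi * rng.random((N, N)))

# Eq. (1): split the Fourier spectrum into amplitude and phase parts
spectrum = np.fft.fft2(f_n * R1)
A1n = PT(spectrum)  # amplitude part, input to the G-S iterations below
P1n = AT(spectrum)  # phase part, later combined into the first private key
```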
Now, using the RPM R2(u, v) as the initial phase distribution in the Fourier plane at k = 1, the iterative process is carried out as follows:

1) The phase distribution in the image plane after the kth (k ≥ 1) iteration is given by

$$P_{2(n)k}(x,y) = \mathrm{AT}\{\mathrm{IFT}[A_{1n}(u,v)\, p_{(n)(k-1)}(u,v)]\}, \tag{2}$$
where IFT{·} denotes the inverse Fourier transform, P2(n)k(x, y) and p(n)k(u, v) are the phase
distributions in the image and Fourier planes at the kth iteration, respectively.
2) The random mask R(x, y) is used as the amplitude constraint and bonded with P2(n)k(x, y); p(n)k(u, v) is given by

$$p_{(n)k}(u,v) = \mathrm{AT}\{\mathrm{FT}[R(x,y)\, P_{2(n)k}(x,y)]\}, \tag{3}$$
where p(n)k(u, v) is used to update the phase distribution in the Fourier plane at the (k + 1)th
iteration.
Steps 1–2 are iterated until the correlation coefficient (CC) value reaches the preset threshold
value. The phase key pn(u, v) used to form the first private key and P2n(x, y) used as the second
private key are the outputs of the iterative process. The CC value between R(x, y) and g(n)k(u, v) =
_PT{IFT[A1n(u, v)p(n)(k−1)(u, v)]} is given by_
$$\mathrm{CC} = \frac{\mathrm{cov}\left(R,\, g_{(n)k}\right)}{\sigma_R\, \sigma_{g_{(n)k}}}, \tag{4}$$
where cov{·} denotes the cross-covariance, and σ denotes the standard deviation (the coordinates of the function are omitted here for brevity).
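Continuing the sketch above (reusing PT, AT, A1n and rng), one possible rendering of the iterative loop of Eqs. (2)–(4) is as follows; the 500-iteration cap and the 0.99 CC threshold are our assumptions, since the paper only speaks of a preset threshold value.

```python
# Fixed random mask R(x, y) and RPM R2(u, v) (assumed, as above)
R = rng.random((N, N))
R2 = np.exp(1j * 2 * np.pi * rng.random((N, N)))

def cc(a, b):
    """Correlation coefficient of Eq. (4)."""
    return np.mean((a - a.mean()) * (b - b.mean())) / (a.std() * b.std())

p_k = R2  # initial Fourier-plane phase distribution (k = 1)
for k in range(500):  # iteration cap: our assumption
    field = np.fft.ifft2(A1n * p_k)
    P2_k = AT(field)                 # Eq. (2): image-plane phase
    g_k = PT(field)                  # amplitude compared against R in Eq. (4)
    p_k = AT(np.fft.fft2(R * P2_k))  # Eq. (3): updated Fourier-plane phase
    if cc(R, g_k) > 0.99:            # preset threshold: our assumption
        break

p_n, P2n = p_k, P2_k  # outputs: p_n(u, v) and the second private key P2n(x, y)
```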
In the decryption and verification process, the decrypted image dn(x, y) is given by
$$d_n(x,y) = \mathrm{PT}\{\mathrm{IFT}\{\mathrm{FT}[R(x,y)\, K_{2n}(x,y)]\, K_{1n}(u,v)\}\}, \tag{5}$$
where conj{·} denotes a complex conjugate, two asymmetric phase keys K 1n(u, v) and K 2n(x, y)
formed by three phase keys (P1n(u, v), pn(u, v) and P2n(x, y)) generated in the encryption process
are given by
$$
\begin{cases}
K_{1n}(u,v) = P_{1n}(u,v)\, \mathrm{conj}[p_n(u,v)], \\
K_{2n}(x,y) = P_{2n}(x,y),
\end{cases} \tag{6}
$$
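Under the same assumptions, the decryption of Eqs. (5) and (6) is a single DRPE-style pass; a sketch, reusing the quantities generated above:

```python
# Eq. (6): form the asymmetric phase keys from the three generated phase keys
K1n = P1n * np.conj(p_n)
K2n = P2n

# Eq. (5): decrypt with the fixed mask R and both private keys
d_n = PT(np.fft.ifft2(np.fft.fft2(R * K2n) * K1n))
```

At convergence of the G-S loop, FT[R(x, y) P2n(x, y)] ≈ A1n(u, v) pn(u, v) (cf. Eq. (9) below), so the conjugate phase in K1n cancels pn and the pass recovers fn(x, y).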
Fig. 2. (a) The schematic diagram of the decryption and verification process in [35]. (b) The schematic diagram of the optical setup for the decryption and verification process in [35].
Fig. 3. (Color online) Simulation results for the gray-scale image ‘baboon’. (a) The original gray-scale input image (f1(x, y)) to be verified. (b) The random mask R(x, y). (c) The RPM R1(x, y). (d) The RPM R2(u, v). (e) The private key K11(u, v). (f) The private key K21(x, y). (g) The retrieved gray-scale image (d1(x, y)). (h) The relation for matching of R(x, y) with A11(u, v). (i) The auto-correlation peak.
The nonlinear optical correlation (NOC) is implemented to achieve the information authentication
and verification, which is given by
$$\mathrm{NOC}(x,y) = \mathrm{IFT}\left\{\left|\mathrm{FT}[d_n(x,y)] \times \mathrm{FT}[f_n(x,y)]\right|^{t} \times \exp\left\{i\left(\arg[\mathrm{FT}(d_n(x,y))] - \arg[\mathrm{FT}(f_n(x,y))]\right)\right\}\right\}, \tag{7}$$

where t is the nonlinearity factor and we have used t = 0.3 in our simulations. arg{·} is the operation to obtain the complex angle.
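A direct sketch of Eq. (7) in the same running example; taking the magnitude of the IFT output as the displayed correlation map is our own choice:

```python
def noc(d, f, t=0.3):
    """Nonlinear optical correlation of Eq. (7) with nonlinearity factor t."""
    D, F = np.fft.fft2(d), np.fft.fft2(f)
    corr = np.abs(D * F) ** t * np.exp(1j * (np.angle(D) - np.angle(F)))
    return np.abs(np.fft.ifft2(corr))  # magnitude plotted as the correlation map

peak_map = noc(d_n, f_n)  # an evident peak indicates successful authentication
```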
The schematic diagram of the decryption and verification process is shown in Fig. 2(a). On
the other hand, the decryption and verification process can be achieved optically by employing
the DRPE-based scheme. The random mask R(x, y) displayed on the first spatial light modulator
(SLM1) is bonded with the asymmetric phase key K2n(x, y) displayed on the SLM2, and then is Fourier transformed. The Fourier spectrum bonded with the asymmetric phase key K1n(u, v) displayed on
the SLM3 is then inversely Fourier transformed, and the final retrieved intensity pattern is displayed
and recorded on the charge-coupled device (CCD) camera.
Numerical simulations are carried out to validate the feasibility and effectiveness of the cryptosystem in [35]. A gray-scale image (f1(x, y)) with a size of 256 × 256 pixels to be verified is shown
in Fig. 3(a). Figs. 3(b)–(d) show the random mask (R(x, y)) fixed in the cryptosystem and two RPMs
(R1(x, y) and R2(u, v)), respectively. Figs. 3(e) and (f) show the asymmetric phase keys (K11(u, v) and K21(x, y)) generated in the encryption process, respectively.

Fig. 4. (Color online) Simulation results for the binary image ‘NUS’. (a) The original binary input image (f2(x, y)) to be verified. (b) The private key K12(u, v). (c) The private key K22(x, y). (d) The retrieved gray-scale image (d2(x, y)). (e) The relation for matching of R(x, y) with A12(u, v). (f) The auto-correlation peak.

From the simulation results shown
in Figs. 3(b)–(f), no useful information of the original gray-scale image is visible. With the help of
_R(x, y) and two asymmetric phase keys, the retrieved gray-scale image d1(x, y) is shown in Fig. 3(g)._
It can be seen that most information of the original image has been retrieved. The relation between
CC values and iteration number k for matching of the random mask (R(x, y)) with the amplitude
constraint of the phase-retrieval iterative process (A11(u, v)) is shown in Fig. 3(h), from which it can
be seen that a rapid convergence exists in the iterative process. The correlation value between the
original gray-scale image in Fig. 3(a) and the retrieved image in Fig. 3(g) is shown in Fig. 3(i). It
can be seen that an evident correlation peak exists, which means the retrieved image is authenticated. In addition, a binary image shown in Fig. 4(a) with the same size, which is encoded in the
same random mask R(x, y), is also used to validate the feasibility of the cryptosystem in [35]. The
simulation results are shown in Fig. 4. From the simulation results in Figs. 3 and 4, it is shown that
the cryptosystem in [35] can achieve encryption and authentication for several images. In addition,
the authors [35] also claimed that the cryptosystem is free from the special attacks which the traditional PTFT-based cryptosystem is vulnerable to.
#### 3. Security Analysis
From the simulation results shown above, it can be seen that the gray-scale image (f1(x, y)) and
the binary image (f2(x, y)) are authenticated by the same random mask (R(x, y)), which confirms
the applicability of the scheme with different kinds of images. Compared to the traditional PTFT-based cryptosystem [31] in which two cascaded PTFT-based structures are used to generate the
private keys, the security level has been enhanced by combining a PTFT-based structure with a
G-S iterative process. It has been found that most information of the plaintexts has been encoded
into the first private key using the traditional PTFT-based cryptosystem [31]; consequently, most
information of the plaintexts could be retrieved with the knowledge of the first private key even without any knowledge of the second private key and the corresponding ciphertexts [25]. In the
cryptosystem based on PTFT and G-S iterative process, the phase key P1n(u, v) generated from
the PTFT-based structure is bonded with the other phase key pn(u, v) to form the first private key
_K 1n(u, v), which means that the phase key P1n(u, v) in which most information of the plaintexts is_
encoded is further encrypted. In addition, compared to the traditional PTFT-based cryptosystem
in which the amplitude part of the Fourier spectrum is encoded and the second private key is
generated by the second PTFT-based structure, some information encoded into the amplitude part
of the Fourier spectrum (A1n(u, v)) is further encoded using the G-S iterative process from which
the second private key K 2n(x, y) or P2n(x, y) is generated in the cryptosystem based on PTFT and
G-S algorithm. The processes which are used to generate the second private key in the traditional
PTFT-based cryptosystem [31] and the cryptosystem in [35] can be respectively described as
$$\mathrm{IFT}[A_{1n}(u,v)\, R_2(u,v)] = C_n(x,y)\, P_{2n}(x,y), \tag{8}$$

$$\mathrm{IFT}[A_{1n}(u,v)\, p_n(u,v)] = R(x,y)\, P_{2n}(x,y), \tag{9}$$
where Cn(x, y) is the ciphertext generated using the traditional PTFT-based cryptosystem. In the
cryptography, it is assumed that the attacker has access to the encryption algorithm and some
sources, such as public keys, pairs of plaintexts and the corresponding ciphertexts. In the Eq. (8),
_Cn(x, y) used as the ciphertext and R2(u, v) used as the public key are known, which makes possible_
to retrieve the amplitude part of the Fourier spectrum A1n(u, v) and the second private key P2n(x, y).
The relation between the input and the output of the G-S algorithm with enough iterations is
described as Eq. (9). Since pn(u, v) generated in the iterative process is unknown, it is impossible to
retrieve A1n(u, v) and P2n(x, y) even with the knowledge of the random mask R(x, y). Consequently,
the security level of the cryptosystem based on the PTFT and G-S algorithm has been enhanced.
However, it is noteworthy that R(x, y) is not directly related to the plaintexts, which means that
most information of the plaintexts has not been encoded into R(x, y). Hence, most information
of the plaintexts could be retrieved even without any knowledge of R(x, y). In addition, it can be
seen that, since the two private keys (K1n(u, v) and K2n(x, y)) are related to pn(u, v) generated in the G-S algorithm, partial information of the private keys can be obtained if pn(u, v) could be retrieved; hence, the information of the plaintexts could be retrieved using the retrieved private keys. In this study,
the silhouette problem existing in the cryptosystem based on PTFT and G-S algorithm has been
found. In addition, since the second private key K 2n(x, y) has low sensitivity, the security of the
cryptosystem needs to be further improved. To the best of our knowledge, this is the first cryptanalysis of the encryption scheme based on the PTFT and G-S algorithm.
**_3.1 Silhouette Problem Caused by the Second Private Key_**
During the decryption process of the cryptosystem in [35], the random mask R(x, y), the second
private key K 2n(x, y) and the first private key K 1n(u, v) are needed to obtain the decoded image
according to Eq. (5). As mentioned above, R(x, y) is unrelated to the plaintexts while all information of the plaintexts is encoded into K1n(u, v) and K2n(x, y). Hence, the cryptanalysis of the private keys
generated in the encryption process is carried out.
With the knowledge of the private key K2n(x, y), the information of the plaintext can be retrieved
using the proposed iterative process in Fig. 5. The iterative process can be carried out as follows:
1) At the kth iteration, an estimated random mask R′k(x, y) bonded with the correct key K2n(x, y) is Fourier transformed; the estimated phase and amplitude parts on the Fourier plane are then given by

g′(n)k(u, v) = PT{FT[R′k(x, y) K2n(x, y)]},
p′(n)k(u, v) = AT{FT[R′k(x, y) K2n(x, y)]}, (10)
where g′(n)k(u, v) and p′(n)k(u, v) are the estimated amplitude and phase parts of the Fourier spectrum at the kth iteration, respectively. We would like to emphasize that the simulation results shown in this study are obtained using randomly generated matrices (R′k(x, y), P′1(n)k(u, v) and K′2(n)k(x, y) in Section 3.2) as the initial conditions (k = 1). It is noteworthy that these matrices can also be set to fixed values when k = 1, such as R′k(x, y) = 0 or R′k(x, y) = 1; similar simulation results are obtained.

Fig. 5. The schematic diagram of the proposed iterative process with correct K2n(x, y).
2) The estimated amplitude part on the Fourier plane, g′(n)k(u, v), bonded with the estimated private key P′1(n)k(u, v), is inversely Fourier transformed; the estimated amplitude part obtained on the image plane is then given by

d′′(n)k(x, y) = PT{IFT[g′(n)k(u, v) P′1(n)k(u, v)]}, (11)
3) The estimated plaintext d′(n)k(x, y) is given by

d′(n)k(x, y) = MF{d′′(n)k(x, y)}, (12)

where MF{·} denotes a median filter.
4) The new estimated amplitude and phase parts on the Fourier plane are respectively given by

g′′(n)k(u, v) = PT{FT[d′(n)k(x, y) R1(x, y)]},
P′1(n)(k+1)(u, v) = AT{FT[d′(n)k(x, y) R1(x, y)]}, (13)

where g′′(n)k(u, v) and P′1(n)(k+1)(u, v) are the new estimated amplitude and phase parts on the Fourier plane, respectively. P′1(n)(k+1)(u, v) is updated and used in step 2 at the (k + 1)th iteration.
5) The new estimated random mask R′(k+1)(x, y) is given by

R′(k+1)(x, y) = PT{IFT[g′′(n)k(u, v) p′(n)k(u, v)]}, (14)

where R′(k+1)(x, y) is updated and used as the input of the iterative process at the (k + 1)th iteration.
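Before turning to the simulations, steps 1)–5) can be summarized in a short sketch. This is our own rendering under stated assumptions: the helper names are ours, the median filter comes from SciPy, and the 3 × 3 filter window is assumed rather than taken from [35].

```python
import numpy as np
from scipy.ndimage import median_filter

FT, IFT = np.fft.fft2, np.fft.ifft2
PT = np.abs                              # phase truncation -> amplitude part
AT = lambda x: np.exp(1j * np.angle(x))  # amplitude truncation -> phase part

def attack_with_K2(K2, R1, n_iter=200):
    """Iterative retrieval (Fig. 5) given the correct second key K2n(x, y) and
    the public key R1(x, y); K1n(u, v) and the ciphertext R(x, y) are never used."""
    shape = K2.shape
    R_est = np.random.rand(*shape)                        # random initial R'_k(x, y)
    P1_est = np.exp(2j * np.pi * np.random.rand(*shape))  # random initial P'_1(n)k(u, v)
    for _ in range(n_iter):
        spec = FT(R_est * K2)                    # step 1, Eq. (10)
        g_est, p_est = PT(spec), AT(spec)
        d2 = PT(IFT(g_est * P1_est))             # step 2, Eq. (11)
        d = median_filter(d2, size=3)            # step 3, Eq. (12) (window size assumed)
        spec2 = FT(d * R1)                       # step 4, Eq. (13)
        g2_est, P1_est = PT(spec2), AT(spec2)
        R_est = PT(IFT(g2_est * p_est))          # step 5, Eq. (14)
    return d                                     # estimated plaintext d'(n)k(x, y)
```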
Steps 1–5 are iterated until the number of iterations k reaches the preset value. Numerical simulations are carried out using MATLAB R2018b (on an Intel Core i5-4570 3.20 GHz, RAM 8 GB PC) to examine the feasibility and effectiveness of the proposed iterative process. Employing the proposed iterative process in Fig. 5 with knowledge of the correct K21(x, y), the retrieved gray-scale image (d′1(x, y)) is shown in Fig. 6(a). It can be seen that most of the information of the gray-scale image (f1(x, y)) has been retrieved, even though the retrieved image is blurred. The relation between CC values and the number of iterations k for matching d′1(x, y) and f1(x, y) is shown in Fig. 6(b), from which it can be seen that the CC values reach approximately 0.7 within a few iterations. The computational time for 200 iterations is 10.0351 seconds. The auto-correlation value between d′1(x, y) with 200 iterations and f1(x, y) is shown in Fig. 6(c). An evident peak exists in the noisy background, which means the retrieved gray-scale image is successfully verified.
Fig. 6. (Color online) Simulation results of the proposed iterative process with correct K21(x, y) on the cryptosystem [35]. (a) The retrieved gray-scale image (d′1(x, y)) obtained using the proposed attack with 200 iterations. (b) The relation between CC values and iteration number k for matching d′1(x, y) and Fig. 3(a). (c) The auto-correlation peak.
Fig. 7. (Color online) Simulation results of the proposed iterative process with partially correct K21(x, y) on the cryptosystem [35]. (a)–(c) d′1(x, y) obtained using the proposed attack with 85%, 90% and 95% correct K21(x, y), respectively. (d)–(f) The auto-correlation peaks obtained using the proposed attack with 85%, 90% and 95% correct K21(x, y), respectively.
From the simulation results shown in Fig. 6, it can be seen that the information of f1(x, y) can be retrieved using the proposed iterative process with K21(x, y), which means that silhouette information of the plaintexts will be disclosed when some information of the second private key leaks.
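The CC values and correlation peaks used for verification throughout this section follow standard definitions; the sketch below (ours, not the authors' verification code) computes both.

```python
import numpy as np

def cc(a, b):
    """Pearson correlation coefficient between two images."""
    return np.corrcoef(a.ravel(), b.ravel())[0, 1]

def correlation_map(a, b):
    """FFT-based cross-correlation of mean-removed images; a sharp peak over a
    noisy background indicates that the retrieved image matches the original."""
    A = np.fft.fft2(a - a.mean())
    B = np.fft.fft2(b - b.mean())
    return np.fft.fftshift(np.real(np.fft.ifft2(A * np.conj(B))))
```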
In addition, to further validate that the silhouette problem in the cryptosystem can be caused by partial information of K2n(x, y) becoming known to an unauthorized user, simulations with partially known K2n(x, y) are carried out and the corresponding results are shown in Fig. 7. The retrieved gray-scale images obtained using the proposed iterative process with 85%, 90% and 95% correct K21(x, y) are shown in Figs. 7(a)–(c), respectively. The auto-correlation values between f1(x, y) and the images retrieved using the proposed iterative process with 85%, 90% and 95% correct K21(x, y) are shown in Figs. 7(d)–(f), respectively. From the simulation results in Fig. 7, it is shown that the retrieved gray-scale image has lower quality when less information of K21(x, y) is known.
Fig. 8. (Color online) Simulation results of the proposed iterative process with correct K22(x, y) on the cryptosystem [35]. (a) The retrieved binary image (d′2(x, y)) obtained using the proposed attack with 200 iterations. (b) The relation between CC values and iteration number k for matching d′2(x, y) and Fig. 4(a). (c) The auto-correlation peak.
Fig. 9. (Color online) Simulation results of the proposed iterative process with partially correct K22(x, y) on the cryptosystem [35]. (a)–(c) d′2(x, y) obtained using the proposed attack with 85%, 90% and 95% correct K22(x, y), respectively. (d)–(f) The auto-correlation peaks obtained using the proposed attack with 85%, 90% and 95% correct K22(x, y), respectively.
However, auto-correlation peaks still exist when only partially correct information of K21(x, y) is used to retrieve the information, which means that the retrieved image obtained using the proposed iterative process can still be verified successfully.
Similarly, a binary image is also retrieved using the proposed iterative process with the correct K22(x, y), and the simulation results are shown in Fig. 8. Additionally, a simulation with partially correct K22(x, y) is carried out and the corresponding results are shown in Fig. 9. The computational time for 200 iterations is 9.9375 seconds. The simulation results shown in Figs. 8 and 9 are similar to the results shown in Figs. 6 and 7, respectively. Compared to the results shown in Fig. 7, the retrieved binary image has better quality than the gray-scale image even with less knowledge of K2n(x, y). It is shown that K2n(x, y) causes a more serious silhouette problem when the cryptosystem [35] is used to encrypt and authenticate binary images.
Fig. 10. The schematic diagram of the proposed iterative process with correct K1n(u, v).
From the simulation results shown, it can be seen that the information of the plaintexts can be retrieved using the proposed iterative process with partially correct K2n(x, y) and without any knowledge of K1n(u, v) or the ciphertext R(x, y). It is shown that most of the information of A1n(u, v) has been encoded into K2n(x, y) by the phase-retrieval iterative process in the cryptosystem [35], which may cause a silhouette problem when information of the private key K2n(x, y) leaks.
**_3.2 Silhouette Problem Caused by the First Private Key_**
Since the phase part on the Fourier plane (P1n(u, v)) is bonded with the phase key generated in the G-S algorithm (pn(u, v)) to obtain the first private key K1n(u, v), most of the information of the plaintexts encoded into P1n(u, v) is further encrypted. Compared to the traditional PTFT-based cryptosystem, in which P1n(u, v) generated in the first PTFT-based structure is directly used as the first private key, the security level of the cryptosystem in [35] is higher. However, it can be seen that K1n(u, v) and K2n(x, y) are both related to pn(u, v), which can be used as an additional constraint to retrieve K2n(x, y) with knowledge of K1n(u, v). Employing the retrieved private keys, the information of the original images can be retrieved. In addition, since R(x, y) is the ciphertext, fixed in the cryptosystem and not related to the plaintexts, R(x, y) can easily be obtained by importing an arbitrary input into the cryptosystem, in accordance with the standard assumptions of cryptanalysis. With knowledge of the correct K1n(u, v), the information of the plaintexts can be retrieved using the proposed iterative process shown in Fig. 10.
The iterative process can be carried out as follows:
1) At the kth iteration, an estimated private key K′2(n)k(x, y) bonded with the retrieved random mask R′(x, y) is Fourier transformed; the amplitude and phase parts on the Fourier plane are respectively given by

g′(n)k(u, v) = PT{FT[R′(x, y) K′2(n)k(x, y)]}, (15)
p′(n)k(u, v) = AT{FT[R′(x, y) K′2(n)k(x, y)]}, (16)
2) The estimated phase key P′1(n)k(u, v) is given by

P′1(n)k(u, v) = K1n(u, v) p′(n)k(u, v), (17)
3) Using the estimated phase key P′1(n)k(u, v) and the estimated amplitude part g′(n)k(u, v) obtained using Eq. (15), the estimated amplitude part on the input plane (d′(n)k(x, y)) is given by

d′(n)k(x, y) = PT{IFT[g′(n)k(u, v) P′1(n)k(u, v)]}, (18)
4) Employing a median filter on d′(n)k(x, y), a new estimated plaintext d′′(n)k(x, y) is given by

d′′(n)k(x, y) = MF{d′(n)k(x, y)}, (19)
5) Using the new estimated plaintext d′′(n)k(x, y) and the public key R1(x, y), the new estimated amplitude and phase parts on the Fourier plane are respectively given by

g′′(n)k(u, v) = PT{FT[d′′(n)k(x, y) R1(x, y)]},
P′′1(n)k(u, v) = AT{FT[d′′(n)k(x, y) R1(x, y)]}, (20)

where g′′(n)k(u, v) and P′′1(n)k(u, v) are the new estimated amplitude and phase parts of the Fourier spectrum, respectively.
6) The new estimated phase key p′′(n)k(u, v) is given by

p′′(n)k(u, v) = P′′1(n)k(u, v) conj[K1n(u, v)], (21)
7) The new estimated private key K′2(n)(k+1)(x, y) is given by

K′2(n)(k+1)(x, y) = AT{IFT[g′′(n)k(u, v) p′′(n)k(u, v)]}, (22)

where K′2(n)(k+1)(x, y) is updated and used as the input of the iterative process in Fig. 10 at the (k + 1)th iteration.
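As with the first attack, steps 1)–7) admit a short sketch (again our own rendering; the initialisation and the 3 × 3 median-filter window are assumptions, not details from [35]). Here R′(x, y) is the mask recovered beforehand by importing an arbitrary input into the cryptosystem, as described above.

```python
import numpy as np
from scipy.ndimage import median_filter

FT, IFT = np.fft.fft2, np.fft.ifft2
PT = np.abs                              # phase truncation -> amplitude part
AT = lambda x: np.exp(1j * np.angle(x))  # amplitude truncation -> phase part

def attack_with_K1(K1, R_est, R1, n_iter=200):
    """Iterative retrieval (Fig. 10) given the correct first key K1n(u, v),
    the recovered mask R'(x, y) and the public key R1(x, y)."""
    K2_est = np.exp(2j * np.pi * np.random.rand(*K1.shape))  # random initial K'_2(n)k
    d2 = None
    for _ in range(n_iter):
        spec = FT(R_est * K2_est)                # step 1, Eqs. (15)-(16)
        g_est, p_est = PT(spec), AT(spec)
        P1_est = K1 * p_est                      # step 2, Eq. (17)
        d = PT(IFT(g_est * P1_est))              # step 3, Eq. (18)
        d2 = median_filter(d, size=3)            # step 4, Eq. (19) (window size assumed)
        spec2 = FT(d2 * R1)                      # step 5, Eq. (20)
        g2_est, P1_new = PT(spec2), AT(spec2)
        p_new = P1_new * np.conj(K1)             # step 6, Eq. (21)
        K2_est = AT(IFT(g2_est * p_new))         # step 7, Eq. (22)
    return d2                                    # estimated plaintext
```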
Steps 1–7 are iterated until the number of iterations (k) reaches the preset value. Numerical simulation is also carried out. The gray-scale image in Fig. 3(a) is used as the arbitrary input imported into the cryptosystem; the retrieved random mask R′(x, y) is shown in Fig. 11(a), while the correlation value between R′(x, y) and Fig. 3(b) is shown in Fig. 11(b). Using the proposed iterative process with R′(x, y) and the correct K11(u, v), the retrieved gray-scale image d′1(x, y) is shown in Fig. 11(c). It can be seen that most of the information of the gray-scale plaintext is visible in d′1(x, y), and the computational time for 200 iterations is 10.8963 seconds. The relation between CC values and iteration number k for matching d′1(x, y) and Fig. 3(a) is shown in Fig. 11(d), while the correlation value between d′1(x, y) with 200 iterations and Fig. 3(a) is shown in Fig. 11(e). A correlation peak exists in the noisy background, which means d′1(x, y) has been verified successfully. Using R′(x, y) and the known K12(u, v), the retrieved binary image d′2(x, y) is shown in Fig. 12(a). The relation between CC values and iteration number k for matching d′2(x, y) and Fig. 4(a) is shown in Fig. 12(b). An evident correlation peak exists in Fig. 12(c), which means that d′2(x, y) with 200 iterations is verified successfully. The computational time for 200 iterations is 11.0016 seconds. To further validate the multiuser capability of R′(x, y), a new gray-scale image of size 256 × 256 pixels, shown in Fig. 13(a), is used to carry out the simulation. The private key K13(u, v) generated in the encryption process is shown in Fig. 13(b). Using the known K13(u, v) and R′(x, y), the retrieved image d′3(x, y) is shown in Fig. 13(c). It can be seen that most of the information of f3(x, y) is retrieved. The relation between CC values and iteration number k for matching d′3(x, y) and f3(x, y) is shown in Fig. 13(d). It can be seen that the CC values quickly converge to 1, which shows the effectiveness of the proposed iterative process in Fig. 10. The auto-correlation value for matching d′3(x, y) with 200 iterations and f3(x, y) is shown in Fig. 13(e).
From the simulation results shown in Figs. 11–13, it can be seen that most of the information of the plaintexts can be retrieved using the proposed iterative process with the correct K1n(u, v) and without any knowledge of the other private keys, which may cause a silhouette problem when information of K1n(u, v) leaks. Although the security level of K1n(u, v) has been enhanced, the dependence between the two private keys provides an additional constraint with which to crack the cryptosystem. Thus, the security of the cryptosystem in [35] needs to be further enhanced.
Fig. 11. (Color online) Simulation results of the proposed iterative process with correct K11(u, v) on the cryptosystem [35]. (a) The retrieved random mask R′(x, y) obtained using Fig. 3(a) as the input of the cryptosystem. (b) The auto-correlation peak between R′(x, y) and R(x, y). (c) The retrieved gray-scale image d′1(x, y) obtained using the proposed iterative process with 200 iterations. (d) The relation between CC values and iteration number k for matching d′1(x, y) and Fig. 3(a). (e) The auto-correlation peak.
Fig. 12. (Color online) Simulation results of the proposed iterative process with correct K12(u, v) on the cryptosystem [35]. (a) The retrieved binary image d′2(x, y) obtained using the proposed iterative process with 200 iterations. (b) The relation between CC values and iteration number k for matching d′2(x, y) and Fig. 4(a). (c) The auto-correlation peak.
Fig. 13. (Color online) Simulation results of the proposed iterative process with correct K13(u, v) on the cryptosystem [35]. (a) The gray-scale image f3(x, y) to be retrieved. (b) The known private key K13(u, v). (c) The retrieved image d′3(x, y) obtained using the proposed iterative process with Fig. 11(a) and Fig. 13(b). (d) The relation between CC values and iteration number k for matching d′3(x, y) and f3(x, y). (e) The auto-correlation peak.
#### 4. Conclusions
In this paper, the security of the cryptosystem based on the PTFT and G-S algorithm has been analyzed. Since one random mask is used as the ciphertext for different plaintexts in the cryptosystem, most of the information of the plaintexts has not been encoded into the random mask. Consequently, the security level of the cryptosystem based on the PTFT and G-S algorithm depends on the storage and transmission of the two private keys generated in the encryption process. However, since the two private keys are both related to the phase key pn(u, v), this provides an additional constraint for attackers to retrieve the other private key and the corresponding plaintext with knowledge of one private key. In this paper, two iterative processes with different constraints have been proposed to crack the cryptosystem based on PTFT and the G-S algorithm successfully. Although the cryptosystem is immune to the specific attack to which the PTFT-based cryptosystem is vulnerable, it has been found that a silhouette problem caused by the two private keys exists, which would cause serious security problems if the information of any private key leaks. In addition, it has been found that silhouette information of the plaintexts can be retrieved even when only partial information of the second private key is known; thus, the security level of the cryptosystem based on PTFT and the G-S algorithm needs to be further enhanced. To the best of our knowledge, this is the first time that the silhouette problem in the cryptosystem based on PTFT and the G-S algorithm is reported. Numerical simulation results validate the feasibility and effectiveness of our proposed iterative processes.
#### References
[1] B. L. Volodin, B. Kippelen, K. Meerholz, B. Javidi, and N. Peyghambarian, “A polymeric optical pattern-recognition
system for security verification,” Nature, vol. 383, no. 6595, pp. 58–60, Sep. 1996.
[2] J. F. Barrera, R. Henao, M. Tebaldi, R. Torroba, and N. Bolognini, “Multiplexing encrypted data by using polarized light,”
_Opt. Commun., vol. 260, pp. 109–112, Apr. 2006._
[3] W. Chen, B. Javidi, and X. Chen, “Advances in optical security systems,” Adv. Opt. Photon., vol. 6, no. 7, pp. 120–155,
Jun. 2014.
[4] P. Refregier and B. Javidi, “Optical image encryption based on input plane and Fourier plane random encoding,” Opt.
_Lett., vol. 20, pp. 767–769, Apr. 1995._
[5] B. Javidi and T. Nomura, “Securing information by use of digital holography,” Opt. Lett., vol. 25, no. 1, pp. 28–30,
Jan. 2000.
[6] E. Tajahuerce and B. Javidi, “Encrypting three-dimensional information with digital holography,” Appl. Opt., vol. 39,
no. 35, pp. 6595–6601, Dec. 2000.
[7] B. Hennelly and J. T. Sheridan, “Optical image encryption by random shifting in fractional Fourier domains,” Opt. Lett.,
vol. 28, no. 4, pp. 269–271, Feb. 2003.
[8] X. F. Meng et al., “Two-step phase-shifting interferometry and its application in image encryption,” Opt. Lett., vol. 31,
no. 10, pp. 1414–1416, May 2006.
[9] W. Chen and X. Chen, “Optical image encryption based on diffractive imaging,” Opt. Lett., vol. 35, no. 22, pp. 3817–3819,
Nov. 2010.
[10] Y. Qin, Q. Gong, and Z. Wang, “Simplified optical image encryption approach using single diffraction pattern in
diffractive-imaging-based scheme,” Opt. Exp., vol. 22, no. 18, pp. 21790–21799, Sep. 2014.
[11] Y. Zhang and B. Wang, “Optical image encryption based on interference,” Opt. Lett., vol. 33, no. 21, pp. 2443–2445,
Nov. 2008.
[12] N. Zhu, Y. Wang, J. Liu, J. Xie, and H. Zhang, “Optical image encryption based on interference of polarized light,” Opt.
_Exp., vol. 17, no. 16, pp. 13418–13424, Aug. 2009._
[13] X. Tan, O. Matoba, Y. Okada-Shudo, M. Ide, T. Shimura, and K. Kuroda, “Secure optical memory system with polarization
encryption,” Appl. Opt. vol. 40, no. 14, pp. 2310–2315, May 2001.
[14] A. Alfalou and C. Brosseau, “Dual encryption scheme of images using polarized light,” Opt. Lett., vol. 35, no. 13,
pp. 2185–2187, Jul. 2010.
[15] A. Carnicer, M. Montes-Usategui, S. Arcos, and I. Juvells, “Vulnerability to chosen-cyphertext attacks of optical encryp
tion schemes based on double random phase keys,” Opt. Lett., vol. 30, no. 13, pp. 1644–1646, Jul. 2005.
[16] X. Peng, H. Wei, and P. Zhang, “Chosen-plaintext attack on lensless double-random phase encoding in the Fresnel
domain,” Opt. Lett. vol. 31, no. 22, pp. 3261–3263, Nov. 2006.
[17] U. Gopinathan, D. S. Monaghan, T. J. Naughton, and J. T. Sheridan, “A known-plaintext heuristic attack on the Fourier
plane encryption algorithm,” Opt. Exp., vol. 14, no. 8, pp. 3181–3186, Apr. 2006.
[18] Y. Zhang, D. Xiao, W. Wen, and H. Liu, “Vulnerability to chosen-plaintext attack of a general optical encryption model with
architecture of scrambling-then-double random phase encoding,” Opt. Lett., vol. 38, no. 21, pp. 4506–4509, Nov. 2013.
[19] C. Zhang, M. Liao, W. He, and X. Peng, “Ciphertext-only attack on a joint transform correlator encryption system,” Opt.
_Exp., vol. 21, no. 23, pp. 28523–28530, Nov. 2013._
[20] Y. Xiong, A. He, and C. Quan, “Security analysis of a double-image encryption technique based on an asymmetric
algorithm,” J. Opt. Soc. Amer. A, vol. 35, no. 2, pp. 320–326, Feb. 2018.
[21] S. Jiao, G. Li, C. Zhou, W. Zou, and X. Li, “Special ciphertext-only attack to double random phase encryption by
plaintext shifting with speckle correlation,” J. Opt. Soc. Amer. A, vol. 35, no. 1, pp. A1–A6, Jan. 2018.
[22] Y. Xiong, A. He, and C. Quan, “Hybrid attack on an optical cryptosystem based on phase-truncated Fourier transforms
and a random amplitude mask,” Appl. Opt., vol. 57, no. 21, pp. 6010–6016, Jul. 2018.
[23] Y. Xiong, A. He, and C. Quan, “Specific attack and security enhancement to optical image cryptosystem based on two
random masks and interference,” Opt. Lasers Eng., vol. 107, pp. 142–148, Aug. 2018.
[24] L. Wang, G. Li, Q. Wu, and G. Situ, “Cyphertext-only attack on the joint-transform-correlator-based optical encryption:
Experimental demonstration," Appl. Opt., vol. 58, no. 5, pp. A197–A201, Feb. 2019.
[25] Y. Xiong, A. He, and C. Quan, “Security analysis and enhancement of a cryptosystem based on phase truncation and
a designed amplitude modulator,” Appl. Opt., vol. 58, no. 3, pp. 695–703, Jan. 2019.
[26] G. Luan, A. Li, D. Zhang, and D. Wang, “Asymmetric image encryption and authentication based on equal modulus
decomposition in the Fresnel transform domain,” IEEE Photon. J., vol. 11, no. 1, Dec. 2018, Art. no. 6900207.
[27] M. Shan, L. Liu, B. Liu, and Z. Zhong, “Security-enhanced optical interference-based multiple-image encryption using
a modified multiplane phase retrieval algorithm,” Opt. Eng., vol. 57, no. 8, Aug. 2018, Art. no. 083103.
[28] Y. Xiong, C. Quan, and C. J. Tay, “Multiple image encryption scheme based on pixel exchange operation and vector
decomposition,” Opt. Lasers Eng., vol. 101, pp. 113–121, Feb. 2018.
[29] S. Liansheng, W. Jiaohao, T. Ailing, and A. Asundi, “Optical image hiding under framework of computational ghost
imaging based on an expansion strategy,” Opt. Exp., vol. 27, no. 5, pp. 7213–7225, Mar. 2019.
[30] J. Chen, Y. Zhang, J. Li, and L. Zhang, “Security enhancement of double random phase encoding using rear-mounted
phase masking,” Opt. Lasers. Eng., vol. 101, pp. 51–59, Feb. 2018.
[31] W. Qin and X. Peng, “Asymmetric cryptosystem based on phase-truncated Fourier transforms,” Opt. Lett., vol. 35,
no. 2, pp. 118–120, Jan. 2010.
[32] X. Wang and D. Zhao, “A special attack on the asymmetric cryptosystem based on phase-truncated Fourier transforms,”
_Opt. Commun., vol. 285, no. 6, pp. 1078–1081, Mar. 2012._
[33] Y. Wang, C. Quan, and C. J. Tay, “Improved method of attack on an asymmetric cryptosystem based on phase-truncated
Fourier transform,” Appl. Opt. vol. 54, no. 22, pp. 6974–6881, Aug. 2015.
[34] S. K. Rajput and N. K. Nishchal, “Fresnel domain nonlinear optical image encryption scheme based on Gerchberg
Saxton phase-retrieval algorithm,” Appl. Opt., vol. 53, no. 3, pp. 418–425, Jan. 2014.
[35] S. K. Rajput and N. K. Nishchal, “An optical encryption and authentication scheme using asymmetric keys,” J. Opt.
_Soc. Amer. A, vol. 31, no. 6, pp. 1233–1238, Jun. 2014._
| 13,073
|
{
"disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.1109/JPHOT.2019.2936236?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1109/JPHOT.2019.2936236, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "CCBY",
"status": "GOLD",
"url": "https://ieeexplore.ieee.org/ielx7/4563994/8793208/08807146.pdf"
}
| 2,019
|
[] | true
| 2019-10-01T00:00:00
|
[
{
"paperId": "0f7c8e4ec398386c571131fa87914e63421894d0",
"title": "Optical image hiding under framework of computational ghost imaging based on an expansion strategy."
},
{
"paperId": "3d4b848868018dd90506579e0d15e7fe65e00bd9",
"title": "Cyphertext-only attack on the joint-transform-correlator-based optical encryption: experimental demonstration."
},
{
"paperId": "4c9385c4c7f7ae13685ddeb7c59cb6b5351353b0",
"title": "Asymmetric Image Encryption and Authentication Based on Equal Modulus Decomposition in the Fresnel Transform Domain"
},
{
"paperId": "5d773d08a51bbe16ec1b7f124b6bcbab437551ef",
"title": "Security analysis and enhancement of a cryptosystem based on phase truncation and a designed amplitude modulator."
},
{
"paperId": "35400d638d4f0c5755ea3c04df2014a911850202",
"title": "Security-enhanced optical interference-based multiple-image encryption using a modified multiplane phase retrieval algorithm"
},
{
"paperId": "c0693a70a06048b0edd043086df86e1a746880fd",
"title": "Specific attack and security enhancement to optical image cryptosystem based on two random masks and interference"
},
{
"paperId": "bc6cf2744645686fb1136b0b503d137bd3542003",
"title": "Hybrid attack on an optical cryptosystem based on phase-truncated Fourier transforms and a random amplitude mask."
},
{
"paperId": "8c490e1359345933cb104598c1e29cfa0d1bbd54",
"title": "Security analysis of a double-image encryption technique based on an asymmetric algorithm."
},
{
"paperId": "6d6bee25f9d21dd327e012fc25479f5057a63f9b",
"title": "Security enhancement of double random phase encoding using rear-mounted phase masking"
},
{
"paperId": "bba2a7e1a82efdb25e86fbbc34976f23d724f838",
"title": "Multiple image encryption scheme based on pixel exchange operation and vector decomposition"
},
{
"paperId": "d446f43b5da7f5ed721b6614aee81e92a7ff29a7",
"title": "Improved method of attack on an asymmetric cryptosystem based on phase-truncated Fourier transform."
},
{
"paperId": "7f2059deb4a393d79c9a067e888b929425f5d67c",
"title": "Simplified optical image encryption approach using single diffraction pattern in diffractive-imaging-based scheme."
},
{
"paperId": "e7fe02a73c4a73411685c7e2f06fef6af5c148e8",
"title": "Advances in optical security systems"
},
{
"paperId": "6ec80cd56f439dbe5acf1490837b9cd1bbee19d6",
"title": "An optical encryption and authentication scheme using asymmetric keys."
},
{
"paperId": "798cd4e073b5d42587894c68e534760eac83591d",
"title": "Fresnel domain nonlinear optical image encryption scheme based on Gerchberg-Saxton phase-retrieval algorithm."
},
{
"paperId": "2429487dcd047a15340c0a51fbf717d2c738bde4",
"title": "Ciphertext-only attack on a joint transform correlator encryption system."
},
{
"paperId": "829d7aa0ce25c6b475d369f2c8849e5f7048e523",
"title": "Vulnerability to chosen-plaintext attack of a general optical encryption model with the architecture of scrambling-then-double random phase encoding."
},
{
"paperId": "e258ce376758a8fcce331a790bd7ba1e9fd81b90",
"title": "A special attack on the asymmetric cryptosystem based on phase-truncated Fourier transforms"
},
{
"paperId": "63c7ebddde8ea99d562a3bb63e0df94f6b7b8f5e",
"title": "Optical image encryption based on diffractive imaging."
},
{
"paperId": "027f80a680b38763d8c25db219dc0df8cba8c832",
"title": "Dual encryption scheme of images using polarized light."
},
{
"paperId": "57be7c874687cdb085245113622e2817a4b14c67",
"title": "Asymmetric cryptosystem based on phase-truncated Fourier transforms."
},
{
"paperId": "25451446bc4a2c94d8086a4c8909be96abfc25c8",
"title": "Optical image encryption based on interference of polarized light."
},
{
"paperId": "000ca1b05b6ca18145a4944d86d58519ed671a1e",
"title": "Optical image encryption based on interference."
},
{
"paperId": "4ca3e5d2312667ef66b9c2ccac918681262f2c8f",
"title": "Chosen-plaintext attack on lensless double-random phase encoding in the Fresnel domain."
},
{
"paperId": "1edd576ca78b4850dafefa48ef0ed6373c19b560",
"title": "Two-step phase-shifting interferometry and its application in image encryption."
},
{
"paperId": "cf017a2885cbce9cb977d16be5d5c4dd0b61f74c",
"title": "A known-plaintext heuristic attack on the Fourier plane encryption algorithm."
},
{
"paperId": "9898b43ca8dfe3b8f77ecbfc318a0a7cbfdacd40",
"title": "Multiplexing encrypted data by using polarized light"
},
{
"paperId": "6c63bb43f3f60e20875de2888e517c7dd98a58f6",
"title": "Vulnerability to chosen-cyphertext attacks of optical encryption schemes based on double random phase keys."
},
{
"paperId": "aace28e77e5329a9765675b86e4b5314510a59b2",
"title": "Optical image encryption by random shifting in fractional Fourier domains."
},
{
"paperId": "ced54bc0f0f7039588f5ad50895dcc7baa4654d0",
"title": "Secure optical memory system with polarization encryption."
},
{
"paperId": "cb6b90fad23aa75630199deba03037c415104a51",
"title": "Encrypting three-dimensional information with digital holography."
},
{
"paperId": "f42ffb44456ce201d7781a383f55b8e9fca91fbf",
"title": "A polymeric optical pattern-recognition system for security verification"
},
{
"paperId": "8a50514d1c071aacfc1408c8fc431c57f53e9875",
"title": "Optical image encryption based on input plane and Fourier plane random encoding."
},
{
"paperId": "27c799b5c40c6a195de0980431d833344e5c8212",
"title": "Special ciphertext-only attack to double random phase encryption by plaintext shifting with speckle correlation."
},
{
"paperId": "1d9ffdec11f8701d059cab85bdfd46b8d0e5c426",
"title": "Securing information by use of digital holography."
}
] | 13,073
|
en
|
[
{
"category": "Medicine",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/00c4e0ca1cfa6e6c9619d7d44fe37e21b310fcda
|
[
"Medicine"
] | 0.858632
|
A reinforcement learning approach to improve the performance of the Avellaneda-Stoikov market-making algorithm
|
00c4e0ca1cfa6e6c9619d7d44fe37e21b310fcda
|
PLoS ONE
|
[
{
"authorId": "2197390758",
"name": "Javier Falces Marin"
},
{
"authorId": "2124600285",
"name": "David Díaz Pardo de Vera"
},
{
"authorId": "2180103926",
"name": "Eduardo López Gonzalo"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"Plo ONE",
"PLOS ONE",
"PLO ONE"
],
"alternate_urls": [
"http://www.plosone.org/"
],
"id": "0aed7a40-85f3-4c66-9e1b-c1556c57001b",
"issn": "1932-6203",
"name": "PLoS ONE",
"type": "journal",
"url": "https://journals.plos.org/plosone/"
}
|
Market making is a high-frequency trading problem for which solutions based on reinforcement learning (RL) are being explored increasingly. This paper presents an approach to market making using deep reinforcement learning, with the novelty that, rather than to set the bid and ask prices directly, the neural network output is used to tweak the risk aversion parameter and the output of the Avellaneda-Stoikov procedure to obtain bid and ask prices that minimise inventory risk. Two further contributions are, first, that the initial parameters for the Avellaneda-Stoikov equations are optimised with a genetic algorithm, which parameters are also used to create a baseline Avellaneda-Stoikov agent (Gen-AS); and second, that state-defining features forming the RL agent’s neural network input are selected based on their relative importance by means of a random forest. Two variants of the deep RL model (Alpha-AS-1 and Alpha-AS-2) were backtested on real data (L2 tick data from 30 days of bitcoin–dollar pair trading) alongside the Gen-AS model and two other baselines. The performance of the five models was recorded through four indicators (the Sharpe, Sortino and P&L-to-MAP ratios, and the maximum drawdown). Gen-AS outperformed the two other baseline models on all indicators, and in turn the two Alpha-AS models substantially outperformed Gen-AS on Sharpe, Sortino and P&L-to-MAP. Localised excessive risk-taking by the Alpha-AS models, as reflected in a few heavy dropdowns, is a source of concern for which possible solutions are discussed.
|
# PLOS ONE
OPEN ACCESS
**Citation:** Falces Marin J, Díaz Pardo de Vera D, López Gonzalo E (2022) A reinforcement learning approach to improve the performance of the Avellaneda-Stoikov market-making algorithm. PLoS ONE 17(12): e0277042. [https://doi.org/10.1371/journal.pone.0277042](https://doi.org/10.1371/journal.pone.0277042)
**Editor:** J. E. Trinidad Segovia, University of
Almeria, SPAIN
**Received:** April 11, 2022
**Accepted:** October 19, 2022
**Published:** December 20, 2022
**Peer Review History:** PLOS recognizes the
benefits of transparency in the peer review
process; therefore, we enable the publication of
all of the content of peer review and author
responses alongside final, published articles. The
editorial history of this article is available here:
[https://doi.org/10.1371/journal.pone.0277042](https://doi.org/10.1371/journal.pone.0277042)
**Copyright:** © 2022 Falces Marin et al. This is an
open access article distributed under the terms of
[the Creative Commons Attribution License, which](http://creativecommons.org/licenses/by/4.0/)
permits unrestricted use, distribution, and
reproduction in any medium, provided the original
author and source are credited.
**Data Availability Statement:** [https://github.com/](https://github.com/javifalces/HFTFramework)
[javifalces/HFTFramework.](https://github.com/javifalces/HFTFramework)
**Funding:** The author(s) received no specific
funding for this work.
RESEARCH ARTICLE
## A reinforcement learning approach to improve the performance of the Avellaneda-Stoikov market-making algorithm

**Javier Falces Marin** **[ID](https://orcid.org/0000-0002-3891-8023)** ***, David Díaz Pardo de Vera, Eduardo López Gonzalo**

Escuela Técnica Superior de Ingenieros de Telecomunicación, SSR, Universidad Politécnica de Madrid, Madrid, Spain
- [email protected]
### Abstract
Market making is a high-frequency trading problem for which solutions based on reinforcement learning (RL) are being explored increasingly. This paper presents an approach to market making using deep reinforcement learning, with the novelty that, rather than to set the bid and ask prices directly, the neural network output is used to tweak the risk aversion parameter and the output of the Avellaneda-Stoikov procedure to obtain bid and ask prices that minimise inventory risk. Two further contributions are, first, that the initial parameters for the Avellaneda-Stoikov equations are optimised with a genetic algorithm, which parameters are also used to create a baseline Avellaneda-Stoikov agent (Gen-AS); and second, that state-defining features forming the RL agent's neural network input are selected based on their relative importance by means of a random forest. Two variants of the deep RL model (Alpha-AS-1 and Alpha-AS-2) were backtested on real data (L2 tick data from 30 days of bitcoin–dollar pair trading) alongside the Gen-AS model and two other baselines. The performance of the five models was recorded through four indicators (the Sharpe, Sortino and P&L-to-MAP ratios, and the maximum drawdown). Gen-AS outperformed the two other baseline models on all indicators, and in turn the two Alpha-AS models substantially outperformed Gen-AS on Sharpe, Sortino and P&L-to-MAP. Localised excessive risk-taking by the Alpha-AS models, as reflected in a few heavy dropdowns, is a source of concern for which possible solutions are discussed.
#### **1 Introduction**
In securities markets, liquidity, that is, both the availability of assets for buyers and a demand for the same for sellers, is provided by market makers. (Foucault et al. [1] define liquidity more precisely as '*the degree to which an order can be executed within a short time frame at a price close to the consensus value of the security*. Conversely, *a price that deviates substantially from this consensus value indicates illiquidity*.') Market makers provide liquidity as they exploit the market microstructure of orderbooks–which contain the minutest representation of trading data–where pending trade orders in a venue are placed in two price-ordered lists: a bid list
**Competing interests:** The authors have declared
that no competing interests exist.
**Abbreviations:** T, Daily closing time; t_j, Current time instance (at arrival of the latest, the j-th, market tick); τ_i, Time instance at the start of the i-th 5-second action cycle of the RL agent; p^m(t_j), Current market midprice (at time t_j); I(t_j), Inventory held by the agent (at time t_j); γ, Risk aversion of the agent; σ², Variance of the market midprice; w, Size of window (in number of ticks) to estimate the variance of the market midprice; r, Reservation price; π_n, n-th time interval for orderbook update rate calculation; δ^a, δ^b, Distance to the midprice from the reservation price on the ask (δ^a) or bid (δ^b) side; k_n^a, k_n^b, Liquidity parameter for the ask (k_n^a) or bid (k_n^b) side (for the n-th time interval); λ_n^a, λ_n^b, Arrival rate of orderbook updates on the ask (λ_n^a) or bid (λ_n^b) side, for time interval π_n; p^a, p^b, Ask (p^a) or bid (p^b) price to be quoted; S, State space of the RL agent; A, Action space of the RL agent; R, Reward value of the RL algorithm; γ_d, Discount factor of the RL algorithm; α, Learning rate of the RL algorithm; s, Current state of the agent; s′, Prospective next state of the agent; a, Action taken by the agent from its current state; a′, Prospective next action of the agent; Q_i(s, a), Q-value for state s and action a (at time τ_i); R(τ_i), Asymmetric dampened P&L (at time τ_i); Ψ(τ_i), Open P&L at time τ_i; Δm(τ_i), Speculative P&L (the value difference between the open P&L and the close P&L).

**Fig 1. Orderbook snapshot for btc-usd.** [https://doi.org/10.1371/journal.pone.0277042.g001](https://doi.org/10.1371/journal.pone.0277042.g001)

with purchase orders and an ask list with sell orders, with orders on either list quoting both a quantity of assets and the price at which the buyer or seller, respectively, are willing to trade them. The difference between the lowest ask price and highest bid price for an asset is called the spread. Fig 1 shows a section of an orderbook where bid quotes (left side) and ask quotes (right side) meet across a spread of 0.01 (8761.41 − 8761.40). Market makers place both bid and ask quotes in the orderbook, thus generating demand for and supply of the asset for prospective sellers and buyers, respectively.

The cumulative profit (or loss) resulting from a market maker's operations comes from the successive execution of trades on both sides of the spread. This profit from the spread is endangered when the market maker's buy and sell operations are not balanced overall in volume, since this will increase the dealer's asset inventory. The larger the inventory is, be it positive (long stock) or negative (short stock), the higher the holder's exposure to market movements.
Hence, market makers try to minimize risk by keeping their inventory as close to zero as possible. Market makers tend to do better in mean-reverting environments, whereas market
momentum, in either direction, hurts their performance.
Inventory management is therefore central to market making strategies (see section 2 for
an overview of these), and particularly important in high-frequency algorithmic trading. In an
influential paper [2], Avellaneda and Stoikov expounded a strategy addressing market maker
inventory risk. Essentially, the Avellaneda-Stoikov (AS) algorithm derives optimal bid and ask
quotes for the market maker to place at any given moment, by leveraging a statistical model of
the expected sizes and arrival times of market orders, given certain market parameters and a
specified degree of risk aversion in the market maker’s quoting policy. The optimal bid and
ask quotes are obtained from a set of formulas built around these parameters. These formulas
prescribe the AS strategy for placing limit orders. The rationale behind the strategy is, in Avellaneda and Stoikov's words, to perform a '*balancing act between the dealer's personal risk considerations and the market environment*' [ibid.].
The AS algorithm is static in its reliance on analytical formulas to generate bid and ask
quotes based on the real-time input values for the market mid-price of the security and the
current stock inventory held by the market maker. These formulas (as we will see in section 2)
have fixed parameters to model the market maker’s aversion to risk and the statistical properties of market orders.
In this paper we present a limit order placement strategy based on a well-known reinforcement learning (RL) algorithm. The peculiarity of our approach is that, rather than relying on
this RL algorithm directly to determine what limit orders to place (as all other machine learning-based methods in the literature do, to our knowledge), we still use the AS algorithm to
determine bid and ask quotes. We use the RL algorithm to modify the risk aversion parameter
and to skew the AS quotes based on a characterization of the latest steps of market activity.
Another distinctive feature of our work is the use of a genetic algorithm to determine the
parameters of the AS formulas, which we use as a benchmark, to offer a fairer performance
comparison to our RL algorithm.
The paper is organized as follows. The Avellaneda-Stoikov procedure underpinning the
market-making actions in the models under discussion is explained in Section 2. Section 3
provides an overview of reinforcement learning and its uses in algorithmic trading. The deep
reinforcement learning models (Alpha-AS-1 and Alpha-AS-2) developed to work with the
Avellaneda-Stoikov algorithm are presented in detail in Section 4, together with an Avellaneda-Stoikov model (Gen-AS) without RL with parameters obtained with a genetic algorithm.
Section 5 describes the experimental setup for backtests that were performed on our RL models, the Gen-AS model and two simple baselines. The results obtained from these tests are discussed in Section 6. The concluding Section 7 summarises the approach and findings, and
outlines ideas for model improvement.
#### **2 Background: The Avellaneda-Stoikov procedure**
In 2008, Avellaneda and Stoikov published a procedure to obtain bid and ask quotes for highfrequency market-making trading [2, 3]. The successive orders generated by this procedure
maximize the expected exponential utility of the trader’s profit and loss (P&L) profile at a
future time, *T* (usually, the daily closing time for trade), for a given level of agent inventory
risk aversion. Intuitively, the underlying idea is, first, to adjust the market mid-price taking
into account the size of the stock inventory held by the agent, the market volatility and the
time remaining until *T*, these all being factors affecting inventory risk, and adjusting also
according to the agent’s sensitivity to this risk (i.e., the risk aversion, which is assumed to be
constant); then the agent’s bid and ask quotes are set around this adjusted mid-price, called the
*reservation* price, at a distance at which their probability of execution is optimal, i.e., it leads,
through repeated application, to the maximization of profit at time *T* .
The procedure, therefore, has two steps, which are applied at each time increment as
follows.
1. Set the reservation price, *r* :
r(t_j) = p^m(t_j) − I(t_j)γσ²(T − t_j)   (1)
where *t* *j* is the current time upon arrival of the j [th] market tick, *p* *[m]* ( *t* *j* ) is the current market
mid-price, *I* ( *t* *j* ) is the current size of the inventory held, *γ* is a constant that models the
agent’s risk aversion, and *σ* [2] is the variance of the market midprice, a measure of volatility.
We should note that *r* is actually the average of a bid indifference price ( *r* *[b]* ) and an ask indifference price ( *r* *[a]* ), which are defined mathematically to be, respectively, the stock bid and
ask quote prices at which the agent’s expected P&L utility will be the same whether a stock
is bought or not (for the bid indifference price) or sold or not (in the case of the ask indifference price), thus making the agent indifferent to placing orders at these prices. This consideration makes *r* *[b]* and *r* *[a]* (rather than s) reasonable reference prices around which to
construct the market maker’s spread. Avellaneda and Stoikov define *r* *[b]* and *r* *[a]*, however, for
a passive agent with no orders in the limit order book. In practice, as Avellaneda and Stoikov did in their original paper, when an agent is running and placing orders, both r^b and r^a are approximated by the average of the two, r [2].
2. Calculate the spread (p^a − p^b):

δ^a(t) = (1/2)γσ²(T − t_j) + (1/γ) ln(1 + γ/k^a)   (2)

δ^b(t) = (1/2)γσ²(T − t_j) + (1/γ) ln(1 + γ/k^b)   (3)
Here, *δ* is the distance from the reservation price, *r*, at which bid and ask quotes will be generated, on either side of *r* . The *k* parameter models order book liquidity, with larger values corresponding to higher trading intensity. For a specific time interval, *π* *n* = *t* *n* − *t* *n* −1, *k* can be
estimated as done in [3]:
k_n^a = λ_n^a / (λ_n^a − λ_{n−1}^a)   (4)

k_n^b = λ_n^b / (λ_n^b − λ_{n−1}^b)   (5)
where λ_n^a and λ_n^b are the orderbook update arrival rates on the ask and bid sides, respectively, in the time interval π_n = t_n − t_{n−1}. Note that this approach, following Aldridge's [3], allows us to estimate the k parameters simply by counting the order arrivals in each time interval, π_n. No further parameter is needed to characterise the asset's liquidity (such as A, if we were to model order arrival rates by the exponential law λ(δ) = Ae^{−kδ}, as in [2, 4]).
We apply a symmetric spread around the reservation price. Hence, we set the ask price, p^a, and the bid price, p^b, as:

p^a = r + δ^a   (6)

p^b = r − δ^b   (7)

where all terms are evaluated at time t_j.
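As a concrete illustration of Eqs (1)–(3) and (6)–(7), the quote computation can be sketched as follows. This is our own rendering: variable names and example values are illustrative, and the liquidity parameters k^a and k^b are taken as given inputs.

```python
import math

def as_quotes(mid, inventory, gamma, sigma2, T, t, k_a, k_b):
    """Avellaneda-Stoikov reservation price and quotes, per Eqs (1)-(3) and (6)-(7)."""
    tau = T - t
    r = mid - inventory * gamma * sigma2 * tau                                # Eq (1)
    delta_a = 0.5 * gamma * sigma2 * tau + math.log(1 + gamma / k_a) / gamma  # Eq (2)
    delta_b = 0.5 * gamma * sigma2 * tau + math.log(1 + gamma / k_b) / gamma  # Eq (3)
    return r + delta_a, r - delta_b                                           # Eqs (6)-(7)

# A long (positive) inventory pushes r below the mid-price, skewing both quotes
# downward and making a sell more likely than a buy.
ask, bid = as_quotes(mid=100.0, inventory=5, gamma=0.1, sigma2=0.04,
                     T=1.0, t=0.5, k_a=1.5, k_b=1.5)
```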
From these equations we see that the larger a positive inventory held ( *I* ) is, the lower the reservation price drops below the market mid-price. This will skew the ask and bid prices downward with respect to the market mid-price, making selling stock more likely than buying it.
Conversely, the greater a negative inventory is, the more skewed the ask and bid prices will be
above the market mid-price, thus increasing the probability of buying stock and decreasing
that of selling it. The combined effect is to pull the inventory back toward zero, and hence also
the risk inherent to holding it. The expression for *r* (Eq (1)) ensures the AS strategy is sensitive
to price volatility ( *σ* ), by widening the spread when volatility is high. Thus, order placement is
more cautious when the market is more unpredictable, which reduces risk. Inventory risk also
diminishes as trade draws closer to termination time T, since the market has less time in which
to move. This is reflected in the AS procedure by the convergence of *r* ( *t* *j* ) to *p* *[m]* ( *t* *j* ) (Eq (1)) and
the narrowing of the spread (Eq (2)) as *t* *j* → *T*. We also observe that the difference between the
reservation price and the market mid-price is proportional to the agent’s risk aversion. As
regards the market liquidity parameter, *k*, a low (high) value models a market with low (high)
trading intensity. With fewer market updates, placing quotes entails a greater inventory risk;
conversely, high market intensity reduces inventory risk, as both buy and sell orders are more
likely to be executed, keeping the inventory balanced. The risk management associated with *k*
is addressed by Eq (2), by making the spread increase as *k* decreases (thus further decreasing
the probability that the orders placed will be executed within a given time interval), and vice
versa.
The models underlying the AS procedure, as well as its implementations in practice, rely on
certain assumptions. Statistical assumptions are made in deriving the formulas that solve the
P&L maximization problem. First, it is assumed that the agent’s orders are executed at a Poisson rate which decreases as the spread increases (i.e., the farther away from the market midprice an order is placed at, the more time should tend to elapse before it is executed); second,
the arrival frequency of market updates is assumed to be constant; and third, the distribution
of the size of these orders, as well as their market impact (an estimate of the effect that a buy or sell order of a certain magnitude can have on the arrival rates of market orders), are taken to follow some given law [2]. For instance, Avellaneda and Stoikov [2]
(ibid.) illustrate their method using a power law to model market order size distribution and a
logarithmic law to model the market impact of orders. Furthermore, as already mentioned, the
agent’s risk aversion ( *γ* ) is modelled as constant in the AS formulas. Finally, as noted above,
implementations of the AS procedure typically use the reservation price ( *r* ) as an approximation for both the bid and ask indifference prices.
The AS model generates bid and ask quotes that aim to maximize the market maker’s P&L
profile for a given level of inventory risk the agent is willing to take, relying on certain assumptions regarding the microstructure and stochastic dynamics of the market. Extensions to the
AS model have been proposed, most notably the Guéant-Lehalle-Fernandez-Tapia approximation [5], and a recent variation of it by Bergault et al. [6], which are currently used by major
market making agents. Nevertheless, in practice, deviations from the model scenarios are to be
expected. Under real trading conditions, therefore, there is room for improvement upon the
orders generated by the closed-form AS model and its variants.
One way to improve the performance of an AS model is by tweaking the values of its constants to fit more closely the trading environment in which it is operating. In section 4.2, we
describe our approach of using genetic algorithms to optimize the values of the AS model constants using trading data from the market we will operate in. Alternatively, we can resort to
machine learning algorithms to adjust the AS model constants and/or its output ask and bid
prices dynamically, as patterns found in market-related data evolve. To this approach, more
specifically one based on deep reinforcement learning, we turn to next.
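As a foretaste of section 4.2, a genetic search over the AS constants can be sketched as below; the encoding, the operators, the parameter ranges and the placeholder `fitness` function (e.g. a backtest score) are illustrative assumptions, not the configuration used in this paper.

```python
import random

def evolve_as_params(fitness, pop_size=30, generations=50, noise=0.1):
    """Minimal genetic-algorithm sketch over AS constants (here gamma and k)."""
    pop = [{'gamma': random.uniform(0.01, 1.0), 'k': random.uniform(0.1, 5.0)}
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)              # rank by backtest score
        parents = pop[:pop_size // 2]                    # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            child = {key: random.choice((a[key], b[key])) for key in a}   # crossover
            child = {key: max(1e-6, val * (1 + random.gauss(0, noise)))   # mutation
                     for key, val in child.items()}
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)
```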
#### **3 Related work on machine learning in trading**
One of the most active areas of research in algorithmic trading is, broadly, the application of
machine learning algorithms to derive trading decisions based on underlying trends in the volatile and hard to predict activity of securities markets. Machine learning (ML) is being applied
to time series prediction (for instance, of next-day prices [7, 8]); risk management (e.g., in [9]
a ML model is substituted for the commonly used Principal Components Analysis approach),
and the improvement or discovery of factors in factor investing [10–13]. Machine learning
approaches have been explored to obtain dynamic limit order placement strategies that
attempt to adapt in real time to changing market conditions. Collado and Creamer [14] performed time series forecasting using dynamic programming; deep neural networks have found
undervalued equities [15]; reinforcement learning has been used successfully in execution
algorithms to lessen market impact [16], as well as to hedge a derivatives portfolio, simulating
liquidity, market impact and transaction costs by learning from a nonlinear environment [17].
As regards market making, the AS algorithm, or versions of it [3], have been used as benchmarks against which to measure the improved performance of the machine learning
algorithms proposed, either working with simulated data [18] or in backtests [8] with real
data. The literature on machine learning approaches to market making is extensive.
We now turn to uses in algorithmic trading of a specific branch of machine learning: reinforcement learning.
#### **3.1 A brief overview of the reinforcement learning paradigm**
A branch of machine learning that has drawn particularly strong attention from the field of
algorithmic trading is *reinforcement learning* (RL), already a feature in some of the aforementioned work. Through interaction with its environment, a reinforcement learning algorithm
learns a *policy* to guide its actions, with the goal of optimizing a reward that it obtains by said
interaction. The policy determines what action it is best to perform in a given situation, as part
of a sequence of actions, such that when the sequence terminates the cumulative reward is
maximized. The RL paradigm is built upon the following elements (Fig 2): an agent with a
quantifiable goal acts upon its environment according to information it receives from the environment regarding both its state (which may have changed at least partly as a consequence of
the agent’s previous actions) and the goal-relevant consequences of the agent’s previous
actions, quantified as a cumulative reward to be maximized.
Applied to market making, the goal of the RL agent is to maximize the expected P&L profile
utility at some future time, T. In each action-reward cycle the agent reads the current state of the
order book (its environment): the market mid-price and details of the order book microstructure.
As its actions in pursuit of its goal, the agent places buy and sell orders in the order book. From
these orders it obtains a reward: a profit or a loss. The reward, together with the new state of the
order book (which will have changed through the accumulated actions of all the agents operating
in the market), are taken into account by the agent to decide its actions in the next cycle.
The interplay between the agent and its environment can be modelled as a Markov Decision Process (MDP), which defines:
- A state space (S): the set of states the environment can be in.
- An action space (A): the set of actions available to the agent.
- A transition function (T ) that specifies the probabilities of transitioning from a given state
to another when a given action is executed.
- A reward function (R), that associates a reward with each transition.
- A discount factor ( *γ* ) by which future rewards are given less weight than more immediate
ones when estimating the value of an action (an action’s value is its relative worth in terms of
the maximization of the cumulative reward at termination time).
Typically, in the beginning the agent does not know the transition and reward functions. It
must explore actions in different states and record how the environment responds in each
case. Through repeated *exploration* the agent gradually learns the relationships between states,
**Fig 2. The reinforcement learning paradigm.** (Adapted from [Sutton & Barto] [19]).
[https://doi.org/10.1371/journal.pone.0277042.g002](https://doi.org/10.1371/journal.pone.0277042.g002)
actions and rewards. It can then start *exploiting* this knowledge to apply an action selection
policy that takes it closer to achieving its reward maximization goal.

**Fig 2. The reinforcement learning paradigm.** (Adapted from Sutton & Barto [19]).

[https://doi.org/10.1371/journal.pone.0277042.g002](https://doi.org/10.1371/journal.pone.0277042.g002)
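To make the loop of Fig 2 concrete, the sketch below shows the canonical agent-environment cycle in a few lines of Python. All names and the toy dynamics are purely illustrative; none of this is the order-book environment described later in this paper.

```python
import random

class ToyEnvironment:
    """Illustrative stand-in for an environment; the states, transitions and
    rewards here are toys, not the order-book dynamics used in this paper."""
    def reset(self):
        return 0                              # initial state id

    def step(self, action):
        next_state = random.randrange(10)     # toy state transition
        reward = random.gauss(0.0, 1.0)       # toy reward signal
        return next_state, reward

def run_episode(env, policy, horizon=100):
    """One agent-environment loop: act, observe new state and reward, repeat."""
    state, total = env.reset(), 0.0
    for _ in range(horizon):
        action = policy(state)                # the policy picks an action
        state, reward = env.step(action)      # the environment responds
        total += reward                       # cumulative reward to maximize
    return total

# Example: a uniformly random policy over 4 actions
total_reward = run_episode(ToyEnvironment(), lambda s: random.randrange(4))
```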
A wide variety of RL techniques have been developed to allow the agent to learn from the
rewards it receives as a result of its successive interactions with the environment. Deep reinforcement learning (DRL) is a subfamily of RL algorithms based on artificial neural networks
that in recent years has surpassed human ability on problems previously unassailable via
machine learning approaches, due primarily to the vast decision spaces to be
explored. A notable example is Google's AlphaGo project [20], in which a deep reinforcement
learning algorithm was given the rules of the game of Go and then taught itself to play so
well that it defeated the human world champion. AlphaGo learned by playing against itself
many times, registering the moves that were more likely to lead to victory in any given situation, thus gradually improving its overall strategies. The same concept has been applied to
train a machine to play Atari video games competently, feeding a convolutional neural network with the pixel values of successive screen stills from the games [21].
#### **3.2 Reinforcement learning in algorithmic trading**
These successes with games have attracted attention from other areas, including finance and
algorithmic trading. The large amount of data available in these fields makes it possible to run
reliable environment simulations with which to train DRL algorithms. DRL is widely used in
the algorithmic trading world, primarily to determine the best action (buy or sell) to take in
trading by candles, by predicting what the market is going to do. For instance, Lee and Jangmin [22] used Q-learning with two pairs of agents cooperating to predict market trends
(through two “signal” agents, one on the buy side and one on the sell side) and determine a
trading strategy (through a buy “order” agent and a sell “order” agent). RL has also been used
to pace buying and selling optimally, in order to reduce the market impact of high-volume
trades, which would otherwise damage the trader’s returns [16].
In most of the many applications of RL to trading, the purpose is to create or to clear an
asset inventory. The more specific context of market making has its own peculiarities. DRL
has generally been used to determine the actions of placing bid and ask quotes directly [23–
26], that is, to decide when to place a buy or sell order and at what price, without relying on
the AS model. Guéant and Manziuk [27] have proposed a DRL-based approach to deriving
approximations to the optimal bid and ask quotes for P&L maximization across a large number of assets (corporate bonds), overcoming the insurmountable obstacle faced by analytical
approaches to solving the high-dimensional systems of equations involved (the familiar *curse*
*of dimensionality*). Spooner [24] proposed an RL system in which the agent could choose from a
set of 10 spread sizes on the buy and the sell side, with the asymmetric dampened P&L as the
reward function (instead of the plain P&L). Combining a deep Q-network (DQN) (see Section
4.1.7) with a convolutional neural network (CNN), Juchli [23] achieved improved performance over previous benchmarks. Kumar [26], who uses Spooner’s RL algorithm as a benchmark, proposes deep recurrent Q-networks (DRQN) as an improved alternative to
DQNs for a time-series data environment such as trading. Gašperov and Kostanjčar [25]
tackle the problem by means of an ensemble of supervised learning models that provide predictive buy/sell signals as inputs to a DRL network trained with a genetic algorithm. The same
authors have recently explored the use of a soft actor-critic RL algorithm in market making, to
obtain a continuous action space of spread values [28]. Comprehensive examinations of the
use of RL in market making can be found in Gašperov et al. [29] and Patel [30].
What is common to all the above approaches is their reliance on learning agents to place
buy and sell orders directly. That is, these agents decide the bid and ask prices of their
orderbook quotes at each execution step. The main contribution we present in this paper
resides in delegating the quoting to the mathematically optimal Avellaneda-Stoikov procedure.
What our RL algorithm determines are, as we shall see shortly, the values of the main parameters of the AS model. It is then the latter that calculates the optimal bid and ask prices at each
step.
#### **4 Models**
The RL agents (Alpha-AS) developed to use the Avellaneda-Stoikov equations to determine
their actions (the bid and ask prices placed in the orderbook) are described in Section 4.1. An
agent that simply applies the Avellaneda-Stoikov procedure with fixed parameters (Gen-AS),
and the genetic algorithm used to obtain said parameters, are presented in Section 4.2.
#### **4.1 The Alpha-AS model**
Hasselt, Guez and Silver [31] developed an algorithm they called double DQN. Double DQN
is a deep RL approach, more specifically deep Q-learning, that relies on two neural networks,
as we shall see shortly (in Section 4.1.7). In this paper we present a double DQN applied to the
market-making decision process.
**4.1.1 The concept.** The usual approach in algorithmic trading research is to use machine
learning algorithms to determine the buy and sell orders directly. These orders are the output
actions of each execution cycle. In contrast, we propose maintaining the Avellaneda-Stoikov
procedure as the basis upon which to determine the orders to be placed. We use a reinforcement learning algorithm, a double DQN, to adjust, at each trading step, the values of the
parameters that are modelled as constants in the AS procedure. The actions performed by our
RL agent are the setting of the AS parameter values for the next execution cycle. With these values, the AS model will determine the next reservation price and spread to use for the following
orders. In other words, we do not entrust the entire order placement decision process to the
RL algorithm, learning through blind trial and error. Rather, taking inspiration from Teleña
[32], we mediate the order placement decisions through the AS model (our “avatar”, taking
the term from [32]), leveraging its ability to provide quotes that maximize profit in the ideal
case. In humble homage to Google’s AlphaGo programme, we will refer to our double DQN
algorithm as *Alpha-Avellaneda-Stoikov (Alpha-AS)* .
**4.1.2 Background.** Double DQN [31] builds on deep Q-learning, which in turn is based
on the Q-learning algorithm.
*Q-learning* . Q-learning is an early RL algorithm for Markov decision processes, developed
from Bellman’s recursive Q-value iteration algorithm [33] for estimating, for each possible
state-action pair, ( *s*, *a* ), the sum of future rewards (the Q-value) that will be accrued by choosing that action from that state, assuming all future choices will be optimal (i.e., assuming the
action chosen in any given state arrived at in future steps will be the one with the highest Q-value). The Q-value iteration algorithm assumes that both the transition probability matrix
and the reward matrix are known.
The Q-learning algorithm, on the other hand, estimates the Q-values (the $Q_{s,a}$ matrix) with
no prior knowledge of the transition probabilities or of the rewards. At each iteration, *i*, the
values in the $Q_{s,a}$ matrix are updated taking into account the observed reward obtained from
the latest state-action pair, as described by the following equation [19]:
$$Q_{i+1}(s, a) = Q_i(s, a) + \alpha\left[R(s, a) + \gamma_d \max_{a'} Q_i(s', a') - Q_i(s, a)\right] \tag{8}$$
where:
- *R* ( *s*, *a* ) is the latest reward obtained from state *s* by taking action *a* .
- *s* [0] is the state the MDP has transitioned to when taking action *a* from state *s*, to which it
arrived at the previous iteration.
- max *a* 0 *Q* *i* ð *s* [0] *; a* [0] Þ is the highest Q-value estimate (corresponding to action *a* [0] ) already stored for
the new state, *s* [0], from among those of all the state-action pairs available in state *s* [0] .
- *γ* *d* is a discount factor ( *γ* *d* 2[0, 1]) by which future expected rewards are given less weight in
the current Q-value than the latest observed reward. ( *γ* *d* is usually denoted simply as *γ*, but in
this paper we reserve the latter to denote the risk aversion parameter of the AS procedure).
- *α* is the learning rate ( *α* 2[0, 1]), which reduces to a fraction the amount of change that is
applied to *Q* *i* ( *s*, *a* ) from the observation of the latest reward and the expectation of optimal
future rewards. This limits the influence of a single observation on the Q-value to which it
contributes.
- *Q* *i* ( *s*, *a* ) is known as the *prediction* Q-value.
- The ½ *R* ð *s; a* Þ þ g *d* max *a* 0 *Q* *i* ð *s* [0] *; a* [0] Þ� term is referred to as the *target* Q-value.
The algorithm combines an exploration strategy to reach an increasing number of states
and try the different available actions to obtain examples with which to estimate the optimal
Q-value for each state-action pair, with an exploitation policy that uses the obtained Q-value
estimates to select, at each step, an action with the aim of maximising the total future reward.
Balancing exploration and exploitation advantageously is a central challenge in RL.
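As a concrete illustration of Eq (8), the following minimal tabular Q-learning update is a sketch with toy dimensions and our own variable names (the discount is written `gamma_d`, mirroring the paper's convention of reserving *γ* for the AS risk aversion parameter):

```python
import numpy as np

def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma_d=0.95):
    """One Q-learning step (Eq (8)): move the prediction Q-value toward the
    target Q-value r + gamma_d * max_a' Q[s', a']."""
    target = r + gamma_d * np.max(Q[s_next])   # target Q-value
    Q[s, a] += alpha * (target - Q[s, a])      # damped move of the prediction
    return Q

# Toy example: 10 states, 4 actions, one observed transition
Q = np.zeros((10, 4))
Q = q_learning_update(Q, s=0, a=2, r=1.0, s_next=3)
```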
*Deep Q-learning*. For even moderately large numbers of states and actions, let alone when
the state space is practically continuous (which is the case presented in this paper), it becomes
computationally prohibitive to maintain a $Q_{s,a}$ matrix and to iterate the values contained in it
until they converge to the optimal Q-value estimates. To overcome this problem, a deep
Q-network (DQN) approximates the $Q_{s,a}$ matrix using a deep neural network. The DQN computes an approximation of the Q-values as a function, $Q(s, a, \boldsymbol{\theta})$, of a parameter vector, $\boldsymbol{\theta}$, of
tractable size. To train a DQN is to let it evolve the values of these internal parameters based
on the agent’s experiences acting in its environment, so that the value function approximated
with them maps the input state to Q-values that increasingly approach the optimal Q-values
for that state. There are various methods to achieve this, gradient descent being a particularly
common one.
The general architecture of a DQN is as follows:
- Input layer: for an MDP with a state space determined by the combinations of values that a
set of variables may take (as is the case of the Alpha-AS model we describe in Section 4.1),
the input layer of a DQN will typically have one neuron for each input variable.
- Output layer: one neuron per action available to the agent. Each output neuron will give the
new Q-value estimate for the corresponding action, after processing the latest observation
vector input to the network.
- One or several hidden layers, the structure of which can vary greatly from system to system.
Thus, the DQN approximates a Q-learning function by outputting, for each input state, *s*, a
vector of Q-values, which is (approximately) equivalent to checking the row for *s* in a $Q_{s,a}$
matrix to obtain the Q-value for each action from that state.
A second problem with Q-learning is that performance can be unstable. Increasing the
number of training experiences may result in a decrease in performance; effectively, a loss of
learning. To improve stability, a DQN stores its experiences in a *replay buffer*, in terms of the
value function given by Eq (8), where now the Q-value estimates are not stored in a matrix but
obtained as the outputs of the neural network, given the current state as its input. A *policy*
*function* is then applied to decide the next action. A common choice is an *ε-greedy* policy
that balances exploration and exploitation: with probability ε it randomly explores new actions
from the current state, and otherwise (with probability 1−ε) it exploits the knowledge contained in the neural network by performing the action with the highest approximate Q-value
the network outputs for the current state. The DQN then learns periodically,
with batches of random samples drawn from the replay buffer, thus covering more of the state
space, which accelerates the learning while diminishing the influence of single or of correlated
experiences on the learning process.
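A minimal sketch of these two ingredients, the replay buffer and an ε-greedy policy, might look as follows (the class, its capacity and all names are illustrative; the paper's own buffer is described in Section 4.1.7):

```python
import random
from collections import deque

import numpy as np

class ReplayBuffer:
    """Fixed-size experience store; the oldest experiences are dropped first."""
    def __init__(self, capacity=10_000):
        self.buffer = deque(maxlen=capacity)

    def add(self, state, action, reward, next_state):
        self.buffer.append((state, action, reward, next_state))

    def sample(self, batch_size):
        # Random batches decorrelate successive experiences
        return random.sample(list(self.buffer), batch_size)

def epsilon_greedy(q_values, epsilon):
    """Explore with probability epsilon; otherwise exploit the best action."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))
    return int(np.argmax(q_values))
```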
*Double DQNs* . Double DQNs [31] represent a further improvement on DQN algorithms, in
terms of training stability and performance. Using a single DQN to determine both the prediction and the target Q-values results in random overestimations of the latter values (ibid.). To
address this problem, as their name suggests, double DQNs rely on two DQNs: a *prediction*
DQN and a *target* DQN. The prediction DQN works as the DQNs discussed so far, but with
target values set by the target DQN. The target DQN is structurally identical to the prediction
DQN. However, the parameters of the target DQN are updated only once every given number
of training iterations, simply by copying the parameters of the prediction DQN, which in the
meantime will have been modified by exposure to new experiences.
Both the prediction DQN and the target DQN are used to solve the Bellman Eq (8) and
obtain $Q_{i+1}(s, a)$ at each iteration. Once again, the prediction DQN provides $Q_i(s, a)$ while the
target DQN gives $\max_{a'} Q_i(s', a')$. We can now write the value function of the double DQN as:

$$Q_{i+1}(s, a) = \mathrm{PredictionDQN}_i(s, a) + \alpha\left[R(s, a) + \gamma_d \max_{a'} \mathrm{TargetDQN}_i(s', a') - \mathrm{PredictionDQN}_i(s, a)\right] \tag{9}$$

The exploitation policy function chooses the next action, $a'$, as that which maximises the
output of the *prediction* DQN:

$$a' = \arg\max_{a} \mathrm{PredictionDQN}_i(s, a) \tag{10}$$
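Assuming the two networks are available as callables that map a state to a vector of per-action Q-values, Eqs (9) and (10) reduce to a few lines. This is a sketch under that assumption, with our own function names, not the paper's implementation:

```python
import numpy as np

def double_dqn_target(pred_q, target_q, s, a, r, s_next, alpha=0.1, gamma_d=0.95):
    """Eq (9): pred_q and target_q are callables mapping a state to a vector
    of per-action Q-values; the target DQN supplies the bootstrap term."""
    td_target = r + gamma_d * np.max(target_q(s_next))
    return pred_q(s)[a] + alpha * (td_target - pred_q(s)[a])

def next_action(pred_q, s):
    """Eq (10): exploit by taking the action the prediction DQN rates highest."""
    return int(np.argmax(pred_q(s)))
```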
We model the market-agent interplay as a Markov Decision Process with initially unknown
state transition probabilities and rewards.
**4.1.3 Time step ($\Delta\tau = \tau_{i+1} - \tau_i$).** The time step of the action-reward cycle is 5 seconds of
trading time. The agent repeats the chosen action at every orderbook tick that occurs
throughout the time step, accumulating the reward obtained through the repeated application of the action during this time. As we shall see shortly, the actions specify two things: the
risk aversion parameter in the AS formulas and a skew applied to the prices returned by the
formulas. Repeating the action simply means setting these (and only these) two parameters,
risk aversion and skew, to the same values for the duration of the 5-second time window. With
these parameters thus updated every 5 seconds, fresh bid and ask prices are obtained at every
tick, with the latest market information, through the application of the AS formulas.
**4.1.4 States (S).** We characterize the Alpha-AS agent and its environment (the market)
through a set of state-defining features. We divide the feature set conceptually into two subsets
(adapting the nomenclature in [19]):
- *Private indicators*, consisting of features describing the state of the agent.
- *Market indicators*, consisting of features describing the state of the environment.
The features will reflect inventory levels, market prices and other indicators derived from
these. For each indicator considered, we define *N* features holding the bucket identifiers corresponding to the current value of the feature (denoted with the suffix *X* = 0) and its values for
the previous *N-* 1 ticks or candles (denoted with the suffixes *X* = 1, 2, . . . *N-* 1, respectively). In
other words, the agent will use a horizon of the *N* latest values of each feature, that is, the values
the feature has taken in the last *N* ticks (orders entered in the order book, as well as cancellations). That is, the values for each feature are stored in a circular First-In First-Out queue of
size *N*, with overwriting. Should more than *N* ticks occur in the 5-second window, only the last
*N* will be in the queue for consideration when determining the actions for the next 5-second
time step; conversely, in the rare event that fewer than *N* ticks occur in a time step, some values
from the previous time step will still be in the queue, and thus taken into account again. The
value of *N* will vary for different features, as specified below, and in the case of the market candle indicators it refers to candles, not ticks. In each case, a value of *N* was chosen large enough
to provide the agent with a sufficiently rich state space from which to learn, while also small
enough that training demands a manageable amount of time and resources.
The feature quantities are very fine-grained. To derive a manageable number of states from
the combinations of all possible feature values, we defined for each a set of value buckets, as
follows:
a. The feature values are discretised by rounding to a number of decimals, *d*, specific to each
type of feature ( *d* = 3 or 7).
b. The ranges of possible values of the features that are defined in relation to the market mid-price are truncated to the interval [−1, 1] (i.e., if a value exceeds 1 in magnitude, it is set to
1 if it is positive or −1 if negative).
Together, a) and b) result in a set of $2 \times 10^d$ contiguous buckets of width $10^{-d}$, ranging from
−1 to 1, for each of the features defined in relative terms. Approximately 80% of their values lie
in the interval [−0.1, 0.1], while roughly 10% lie outside the [−1, 1] interval. Values that are
very large can have a disproportionately strong influence on the statistical normalisation of all
values prior to being inputted to the neural networks. By trimming the values to the [−1, 1]
interval we limit the influence of this minority of values. The price to pay is a diminished
nuance in the learning from very large values, while retaining a higher sensitivity for the
majority, which are much smaller. By truncating we also limit potentially spurious effects of
noise in the data, which can be particularly acute with cryptocurrency data.
A full set of buckets, one for each *selected* feature, is associated with a state. That is, the
agent designates the same state for a particular combination of feature buckets, regardless of
the precise values obtained for each feature (as long as they fall in the corresponding state-defining buckets).
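As a small worked example of steps a) and b), a feature value can be mapped to its bucket as follows (representing a bucket by its rounded value is our own illustrative convention):

```python
import numpy as np

def bucketize(value, d=3):
    """Truncate a mid-price-relative feature to [-1, 1] and round to d decimals,
    yielding one of 2 * 10**d contiguous buckets of width 10**-d."""
    clipped = float(np.clip(value, -1.0, 1.0))
    return round(clipped, d)

assert bucketize(0.12345) == 0.123   # falls in the 0.123 bucket
assert bucketize(3.7) == 1.0         # very large values share the top bucket
```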
To further reduce the number of states considered by the RL agent and so lessen the familiar *curse of dimensionality* [19], taking inspiration from [34], we selected the top 35% from the
complete set of defined features, as determined by their scores on feature importance metrics
for random forest classifiers (see *Feature selection*, below).
*Indicators and feature definition* : *Private indicators* . The agent describes itself by the amount
of inventory it holds and the reward it receives after performing actions. For each indicator the
agent defines 5 features ( *N* = 5), to hold its current ( *X* = 0) and its 4 previous values. The values
are rounded to 3 decimals ( *d* = 3). (This results in a total of 2000 buckets of size 0.001, from
values -1 to 1, with the lowest bucket being assigned to any feature value -0.999 or lower, and
the highest bucket to any value above 0.999, however large.)
The features are as follows (with 0 ≤ *X* ≤ 4):
- **inventory_X** : the inventory level, divided by the inventory quantity quoted.
- **score_X** : the cumulative Asymmetric dampened P&L (see the Reward specification below)
obtained so far in the current day of trading, divided by the inventory quantity quoted.
As we shall see shortly, the reward function is the Asymmetric dampened P&L obtained in
the current 5-second time step. In contrast, the total P&L accrued so far in the day is what has
been added to the agent’s state space, since it is reasonable for this value to affect the agent’s
assessment of risk, and hence also how it manipulates its risk aversion as part of its ongoing
actions.
*Market indicators* . The Alpha-AS agent describes its environment through two sets of market indicators: market tick indicators and market candle indicators. Market tick indicators are
updated every time a new order appears in the orderbook; market candle indicators are
updated at regular time intervals, and they reflect the overall market change in the last interval
(which may have seen any number of ticks). We set the candle duration to 1 minute of trading.
*Market tick indicators*. For each market tick indicator the agent defines 10 features (*N* = 10),
to hold its current (*X* = 0) and its 9 previous values. The values are rounded to 7 decimals
(*d* = 7, yielding $2 \times 10^7$ buckets). All price-related tick features (but not the quantity-related features) are given as their difference to the current mid-price (the midpoint between the best ask
price and best bid price in the orderbook).
The market tick features are the following (with 0 ≤ *X* ≤ 9):
- **ask_price_X** : the best ask price.
- **ask_qty_X** : the quantity of assets available in the market at the best ask price.
- **bid_price_X** : the best bid price.
- **bid_qty_X** : the quantity of assets that are currently being bid for in the market at the best
bid price.
- **spread_X** : the best ask price minus the best bid price in the orderbook.
- **last_close_price_X** : the price at which the latest trade was executed in the market.
- **microprice_X** : the orderbook microprice [35], as defined by Eq (11).
- **imbalance_X** : the orderbook imbalance, as defined by Eq (12).
$$\mathrm{microprice} = \frac{AskQty_0 \cdot AskPrice_0 + BidQty_0 \cdot BidPrice_0}{AskQty_0 + BidQty_0} \tag{11}$$

where the 0 subscript denotes the best orderbook price level on the ask and on the bid side, i.e.,
the price levels of the lowest ask and of the highest bid, respectively.

$$\mathrm{imbalance} = \frac{\sum_{level=0}^{max\ depth} BidQty_{level} - \sum_{level=0}^{max\ depth} AskQty_{level}}{\sum_{level=0}^{max\ depth} BidQty_{level} + \sum_{level=0}^{max\ depth} AskQty_{level}} \tag{12}$$

where $\sum_{level=0}^{max\ depth}(\cdot)$ is the sum of the corresponding quantity over all of the orderbook levels
(best to worst price).
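For reference, Eqs (11) and (12) translate directly into code; the function names below are ours:

```python
def microprice(ask_qty0, ask_price0, bid_qty0, bid_price0):
    """Eq (11): quantity-weighted combination of the best ask and bid prices."""
    return (ask_qty0 * ask_price0 + bid_qty0 * bid_price0) / (ask_qty0 + bid_qty0)

def imbalance(bid_qtys, ask_qtys):
    """Eq (12): bid/ask quantity imbalance summed over all orderbook levels.
    Lies in [-1, 1]; positive values indicate more resting bid volume."""
    bid, ask = sum(bid_qtys), sum(ask_qtys)
    return (bid - ask) / (bid + ask)
```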
*Market candle indicators* . For each market candle indicator the agent defines 3 features
( *N* = 3), to hold its value for the current candle ( *X* = 0) and the 2 previous candles ( *X* = 1 and
*X* = 2, respectively). The values are rounded to 3 decimals ( *d* = 3). The market candle features
are normalized by the open mid-price (i.e., the mid-price at the start of the candle). They are
the following (with 0 ≤ *X* ≤ 2):
- **close_X** : the last mid-price in candle **X** (divided by the open mid-price for the candle).
- **low_X** : the lowest mid-price in candle **X** (divided by the open mid-price for the candle).
- **high_X** : the highest mid-price in candle **X** (divided by the open mid-price for the candle).
- **ma** : the mean of the 3 **close_X** values.
- **std** : the standard deviation of the 3 **close_X** values.
- **min** : the lowest mid-price in the latest 3 candles (i.e., the lowest of the **low_X** values).
- **max** : the highest mid-price in the latest 3 candles (i.e., the highest of the **high_X** values).
*Feature selection* . Reducing the number of features considered by the RL agent in turn dramatically reduces the number of states. This helps the algorithm learn and improves its performance by reducing latency and memory requirements.
Following the approach in Lo´pez de Prado [34], where random forests are applied to an
automatic classification task, we performed a selection from among our market features (tick
and candle), based on a random forest classifier. We did not include the 10 private features
(the 5 latest inventory levels and 5 latest rewards) in the feature selection process, as we want
our algorithms always to take these agent-related (as opposed to environment-related) values
into account.
The target for the random forest classifier is simply the sign of the difference in mid-prices
at the start and the end of each 5-second timestep. That is, classification is based on whether
the mid-price went up or down in each timestep. The input variables are the state features themselves.
Three standard feature importance metrics were used to select the 35% of all market features that had the greatest impact on the output of the agent’s reward function (we relied on
MLfinlab’s Python implementation to calculate these three metrics [36]):
*Mean decrease impurity* (MDI), a feature-specific measure of the mean reduction of
weighted impurity over all the nodes in the tree ensemble that partition the data samples
according to the values of that feature [34]. We used entropy as the impurity metric. The
8.75% highest-scoring features on MDI were retained.
*Mean decrease accuracy* (MDA), a feature-specific estimate of average decrease in classification accuracy, across the tree ensemble, when the values of the feature are permuted between
the samples of a test input set [34]. To obtain MDA values we applied a random forest classifier to the dataset split in 4 folds. The 8.75% highest-scoring features on MDA were retained.
*Single feature importance* (SFI), an out-of-sample estimator of the individual importance of
each feature, that avoids the substitution effect found with MDI and MDA (important features are ignored when highly correlated with other important features). The 17.5% highest-scoring features on SFI were retained.
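The paper relies on MLfinlab's implementations of these three metrics; as a rough illustration of the first of them only, an MDI-style ranking can be obtained with scikit-learn, whose `feature_importances_` attribute of an entropy-based random forest is a mean-decrease-impurity measure. The helper below and its keep fraction are illustrative, not the authors' code:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def rank_features_mdi(X, y, feature_names, keep_fraction=0.0875):
    """Fit an entropy-based random forest and keep the top fraction of
    features by MDI; y is the sign of the mid-price change per timestep."""
    clf = RandomForestClassifier(n_estimators=100, criterion="entropy")
    clf.fit(X, y)
    order = np.argsort(clf.feature_importances_)[::-1]   # descending MDI
    n_keep = max(1, int(len(feature_names) * keep_fraction))
    return [feature_names[i] for i in order[:n_keep]]
```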
The data on which the metrics for our market features were calculated correspond to one
full day of trading (7th December 2020). The selection of features based on these three metrics
reduced their number from 112 to 22 (there was some overlap in the features selected by the
different metrics). The features retained by each importance indicator are shown in Table 1.
The two most important features for all three methods are the latest bid and ask quantities
in the orderbook (*bid_qty_0* and *ask_qty_0*), followed closely by the bid and ask quantities
immediately prior to the latest orderbook update (*bid_qty_1* and *ask_qty_1*) and the latest best
**Table 1. Features ordered by importance according to the metrics MDI, MDA and SFI.**
|Rank|MDI|MDA|SFI|
|---|---|---|---|
|1|bid_qty_0|bid_qty_0|ask_qty_0|
|2|ask_qty_0|ask_qty_0|bid_qty_0|
|3|ask_qty_1|microprice_0|ask_qty_1|
|4|microprice_0|ask_price_0|bid_qty_1|
|5|ask_qty_3|bid_qty_1|ask_price_0|
|6|bid_price_0|bid_price_0|spread_0|
|7|ask_price_0|spread_0|bid_price_0|
|8|bid_qty_1|ask_qty_2|low_0|
|9|last_close_price_4|midprice_8|microprice_8|
|10|||spread_8|
|11|||ask_price_8|
|12|||bid_price_8|
|13|||ask_price_4|
|14|||spread_4|
|15|||bid_price_4|
|16|||ask_qty_2|
|17|||bid_qty_2|
|18|||high_4|
(The first appearance of a feature, from left to right, is shown in bold).
[https://doi.org/10.1371/journal.pone.0277042.t001](https://doi.org/10.1371/journal.pone.0277042.t001)
ask and bid prices (*ask_price_0* and *bid_price_0*). There is a general predominance of features
corresponding to the latest orderbook movements (i.e., those denominated with low numerals,
primarily 0 and 1). This may be a consequence of the markedly stochastic nature of market
behaviour, which tends to limit the predictive power of any feature to proximate market movements. Hence the heightened importance of the latest market tick when determining the following action, even if the agent is bound to take the same action repeatedly during the next
5 seconds, only re-evaluating the action-determining market features after said period has
elapsed. Nevertheless, the prices 4 and 8 orderbook movements prior to the action-setting instant
also make a fairly strong appearance in the importance indicator lists (particularly for SFI),
suggesting the existence of a slightly longer-term predictive component that may be tapped into
profitably.
The total number of features retained to define the states for our agents is therefore 32: the
10 private features and these 22 market features.
**4.1.5 Actions (A).** The actions taken by the Alpha-AS agent rely on the ask and bid prices
given by the Avellaneda-Stoikov procedure. As its action, to repeat for the duration of the time
step, the agent chooses the values of two parameters to apply to this procedure: risk aversion
( *γ* ) and skew (which alters the ask and bid prices obtained with the AS method). For each of
these parameters the values are chosen from a finite set, as follows:
- **Risk aversion (** ***γ*** **)** : a parameter of the AS model itself, as discussed in Section 2. At each time
step, before applying the AS procedure to obtain ask and bid prices (Eqs (1), (2) and (3)), the
agent selects a value for *γ* from the set {0.01, 0.1, 0.2, 0.9}. We restrict the agent’s choice to
these values so that it usually behaves with high risk aversion (high values of *γ* ) but is also
able to switch to a more aggressive, low risk aversion strategy (setting *γ* = 0.01) when it
deems the conditions so require.
- **Skew** : after a bid and ask price are obtained by the AS procedure, the agent modifies them by
a fraction given by the skew. The modified formulas for the ask and bid price are:
$$p^{a} = (r + \delta^{a})(1 + \mathit{Skew}) \tag{13}$$

$$p^{b} = (r - \delta^{b})(1 + \mathit{Skew}) \tag{14}$$

where the value for *Skew* is chosen from the set {−0.1, −0.05, 0, 0.05, 0.1}. Therefore, by
choosing a *Skew* value the Alpha-AS agent can shift the output price upwards or downwards
by up to 10%.
The combination of the choice of one from among four available values for *γ* with the
choice of one among five values for the skew results in 20 possible actions for the agent to
choose from, each being a distinct (*γ*, skew) pair (see the sketch below). We chose a discrete
action space for our experiment applying RL to manipulate AS-related parameters, aiming to
keep the algorithm as simple and quickly trainable as possible. A continuous action space, such
as the one used to choose spread values in [28], might perform better, but the algorithm would
be more complex and the training time greater.
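A sketch of the resulting action space and of Eqs (13)-(14) follows; the names are ours, and `r`, `delta_a` and `delta_b` are assumed to come from the AS formulas (Eqs (1)-(3)):

```python
from itertools import product

GAMMAS = (0.01, 0.1, 0.2, 0.9)           # risk aversion choices
SKEWS = (-0.1, -0.05, 0.0, 0.05, 0.1)    # price skew choices
ACTIONS = list(product(GAMMAS, SKEWS))   # the 20 (gamma, skew) action pairs

def skewed_quotes(r, delta_a, delta_b, skew):
    """Eqs (13)-(14): shift the AS quotes up or down by up to 10%."""
    ask = (r + delta_a) * (1.0 + skew)
    bid = (r - delta_b) * (1.0 + skew)
    return ask, bid
```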
The AS model has further parameters to set: the time interval, *π*, for the estimation of order
book liquidity (*k*), and the window, *w*, of orderbook ticks to consider when determining market volatility (as the standard deviation of the mid-price, *σ*). Unlike *γ* and skew, the values for *π*
and *w* are not set through actions of the Alpha-AS agent. Instead, they are fixed at the values
reached by genetic selection for the direct AS model (see Section 4.2).
**4.1.6 Reward (R).** The reward is given by the **Asymmetric dampened P&L** [23, 37] (Eq
(15)). This is obtained from the algorithm’s P&L by removing speculative gains (the max term
in Eq (15)): speculative profits are not added, while speculative losses are retained, thus penalizing speculative positions.

$$R(\tau_i) = C(\tau_i) - \max\left[0,\ I(\tau_i)\,\Delta m(\tau_i)\right] \tag{15}$$

where $C(\tau_i)$ is the open P&L for the 5-second action time step, $I(\tau_i)$ is the inventory held by
the agent and $\Delta m(\tau_i)$ is the speculative P&L (the difference between the open P&L and the
close P&L) at time $\tau_i$, the end of the *i*-th 5-second agent action cycle.
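Eq (15) is a one-liner in code; the names are illustrative:

```python
def asymmetric_dampened_pnl(open_pnl, inventory, speculative_pnl):
    """Eq (15): the open P&L of the 5-second step, minus the speculative gain
    I(tau_i) * delta_m(tau_i) whenever that term is positive."""
    return open_pnl - max(0.0, inventory * speculative_pnl)
```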
**4.1.7 The Alpha-AS deep Q-learning algorithm.** With the above definition of our
Alpha-AS agent and its orderbook environment, states, actions and rewards, we can now
revisit the reinforcement learning model introduced in Section 4.1.2 and specify the Alpha-AS RL model.
Fig 3 illustrates the model structure. The Alpha-AS agent receives an update of the orderbook (its environment) every time a market tick occurs. The Alpha-AS agent records the new
market tick information by modifying the appropriate market features it keeps as part of its
state representation. The agent also places one bid and one ask order in response to every tick.
Once every 5 seconds, the agent records the asymmetric dampened P&L it has obtained as its
reward for placing these bid and ask orders during the latest 5-second time step. Based on the
market state and the agent’s private indicators (i.e., its latest inventory levels and rewards), a
prediction neural network outputs an action to take. As defined above, this action consists in
setting the value of the risk aversion parameter, *γ*, in the Avellaneda-Stoikov formula to calculate the bid and ask prices, and the skew to be applied to these. The agent will place orders at
the resulting skewed bid and ask prices, once every market tick during the next 5-second time
step.
**Fig 3. The Alpha-Avellaneda-Stoikov workflow.**
[https://doi.org/10.1371/journal.pone.0277042.g003](https://doi.org/10.1371/journal.pone.0277042.g003)
Consequently, the Alpha-AS agent adapts its bid and ask order prices dynamically, reacting
closely (at 5-second steps) to the changing market. This 5-second interval allows the Alpha-AS
algorithm to acquire experience trading with a certain bid and ask price repeatedly under
quasi-current market conditions. As we shall see in Section 4.2, the parameters for the direct
Avellaneda-Stoikov model to which we compare the Alpha-AS model are fixed (using a genetic
algorithm) at a parameter tuning step once every 5 days of trading data.
The reinforcement learning algorithm works as follows:

```
Initialize Q(s, a) to 0
For each episode:
    For t = 0 ... T:
        Record the current state, s
        Every 5 seconds (i.e., if t % Timestep = 0):
            Apply the policy function to choose the action, a, to take from
            the current state, s, using either:
                exploration (with prob. ε): set a random (γ, skew) pair, or
                exploitation (with prob. 1 − ε): obtain a (γ, skew) pair
                    from the neural network
            Take action a: apply the Avellaneda-Stoikov formulas with (γ, skew)
            Update the memory replay buffer:
                Q_i(s, a) ← (1 − α) PredictionDQN(s, a)
                            + α [R(s, a) + γ_d max_a′ TargetDQN(s′, a′)]
        If t % training_predict_period = 0: train the prediction DQN
        If t % training_target_period = 0: train the target DQN
```
The memory replay buffer is a 10,000×84 matrix with a column for each available action
and for each of the features that describe the market states. Its rows fill up with successive experiences recorded at every market tick. Each row contains the private and market feature values
defining the MDP’s state, *s* ; the latest rewards, *r*, associated with each of the 20 actions, when
they were last taken from that state; and the feature values defining the next state, *s*′, arrived at
from *s* by taking action *a*. When the memory replay buffer is full, the ten thousand experiences
recorded in it are used to train the prediction DQN. Subsequently, this network is trained with
a new batch of experiences every 4 hours (in trading data time). The target DQN is updated
from the prediction DQN (the former is made a copy of the latter) once every 2 training steps
of the prediction network (i.e., every 8 hours’ worth of trading data).
At the start of every 5-second time step, the latest state (as defined in Section 4.1.4) is fed as
input to the prediction DQN. The sought-after Q-values (those corresponding to past experiences of taking actions from this state) are then computed for each of the 20 available actions,
using both the prediction DQN and the target DQN (Eq (9)).
An *ε* -greedy policy is followed to determine the action to take during the next 5-second
window, choosing between exploration (random action selection), with probability *ε*, and
exploitation (selection of the action currently with the highest Q value), with probability 1- *ε* .
The selected action is then taken repeatedly, once every market tick, in the following 5-second
window, at the end of which the reward (the Asymmetric Dampened P&L) obtained from this
repeated execution of the action is computed.
*Neural network architectures* . The prediction DQN receives as input the state-defining features, with their values normalised, and it outputs a value between 0 and 1 for each action.
Hence, it has 32 input neurons (one per feature) and 20 output neurons (one per action available to the agent). The DQN has two hidden layers, each with 104 neurons, all applying a ReLu
activation function. The output layer neurons perform linear activation.
At each training step (every 4 hours) the parameters of the prediction DQN are updated
using gradient descent. An early stopping strategy is followed on 25% of the training sets to
avoid overfitting. The architecture of the target DQN is identical to that of the prediction
DQN, the parameters of the former being copied from the latter every 8 hours.
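The paper does not name its deep learning framework, so the following PyTorch rendering of the described architecture is an illustrative reconstruction rather than the authors' code:

```python
import copy
import torch.nn as nn

prediction_dqn = nn.Sequential(
    nn.Linear(32, 104), nn.ReLU(),    # 32 normalised state features in
    nn.Linear(104, 104), nn.ReLU(),   # two hidden layers of 104 ReLU neurons
    nn.Linear(104, 20),               # one linear output per (gamma, skew) action
)

# The target DQN is structurally identical; its parameters are overwritten
# with a copy of the prediction DQN's at the scheduled intervals.
target_dqn = copy.deepcopy(prediction_dqn)
```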
We tested two variants of our Alpha-AS model, differing in the architecture of their hidden
layers. Initial tests with a DNN featuring two dense hidden layers were followed by tests using
an RNN with two LSTM (long short-term memory) hidden layers, encouraged by results
reported using this architecture [26, 38].
#### **4.2 Gen-AS: Avellaneda-Stoikov model with genetically tuned parameters**
There are two basic parameters to be determined in our direct Avellaneda-Stoikov model (Eqs
(1)–(3)): risk aversion (*γ*) and the time interval, *π*, for the estimation of the order book liquidity parameter (*k*) (no further quantities need to be specified in our AS model, as discussed in
Section 2). We also need to specify the size of the window of ticks, *w*, used to estimate volatility
(*σ*). The size of this parameter space is large, and we need to find the values that make the AS
model perform as close to optimally as possible. One way to achieve this would be to calibrate
the parameters using closed formulas derived from reasonable statistical models, in the line
explored by Fernández-Tapia [4]. Another option is to rely on genetic algorithms, which have
been applied widely to calibrate machine learning models [39–41]. In algorithmic trading they
are commonly used to find the parameter values of a trading model that produce the highest
profit [42]. This motivated us to lean on a genetic algorithm to find the best-performing values
for our parameters [43].
The genetic algorithm described below decides the values for the parameters throughout
the test period, based on the relative performance over the latest full day of trading achieved by
a population of models with differing values for their parameters. To underline the biological
metaphor, the set of parameters, or *genes*, on which the model is being tuned is called a *chromosome*. Genetic algorithms compare the performance of a population of copies of a model,
each with random variations, called *mutations*, in the values of the genes present in its chromosomes. The best-performing models, that is, the model instances which achieve the highest
score on a *fitness function*, are selected to create from them a new generation of models by
introducing further mutations and by mixing the chromosomes of the selected parent models,
a procedure referred to as *crossover* . This process of random mutation, crossover, and selection
of the fittest is iterated over a number of generations, with the genetic pool gradually evolving.
Finally, the best-performing model overall, with its corresponding parameter values contained
in its chromosome, is retained for subsequent application to the problem at hand. In our case,
it will be the AS model used as a baseline against which to compare the performance of our
Alpha-AS model.
**Parameters and data.** For our Gen-AS model we define a chromosome with three genes,
corresponding to the aforementioned parameters. We seek the best-performing values for
these parameters, within the following ranges (in which we deem the values are reasonable):
- Risk aversion ( *γ* ): [0.01, 0.9].
- Time interval ( *π* ) to estimate *k* : [1, 10].
- Tick window size ( *w* ): [5, 25].
Our fitness function is the Sharpe ratio, defined as follows:

$$\mathrm{Sharpe\ ratio} = \frac{mean(returns)}{std(returns)} \tag{16}$$
We performed genetic search at the beginning of the experiment, aiming to obtain the values of the AS model parameters that yield the highest Sharpe ratio, working on the same orderbook data.
**Procedure.** Our algorithm works through 10 generations of instances of the AS model,
which we will refer to as *individuals*, each with a different chromosomal makeup (parameter
values). In the first generation, 45 individuals were created by assigning to each of the three
genes random values within the defined ranges. These individuals run (in parallel) through the
orderbook data, and are then ranked according to the Sharpe ratio they have attained. For
each subsequent generation 45 new individuals run through the data and are then added to the
cumulative population, retaining all the individuals from previous generations. The 10 generations thus yield a total of 450 individuals, ranked by their Sharpe ratio. Note that, since we
retain all individuals from generation to generation, the highest Sharpe ratio in the cumulative
population never decreases in subsequent generations.
The chromosomes of the 45 individuals that are added to the pool in each generation are
determined as follows. An average of 70% of chromosomes are created by crossover and 30%
by mutation. More precisely, each new chromosome has a probability of 0.7 of being created
by crossover and of 0.3 of being created by mutation. We now describe how our mutation and
crossover mechanisms work:
*Mutation (asexual reproduction)* . A single parent individual is selected randomly from the
current population (all the individuals created so far in previous generations), with a selection
probability proportional to the Sharpe score it has achieved (thus, higher-scoring individuals
have a greater probability of passing on their genes). The chromosome of the selected individual is then extracted and a truncated Gaussian noise is applied to its genes (truncated, so that
the resulting values don’t fall outside the defined intervals). The new genetic values form the
chromosome of the offspring model. The mean of the added Gaussian noise is 0; its standard
deviation starts at twice the value range of each of the genes, to generate the second-generation
offspring, and it is decreased exponentially to generate each subsequent generation (the standard deviation is one hundredth of the gene’s value range, to generate the 10 [th] generation).
*Crossover (sexual reproduction)*. Two parents are selected from the current population.
Again, the probability of selecting a specific individual for parenthood is proportional to the
Sharpe ratio it has achieved. A weighted average of the values of the two parents’ genes is then
computed.

Let $\gamma_x$, $w_x$ and $\pi_x$ be the parameter values of the first parent, *x*, and $\gamma_y$, $w_y$ and $\pi_y$ the genes of
the second parent, *y*. The genes of the offspring, *O*, will be determined as:

$$\gamma_O = a\,\gamma_x + (1-a)\,\gamma_y \tag{17}$$

$$w_O = b\,w_x + (1-b)\,w_y \tag{18}$$

$$\pi_O = c\,\pi_x + (1-c)\,\pi_y \tag{19}$$

where *a*, *b* and *c* are random values between 0.2 and 0.8.
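A compact sketch of the two reproduction mechanisms, under the gene ranges given above (all helper names are ours), could be:

```python
import random

# Gene ranges from the chromosome definition above
RANGES = {"gamma": (0.01, 0.9), "pi": (1.0, 10.0), "w": (5.0, 25.0)}

def crossover(parent_x, parent_y):
    """Eqs (17)-(19): each offspring gene is a weighted average of the
    parents' genes, with a random weight between 0.2 and 0.8."""
    child = {}
    for gene in RANGES:
        weight = random.uniform(0.2, 0.8)
        child[gene] = weight * parent_x[gene] + (1 - weight) * parent_y[gene]
    return child

def mutate(parent, sigma_scale):
    """Truncated Gaussian noise: sigma_scale starts at 2.0 (twice each gene's
    range) and decays exponentially to 0.01 over the 10 generations."""
    child = {}
    for gene, (lo, hi) in RANGES.items():
        noise = random.gauss(0.0, sigma_scale * (hi - lo))
        child[gene] = min(hi, max(lo, parent[gene] + noise))
    return child
```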
**Initial parameter tuning results.** The data for the first use of the genetic algorithm was
the full day of trading on 8th December 2020.
The parameter values of this best-performing instance of the AS model are the following:
- Risk aversion: *γ* = 0.624.
- Tick window size: *w* = 25.
- Time interval: *π* = 1 minute.
As stated in Section 4.1.7, these values for *w* and *π* are taken as the fixed parameter values for
the Alpha-AS models. They are not recalibrated periodically for the Gen-AS model, so that their
values do not differ from those used throughout the experiment in the Alpha-AS models. If *w*
and *π* were different for Gen-AS and Alpha-AS, it would be hard to discern whether
observed differences in the performance of the models are due to the action modifications
learnt by the RL algorithm or simply the result of differing parameter optimisation values.
Alternatively, *w* and *π* could be recalibrated periodically for the Gen-AS model and the new
values introduced into the Alpha-AS models as well. However, this would require discarding
the prior training of the latter every time *w* and *π* are updated, forcing the Alpha-AS models
to restart their learning process every time.
#### **5. Experimental setup**
All tests and training were run on the same computer, with an *AMD Ryzen Threadripper
2990WX 3.0GHz* CPU and 64GB of RAM, running on Windows 10 x64 with Python 3.6 and
Java 1.8.
#### **5.1 Data and test procedure**
The dataset used contains the L2 orderbook updates and market trades for the btc-usd (bitcoin-dollar) pair, for the period from 7th December 2020 to 8th January 2021, with 12 hours of
trading data recorded for each day. Most of the data, the Java source code and the results are
accessible from the project’s GitHub repository [44].
For every day of data the number of ticks occurring in each 5-second interval had positively
skewed, long-tailed distributions. The means of these thirty-two distributions ranged from 33
to 110 ticks per 5-second interval, the standard deviations from 21 to 67, the minimums ran
from 0 to 20, the maximums from 233 to 1338, and the skew ranged from 1.0 to 4.4.
The btc-usd data for 7th December 2020 was used to obtain the feature importance values
with the MDI, MDA and SFI metrics, to select the most important features to use as input to
the Alpha-AS neural network model.
The btc-usd data for the following day, 8th December 2020, was used for two purposes:
- To start filling the Alpha-AS memory replay buffer and training the model (Section 5.2).
- To perform the first genetic tuning of the baseline AS model parameters (Section 4.2).
The resulting Gen-AS model, two non-AS baselines (based on Gašperov [25]) and the two
Alpha-AS model variants were run with the rest of the dataset, from 9th December 2020 to 8th
January 2021 (30 days), and their performance compared.
#### **5.2 Training**
In the training phase we fit our two Alpha-AS models with data from a full day of trading (8th
December 2020). In this, the most time-consuming step of the backtest process, our algorithms
learned from their trading environment what AS model parameter values to choose, every five
seconds of trading, to apply in those 5 seconds (see Section 4.1.3).
We were able to achieve some parallelisation by running five backtests simultaneously on
different CPU cores. Each process filled its own memory replay buffer. Upon completion of
the five parallel backtests, the five respective memory replay buffers were merged. This constituted one training iteration. Ten such training iterations were completed, all on data from the
same full day of trading, with the memory replay buffer resulting from each iteration fed into
the next. The replay buffer obtained from the final iteration was used as the initial one for the
test phase. At this point the trained neural network model had 10,000 rows of experiences and
was ready to be tested out-of-sample against the baseline AS models.
The training time for each Alpha-AS model was approximately 7 hours.
#### **5.3 Test models and performance indicators**
We compared the performance of our two Alpha-AS model variants with three baseline models. To reiterate, our two Alpha-AS double DQN architectures differed as follows:
- Alpha-AS-1 uses a DNN with two dense hidden layers.
- Alpha-AS-2 uses a RNN with two LSTM hidden layers.
Our three baseline models:
- AS-Gen: the Avellaneda-Stoikov model with genetically tuned parameters (described in Section 4.2).
- Fixed Offset with Inventory Constraints (Constant Spread) [25]: FOIC is a constant spread
model that places a buy order and a sell order on the first level of the orderbook, until an
inventory limit is reached. When this happens, only one side of the algorithm operates (buy
or sell), in order to offset the inventory and so reduce market risk.
- Linear in Inventory with Inventory Constraints (Linearly constant spread) [25]: LIIC is also
a constant linear spread algorithm that can place first-level quotes on both sides of the market. It differs from FOIC in its inventory offset strategy to reduce risk: in LIIC the quantity of
the buy (sell) orders is decreased linearly as the positive (negative) inventory increases.
When a positive (negative) inventory threshold is reached, buy (sell) orders are interrupted.
The following performance indicators were used to compare the models at the end of each
test day:
- Sharpe ratio: a measure of risk-adjusted return (given by Eq (16)). The Sharpe ratio contrasts
returns against their volatility, penalizing higher values of the latter (regardless of whether
the returns are positive or negative).
- Sortino ratio: a variation of the Sharpe ratio that penalizes the volatility of negative returns
only (Eq (20)).

$$\mathrm{Sortino\ ratio} = \frac{mean(returns)}{std(negative\ returns)} \tag{20}$$
- Maximum drawdown (Max DD) [25]: the largest drop in portfolio value between any two
instants throughout the current test day (less is better).
- P&L to Mean Absolute Position ratio (P&L-to-MAP) [25]: a measure of return (the open
P&L) relative to inventory size, *I* (Eq (21)). Lower inventory levels, whether positive or negative, yield higher P&L-to-MAP values, reflecting the lower risk. (All four indicators are
sketched in code below.)

$$\mathrm{P\&L\text{-}to\text{-}MAP} = \frac{C(\tau_i)}{mean(|I|)} \tag{21}$$
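For clarity, the four indicators translate into code as follows (function names are ours; `returns` and `inventories` are daily series):

```python
import numpy as np

def sharpe(returns):
    """Eq (16): mean return over its volatility."""
    return np.mean(returns) / np.std(returns)

def sortino(returns):
    """Eq (20): mean return over the volatility of negative returns only."""
    negative = [r for r in returns if r < 0]
    return np.mean(returns) / np.std(negative)

def max_drawdown(portfolio_values):
    """Largest peak-to-trough drop in portfolio value during the day."""
    running_peak = np.maximum.accumulate(portfolio_values)
    return float(np.max(running_peak - portfolio_values))

def pnl_to_map(open_pnl, inventories):
    """Eq (21): open P&L relative to the mean absolute inventory held."""
    return open_pnl / np.mean(np.abs(inventories))
```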
#### **6 Results**
The performance results for the 30 days of testing of the two Alpha-AS models against the
three baseline models are shown in Tables 2–5. All ratios are computed from Close P&L
returns (Section 4.1.6), except P&L-to-MAP, for which the open P&L is used. Figures in bold
(underlined) are the best (second best) values among the five models for the corresponding
test days. Figures for Alpha-AS 1 and 2 are given in green (red) if their value is higher (lower)
than that for the AS-Gen model for the same day. Higher (green) is better than lower (red) for
the Sharpe ratio, the Sortino ratio and P&L-to-MAP, while the opposite is true for Max DD.
The bottom row (‘Days best’) in each table totals the number of days for which each model
achieved the best score for the corresponding performance indicator. Figures in parentheses
are the number of days the Alpha-AS model in question (1 or 2) was second best only to the
other Alpha-AS model (and therefore would have counted another overall ‘win’ had it competed alone against the baseline and AS-Gen models).
Tables 2 to 5 show performance results over 30 days of test data, by indicator ( **2.** Sharpe
ratio; **3.** Sortino ratio; **4.** Max DD; **5.** P&L-to-MAP), for the two baseline models (FOIC and
LIIC), the Avellaneda-Stoikov model with genetically optimised parameters (AS-Gen) and the
two Alpha-AS models.
Table 6 compares the results of the Alpha-AS models, combined, against the two baseline
models and Gen-AS. The figures represent the percentage of wins of one among the models in
each group against all the models in the other group, for the corresponding performance
indicator.
A Kruskal-Wallis test shows that there are strongly significant differences across the models
for each of the four daily performance indicators ($H(4)_{\mathrm{Sharpe}} = 66.22$; $H(4)_{\mathrm{Sortino}} = 66.10$;
$H(4)_{\mathrm{Max\text{-}DD}} = 54.80$; $H(4)_{\mathrm{P\&L\text{-}to\text{-}MAP}} = 106.30$; $p < 10^{-10}$ in all cases).
**Table 2. Sharpe ratio.**

|Day|FOIC|LIIC|AS-Gen|Alpha-AS 1|Alpha-AS 2|
|---|---|---|---|---|---|
|1|-0.36|-0.31|-0.39|-0.10|0.10|
|2|-0.53|-0.51|-0.38|-0.05|-0.22|
|3|-0.29|-0.32|-0.25|-0.18|-0.02|
|4|-0.27|-0.36|-0.32|0.04|-0.10|
|5|-0.47|-0.51|-0.43|-0.06|-0.07|
|6|-0.53|-0.52|-0.42|-0.31|0.02|
|7|-0.42|-0.60|-0.51|-0.17|-0.07|
|8|-0.39|-0.46|-0.28|-0.18|-0.08|
|9|-0.29|-0.52|-0.39|-0.01|-0.09|
|10|-0.57|-0.51|-0.27|-0.46|-0.23|
|11|-0.32|-0.38|-0.43|-0.16|-0.06|
|12|-0.27|-0.57|-0.24|-0.07|0.01|
|13|-0.43|-0.32|-0.24|-0.01|-0.15|
|14|-0.51|-0.30|-0.20|0.17|-0.29|
|15|-0.37|-0.29|-0.26|-0.26|-0.12|
|16|-0.54|-0.14|-0.41|0.18|-0.06|
|17|-0.51|-0.40|-0.13|-0.11|-0.03|
|18|-0.46|-0.27|-0.22|-0.04|-0.12|
|19|-0.51|-0.47|-0.21|-0.07|0.00|
|20|-0.30|-0.31|-0.06|-0.03|-0.04|
|21|-0.57|-0.44|0.10|-0.41|-0.03|
|22|-0.49|0.02|-0.21|-0.12|-0.19|
|23|-0.57|-0.52|-0.28|-0.32|-0.28|
|24|-0.42|-0.50|-0.31|-0.36|-0.48|
|25|-0.51|-0.30|-0.04|0.19|-0.01|
|26|-0.51|-0.41|-0.04|-0.02|0.06|
|27|-0.35|-0.06|-0.01|0.00|-0.38|
|28|-0.30|-0.08|-0.18|-0.29|-0.28|
|29|-0.56|-0.16|0.04|-0.15|-0.20|
|30|-0.31|-0.39|-0.27|-0.04|-0.43|
|Days best|0|2|4|12 (+11)|12 (+7)|
|Median|-0.45|-0.39|-0.26|-0.09|-0.09|
|Mean|-0.43|-0.36|-0.24|-0.11|-0.13|
|Std. Dev.|0.10|0.16|0.15|0.16|0.14|
[https://doi.org/10.1371/journal.pone.0277042.t002](https://doi.org/10.1371/journal.pone.0277042.t002)
Post-hoc Mann-Whitney tests were conducted to analyse selected pairwise differences
between the models regarding these performance indicators. The results are summarised in
Table 7.
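A sketch of one such pairwise comparison, with Bonferroni correction over the four model comparisons and the effect size convention of Table 7 (r = Z/√30, with Z recovered from the two-sided p-value):

```python
import numpy as np
from scipy.stats import mannwhitneyu, norm

def mw_summary(a: np.ndarray, b: np.ndarray, n_comparisons: int = 4):
    # Two-sided Mann-Whitney U test between two models' daily indicator values.
    u, p = mannwhitneyu(a, b, alternative="two-sided")
    z = norm.isf(p / 2) * np.sign(u - len(a) * len(b) / 2)  # Z equivalent of U
    r = z / np.sqrt(len(a))  # effect size convention of Table 7 (n = 30 test days)
    return u, min(p * n_comparisons, 1.0), r  # U, Bonferroni-corrected p, r
```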
#### **Sharpe ratio**
The Sharpe ratio is a measure of mean returns that penalises their volatility. Table 2 shows that one or the other of the two Alpha-AS models achieved better Sharpe ratios, that is, better risk-adjusted returns, than all three baseline models on 24 (12+12) of the 30 test days. Furthermore, on 9 of the 12 days for which Alpha-AS-1 had the best Sharpe ratio, Alpha-AS-2 had the second best; conversely, there are 11 instances of Alpha-AS-1 performing second best after Alpha-AS-2.
**Table 3. Sortino ratio.**

|Day|FOIC|LIIC|AS-Gen|Alpha-AS 1|Alpha-AS 2|
|---|---|---|---|---|---|
|1|-0.34|-0.30|-0.38|-0.10|0.23|
|2|-0.47|-0.46|-0.36|-0.08|-0.22|
|3|-0.29|-0.32|-0.26|-0.18|-0.02|
|4|-0.27|-0.35|-0.34|0.10|-0.10|
|5|-0.44|-0.46|-0.42|-0.06|-0.07|
|6|-0.48|-0.47|-0.40|-0.30|0.03|
|7|-0.39|-0.52|-0.46|-0.16|-0.07|
|8|-0.37|-0.42|-0.27|-0.19|-0.08|
|9|-0.28|-0.48|-0.38|-0.02|-0.09|
|10|-0.50|-0.47|-0.28|-0.42|-0.23|
|11|-0.31|-0.36|-0.39|-0.16|-0.07|
|12|-0.26|-0.50|-0.24|-0.08|0.01|
|13|-0.40|-0.32|-0.25|-0.01|-0.21|
|14|-0.46|-0.29|-0.20|0.47|-0.28|
|15|-0.35|-0.28|-0.27|-0.27|-0.12|
|16|-0.49|-0.16|-0.41|1.06|-0.06|
|17|-0.45|-0.38|-0.14|-0.15|-0.03|
|18|-0.43|-0.26|-0.23|-0.07|-0.12|
|19|-0.47|-0.44|-0.24|-0.07|0.00|
|20|-0.32|-0.31|-0.08|-0.04|-0.05|
|21|-0.50|-0.41|0.19|-0.39|-0.03|
|22|-0.45|0.04|-0.25|-0.12|-0.20|
|23|-0.50|-0.47|-0.29|-0.32|-0.27|
|24|-0.39|-0.46|-0.30|-0.34|-0.43|
|25|-0.46|-0.29|-0.04|0.46|-0.01|
|26|-0.47|-0.39|-0.05|-0.05|0.08|
|27|-0.34|-0.08|-0.01|0.00|-0.36|
|28|-0.29|-0.11|-0.21|-0.30|-0.28|
|29|-0.49|-0.16|0.06|-0.15|-0.19|
|30|-0.30|-0.37|-0.28|-0.08|-0.40|
|Days best|0|2|3|12 (+10)|13 (+9)|
|Median|-0.42|-0.37|-0.27|-0.09|-0.09|
|Mean|-0.40|-0.34|-0.24|-0.07|-0.12|
|Std. Dev.|0.08|0.14|0.15|0.29|0.15|
[https://doi.org/10.1371/journal.pone.0277042.t003](https://doi.org/10.1371/journal.pone.0277042.t003)
Thus, the Alpha-AS models came 1st *and* 2nd on 20 out of the 30 test days (67%). The AS-Gen model was a distant third, with 4 wins on Sharpe. The mean and the median of the Sharpe ratio over all test days were better for both Alpha-AS models than for the Gen-AS model (although the statistical significance of the difference was at best marginal after Bonferroni correction), and in turn the Gen-AS model performed significantly better on Sharpe than the two non-AS baselines.
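For reference, the daily Sharpe ratio as used here can be computed from the Close P&L return series; a minimal sketch (no risk-free rate adjustment is assumed):

```python
import numpy as np

def sharpe(returns: np.ndarray) -> float:
    # Mean return divided by the volatility of returns over the day.
    return float(returns.mean() / returns.std())
```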
#### **Sortino ratio**
Similarly, on the Sortino ratio, one or the other of the two Alpha-AS models performed better, that is, obtained better downside-risk-adjusted returns, than all the baseline models on 25 (12+13) of the 30 days. Again, on 9 of the 12 days for which Alpha-AS-1 had the best Sortino ratio, Alpha-AS-2 had the second best.
**Table 4. Maximum drawdown.**
|Day|FOIC|LIIC|AS-Gen|Alpha-AS 1|Alpha-AS 2|
|---|---|---|---|---|---|
|1|39.41|39.56|4.05|0.07|6.99|
|2|28.72|31.20|3.87|0.64|7.66|
|3|31.36|10.47|0.45|2.61|0.00|
|4|18.29|20.47|3.05|0.08|8.09|
|5|27.76|35.76|4.04|0.10|0.16|
|6|17.37|16.20|2.97|5.07|0.71|
|7|17.59|25.05|6.68|0.39|0.11|
|8|96.81|90.21|22.48|4.97|0.23|
|9|111.75|162.30|6.48|0.06|127.32|
|10|94.41|77.03|1.11|45.51|9.76|
|11|95.33|149.33|18.17|4.64|0.28|
|12|24.03|60.17|4.57|2.13|5.70|
|13|69.68|30.26|3.39|1,869.89|12.07|
|14|92.46|43.99|3.35|3.74|59.63|
|15|43.85|24.67|2.88|41.24|2.42|
|16|38.43|5.49|3.74|0.22|0.48|
|17|131.80|101.02|1.62|16.98|0.03|
|18|141.45|56.23|9.43|6.48|1.16|
|19|200.47|259.65|3.94|118.57|21.22|
|20|21.76|22.28|0.86|0.03|0.13|
|21|93.37|61.43|1.98|45.03|0.01|
|22|118.91|1.02|4.45|1.15|28.48|
|23|64.97|68.67|4.24|21.86|32.02|
|24|476.14|703.11|26.64|153.98|509.06|
|25|222.26|115.24|0.03|9.48|20.24|
|26|555.30|245.81|0.84|65.28|97.17|
|27|200.44|13.29|0.19|6.37|110.83|
|28|84.33|6.23|3.90|42.85|67.48|
|29|353.84|27.23|3.27|1.94|455.20|
|30|309.28|365.19|14.28|108.75|554.01|
|Days best|0|1|11|9 (+4)|9 (+3)|
|Median|92.92|41.78|3.81|5.02|7.88|
|Mean|127.39|95.62|5.57|86.00|71.29|
|Std. Dev.|135.60|142.99|6.48|339.22|152.00|
[https://doi.org/10.1371/journal.pone.0277042.t004](https://doi.org/10.1371/journal.pone.0277042.t004)
Likewise, on 10 of the 13 test days for which Alpha-AS-2 obtained the best Sortino ratio, Alpha-AS-1 performed second best. The two Alpha-AS models thus occupied the top two positions on 19 days. Meanwhile, AS-Gen, again the best of the rest, won on Sortino on only 3 test days. The mean and the median of the Sortino ratio were better for both Alpha-AS models than for the Gen-AS model (again with only marginal statistical significance), and the Gen-AS values were in turn significantly better than those of the two non-AS baselines.
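The Sortino computation differs from the Sharpe ratio only in penalising downside volatility; a minimal sketch, assuming a target return of zero:

```python
import numpy as np

def sortino(returns: np.ndarray) -> float:
    # Mean return divided by the standard deviation of negative returns only.
    downside = returns[returns < 0]
    return float(returns.mean() / downside.std())  # assumes >= 2 negative returns
```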
#### **Maximum drawdown**
Maximum drawdown (Max DD) registers the largest loss of portfolio value between any two points of a full day of trading.
**Table 5. P&L-to-MAP.**

|Day|FOIC|LIIC|AS-Gen|Alpha-AS 1|Alpha-AS 2|
|---|---|---|---|---|---|
|1|-201,046.82|-124,815.40|-1,643.93|-0.20|21.51|
|2|-37,830.20|-244,586.13|-1,064.75|-1.10|-16.40|
|3|-85,397.04|-10,492.71|-13.44|-5.96|-0.01|
|4|-76,534.03|-39,925.66|-3,370.04|0.20|-18.10|
|5|-31,910.31|-91,931.80|-1,171.24|-0.15|-0.30|
|6|-73,352.04|-21,918.20|-1,053.34|-14.44|7.50|
|7|-29,077.48|-87,900.74|-39,443.20|-0.65|-0.18|
|8|-114,697.71|-163,935.67|-43,171.85|-11.53|-0.36|
|9|-265,431.69|-2,658,430.60|-47,010.74|-0.04|-300.74|
|10|-679,194.37|-666,451.16|-37.22|-283.04|-19.79|
|11|-180,195.11|-316,767.04|-15,074.68|-7.21|-0.62|
|12|-153,020.33|-204,990.67|-50,665.89|-2.37|2.46|
|13|-280,686.62|-92,545.21|-5,358.47|-14,041.08|-54.67|
|14|-348,744.49|-132,878.74|-2,080.53|62.85|-121.49|
|15|-238,994.79|-62,593.42|-1,221.36|-1,001.73|-3.29|
|16|-85,973.96|-6,453.16|-2,522.39|10.60|-0.70|
|17|-583,164.73|-2,118,249.56|-2,779.62|-26.71|-0.04|
|18|-255,274.76|-262,997.46|-16,438.08|-7.40|-1.70|
|19|-973,167.02|-539,787.43|-2,288.63|-590.29|-1.57|
|20|-18,379.71|-40,276.07|-1,939.48|-0.06|-0.19|
|21|-258,629.43|-133,033.02|1,695.84|-287.66|-0.01|
|22|-311,208.74|749.20|-5,558.30|-2.83|-44.51|
|23|-62,458.91|-176,342.18|-9,187.70|-153.45|-51.84|
|24|-59,514,101.49|-1,377,661.44|-93,473.30|-1,490.48|-1,531.04|
|25|-163,066.66|-693,103.94|-0.88|163.63|-3.98|
|26|-9,915,984.62|-758,394.51|-414.18|-67.99|298.49|
|27|-171,010.82|-9,592.09|-0.97|0.41|-280.94|
|28|-69,672.79|-9,169.87|-12,425.81|-801.50|-152.62|
|29|-236,205.98|-49,300.73|2,309.20|-5.30|-1,282.23|
|30|-173,457.54|-746,529.51|-13,023.30|-136.72|-1,404.35|
|Days best|0|1|2|11 (+14)|16 (+9)|
|Median|-176,826.33|-132,955.88|-2,405.51|-6.59|-2.50|
|Mean|-2,519,595.67|-394,676.83|-12,280.94|-623.41|-165.39|
|Std. Dev.|10,910,922.98|630,895.76|21,519.58|2,559.30|433.33|
[https://doi.org/10.1371/journal.pone.0277042.t005](https://doi.org/10.1371/journal.pone.0277042.t005)
By identifying the largest losses from any peak within each day, this indicator can be leveraged to monitor and learn from downward trends in rewards that stretch longer than those captured by the Sortino ratio, and to penalise the actions that led to them in the market context in which they were taken.
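In absolute-value terms, as reported in Table 4, Max DD can be computed from the intraday portfolio value series; a minimal sketch:

```python
import numpy as np

def max_drawdown(portfolio_value: np.ndarray) -> float:
    # Largest peak-to-trough drop in portfolio value over the trading day.
    running_peak = np.maximum.accumulate(portfolio_value)
    return float(np.max(running_peak - portfolio_value))
```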
On this performance indicator, AS-Gen was the overall best-performing model, winning on 11 days. The mean Max DD for the AS-Gen model over the entire test period was visibly the lowest (best).
**Table 6. Number of days either Alpha-AS-1 or Alpha-AS-2 scored best out of all tested models, for each of the four performance indicators.**

||Sharpe|Sortino|Max DD|P&L-to-MAP|
|---|---|---|---|---|
|1st and 2nd place days for Alpha-AS 1 & 2|20|19|7|23|
[https://doi.org/10.1371/journal.pone.0277042.t006](https://doi.org/10.1371/journal.pone.0277042.t006)
**Table 7. Mann-Whitney tests comparing the four daily performance indicator values (Sharpe, Sortino, Max DD and P&L-to-MAP) obtained for the Gen-AS model with the corresponding values obtained for the other models, over the 30 test days.** (Reported: Mann-Whitney U, significance level (p, with Bonferroni correction) and effect size r = Z/√30.)

|Gen-AS vs.:|Sharpe|Sortino|Max DD|P&L-to-MAP|
|---|---|---|---|---|
|FOIC|U = 128.5; p < 10⁻⁵; r = 0.87|U = 139.5; p < 10⁻⁴; r = 0.84|U = 11; p < 10⁻⁹; r = 1.18|U = 26; p < 10⁻⁸; r = 1.14|
|LIIC|U = 238; p < .05; r = 0.57|U = 247; p < .05; r = 0.55|U = 56; p < 10⁻⁷; r = 1.06|U = 85; p < 10⁻⁶; r = 0.96|
|Alpha-AS-1|U = 253; p < .1; r = −0.53|U = 255.5; p < .1; r = −0.53|U = 378.5; p > .2|U = 147; p < 10⁻⁴; r = −0.82|
|Alpha-AS-2|U = 260.5; p < .1; r = −0.51|U = 244; p < .05; r = −0.56|U = 366.5; p > .2|U = 85; p < 10⁻⁴; r = −0.86|
[https://doi.org/10.1371/journal.pone.0277042.t007](https://doi.org/10.1371/journal.pone.0277042.t007)
Its standard deviation was also by far the lowest among all the models. In comparison, both the mean and the standard deviation of the Max DD for the Alpha-AS models were very high. However, accounting for the day victories that Alpha-AS 1 and 2 'stole' from one another, they would have achieved the best Max DD performance on 13 and 12 of the test days, respectively, both slightly better than AS-Gen. Indeed, the differences in Max DD performance between Gen-AS and either of the Alpha-AS models, over all test days, are not statistically significant, despite the large differences in means. The latter result from extreme outliers on days when the Alpha-AS models obtained a very poor (i.e., high) value for Max DD. The medians, in contrast, are very similar to the median for the Gen-AS model.
Nevertheless, it is still interesting to note that AS-Gen performs much better on this indicator than on the others, relative to the Alpha-AS models. To understand why this may be so, we recall that AS-Gen does not alter the risk aversion parameter after it has been set, through genetic selection, to the value that performs best on the initial test data, nor does it modify the spread given by the AS formulas, which is mathematically optimal to the extent that its parameter values are realistic. This means that, *provided its parameter values describe the market environment closely enough*, the pure AS model is guaranteed to output the bid and ask prices that minimise inventory risk, and *any* deviation from this strategy will entail a greater risk. Throughout a full day of trading, it is more likely than within shorter time frames that there will be intervals during which the market is indeed closely matched by the AS formula parameters. The greater inventory risk taken by the Alpha-AS models during such intervals can be punished with greater losses. Occasionally the losses may be large (as an example, Table 4 reveals that Alpha-AS-1 suffered an exceptionally large Max DD of 1,869.89 on test day 13), though further testing would be required to ascertain whether these extreme values are actually outliers due to chance alone. Conversely, the gains may also be greater, a benefit which is indeed reflected unequivocally in the results obtained for the P&L-to-MAP performance indicator.
#### **P&L-to-MAP**
On the P&L-to-MAP ratio, Alpha-AS-1 was the best-performing model on 11 test days, with Alpha-AS-2 coming second on 9 of them, whereas Alpha-AS-2 was the best-performing model on P&L-to-MAP on 16 of the test days, with Alpha-AS-1 coming second on 14 of these. The single best-performing model here was Alpha-AS-2, winning on 16 days and coming second on 10 (losing on 9 of these to Alpha-AS-1). Alpha-AS-1 had 11 victories and placed second 16 times (losing to Alpha-AS-2 on 14 of these). AS-Gen had the best P&L-to-MAP ratio on only 2 of the test days, coming second on another 4. The mean and the median P&L-to-MAP ratios were very significantly better for both Alpha-AS models than for the Gen-AS model.
**Table 8. Comparison of values for Max DD and P&L-to-MAP between the Gen-AS model and the Alpha-AS models (αAS1 and αAS2).** The "Sign comparison of value differences" side of the table (right) highlights in green (Alpha-AS "better") the test days for which the respective Alpha-AS model performed worse on Max DD but better on P&L-to-MAP relative to the Gen-AS model, the latter being the more desirable indicator in which to perform well (since maximising the P&L profile is the central point of the AS method). Conversely, test days for which the Alpha-AS models did worse than Gen-AS on P&L-to-MAP in spite of performing better on Max DD are highlighted in red (Alpha-AS "worse").

|Day|Max DD diff. (GenAS−αAS1)|Max DD diff. (GenAS−αAS2)|P&L-to-MAP diff. (GenAS−αAS1)|P&L-to-MAP diff. (GenAS−αAS2)|Sign comparison (αAS1)|Sign comparison (αAS2)|
|---|---|---|---|---|---|---|
|1|3.98|-2.94|1,643.73|1,665.44|Same|Opposite (α better)|
|2|3.23|-3.79|1,063.65|1,048.35|Same|Opposite (α better)|
|3|-2.16|0.45|7.48|13.43|Opposite (α better)|Same|
|4|2.97|-5.04|3,370.24|3,351.94|Same|Opposite (α better)|
|5|3.94|3.88|1,171.09|1,170.94|Same|Same|
|6|-2.10|2.26|1,038.90|1,060.84|Opposite (α better)|Same|
|7|6.29|6.57|39,442.55|39,443.02|Same|Same|
|8|17.51|22.25|43,160.32|43,171.49|Same|Same|
|9|6.42|-120.84|47,010.70|46,710.00|Same|Opposite (α better)|
|10|-44.40|-8.65|-245.82|17.43|Same|Opposite (α better)|
|11|13.53|17.89|15,067.47|15,074.06|Same|Same|
|12|2.44|-1.13|50,663.52|50,668.35|Same|Opposite (α better)|
|13|-1,866.50|-8.68|-8,682.61|5,303.80|Same|Opposite (α better)|
|14|-0.39|-56.28|2,143.38|1,959.04|Opposite (α better)|Opposite (α better)|
|15|-38.36|0.46|219.63|1,218.07|Opposite (α better)|Same|
|16|3.52|3.26|2,532.99|2,521.69|Same|Same|
|17|-15.36|1.59|2,752.91|2,779.58|Opposite (α better)|Same|
|18|2.95|8.27|16,430.68|16,436.38|Same|Same|
|19|-114.63|-17.28|1,698.34|2,287.06|Opposite (α better)|Opposite (α better)|
|20|0.83|0.73|1,939.42|1,939.29|Same|Same|
|21|-43.05|1.97|-1,983.50|-1,695.85|Same|Opposite (α worse)|
|22|3.30|-24.03|5,555.47|5,513.79|Same|Opposite (α better)|
|23|-17.62|-27.78|9,034.25|9,135.86|Opposite (α better)|Opposite (α better)|
|24|-127.34|-482.42|91,982.82|91,942.26|Opposite (α better)|Opposite (α better)|
|25|-9.45|-20.21|164.51|-3.10|Opposite (α better)|Same|
|26|-64.44|-96.33|346.19|712.67|Opposite (α better)|Opposite (α better)|
|27|-6.18|-110.64|1.38|-279.97|Opposite (α better)|Same|
|28|-38.95|-63.58|11,624.31|12,273.19|Opposite (α better)|Opposite (α better)|
|29|1.33|-451.93|-2,314.50|-3,591.43|Opposite (α worse)|Same|
|30|-94.47|-539.73|12,886.58|11,618.95|Opposite (α better)|Opposite (α better)|
|Days αASx better / worse|14 / 16|12 / 18|26 / 4|26 / 4|13 / 1|15 / 1|
[https://doi.org/10.1371/journal.pone.0277042.t008](https://doi.org/10.1371/journal.pone.0277042.t008)
On the whole, the Alpha-AS models do the better job of accruing gains while keeping inventory levels under control.
Table 8 provides further insight by combining the results for Max DD and P&L-to-MAP. From the negative values (highlighted in red) in the Max DD columns, we see that Alpha-AS-1 had a larger Max DD (i.e., performed worse) than Gen-AS on 16 of the 30 test days. However, on 13 of those days Alpha-AS-1 achieved a better P&L-to-MAP score than Gen-AS, substantially so in many instances. Only on one day (day 29) was the trend reversed, with Gen-AS performing slightly worse than Alpha-AS-1 on Max DD, but better on P&L-to-MAP. The comparison with Alpha-AS-2 follows the same pattern.
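The sign comparison of Table 8 can be derived directly from the raw per-day indicator values; a sketch, with array names as illustrative placeholders (lower Max DD and higher P&L-to-MAP are better):

```python
import numpy as np

def sign_comparison(dd_gen, dd_alpha, pnl_gen, pnl_alpha):
    # Per-day classification: True where the Alpha-AS model did worse on Max DD
    # yet better on P&L-to-MAP than Gen-AS (alpha_better), and vice versa.
    alpha_better = (np.asarray(dd_alpha) > dd_gen) & (np.asarray(pnl_alpha) > pnl_gen)
    alpha_worse = (np.asarray(dd_alpha) < dd_gen) & (np.asarray(pnl_alpha) < pnl_gen)
    return alpha_better, alpha_worse
```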
From these considerations we may conclude that, while the Alpha-AS models take greater
risks with their bid and ask prices, hence the comparatively poor performance on Max DD,
they nevertheless obtain much better profit-to-inventory ratios (P&L-to-MAP), thus displaying superior inventory risk management compared to the baseline models.
Two important observations can be drawn from these results:
1. Gen-AS performs better than the baseline models, as expected from a model that is
designed to place bid and ask prices that minimize inventory risk optimally (by mathematical construction) given a set of parameter values that are themselves optimized periodically
from market data using a genetic algorithm.
2. Overall, both Alpha-AS models obtain higher and more stable returns, as well as a better
P&L-to-inventory profile than AS-Gen and the non-AS baseline models. That is, they
achieve a better P&L profile with less exposure to market movements.
The latter is an important feature for market-making algorithms. Indeed, this result is particularly noteworthy, as the Avellaneda-Stoikov method sets as its goal precisely to minimise inventory risk. Nevertheless, the flexibility that the Alpha-AS models are given to move and stretch the bid-ask spread means that they can, and sometimes do, operate locally with higher risk. This sometimes leads to poorer performance indicator values, most notably a higher Max DD. Since Max DD is a high-watermark record of peak-to-trough portfolio value drops throughout a full day of trading, it provides a snapshot of overall performance which reveals that the Alpha-AS models may operate with more aggressive bid and ask quotes than regular AS (albeit with the non-regular feature of genetically tuned parameters). Overall performance is more meaningfully captured by the other indicators (Sharpe, Sortino and P&L-to-MAP), which show that, at the end of the day, the Alpha-AS models' strategy pays off.
No significant differences were found between the two Alpha-AS models.
#### **7 Conclusions**
Reinforcement learning algorithms have been shown to be well-suited for use in high-frequency trading (HFT) contexts [16, 24–26, 37, 45, 46], which require low latency in placing orders together with a dynamic logic that is able to adapt to a rapidly changing environment. In the literature, reinforcement learning approaches to market making typically employ models that act directly on the agent's order prices, without taking advantage of knowledge we may have of market behaviour or, indeed, of findings in market-making theory. These models must therefore learn everything about the problem at hand, and the learning curve is steeper and slower to surmount than if relevant available knowledge were leveraged to guide them.
We have designed a market-making agent that relies on the Avellaneda-Stoikov procedure to minimise inventory risk. It does so by acting on the risk aversion parameter of the Avellaneda-Stoikov equations and using these equations to calculate the bid and ask prices that are optimal for the chosen level of risk aversion, insofar as the other parameters in the equations reflect the market environment accurately. The agent can also skew the bid and ask prices output by the Avellaneda-Stoikov procedure, tweaking them and thereby potentially counteracting the limitations of a static Avellaneda-Stoikov model by reacting to local market conditions. The agent learns to adapt its risk aversion and to skew its bid and ask prices under varying market behaviour through reinforcement learning, using two variants (Alpha-AS-1 and Alpha-AS-2) of a double DQN architecture. The central notion is that, by relying on a procedure developed to minimise inventory risk (the Avellaneda-Stoikov procedure) as prior knowledge, the RL agent can learn more quickly and effectively.
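For reference, the closed-form quotes the agent acts on can be sketched as follows; the formulas are the standard Avellaneda-Stoikov (2008) expressions, while the `skew` argument is an illustrative stand-in for the agent's skew action:

```python
import math

def as_quotes(s, q, gamma, sigma, kappa, t, T, skew=0.0):
    # s: mid-price, q: inventory, gamma: risk aversion, sigma: volatility,
    # kappa: order-book liquidity parameter, t/T: current and terminal times.
    r = s - q * gamma * sigma**2 * (T - t)  # reservation price
    spread = gamma * sigma**2 * (T - t) + (2.0 / gamma) * math.log(1.0 + gamma / kappa)
    return r - spread / 2 + skew, r + spread / 2 + skew  # (bid, ask)
```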
A second contribution is the setting of the initial parameters of the Avellaneda-Stoikov procedure by means of a genetic algorithm working with real backtest data. This is an efficient
way of arriving at quasi-optimal values for these parameters given the market environment in
which the agent begins to operate. From this point, the RL agent can gradually diverge as it
learns by operating in the changing market.
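A minimal sketch of such a search with DEAP [40]; the three-parameter genome, its bounds and the `backtest_sharpe` scoring stub are illustrative assumptions, not the study's actual configuration:

```python
import random
from deap import algorithms, base, creator, tools

def backtest_sharpe(gamma, kappa, horizon):
    # Hypothetical stand-in: run a one-day backtest with these AS parameters
    # and return its Sharpe ratio; replace with the real backtesting call.
    return -abs(gamma - 0.3) - abs(kappa - 1.5) - abs(horizon - 1.0)

creator.create("FitnessMax", base.Fitness, weights=(1.0,))
creator.create("Individual", list, fitness=creator.FitnessMax)

toolbox = base.Toolbox()
toolbox.register("attr", random.uniform, 0.01, 2.0)  # illustrative parameter bounds
toolbox.register("individual", tools.initRepeat, creator.Individual, toolbox.attr, n=3)
toolbox.register("population", tools.initRepeat, list, toolbox.individual)
toolbox.register("evaluate", lambda ind: (backtest_sharpe(*ind),))
toolbox.register("mate", tools.cxBlend, alpha=0.5)
toolbox.register("mutate", tools.mutGaussian, mu=0.0, sigma=0.1, indpb=0.2)
toolbox.register("select", tools.selTournament, tournsize=3)

pop, _ = algorithms.eaSimple(toolbox.population(n=50), toolbox,
                             cxpb=0.5, mutpb=0.2, ngen=20, verbose=False)
best = tools.selBest(pop, k=1)[0]  # quasi-optimal AS parameters for the day
```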
Backtests were performed on 30 days of bitcoin-dollar (BTC-USD) market data, comparing the performance of the Alpha-AS models with that of two standard baseline models and a third baseline model implementing the Avellaneda-Stoikov procedure without an RL agent tweaking its parameters or its output bid and ask prices. This Avellaneda-Stoikov baseline model (Gen-AS) constitutes another original contribution, to our knowledge, in that its parameters are optimised using a genetic algorithm working on a day's worth of data prior to the test data. The genetic algorithm selects the best-performing values (on the Sharpe ratio) found for the Gen-AS parameters on the corresponding day of data. This procedure helps establish AS parameter values that fit initial market conditions. The same set of parameters obtained for the Gen-AS model is used to initialise the Alpha-AS models, the goal being to offer a fair comparison between them. By training with full-day backtests on real data respecting real-time activity latencies, the models obtained are readily adaptable for use in a real market trading environment.
The performance of the Alpha-AS models in terms of the Sharpe, Sortino and P&L-to-MAP ratios (particularly the latter) was substantially superior to that of the Gen-AS model, which in turn was superior to that of the two standard baselines. On the other hand, the performance of the Alpha-AS models on maximum drawdown varied significantly across test days, losing to Gen-AS on over half of them, a reflection of their greater aggressiveness, made possible by their relative freedom of action. Overall, however, the days of substantially better performance relative to the non-Alpha-AS models far outweigh those with poorer results, and at the end of the day the Alpha-AS models clearly achieved the best and least exposed P&L profiles. The approach, therefore, seems promising.
The results obtained suggest avenues to explore for further improvement. Drawdowns were our algorithm's most apparent weakness. They can be addressed in various ways. First, the reward function can be tweaked to penalise drawdowns more directly. Other indicators, such as the Sortino ratio, can also be used in the reward function itself. Another approach is to explore risk management policies that include discretionary rules. Alternatively, experimenting with further layers to learn such policies autonomously may ultimately yield greater benefits, as indeed may simply altering the number of layers and neurons, or the loss functions, in the current architecture.
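As one hypothetical illustration of the first option (the penalty form and weight are assumptions, not the design used in this study):

```python
def shaped_reward(pnl_change: float, drawdown_increase: float, dd_weight: float = 0.5) -> float:
    # Hypothetical reward shaping: step P&L minus a penalty on any new drawdown depth.
    return pnl_change - dd_weight * drawdown_increase
```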
Our motivation for continuing to base the trading actions on the AS formulas (rather than having the RL-based agent determine the quotes directly) is that these formulas furnish approximations to the theoretically optimal bid and ask quotes, albeit based on assumptions regarding the statistical behaviour of the market which may fall short of being realistic (as has been observed, e.g., in [25]). This potential weakness of the analytical AS approach notwithstanding, we believe the theoretical optimality of its output approximations is not to be undervalued. On the contrary, we find value in using it as a starting point from which to diverge dynamically, taking into account the most recent market behaviour.
The original Avellaneda-Stoikov model was chosen as a starting point for our research. Notable refinements of the AS approach have since been proposed, such as Guéant's [5] closed-form solution to the market-maker problem for both single and multiple assets, modelling the mid-price and trades as Brownian motions, and Bergault et al.'s more recent contribution [6], also inspired by the Guéant-Lehalle-Fernandez-Tapia approximations [4]. We plan to use such approximations in further tests with our RL approach.
The training of the neural network has room for improvement through systematic optimisation of the network's parameters. Characterisation of different market conditions, and specific training under them with appropriate data (including carefully crafted synthetic data), can also broaden and improve the agent's strategic repertoire. The agent's action space itself can potentially be enriched profitably, by adding more values for the agent to choose from and by making more parameters settable by the agent, beyond the two used in the present study (i.e., risk aversion and skew). Here we have simply chosen finite value sets for these two parameters that we deem reasonable for modelling trading strategies of differing levels of risk. This helps to keep the models simple and shortens the training time of the neural network, allowing us to test the idea of combining the Avellaneda-Stoikov procedure with reinforcement learning. The results obtained in this fashion encourage us to explore refinements such as models with continuous action spaces. Similarly, the suite of state features may be extended to include other signals, including sentiment indicators and typical HFT indicators such as Probability of Informed Trading (PIN) and Volume-Synchronized Probability of Informed Trading (VPIN), which can help to uncover dynamics driven by informed traders [35]. The logic of the Alpha-AS model might also be adapted to exploit alpha signals [47].
We relied on random forests to filter state-defining features based on their importance according to three indicators. Various other techniques are worth exploring for this purpose in future work, such as PCA, autoencoders, Shapley values [48] or Clustered Feature Importance (CFI) [49]. Other modifications to the neural network architectures presented here may also prove advantageous; we mention neuroevolution, to train the neural network using genetic algorithms [50], and adversarial networks [24], to improve the robustness of the market-making algorithm.
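A sketch of this kind of feature filtering with scikit-learn, using impurity-based importance as one possible indicator (the data arrays are placeholders):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))             # placeholder: candidate state features
y = 0.8 * X[:, 0] + rng.normal(size=500)   # placeholder: learning target

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
top_k = np.argsort(rf.feature_importances_)[::-1][:8]  # indices of the 8 most important features
```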
In future work we will experiment with combining these ideas. We also plan to compare the performance of the Alpha-AS models with that of leading RL models in the literature that do not work with the Avellaneda-Stoikov procedure.
#### **Author Contributions**
**Conceptualization:** Javier Falces Marin, David Díaz Pardo de Vera.
**Data curation:** Javier Falces Marin.
**Formal analysis:** Javier Falces Marin, Eduardo Lopez Gonzalo.
**Funding acquisition:** Eduardo Lopez Gonzalo.
**Investigation:** Javier Falces Marin.
**Methodology:** Eduardo Lopez Gonzalo.
**Project administration:** David Díaz Pardo de Vera.
**Resources:** Javier Falces Marin.
**Software:** Javier Falces Marin.
**Supervision:** David Díaz Pardo de Vera, Eduardo Lopez Gonzalo.
**Validation:** Javier Falces Marin, Eduardo Lopez Gonzalo.
**Visualization:** Javier Falces Marin.
**Writing – original draft:** Javier Falces Marin.
**Writing – review & editing:** David Díaz Pardo de Vera, Eduardo Lopez Gonzalo.
#### **References**
**1.** Foucault T, Pagano M, Roell A. Market Liquidity: Theory, Evidence, and Policy. New York: Oxford University Press; 2013. https://doi.org/10.1093/acprof:oso/9780199936243.001.0001
**2.** Avellaneda M, Stoikov S. High-frequency trading in a limit order book. Quant Finance. 2008; 8: 217–224. https://doi.org/10.1080/14697680701381228
**3.** Aldridge I. High-Frequency Trading: A Practical Guide to Algorithmic Strategies and Trading Systems. Manuscript, Toulouse University, IDEI. 2009. pp. 1–839.
**4.** Guéant O, Lehalle C-A, Tapia JF. Dealing with the Inventory Risk. A solution to the market making problem. 2011. https://doi.org/10.1007/s11579-012-0087-0
**5.** Guéant O. Optimal market making. Appl Math Finance. 2017; 24: 112–154. https://doi.org/10.1080/1350486X.2017.1342552
**6.** Bergault P, Evangelista D, Guéant O, Vieira D. Closed-form Approximations in Multi-asset Market Making. Appl Math Finance. 2021; 28: 101–142. https://doi.org/10.1080/1350486X.2021.1949359
**7.** Creamer G. Model calibration and automated trading agent for Euro futures. Quant Finance. 2012; 12: 531–545. https://doi.org/10.1080/14697688.2012.664921
**8.** Creamer G, Freund Y. A Boosting Approach for Automated Trading. The Journal of Trading. 2007; 2: 84–96. https://doi.org/10.3905/jot.2007.688953
**9.** Heaton JB, Polson NG, Witte JH. Deep Portfolio Theory. 2016; 1–17. Available: https://www.researchgate.net/publication/303494294_Deep_Portfolio_Theory
**10.** Asness CS. The Siren Song of Factor Timing. Journal of Portfolio Management. 2016; Special Issue. https://doi.org/10.2139/ssrn.2763956
**11.** Feng G, He J. Factor Investing: Hierarchical Ensemble Learning. SSRN Electronic Journal. 2019; 1–29. https://doi.org/10.2139/ssrn.3326617
**12.** Houweling P, van Zundert J. Factor Investing in the Corporate Bond Market. Financial Analysts Journal. 2017; 73. https://doi.org/10.2139/ssrn.2516322
**13.** Kakushadze Z. 101 Formulaic Alphas. Wilmott. 2016; 72–81. https://doi.org/10.1002/wilm.10525
**14.** Collado RA, Creamer GG. Time series forecasting with a learning algorithm: an approximate dynamic programming approach. 22nd International Conference on Computational Statistics (COMPSTAT). 2016; 1–21.
**15.** Asness CS, Liew JM, Pedersen LH, Thapar AK. Deep Value. The Journal of Portfolio Management. 2021; 47: 11–40. https://doi.org/10.3905/jpm.2021.1.215
**16.** Nevmyvaka Y, Yi F, Kearns M. Reinforcement learning for optimized trade execution. ACM International Conference Proceeding Series. 2006; 148: 673–680. https://doi.org/10.1145/1143844.1143929
**17.** Buehler H, Gonon L, Teichmann J, Wood B. Deep hedging. Quant Finance. 2019; 19: 1271–1291. https://doi.org/10.1080/14697688.2019.1571683
**18.** Franco-Pedroso J, Gonzalez-Rodriguez J, Planas M, Cubero J, Cobo R, Pablos F. The ETS Challenges: A Machine Learning Approach to the Evaluation of Simulated Financial Time Series for Improving Generation Processes. Institutional Investor Journals Umbrella. 2019; 1: 68–86. https://doi.org/10.3905/jfds.2019.1.3.068
**19.** Sutton RS, Barto AG. Reinforcement Learning. 2nd ed. The MIT Press; 2005.
**20.** Silver D, Huang A, Maddison CJ, Guez A, Sifre L, van den Driessche G, et al. Mastering the game of Go with deep neural networks and tree search. Nature. 2016; 529: 484–489. https://doi.org/10.1038/nature16961 PMID: 26819042
**21.** Mnih V, Kavukcuoglu K, Silver D, Graves A, Antonoglou I, Wierstra D, et al. Playing Atari with Deep Reinforcement Learning. 2013.
**22.** Lee JW, Park J, O J, Lee J, Hong E. A multiagent approach to Q-learning for daily stock trading. IEEE Transactions on Systems, Man, and Cybernetics Part A: Systems and Humans. 2007; 37: 864–877. https://doi.org/10.1109/TSMCA.2007.904825
**23.** Juchli M. Limit order placement optimization with Deep Reinforcement Learning: Learning from patterns in cryptocurrency market data. 2018. Available: https://repository.tudelft.nl/islandora/object/uuid%3Ae2e99579-541b-4b5a-8cbb-36ea17a4a93a
**24.** Spooner T, Savani R. Robust Market Making via Adversarial Reinforcement Learning. 29th International Joint Conference on Artificial Intelligence. 2020. https://doi.org/10.24963/ijcai.2020/626
**25.** Gasperov B, Kostanjcar Z. Market Making with Signals through Deep Reinforcement Learning. IEEE Access. 2021; 9: 61611–61622. https://doi.org/10.1109/ACCESS.2021.3074782
**26.** Kumar P. Deep Recurrent Q-Networks for Market Making.
**27.** Guéant O, Manziuk I. Deep Reinforcement Learning for Market Making in Corporate Bonds: Beating the Curse of Dimensionality. Appl Math Finance. 2019; 26: 387–452. https://doi.org/10.1080/1350486X.2020.1714455
**28.** Gasperov B, Kostanjcar Z. Deep Reinforcement Learning for Market Making Under a Hawkes Process-Based Limit Order Book Model. IEEE Control Syst Lett. 2022; 6: 2485–2490. https://doi.org/10.1109/LCSYS.2022.3166446
**29.** Gašperov B, Begušić S, Šimović PP, Kostanjčar Z. Reinforcement learning approaches to optimal market making. Mathematics. MDPI; 2021. https://doi.org/10.3390/math9212689
**30.** Patel Y. Optimizing Market Making using Multi-Agent Reinforcement Learning. 2018. Available: http://arxiv.org/abs/1812.10252
**31.** van Hasselt H, Guez A, Silver D. Deep reinforcement learning with double Q-Learning. 30th AAAI Conference on Artificial Intelligence, AAAI 2016. 2016; 2094–2100.
**32.** Teleña SÁ. Trading 2.0: Learning-Adaptive Machines. 1st ed. Lulu; 2012.
**33.** Watkins CJCH, Dayan P. Q-learning. Mach Learn. 1992; 8: 279–292. https://doi.org/10.1007/BF00992698
**34.** López de Prado MM. Advances in Financial Machine Learning. 1st ed. Wiley; 2018.
**35.** Cartea Á, Jaimungal S, Penalva J. Algorithmic and High-Frequency Trading. Cambridge University Press; 2015. Available: https://books.google.es/books?id=5dMmCgAAQBAJ
**36.** Hudson & Thames. MlFinLab. In: GitHub [Internet]. 2020. Available: https://github.com/hudson-and-thames/mlfinlab
**37.** Spooner T, Savani R, Fearnley J, Koukorinis A. Market making via reinforcement learning. Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS. 2018; 1: 434–442. https://doi.org/10.1007/978-1-4842-5127-0_4
**38.** Kapturowski S, Ostrovski G, Quan J, Munos R, Dabney W. Recurrent experience replay in distributed reinforcement learning. 7th International Conference on Learning Representations, ICLR 2019. 2019; 1–19.
**39.** Chen S-H. Genetic Algorithms and Genetic Programming in Computational Finance: An Overview of the Book. Genetic Algorithms and Genetic Programming in Computational Finance. 2002; 1–26. https://doi.org/10.1007/978-1-4615-0835-9_1
**40.** Fortin FA, De Rainville FM, Gardner MA, Parizeau M, Gagné C. DEAP: Evolutionary algorithms made easy. Journal of Machine Learning Research. 2012; 13: 2171–2175.
**41.** van Krevelen DWF. Genetic Algorithm Control using Reinforcement Learning—Introducing the auto-tune and auto-control (ATAC) framework. 2007. https://doi.org/10.13140/RG.2.1.3351.1446
**42.** Fernández-Blanco P, Bodas-Sagi D, Soltero F, Hidalgo JI. Technical market indicators optimization using evolutionary algorithms. GECCO'08: Proceedings of the 10th Annual Conference on Genetic and Evolutionary Computation. 2008; 1851–1857. https://doi.org/10.1145/1388969.1388989
**43.** Brownlee J. Clever Algorithms. 2011. Available: http://www.cleveralgorithms.com
**44.** Falces J. HFTFramework. In: GitHub [Internet]. 2021. Available: https://github.com/javifalces/HFTFramework
**45.** Dixon MF, Halperin I, Bilokon P. Applications of Reinforcement Learning. Machine Learning in Finance. 2020; 347–418. https://doi.org/10.1007/978-3-030-41068-1_10
**46.** Sadighian J. Extending Deep Reinforcement Learning Frameworks in Cryptocurrency Market Making. 2020; 1–19. Available: http://arxiv.org/abs/2004.06985
**47.** Cartea Á, Wang Y. Market Making with Alpha Signals. Capital Markets: Market Microstructure eJournal. 2019. https://doi.org/10.2139/ssrn.3439440
**48.** Lipovetsky S, Conklin M. Analysis of regression in game theory approach. Appl Stoch Models Bus Ind. 2001; 17: 319–330. https://doi.org/10.1002/asmb.446
**49.** López de Prado MM. Machine Learning for Asset Managers. 2020. https://doi.org/10.1017/9781108883658
**50.** Such FP, Madhavan V, Conti E, Lehman J, Stanley KO, Clune J. Deep Neuroevolution: Genetic Algorithms Are a Competitive Alternative for Training Deep Neural Networks for Reinforcement Learning. 2017. Available: http://arxiv.org/abs/1712.06567